

All times are UTC




Post new topic Reply to topic  [ 27 posts ]  Go to page Previous  1, 2, 3  Next
Author Message
 Post subject: Re: LargeVolume rather slow & wasting space?
PostPosted: Sat Jan 05, 2013 8:13 am 
Developer
User avatar

Joined: Sun May 04, 2008 6:35 pm
Posts: 1827
The real issue here is that your underlying data is not smooth, and so it's very difficult to generate a smooth mesh. From what you said earlier it seems you are using only one bit per voxel, whereas the Marching Cubes algorithm is designed to operate on a smoothly changing density field. A range of density values from 0-255 (with a threshold of 128) will give you perfectly smooth terrain.

TheSHEEEP wrote:
You mean with a MeshDecimator?
Good idea, I could try that out.


Actually it was deprecated and has been removed from the version in Git. You're really better off performing smoothing on the volume data.


 Post subject: Re: LargeVolume rather slow & wasting space?
PostPosted: Sat Jan 05, 2013 12:39 pm 

Joined: Thu Oct 06, 2011 2:26 pm
Posts: 46
Location: Berlin
Well, right now, each voxel is either 255 or 0 (so I use a byte now).
So after applying the heightmap, each voxel is either completely solid or air.

I'm not exactly sure how I could smooth that out.
Would the following work?
Quote:
1. Apply heightmap as done before, setting voxels to be completely solid.
2. Iterate over the topmost voxel at each x/z coordinate.
2.1. Set the density of that voxel to an interpolated value of all the voxels around it.

That should basically lead to a layer of voxels above the fully solid ground that is not fully solid. Still not exactly "smooth", though.

But I must say I'm not 100% sure how the density value of a voxel influences the mesh creation. I'm used to voxels that are either fully solid or air, but I guess I'm too used to minecraft worlds :D

When did you remove MeshDecimator from git? I downloaded the latest develop branch yesterday and used the decimator today.

_________________
My site! - Have a look :)
Also on Twitter - with more ketchup


 Post subject: Re: LargeVolume rather slow & wasting space?
PostPosted: Sun Jan 06, 2013 10:54 am 
Developer
User avatar

Joined: Sun May 04, 2008 6:35 pm
Posts: 1827
This is basically smoothing the volume data in only one direction, and I can't quite get my head around whether that will work :? Try it and see!

You can also make use of the LowPassFilter class to blur the data (there is a test which shows how it's used, I think), but be aware that you need two volumes, because it blurs from a source into a destination. So you'll probably need to work on a small volume.
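Roughly, the blur itself is just a box filter over the 3x3x3 neighbourhood, reading from one volume and writing into another. This stand-alone sketch (a flat `std::vector` standing in for a real PolyVox volume, so the names are illustrative, not the actual LowPassFilter API) shows why two volumes are needed: blurring in place would read already-blurred values.

```cpp
#include <algorithm>
#include <cassert>
#include <cstdint>
#include <vector>

// Blur a dim^3 volume (stored flat) from a source into a separate
// destination: each output voxel is the mean of its 3x3x3 neighbourhood.
std::vector<uint8_t> boxBlur3(const std::vector<uint8_t>& src, int dim)
{
    std::vector<uint8_t> dst(src.size());
    auto at = [&](int x, int y, int z) {
        // Clamp at the borders so edge voxels average their valid neighbours.
        x = std::clamp(x, 0, dim - 1);
        y = std::clamp(y, 0, dim - 1);
        z = std::clamp(z, 0, dim - 1);
        return src[(z * dim + y) * dim + x];
    };
    for (int z = 0; z < dim; ++z)
        for (int y = 0; y < dim; ++y)
            for (int x = 0; x < dim; ++x) {
                int sum = 0;
                for (int dz = -1; dz <= 1; ++dz)
                    for (int dy = -1; dy <= 1; ++dy)
                        for (int dx = -1; dx <= 1; ++dx)
                            sum += at(x + dx, y + dy, z + dz);
                // Reads come only from src, so earlier writes to dst
                // can't contaminate later averages.
                dst[(z * dim + y) * dim + x] = uint8_t(sum / 27);
            }
    return dst;
}
```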

TheSHEEEP wrote:
When did you remove MeshDecimator from git? I downloaded the latest develop branch yesterday and used the decimator today.


A few weeks ago... and it does appear to be gone: https://bitbucket.org/volumesoffun/poly ... at=develop

Any chance you are actually on Master or something?


 Post subject: Re: LargeVolume rather slow & wasting space?
PostPosted: Sun Jan 06, 2013 11:08 am 

Joined: Thu Oct 06, 2011 2:26 pm
Posts: 46
Location: Berlin
David Williams wrote:
This is basically smoothing the volume data in only one direction, and I can't quite get my head around whether that will work :? Try it and see!

You can also make use of the LowPassFilter class to blur the data (there is a test which shows how it's used I think) but be aware that you need two volumes as it blurs from a source into a destination. So you'll probably need to work on a small volume.

Will try!

David Williams wrote:
A few weeks ago... and it does appear to be gone

Any chance you are actually on Master or something?

Well, I downloaded the zip of "develop" from this page (as I'm using the code in my own repo, I don't want to mix different repositories). It is possible that I mixed it up with the old headers, though, so I guess it's just me screwing up ;)

Edit: Yup, that was it. I forgot to remove the old source files. Guess it doesn't hurt, though, as it still works. Why did you remove it, anyway?



 Post subject: Re: LargeVolume rather slow & wasting space?
PostPosted: Mon Jan 07, 2013 10:19 am 
Developer
User avatar

Joined: Sun May 04, 2008 6:35 pm
Posts: 1827
TheSHEEEP wrote:
Why did you remove it, anyway?


Firstly because it had some problems with performance and robustness (particularly when multiple materials were involved). Mesh simplification is an area where I don't have any previous experience and I ended up concluding that it could get quite complex, and that users would be better off using an external library such as OpenMesh.

More importantly, I'm increasingly of the opinion that LOD should be handled by downsampling the volume data rather than simplifying the mesh. If you've been following our blog then you'll see we've had success doing this for both cubic and smooth terrain, and it seems a lot more robust.


 Post subject: Re: LargeVolume rather slow & wasting space?
PostPosted: Mon Jan 07, 2013 12:23 pm 

Joined: Thu Oct 06, 2011 2:26 pm
Posts: 46
Location: Berlin
Ah, all right, thanks for the explanation!

So, I did some smoothing, and here is the result:
Image
This is with full smoothing, i.e. iterating over each voxel (within the limits of the highest and lowest set y, to at least optimize a bit) and taking the mean value of all 27 voxels in the surrounding 3x3x3 cube.
This does look smooth, indeed, but as you probably can see, it took 46 seconds for the smoothing alone. Which is of course not acceptable in a real application.

When building the mean of only 7 voxels (the voxel itself plus its 6 face neighbours, so no diagonals in the 3x3x3 voxel cube), it takes "only" 13.5 seconds, but the result looks much less improved.

When building the full 27-neighbour mean, but only for those voxels that are fully solid (meaning that air voxels remain unchanged), the process takes 21 seconds and looks even worse than the 7-neighbour check above.

So this is a very costly process, and remember, this is a 500x500x500 field only, far away from the 2000x1000x2000 I had originally planned ;)

The only thing I can think of right now would be to do the smoothing somehow during the application of the heightmap.
So, for example, instead of just applying the value at one x,y,z coordinate at a time, I could apply a 3x3x3 field at a time, which contains values that are somehow smoothed taking the heightmap as input. Or, applying only one value at a time, but smoothing that with surrounding heightmap values. Don't know if that would yield a huge performance boost, though.

The good news is that I just tried creating a 200x200x200 field and scaling the resulting mesh up to look like 500x500x500; this resulted in a mesh that looks even better than the picture above and took less than 5 seconds in total (including full smoothing). And that's without any kind of optimization by threading. Nice.
Unfortunately, I don't know if that would still work when I want to "carve out" rivers, caves, etc. in the voxel data and then scale.



 Post subject: Re: LargeVolume rather slow & wasting space?
PostPosted: Tue Jan 08, 2013 9:32 am 
Developer
User avatar

Joined: Sun May 04, 2008 6:35 pm
Posts: 1827
There are a couple of things which spring to mind that you could improve here.

Firstly, you are converting a heightmap to a volume by writing only 0 or 255, and then smoothing the result. But are you smoothing the whole volume? Remember, you only actually need to smooth the parts of the volume which are near to the surface because you won't see the effect of smoothing the other parts. What's more, for groups of 27 voxels which are distant from the surface they are probably all 0 or all 255, so the averaging will not be changing the values anyway.
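As a sketch of that skip test (again a flat array standing in for a volume, with illustrative names): a 3x3x3 group that is uniformly 0 or uniformly 255 is far from the isosurface and averaging would not change it, so a smoothing pass only needs to touch voxels where this check fails.

```cpp
#include <cassert>
#include <cstdint>
#include <vector>

// Returns true when the 3x3x3 neighbourhood of (x, y, z) holds a single
// value (e.g. all 0 or all 255) -- such voxels are far from the surface
// and a smoothing pass can skip them without changing the result.
bool neighbourhoodIsUniform(const std::vector<uint8_t>& vol, int dim,
                            int x, int y, int z)
{
    const uint8_t first = vol[(z * dim + y) * dim + x];
    for (int dz = -1; dz <= 1; ++dz)
        for (int dy = -1; dy <= 1; ++dy)
            for (int dx = -1; dx <= 1; ++dx) {
                int nx = x + dx, ny = y + dy, nz = z + dz;
                if (nx < 0 || ny < 0 || nz < 0 ||
                    nx >= dim || ny >= dim || nz >= dim)
                    continue; // treat out-of-bounds neighbours as matching
                if (vol[(nz * dim + ny) * dim + nx] != first)
                    return false; // mixed values: near the surface, smooth it
            }
    return true; // deep inside solid or air: safe to skip
}
```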

Secondly I'd look into better ways of writing the correct data into the volume in the first place, to avoid the need for smoothing. I don't have an exact idea here, but rather than initially writing 0 or 255 you could instead check how far you are from the value in the heightmap and write values based on that. So if your current voxel is a long way below the heightmap value then write a large positive value, if it's a small distance above then write a small negative value, etc. Then run the extractor with the threshold set to zero.

In this case you would probably also want to 'compress' the density field around the isosurface. What I mean by this is that if your voxel is at height 57 and the value in your heightmap is 63 then you are below the surface and should write a positive value. But rather than just writing the difference of '6' you should scale this by some factor (maybe 10) and write 60 instead. The transition from fully negative to fully positive will then take place over a shorter distance. You will need to do some clamping to prevent wraparound. I think this compression will give a smoother surface but I'm not certain.
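Putting those two paragraphs together, the write step could look something like this (the function name is made up, and the factor of 10 is just the example value from above; the signed range assumes the extractor is run with a threshold of 0):

```cpp
#include <algorithm>
#include <cassert>
#include <cstdint>

// Instead of writing 0 or 255, write a signed density based on how far the
// voxel is below the heightmap value: positive below the surface, negative
// above it. The compression factor shortens the transition band, and the
// clamp prevents wraparound for voxels far from the surface.
int8_t densityFromHeightmap(int voxelY, int heightmapValue, int compression = 10)
{
    int density = (heightmapValue - voxelY) * compression;
    return static_cast<int8_t>(std::clamp(density, -127, 127));
}
```

The sign flip, and hence the extracted surface, then lands exactly at the heightmap value, with the transition compressed into roughly 13 voxels on either side of it (127 / 10).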

Also, is the use of heightmaps your final solution or just a temporary thing? Because once you have converted it to a volume and done the blurring, you can just save the volume to disk and give it to your users. No need to do the conversion each time (depending on your application...). Or if you are actually planning to use 3D Perlin noise then that is smooth anyway.


 Post subject: Re: LargeVolume rather slow & wasting space?
PostPosted: Tue Jan 08, 2013 9:56 am 

Joined: Thu Oct 06, 2011 2:26 pm
Posts: 46
Location: Berlin
David Williams wrote:
Firstly, you are converting a heightmap to a volume by writing only 0 or 255, and then smoothing the result. But are you smoothing the whole volume? Remember, you only actually need to smooth the parts of the volume which are near to the surface because you won't see the effect of smoothing the other parts. What's more, for groups of 27 voxels which are distant from the surface they are probably all 0 or all 255, so the averaging will not be changing the values anyway.

I already "only" go through all voxels that are in the valid y range (I save the highest and lowest y value while applying the heightmap). That makes sense, as I do not apply the heightmap over the complete height of the terrain. I need some non-affected ground to be able to carve out caves and the like in a next step. But you are right, there are of course more ways to improve this.
I'd rather do the smoothing while applying the height map, though, if that turns out to be faster and/or better applicable.

David Williams wrote:
Secondly I'd look into better ways of writing the correct data into the volume in the first place, to avoid the need for smoothing. I don't have an exact idea here, but rather than initially writing 0 or 255 you could instead check how far you are from the value in the heightmap and write values based on that. So if your current voxel is a long way below the heightmap value then write a large positive value, if it's a small distance above then write a small negative value, etc. Then run the extractor with the threshold set to zero. [...+Compression]

That would work? Well, I gotta try that one out.
Sounds pretty nice, actually.

David Williams wrote:
Also, is the use of heightmaps your final solution or just temporary thing? Because once you have converted it to a volume and done the blurring you can just save the volume to disk and give it to your users. No need to do the conversion each time (depending on your application...). Or if you are actually planning to use 3D Perlin noise then that is smooth anyway.

Nothing is a final solution here; I'm trying out various things to see which one turns out to be best for my project idea. I'm mostly looking for flexibility and performance. The resulting look is not that important at this point, as the final project will have much more in its graphics pipeline than just terrain.

I am definitely going for more deformed terrain with overhangs and caves.
Basically, there are 2 ways to achieve this:

1. Apply a heightmap, then deform the terrain. Probably with a lower-resolution 3D vector field (like a 100x100x100 field for 400x400x400 terrain) that contains directions to move the voxels along.
Take a look at this paper. I must say I don't fully understand what they are doing (as usual with scientific papers), but that is where I got the inspiration from.

2. Create the terrain directly from 3D noise. This would work, of course, but IMO would also be the most time consuming approach. If I can somehow achieve a good result with 1., I'd prefer that. I will probably try this out anyway.

In any case, I will apply things like caves, lakes, etc. in steps that come after creating the "base terrain". Which is another reason why I'd prefer to take route 1.
Also, in a final solution, I will use Simplex Noise instead of Perlin noise, as the results are the same (improved, actually), but the algorithm is simply faster.

Thanks for your support, btw. It makes things much easier for me :)



 Post subject: Re: LargeVolume rather slow & wasting space?
PostPosted: Wed Jan 09, 2013 9:22 am 
Developer
User avatar

Joined: Sun May 04, 2008 6:35 pm
Posts: 1827
TheSHEEEP wrote:
That would work? Well, I gotta try that one out.
Sounds pretty nice, actually.

I don't know, but I think it's worth investigating things in that kind of direction. Basically you now know what your 'ideal' data looks like, because you have seen the results of blurring the binary volume. After performing this blurring you have a single large voxel value under the terrain, a single small voxel value above the terrain, and a fast transition (probably only spanning a few voxels) from one to the other. I'm hoping the approach I described will be a faster way of creating this kind of data but without the overhead of blurring. I've not tried it though.
TheSHEEEP wrote:
Thanks for your support, btw. It makes things much easier for me

No problem, it's nice to see what people are doing with PolyVox.


 Post subject: Re: LargeVolume rather slow & wasting space?
PostPosted: Fri Jan 18, 2013 9:12 am 

Joined: Thu Oct 06, 2011 2:26 pm
Posts: 46
Location: Berlin
Okay, I'm working on my research again today and first implemented the smoothing when writing the voxel data in the first place.

The result does look fine from afar, the unpleasant shadow "artifacts" are gone:
Click me!

The downside is that when you look at it closer, you notice that the terrain has a certain, rather bad looking "terrace" effect and sharp edges:
Click me, too!
Come onnnn, just click me!

Of course, that is not acceptable, but it is undeniably an improvement in both looks and speed. This is a 500x500x500 terrain and the whole process only took ~5-8 seconds (again, without me doing any kind of optimization with threading).

I experimented with a number of smoothing ranges, all as a percentage of the total height of the volume. I tried many values from 1% to 15%, and they did have an impact on the look, but the terrace effect happens with all of them.

My guess is that this terrace effect happens because the smoothing only happens in the y direction, completely agnostic of the neighbours in the other directions. When applying the heightmap, I apply a certain smoothed y range. So if the heightmap hits y=100 and the smoothing range is set to 10 voxels, the voxels from y=90 to y=110 at that (x/z) position will be smoothed from 0 to 255.
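For reference, the per-column ramp I'm describing boils down to something like this (illustrative names; h is the column's heightmap value, r the smoothing range). Since the value depends only on y, every column gets the same ramp shape, which is exactly why the terraces show up:

```cpp
#include <cassert>
#include <cstdint>

// Density for one voxel of a column: fully solid (255) below the transition
// band, air (0) above it, and a linear ramp across the 2r voxels in between.
uint8_t columnDensity(int y, int h, int r)
{
    if (y <= h - r) return 255; // below the band: solid
    if (y >= h + r) return 0;   // above the band: air
    // Linear ramp from 255 at y = h - r down to 0 at y = h + r.
    return static_cast<uint8_t>(255 * (h + r - y) / (2 * r));
}
```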

So I do need a better smoothing than just in one direction, which still has an acceptable speed.

Well, that or dual marching cubes in PolyVox. AFAIK, sharp edges like that are prevented by that algorithm.

Edit:
Just as a test of how it would look, I added an additional smoothing step after the initial volume generation, which iterates over each voxel in the volume within the limits of the highest and lowest set voxel, as described before.
But this time, the smoothing only happens in x/z direction (as y already is smoothed) and only if the voxel is not already fully solid (255) or air (0). This adds ~2 seconds to the whole process, but does yield pretty good looking results.
So what I will do now is try to implement that additional x/z smoothing into the initial volume creation so I do not have to iterate over each single voxel, instead doing a "how would the neighbouring voxel look" calculation. Maybe that will be a bit faster, but maybe not, as I'm doing additional calculations for the neighbouring voxels that I would do anyway when setting them.





Powered by phpBB © 2000, 2002, 2005, 2007 phpBB Group
Theme created by StylerBB.net