It is currently Wed Dec 11, 2019 3:41 am


All times are UTC




[ 24 posts ]
 Post subject: New Volume class with improved compression
PostPosted: Sun Feb 13, 2011 12:46 am 
Developer

Joined: Sun May 04, 2008 6:35 pm
Posts: 1827
I've been working over the last week or two on improving the volume compression in PolyVox, as several people have requested this. I've been working in a separate branch, but it's now in a state where it is fairly usable, so I intend to merge it into the trunk soon. If you are interested in testing the new system then you can get it from SVN: https://thermite.svn.sourceforge.net/sv ... te/PolyVox

Or use the following snapshot: http://www.thermite3d.org/temp/PolyVox-RLE-rev1400.zip

The new system still stores the volume as a collection of blocks, but does away with the previous block-sharing system. Instead it performs compression on each block to reduce its memory footprint. The compression is currently a very simple RLE-based approach, but this could change in the future. To avoid constantly compressing and uncompressing blocks as voxels are accessed, a caching mechanism keeps a number of recently accessed blocks in the uncompressed state.

The compression ratio varies a lot depending on the nature of the data. For Minecraft-style worlds which just store a material, a compression ratio of 100:1 seems quite common. When storing a smoothly changing density value the ratio is much lower (even zip files only achieve about 7:1 compression on Perlin noise, and that is a much more advanced algorithm). If you are storing density values, please see the notes in the Volume header about improving your data.

I've tried not to break anything (although the class interface has changed slightly) but there are parts of PolyVox which I no longer use myself. Therefore if anyone wants to test the code in their own project and report back it would be appreciated. I'm also curious how you find the performance, and in particular what compression ratio you achieve (call the calculateCompressionRatio() function).

One important point - I know some people are storing their world as a collection of volumes in order to page some of them into and out of memory. Hopefully the improved compression will make this unnecessary, but if you do continue to use multiple volumes then be aware that the memory overhead per volume is greater, because each volume has a block cache. You will probably want to set the block cache to be very small (maybe even one block) to minimise this.


 Post subject: Re: New Volume class with improved compression
PostPosted: Sun Feb 13, 2011 6:20 pm 

Joined: Sun Feb 13, 2011 4:14 am
Posts: 1
Greetings, below is the result of a few tests.

Admin Note: Post shortened for clarity

  • Visual Studio 2010 Express Edition.
  • PolyVox\build\Debug\
  • Volume : 32x32x32

  • uBlockSideLength = 16
  • Perlin
  • Size = 37328
  • Compression Ratio = 1.139160
  • Time = 687
    Detected memory leaks!
    Dumping objects ->
    {195} normal block at 0x02610068, 4096 bytes long.
    Data: < > 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00
    {193} normal block at 0x00B0E730, 4096 bytes long.
    Data: < > 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00
    {192} normal block at 0x00B0D6F0, 4096 bytes long.
    Data: < > F0 F0 00 00 00 00 00 00 00 00 00 00 00 00 00 00
    {190} normal block at 0x00B0C6B0, 4096 bytes long.
    Data: < > 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00
    {187} normal block at 0x00B0B670, 4096 bytes long.
    Data: < > F0 F0 F0 F0 F0 F0 F0 F0 F0 F0 F0 00 00 00 00 00
    {185} normal block at 0x00B0A630, 4096 bytes long.
    Data: < > F0 F0 F0 F0 00 00 00 00 00 00 00 00 00 00 00 00
    {183} normal block at 0x00B095F0, 4096 bytes long.
    Data: < > F0 F0 F0 F0 F0 F0 F0 F0 F0 F0 F0 F0 F0 F0 F0 F0
    {181} normal block at 0x00B085B0, 4096 bytes long.
    Data: < > F0 F0 F0 F0 F0 F0 F0 F0 F0 00 00 00 00 00 00 00
    {175} normal block at 0x00B07570, 4096 bytes long.
    Data: < > 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00
    Object dump complete.

  • uBlockSideLength = 16
  • Empty
  • Size = 37328
  • Compression Ratio = 1.139160
  • Time = 140
    Detected memory leaks!
    Dumping objects ->
    {724} normal block at 0x026210A8, 4096 bytes long.
    Data: < > 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00
    {698} normal block at 0x02620068, 4096 bytes long.
    Data: < > 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00
    {625} normal block at 0x00B0E568, 4096 bytes long.
    Data: < > 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00
    {599} normal block at 0x00B0D4B8, 4096 bytes long.
    Data: < > 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00
    {333} normal block at 0x00B0C478, 4096 bytes long.
    Data: < > 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00
    {307} normal block at 0x00B0B3E0, 4096 bytes long.
    Data: < > 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00
    {232} normal block at 0x00B0A100, 4096 bytes long.
    Data: < > 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00
    {205} normal block at 0x00B09030, 4096 bytes long.
    Data: < > 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00
    {175} normal block at 0x00B07570, 4096 bytes long.
    Data: < > 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00
    Object dump complete.

  • uBlockSideLength = 32
  • Perlin
  • Size = 65684
  • Compression Ratio = 2.004517
  • Time = 687
    Detected memory leaks!
    Dumping objects ->
    {167} normal block at 0x011280A8, 32768 bytes long.
    Data: < > F0 F0 F0 F0 F0 F0 F0 F0 F0 00 00 00 00 00 00 00
    {161} normal block at 0x01120068, 32768 bytes long.
    Data: < > 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00
    Object dump complete.

  • uBlockSideLength = 8
  • Perlin
  • Size = 36440
  • Compression Ratio = 1.112061
  • Time = 781
    Detected memory leaks!
    Dumping objects ->
    {368} normal block at 0x02612468, 512 bytes long.
    Data: < > 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00
    {367} normal block at 0x02612228, 512 bytes long.
    Data: < > 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00
    .
    .
    .
    Data: < > F0 F0 F0 F0 F0 F0 F0 F0 F0 F0 F0 F0 F0 F0 F0 F0
    {293} normal block at 0x00B08B40, 512 bytes long.
    Data: < > F0 F0 F0 F0 F0 F0 F0 F0 F0 F0 F0 F0 F0 F0 F0 F0
    {287} normal block at 0x00B087D8, 512 bytes long.
    Data: < > 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00
    Object dump complete.

  • uBlockSideLength = 4
  • Perlin
  • Size = 45656
  • Compression Ratio = 1.393311
  • Time = 1938
    Detected memory leaks!
    Dumping objects ->
    {1460} normal block at 0x0123D660, 64 bytes long.
    Data: < > 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00
    {1459} normal block at 0x0123D5E0, 64 bytes long.
    Data: < > 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00
    .
    .
    .
    {1190} normal block at 0x01234960, 64 bytes long.
    Data: < > F0 F0 00 00 F0 F0 00 00 F0 00 00 00 F0 00 00 00
    {1184} normal block at 0x012347B8, 64 bytes long.
    Data: < > 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00
    Object dump complete.

Good day.


 Post subject: Re: New Volume class with improved compression
PostPosted: Sun Feb 13, 2011 8:27 pm 
Developer

Joined: Sun May 04, 2008 6:35 pm
Posts: 1827
Thanks for taking the time to test. However, the volume and block sizes you are using are not very realistic, which is why you see such poor compression ratios (a ratio greater than 1.0 means the compressed form is actually larger than the raw data). Because the volume and block sizes are small, the per-volume and per-block overhead outweighs the benefit of the compression.

Of course, that is according to my own definition of what a realistic size is - if you have some particular need for such small volumes then I'd be interested to hear about it. In general I'm working with 512x512x512 volumes in my own project, and I know several users want to go much larger. My block size is 32x32x32, though I've also experimented with 64x64x64.

I'll keep the memory leaks in mind though - I can't say I'm terribly surprised, but I should get them fixed. I've been meaning to plug in Valgrind at some point and see what it tells me.

I'm mostly interested in whether the new scheme causes any problems for people's existing projects, and how it handles their real-world data. The biggest problem I've found when using it in Thermite is the lack of thread-safety. The old version of the Volume class was not thread safe either, but in practice you could get away with it... the new version is a lot more sensitive because the threads interfere with the caching mechanism. Is anyone in a position where they need to access the volume from more than one thread at a time? If so, can you implement the access mechanisms in your own code or do you need them built into PolyVox?


 Post subject: Re: New Volume class with improved compression
PostPosted: Mon Feb 14, 2011 12:07 am 

Joined: Sat Sep 18, 2010 9:45 pm
Posts: 189
In my project I currently have "infinite terrain." It's implemented using threads to generate multiple page regions for each Volume. This wouldn't be an issue if the regions did not touch with respect to shared blocks (i.e. if each region contained all of its shared blocks). My requirements mean I need to access at least one neighbor's shared block for each region, because I want to get rid of the borders between these chunks. So, long story short: I'm not doing this in a thread-safe manner, but I minimize the risk through the order in which I access shared blocks. I also have a solution that explicitly orders shared-block access using an atomic variable when I access these borders (because I can explicitly order how I generate my borders). The atomic variable indicates whether something has already been generated, i.e. if a value on a border has been generated, there is no need to generate it again. Another possible solution is to explicitly order these accesses so that it is known a priori that they will never touch (but I think I will just go with the atomic-variable border generation).

Anyway, I gather from your post that this new compression scheme will have some thread-safety issues. But I figure it may not require much effort on my part to make it thread-safe, because of my plans for explicit control of shared-block access. So my only question: the new system still works with shared blocks, right? Can I assume that as long as a thread does not touch another thread's shared blocks in a non-thread-safe manner, thread-safety is assured?

Other than that I'm cool with this. I still plan to have multiple volumes, because I want to support really large streaming worlds. I don't necessarily need "infinite" terrain, just terrain large enough that it requires paging from disk (the main difference being between infinite procedural terrain and 'large' terrain with precomputed elements).

BTW: I'm downloading the branch right now and will check it out ASAP.


 Post subject: Re: New Volume class with improved compression
PostPosted: Mon Feb 14, 2011 5:56 am 

Joined: Sat Sep 18, 2010 9:45 pm
Posts: 189
I'm having some issues with threading. I read your comments in Volume.h regarding thread-safety, and that's definitely my problem. So I guess the solution is to put a mutex on accesses to the volume. Before, I was going to use atomic variables to guard border access, but I don't think that will work now. So I may have to put a full-blown mutex system in place. I haven't thought about how to do this yet.

BTW if I force my paging system to become sequential then there is no crash.


 Post subject: Re: New Volume class with improved compression
PostPosted: Mon Feb 14, 2011 10:19 pm 

Joined: Sat Sep 18, 2010 9:45 pm
Posts: 189
I looked at the Volume code a little bit, and I have my own solution for getting around the block cache vector not being safe for concurrent access (I only gleaned the code, so I'm not sure I understand all the implications yet).

My solution involves using the Intel TBB library AND modifying PolyVox::Volume's source. Basically, I would use TBB's concurrent_vector as a replacement for std::vector for the block cache. How do I ensure concurrency-safe access? It's based on this article about safe concurrent traversal of concurrent_vector: http://software.intel.com/en-us/blogs/2 ... correctly/

So I will use min(allocated(), size()) instead of size(). This means I will only traverse the guaranteed-constructed elements when doing LRU cache replacement. The cache block allocation code remains the same, so the only change is to the LRU replacement code.

Plus, under normal usage I do not expect to ever write to the same value (except in the case where I need to write border values; I have a solution for this, see my post above); concurrency safety is achieved with the TBB concurrent_vector.


 Post subject: Re: New Volume class with improved compression
PostPosted: Mon Feb 14, 2011 10:22 pm 
Developer

Joined: Sun May 04, 2008 6:35 pm
Posts: 1827
Thanks, I was hoping you would test it because you are (to my knowledge) the person who is pushing PolyVox the hardest and doing some unexpected things (like the paging).

Of course, one of my main motivations for implementing this new system is to allow much larger volumes so that people don't have to perform paging. You're aware of the issues this causes on the block boundaries for example. On the other hand, the volumes cannot yet be considered infinite and even if they were it would take a fair bit of work on your part to move across to the new system.

I think one solution might be a compatibility mode for the volumes. This would work by making sure that the block cache was large enough to hold all the blocks in the volume so that they never need to be compressed/decompressed at runtime. Essentially it disables the compression scheme, which should make it a lot more thread safe. You can add the following function to the Volume class and call it to achieve this:
Code:
template <typename VoxelType>
void Volume<VoxelType>::useCompatibilityMode(void)
{
   // Make the cache larger than the total number of blocks,
   // so no block ever needs to be evicted and recompressed.
   setBlockCacheSize(m_uNoOfBlocksInVolume * 2);

   // Touch every block once so they all start out uncompressed in the cache.
   for(uint32_t z = 0; z < m_uDepthInBlocks; z++)
   {
      for(uint32_t y = 0; y < m_uHeightInBlocks; y++)
      {
         for(uint32_t x = 0; x < m_uWidthInBlocks; x++)
         {
            getUncompressedBlock(x,y,z);
         }
      }
   }
}

I've also added this to the branch. You can call this new function immediately after you create the volume, before you start filling it with data.

Longer term I will have a think about whether PolyVox should handle thread safety properly, or whether it should be left to the user (which is what the C++ STL does - the standard library is not thread safe). If I do implement proper thread safety then it requires more dependencies on C++0x/Boost, and may also be slower than letting the user make threading choices at a high level.

Let me know if the above fix helps.

Edit: We posted at the same time, I'll just read your other post...


 Post subject: Re: New Volume class with improved compression
PostPosted: Mon Feb 14, 2011 10:43 pm 
Developer

Joined: Sun May 04, 2008 6:35 pm
Posts: 1827
Ok, I read your other post. Rather than the vector (or as well as the vector), I think the real issue is that the compression/decompression can be called by more than one thread simultaneously. For example, one thread tries to access a block, finds it's not in the cache, and starts to decompress it. Then a second thread comes along and starts to decompress it too, not knowing the first thread has already started. There are asserts for this in the code, and this is what I see being triggered.

It's easy to fix - you simply need a locking mechanism around the code that handles the LRU list and calls compress()/uncompress(). The whole idea of the caching is that these functions should not be called too often, so fighting over the lock is unlikely to be a problem. However, I'm reluctant to take responsibility for threading at this point because it requires some effort to guarantee thread safety, should be done in all parts of the library, and introduces further dependencies. Of course that doesn't stop you doing it in your local copy, and if you find it works well then it can be ported back to the trunk.


 Post subject: Re: New Volume class with improved compression
PostPosted: Tue Feb 15, 2011 12:24 am 

Joined: Sat Sep 18, 2010 9:45 pm
Posts: 189
That's cool, thanks for the hints. I think I will just implement this myself for now, and if you like it we can merge it back.

What sort of dimensions do you think the volume supports now, with the new compression scheme? I need more than 1024 x 1024. Another reason I use threads is to get background loading, which improves interactivity.

Another thing I may try is going back to one terrain chunk per volume and not worrying about borders, since I'm fill-rate bound. This would still allow threaded generation while keeping the new compression scheme. Actually, I will try this first.


 Post subject: Re: New Volume class with improved compression
PostPosted: Tue Feb 15, 2011 9:51 pm 
Developer
User avatar

Joined: Sun May 04, 2008 6:35 pm
Posts: 1827
beyzend wrote:
That's cool, thanks for the hints. I think I will just implement this myself for now, and if you like it we can merge it back.


Yep, sounds good. For now I will continue to make no promises about thread safety and will leave it to user code to enforce. Or, you can add your own thread safety to PolyVox which is what you are suggesting. At some point in the future I probably will make PolyVox thread safe, possibly based on your changes.

beyzend wrote:
What sort of dimensions do you think the volume supports now, with the new compression scheme? I need more than 1024 x 1024. Another reason I use threads is to get background loading, which improves interactivity.


Your data is very simple (almost a heightmap, despite being stored in a volume), so it will compress very well. I just did a test of 2D Perlin terrain in an 8192x8192x256 volume and it used about 50MB. That's a pretty big volume, though not quite infinite ;-) It's a step in the right direction though - now that there are timestamps and an LRU scheme, it should be possible to extend this mechanism to cache really old blocks to disk and free the memory. Essentially it then becomes a three-level hierarchy - uncompressed blocks (fastest), compressed blocks, and blocks on disk (slowest). We'll get there in time :-)

beyzend wrote:
Another thing I may try is to go back to one terrain chunk one volume and not worry about borders since I'm fill-rate bound. This would still allow for threaded generation and having the new compression scheme. Actually, I will try this first.


Sure, if you are sticking with multiple volumes then you'll probably want to reduce the block cache size. It defaults to 256, which is probably more than you need and gives an overhead of several megabytes per volume. Use setBlockCacheSize() to change this.

Anyway, there's no urgency to merge this stuff to the trunk. I'll see how it goes with your project, make some tweaks myself, and merge in a week or two.


Powered by phpBB © 2000, 2002, 2005, 2007 phpBB Group
Theme created by StylerBB.net