Freakazo wrote:
Just a small update: I've been having issues with the geometry shader. I'm not sure whether it's the game engine (Panda3D), the driver, or my code's fault, and I don't have access to NVIDIA hardware to test it out with different drivers. If I can't get it working soon I'll try it out with GamePlay.
Perhaps you could try running another application which uses geometry shaders? Some of the Ogre examples use them; otherwise you could just search for OpenGL geometry shader tutorials. This should at least let you work out whether they are supported properly by your card. Again, if you have Ogre installed you could try:
http://www.ogre3d.org/forums/viewtopic. ... 3&start=25 Then you should be able to convert it to Panda3D.
Freakazo wrote:
I would like to aim for both (1-bit and opaque); I wasn't aware of the sorting problem before now. I have read a bit about it, but I'm unsure how to sort when using hardware instancing, so I will definitely need to research some more.
The sorting problem only affects full-range transparency; for 1-bit alpha there is no need to sort, because the depth buffer can take care of deciding what is visible. For this reason 1-bit alpha is much easier, and it is what Minecraft uses for windows, ladders, etc. If you just get this working then it will already be an achievement. But you should still start by worrying about completely solid blocks first... i.e. get your geometry shaders working as expected.
Full-range transparency would be fun though... it's also something which is on my mind for the CubicSurfaceExtractor.
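To make the sorting point above concrete, here is a minimal sketch of the back-to-front pass that full-range transparency needs before blending. The `TransparentBlock` struct and `sortBackToFront` name are hypothetical, just for illustration; the real data would come from whatever the extractor produces.

```cpp
#include <algorithm>
#include <array>
#include <vector>

// Hypothetical per-block data: just a world-space centre position.
struct TransparentBlock
{
    std::array<float, 3> centre;
};

// Sort transparent blocks back-to-front relative to the camera so that
// alpha blending composites them in the correct order. Solid and 1-bit
// alpha geometry does not need this pass - the depth buffer handles it.
void sortBackToFront(std::vector<TransparentBlock>& blocks,
                     const std::array<float, 3>& camera)
{
    auto distSq = [&](const TransparentBlock& b)
    {
        float dx = b.centre[0] - camera[0];
        float dy = b.centre[1] - camera[1];
        float dz = b.centre[2] - camera[2];
        return dx * dx + dy * dy + dz * dz;
    };
    // Furthest first => drawn first => blended correctly.
    std::sort(blocks.begin(), blocks.end(),
              [&](const TransparentBlock& a, const TransparentBlock& b)
              {
                  return distSq(a) > distSq(b);
              });
}
```

Note this has to happen every time the camera moves significantly, which is exactly why it interacts badly with hardware instancing.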
Freakazo wrote:
Regarding whether the transparent vs. solid blocks have to be in separate meshes (objects): yes, actually everything that has a different mesh (material?) will need to be separate. I just assumed this splitting would be done in user code, but it might be better to extract only a specific material's points at a time, or to have the extractor return an array of meshes (point lists), one for each material, reducing the user code needed.
The answer here is not entirely clear to me, but I think it depends on how the user wants to render the data. For example, if they always render a cube (regardless of the material) and just use the material to decide how to texture it, then I think all the points can stay in the same array. The material can be passed through to the GPU (similar to how the other extractors work) and used to look up into a texture atlas, for example.
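As a sketch of the texture atlas idea: assuming the atlas is a square texture divided into a 16x16 grid of tiles (one tile per material id), the tile's UV origin can be computed from the material value. The layout and the `atlasOrigin` name are assumptions for illustration; the real shader would do the same arithmetic in GLSL.

```cpp
#include <cstdint>
#include <utility>

// Assumed atlas layout: 16 tiles per row, materials numbered row-major.
const int TilesPerRow = 16;
const float TileSize = 1.0f / TilesPerRow;

// Map a material id to the UV origin of its tile in the atlas.
std::pair<float, float> atlasOrigin(uint8_t material)
{
    int tileX = material % TilesPerRow;
    int tileY = material / TilesPerRow;
    return { tileX * TileSize, tileY * TileSize };
}
```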
On the other hand, if the user wishes to draw a different mesh based on the material, then I think they will need to split the array of points into separate arrays based on the material. This is probably their problem rather than ours, but I think we could provide a utility function for this purpose. Again, though, we can come back to this once we have the basic rendering working.
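The utility function mentioned above could look something like this. The `MaterialPoint` type and `splitByMaterial` name are hypothetical; this is just the shape of the idea, not a committed API.

```cpp
#include <cstdint>
#include <map>
#include <vector>

// Hypothetical point type: a cube centre plus the material at that voxel.
struct MaterialPoint
{
    int x, y, z;
    uint8_t material;
};

// Possible utility: split one array of extracted points into per-material
// arrays, so the user can draw a different mesh for each material.
std::map<uint8_t, std::vector<MaterialPoint>>
splitByMaterial(const std::vector<MaterialPoint>& points)
{
    std::map<uint8_t, std::vector<MaterialPoint>> result;
    for (const MaterialPoint& p : points)
    {
        result[p.material].push_back(p);
    }
    return result;
}
```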
Freakazo wrote:
Regarding wrapping, I'm starting to wonder whether wrapping the entire project vs. wrapping a simple C interface is worth it. It seems that C++ bindings are really hard to write, for both automated and manual wrappers, whereas C interfaces are much easier to work with.
Actually, I wonder whether some parts of PolyVox even need to be in classes in the first place. For example, to use the cubic surface extractor you first have to create a CubicSurfaceExtractor object and then call its 'execute' function. But why not replace this with a single function called 'extractCubicSurface'? I think C++ features (function overloading, templates, etc.) are useful here, but classes are not. On the other hand, some things (such as volumes) do make sense as classes. In general I think we should take an STL-like approach, where data structures are classes but algorithms are just functions.
Our mind map mentions 'Convert Raycast into function', as I was planning to use that as a test of this process, but if you want to try changing your algorithm into a regular function instead of a class then you are welcome to give it a go.