AndiNo wrote:
I somehow managed to get the voxels on the screen using the CubicSurfaceExtractor. At first I had difficulties finding them in world space as the sphere of voxels was very small. That brings me to another question:
I assumed the "uBlockSideLength" parameter of the Volume constructor was meant to adjust the size of every voxel; however, I think that's wrong. Should I just scale the ManualObject to which the "voxels" are attached?
The uBlockSideLength parameter is an advanced setting which controls the memory management of the volume, not the voxel size. If you want more information, the Volume class is quite well documented in Volume.h/.inl. PolyVox always generates voxels of size 1 unit, so you can scale the result in Ogre if you want it bigger.
AndiNo wrote:
Oh and second: I'm not sure if you know this but how can I achieve the voxel texturing as it is in Minecraft? I think I'll have to use a texture atlas, but how do I actually get the textures on the voxels? Someone at the Ogre forums seems to use a shader, is that the only way?
The way you handle texturing is completely separate from PolyVox; for example, PolyVox does not even generate texture coordinates. However, it does copy the 'MaterialID' from each voxel into the generated mesh, and you can use this to choose which texture to apply.
For a Minecraft-style game I guess you will want texture coordinates, and the best approach is to generate them in a shader. You might want to research 'triplanar texturing' and look at GPU Gems 3, Chapter 1, which is available on NVIDIA's website.
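To give a feel for the triplanar idea outside of a shader: the texture is projected along each of the three world axes, and the three samples are blended with weights derived from the surface normal. The helper below is purely illustrative (the names are mine, not from PolyVox or Ogre), showing the standard weight computation:

```cpp
#include <cmath>

// Blend weights for triplanar texturing: take the absolute value of each
// component of the world-space normal and normalise so the weights sum to 1.
// A fragment whose normal points straight down +Z gets weights (0, 0, 1),
// i.e. only the XY-plane projection of the texture contributes.
struct Weights { float x, y, z; };

Weights triplanarWeights(float nx, float ny, float nz)
{
    float ax = std::fabs(nx), ay = std::fabs(ny), az = std::fabs(nz);
    float sum = ax + ay + az; // never zero for a unit-length normal
    return { ax / sum, ay / sum, az / sum };
}
```

For the axis-aligned faces produced by the CubicSurfaceExtractor the normal always points straight down one axis, so exactly one weight is 1 and the other two are 0, which is why the shader further down can simply zero out two of the three samples.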
I use a similar approach in my current work in Thermite, though I'm applying a normal map rather than a texture map. Some useful (though work-in-progress) Ogre/shader code:
Ogre material:
material ColouredCubicVoxel
{
    technique
    {
        pass Light0
        {
            // Vertex program reference
            vertex_program_ref ColouredCubicVoxelVP
            {
            }

            // Fragment program
            fragment_program_ref ColouredCubicVoxelFP
            {
                param_named_auto lightPosition light_position 0
                param_named_auto lightColour light_diffuse_colour 0
            }

            texture_unit
            {
                texture ColorMap.png 1d
                filtering point
            }

            texture_unit
            {
                texture voxel-normalmap.png 2d
                filtering anisotropic
                max_anisotropy 16
            }
        }

        pass Light1
        {
            scene_blend add

            // Vertex program reference
            vertex_program_ref ColouredCubicVoxelVP
            {
            }

            // Fragment program
            fragment_program_ref ColouredCubicVoxelFP
            {
                param_named_auto lightPosition light_position 1
                param_named_auto lightColour light_diffuse_colour 1
            }

            texture_unit
            {
                texture ColorMap.png 1d
                filtering point
            }

            texture_unit
            {
                texture voxel-normalmap.png 2d
                filtering anisotropic
                max_anisotropy 16
            }
        }
    }
}
Ogre program definitions:
vertex_program ColouredCubicVoxelVP cg
{
    source ColouredCubicVoxel.cg
    entry_point ColouredCubicVoxelVP
    profiles vs_3_0 vs_2_x vs_2_0 vs_1_1 vp40 vp30 vp20 arbvp1

    default_params
    {
        param_named_auto world world_matrix
        param_named_auto viewProj viewproj_matrix
    }
}
fragment_program ColouredCubicVoxelFP cg
{
    source ColouredCubicVoxel.cg
    entry_point ColouredCubicVoxelFP
    profiles ps_3_x ps_3_0 ps_2_x ps_2_0 ps_1_4 ps_1_3 ps_1_2 ps_1_1 fp40 fp30 fp20 arbfp1

    default_params
    {
        param_named_auto ambientLightColour ambient_light_colour
        param_named_auto cameraPosition camera_position
    }
}
Cg shader code:
void ColouredCubicVoxelVP(
    float4 inPosition : POSITION,
    float4 inNormal : NORMAL,
    float2 inMaterial : TEXCOORD0,
    out float4 outClipPosition : POSITION,
    out float4 outWorldPosition : TEXCOORD0,
    out float4 outWorldNormal : TEXCOORD1,
    out float2 outMaterial : TEXCOORD2,
    uniform float4x4 world,
    uniform float4x4 viewProj
    )
{
    // Compute the world space position
    outWorldPosition = mul(world, inPosition);

    // Just pass through the normals without transforming them in any way. No rotation occurs.
    outWorldNormal = inNormal;

    // Compute the clip space position
    outClipPosition = mul(viewProj, outWorldPosition);

    // Pass through the material
    outMaterial = inMaterial;
}

void ColouredCubicVoxelFP(
    float4 inPosition : POSITION,
    float4 inWorldPosition : TEXCOORD0,
    float4 inWorldNormal : TEXCOORD1,
    float2 inMaterial : TEXCOORD2,
    uniform float4 ambientLightColour,
    uniform float4 lightPosition,
    uniform float4 lightColour,
    uniform float4 cameraPosition,
    uniform sampler1D colorMap : TEXUNIT0,
    uniform sampler2D heightMap : TEXUNIT1,
    out float4 result : COLOR)
{
    // Sample the normal map three times and choose between them using triplanar texturing. This is a
    // little wasteful of texture samples as we only end up using one of them. In the future, think about
    // sampling only once and then rotating the result into the correct coordinate system?
    // Note: The multiplication by -1 flips the texture, which otherwise has its origin at the top left
    // instead of the bottom left. We may also flip the x texture coordinate depending on which side of
    // the cube we are looking at (according to the normal).
    float3 normalSampleXY = tex2D(heightMap, inWorldPosition.xy * float2(inWorldNormal.z, -1.0f) + float2(0.5f,0.5f)).xyz;
    float3 normalSampleZY = tex2D(heightMap, inWorldPosition.zy * float2(inWorldNormal.x, -1.0f) + float2(0.5f,0.5f)).zyx;
    float3 normalSampleXZ = tex2D(heightMap, inWorldPosition.xz * float2(inWorldNormal.y, -1.0f) + float2(0.5f,0.5f)).xzy;

    // Decompress the samples, and zero out two of them (according to the normal).
    float3 normalDecompXY = normalize( normalSampleXY * 2.0f - 1.0f ) * abs(inWorldNormal.z);
    float3 normalDecompZY = normalize( normalSampleZY * 2.0f - 1.0f ) * abs(inWorldNormal.x);
    float3 normalDecompXZ = normalize( normalSampleXZ * 2.0f - 1.0f ) * abs(inWorldNormal.y);

    // For the back sides of the cubes the normals are flipped
    normalDecompXY.xz *= inWorldNormal.z;
    normalDecompZY.xz *= inWorldNormal.x;
    normalDecompXZ.xy *= inWorldNormal.y;

    // It seems that because of the top-left vs bottom-left texture coordinate origin issue, we have to
    // flip the y of the normal (notice that this is different from flipping the y of the texture coordinate).
    normalDecompXY.y *= -1.0;
    normalDecompZY.y *= -1.0;
    normalDecompXZ.z *= -1.0;

    // Our final normal
    float3 worldNormal = normalDecompXY + normalDecompZY + normalDecompXZ;

    // Diffuse
    float3 lightDir = normalize(lightPosition.xyz - (inWorldPosition.xyz * lightPosition.w));
    float4 diffuseLightColour = lightColour * max(dot(lightDir, worldNormal), 0.0);

    // Specular
    float shininess = 10;
    float Ks = 0.3;
    float3 V = normalize(cameraPosition.xyz - inWorldPosition.xyz);
    float3 H = normalize(lightDir + V);
    float specularLight = pow(max(dot(worldNormal, H), 0), shininess);
    float3 specularLightColor = Ks * lightColour.xyz * specularLight;

    // Used to attenuate the light when the surface is pointing away from the light source, but the
    // adjusted normal is still pointing towards it because of the normal map. Maybe check for a better
    // way to handle this...
    float diffSpecScaleFactor = dot(lightDir, inWorldNormal.xyz);
    diffSpecScaleFactor = sign(diffSpecScaleFactor);
    diffSpecScaleFactor = (diffSpecScaleFactor + 1.0f) * 0.5f;

    // Look up this material's colour in the centre of the corresponding texel of the 1D colour map.
    float u = (inMaterial.x / 256.0f) + (1.0f / 512.0f);
    float4 sample = tex1D(colorMap, u);
    //sample *= height.x;
    sample.xyz *= ((specularLightColor + diffuseLightColour.xyz) * diffSpecScaleFactor + ambientLightColour.xyz);
    result = sample;
}
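One detail of the fragment shader worth spelling out is the line `float u = (inMaterial.x / 256.0f) + (1.0f / 512.0f);`. It maps an integer material ID into the 1D colour map: dividing by the map width (256 texels here) selects the left edge of the texel, and adding half a texel (1/512) moves the sample to the texel centre so that filtering cannot pick up a neighbouring entry. The same arithmetic as a small C++ sketch (the helper name is mine; the 256-texel width is taken from the shader):

```cpp
// Map an integer material ID to a normalised 1D texture coordinate that
// falls on the centre of the corresponding texel of a colour map that is
// `mapWidth` texels wide. Equivalent to the shader expression
// (material / 256.0) + (1.0 / 512.0) when mapWidth == 256.
float materialToU(int materialId, int mapWidth)
{
    float texel = 1.0f / static_cast<float>(mapWidth);
    return materialId * texel + 0.5f * texel; // texel centre, not texel edge
}
```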
AndiNo wrote:
Third: If I divide my game world into multiple small chunks like Minecraft does, would I create a single Volume for every chunk there is? At first I thought that Regions would have something to do with this but I'm not sure...
In Thermite I create one volume but generate a separate mesh for each region. When a voxel changes you will need to regenerate the mesh for the region the voxel is in. It's basically your responsibility to handle all this, though you might find the code in 'VolumeChangeTracker' useful. This class may disappear in the future, but consider it a starting point.
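The bookkeeping for "which region does this voxel belong to" can be sketched as below. This is a hypothetical helper, not the PolyVox or Thermite API; it assumes cubic regions of side `regionSize` voxels:

```cpp
// Map a voxel coordinate to the index of the region containing it, using
// floored integer division. Plain C++ '/' truncates towards zero, which is
// wrong for negative coordinates, so we correct for that explicitly. Apply
// this per axis, then regenerate only the mesh of that one region.
int regionIndex(int voxelCoord, int regionSize)
{
    int q = voxelCoord / regionSize;
    if (voxelCoord % regionSize != 0 && voxelCoord < 0) { --q; }
    return q;
}
```

So with 16-voxel regions, editing the voxel at x = 15 and the voxel at x = 16 would dirty two different regions, and only those two meshes need re-extracting.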
AndiNo wrote:
Fourth: I've read in another forum (and tested myself) that the Volume class does NOT accept anything else than the MaterialDensityPair class. Isn't it supposed to accept chars, for example, too?
Those were more questions than I initially thought!

This is a bit of a mess and needs sorting out, but basically you can write a new class with the same interface as MaterialDensityPair. It just needs 'getDensity()' and 'getMaterial()' functions. The density controls whether a voxel is 'solid' based on whether it is above or below a threshold. The material controls what material id gets copied into the mesh for a solid voxel.
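A minimal voxel class along those lines might look like the sketch below. The exact interface requirements may differ between PolyVox versions, and the threshold value here is purely illustrative:

```cpp
#include <cstdint>

// A minimal voxel type exposing the same basic interface as
// MaterialDensityPair: getDensity() and getMaterial(). A voxel counts as
// 'solid' when its density is above some threshold chosen by the caller.
class MyVoxel
{
public:
    MyVoxel() : m_material(0), m_density(0) {}
    MyVoxel(std::uint8_t material, std::uint8_t density)
        : m_material(material), m_density(density) {}

    std::uint8_t getDensity()  const { return m_density; }
    std::uint8_t getMaterial() const { return m_material; }

    // Hypothetical solidity threshold: the midpoint of the density range.
    static std::uint8_t getThreshold() { return 127; }

private:
    std::uint8_t m_material; // copied into the mesh for solid voxels
    std::uint8_t m_density;  // controls whether the voxel is solid
};

// Solidity test as described above: density above the threshold means solid.
bool isSolid(const MyVoxel& voxel)
{
    return voxel.getDensity() > MyVoxel::getThreshold();
}
```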
beyzend wrote:
--Texture coordinates--
In order to use a texture atlas correctly, you need to set up your texture coordinates. If you look at CubicSurfaceExtractor you will see the order in which the triangles are created. Basically... let me just post some code. Also, this is just the basics to get it working. It's not 100% correct because I still have some unresolved texturing issues, but you will get textures, and the back-face culling is correct.
http://pastebin.com/uhLQ4KTr
If you don't understand the ordering of the faces, look at CubicSurfaceExtractor for the correct ordering. The comments in my code are outdated.
I didn't read your code, but actually I wouldn't recommend this approach. You'll probably find it easier to just use the world-space fragment position as a texture coordinate. For example, if you are shading a fragment with world-space position (5.5, 6.7, 0.0) and a normal of (0, 0, 1), then you can just use (5.5, 6.7) as your texture coordinate. I guess you could do this in C++ code, but you're better off doing it in a shader so that you have less data to pass to the graphics card.
Also, the CubicSurfaceExtractor is very inefficient at the moment. It generates 4 vertices and 2 triangles for each face, whereas in practice vertices should be shared and coplanar triangles merged. Once I do this (not for a while), I imagine your current approach will get trickier.
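The "use the world-space position" suggestion boils down to dropping the coordinate along the face's normal and keeping the other two. A hypothetical C++ version of what would ordinarily live in the shader:

```cpp
#include <cmath>
#include <utility>

// For an axis-aligned cube face, drop the world-space coordinate matching
// the dominant axis of the normal and use the remaining two as the texture
// coordinate. A face with normal (0, 0, 1) at world position (5.5, 6.7, 0.0)
// yields the texture coordinate (5.5, 6.7).
std::pair<float, float> faceTexCoord(const float pos[3], const float normal[3])
{
    float ax = std::fabs(normal[0]);
    float ay = std::fabs(normal[1]);
    float az = std::fabs(normal[2]);
    if (ax >= ay && ax >= az) { return { pos[2], pos[1] }; } // X face: use ZY
    if (ay >= ax && ay >= az) { return { pos[0], pos[2] }; } // Y face: use XZ
    return { pos[0], pos[1] };                               // Z face: use XY
}
```

Because cube faces sit on integer boundaries and are 1 unit across, these coordinates tile the texture once per voxel face automatically, with no per-vertex UV data needed.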
As for your texturing issues I looked at your screenshots and it's probably due to mixing linear filtering with texture atlases. This means at the edge of a texture you actually get some of the neighbouring texture bleeding in. Texture atlases do not play well with linear filtering and mipmaps.
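One common workaround for that bleeding (a sketch, with hypothetical names) is to inset each atlas tile's UV range by half a texel, so linear filtering never reads past the tile border into a neighbour. Note this does not solve the mipmapping problem, which typically needs padded tiles as well:

```cpp
// UV range of one tile in a horizontal strip of `tileCount` tiles packed
// into an atlas `atlasWidth` texels wide, inset by half a texel on each
// side so that bilinear filtering stays inside the tile.
struct UVRange { float uMin, uMax; };

UVRange tileRange(int tileIndex, int tileCount, int atlasWidth)
{
    float tileWidth = 1.0f / static_cast<float>(tileCount);
    float halfTexel = 0.5f / static_cast<float>(atlasWidth);
    return { tileIndex * tileWidth + halfTexel,
             (tileIndex + 1) * tileWidth - halfTexel };
}
```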
beyzend wrote:
What I'm doing right now is that I have Volume objects which are the chunks. I want to do this with Regions, but to do that I need to modify Volume to support it. Why do I need this? I'm doing streaming. Right now the "chunks" are volumes; to use regions as chunks I would have to make Volume support paging.
Yep, given that PolyVox does not support streaming, that's probably how I would have implemented it too.