I am trying to implement rendering points on top of geometry, as in Blender. The algorithm is as follows: the points are drawn as a point list with the fixed-function depth test disabled, and the depth test is done manually in the shader. Each point is, say, a 5×5 square, and I compare the depth of the square's center (the vertex depth) against the scene depth, which I read from a depth texture through a sampler.
I am running into a problem when comparing the depths: the values don't match.
Here is an example of the most common case:
Depth from texture: 0.978166640
NDC depth: 0.978173256
I realize that in principle this could be solved by adding an epsilon, but in some cases the difference is significant:
Depth from texture: 0.616887
NDC depth: 0.622718
Here the values already differ in the second decimal digit.
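For the near-equal case, one option is a tolerance scaled to float precision rather than a fixed epsilon. A minimal sketch (the 8-ulp tolerance is an arbitrary guess, and this obviously cannot help the second case, where the values differ by far more than rounding error):

```glsl
// Treat the point as visible when its NDC depth is at most a few
// representable float steps (ulps) behind the sampled depth.
bool depthPasses(float ndcZ, float sampledZ) {
    // floatBitsToInt reinterprets the IEEE-754 bits; for positive
    // floats the bit patterns order the same way as the values,
    // so the difference counts ulps between the two depths.
    int ulps = abs(floatBitsToInt(ndcZ) - floatBitsToInt(sampledZ));
    return ndcZ <= sampledZ || ulps <= 8;
}
```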
I tried both linear and nearest filtering on the sampler, since at first I suspected the sampler itself. First, I added an extra color attachment into which I write gl_FragCoord.z manually, and logged the values as they were written. The values in that manual color attachment match the regular depth attachment, and gl_FragCoord.z matches my manual depth calculation, so the problem is clearly not in the calculations. Second, I opened the project in RenderDoc and looked at the scene's output depth. Unfortunately, RenderDoc rounds the displayed values, so I exported the texture as a raw data buffer and inspected the values per pixel: they are about the same as what I get through the sampler. I emphasize that I was looking at the output depth buffer itself, not the sampled image.
I also temporarily (as part of testing) used input attachments instead of the sampler, to rule out causes such as sampling and interpolation when accessing through UVs. The data is still different.
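For reference, the input-attachment variant I tried looks roughly like this (subpassLoad is only available in the fragment stage, so the comparison has to move there):

```glsl
// Fragment shader: subpassLoad reads the value for the current pixel
// directly, with no sampler, UV math, or filtering involved.
layout(input_attachment_index = 0, set = 1, binding = 0)
    uniform subpassInput iDepth;

void main() {
    float sceneDepth = subpassLoad(iDepth).r;
    if (gl_FragCoord.z > sceneDepth)
        discard; // point is behind the scene geometry
}
```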
It turns out that the values are one thing when written but something else in the texture itself. The depth texture format is D32_SFLOAT, though I have tried other formats as well.
So far my only idea is to keep an SSBO and fill it manually, but I don't think that is optimal.
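In case it helps the discussion, the SSBO fallback I have in mind would look something like this in the depth pre-pass fragment shader (pc.extent and the binding numbers are made up for the sketch):

```glsl
layout(set = 1, binding = 1, std430) buffer DepthSSBO {
    uint depthBits[]; // width * height entries, cleared to floatBitsToUint(1.0)
};
layout(push_constant) uniform Push { uvec2 extent; } pc;

void main() {
    uvec2 p = uvec2(gl_FragCoord.xy);
    // For depths in [0, 1] the IEEE-754 bit patterns order the same
    // way as the float values, so a uint atomicMin keeps the nearest
    // depth even when several fragments hit the same pixel.
    atomicMin(depthBits[p.y * pc.extent.x + p.x], floatBitsToUint(gl_FragCoord.z));
}
```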
The question is: is there a way to make the depth texture contain exactly the same values that were written to it?
Example code:
#version 460
#extension GL_EXT_debug_printf : enable

layout(location = 0) in vec3 position;

layout(set = 0, binding = 0) uniform CameraUBO {
    mat4 projection;
    mat4 view;
} camera;

layout(push_constant) uniform Push {
    mat4 model;
} mesh;

layout(set = 1, binding = 0) uniform sampler2D sDepth;

layout(location = 0) flat out uint visible;

void main() {
    gl_Position = camera.projection * camera.view * mesh.model * vec4(position, 1.0);
    gl_PointSize = 1.0;

    // Manual depth test: compare the vertex's NDC depth against the
    // scene depth fetched from the pre-pass depth texture.
    vec3 ndc = gl_Position.xyz / gl_Position.w;
    vec2 uv = ndc.xy * 0.5 + 0.5;
    ivec2 texelCoord = ivec2(uv * textureSize(sDepth, 0));
    float depth = texelFetch(sDepth, texelCoord, 0).r;
    if (ndc.z <= depth) {
        visible = 1;
    } else {
        visible = 0;
        debugPrintfEXT("[Vertex, Points] Depth: %.9f, NDC: %.9f\n", depth, ndc.z);
    }
}
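As a cross-check of the vertex-side UV math, the same comparison can also be done per fragment, where gl_FragCoord.xy addresses exactly the texel the depth pre-pass rasterized (assuming both passes run at the same resolution and the depth texture was written in a previous pass):

```glsl
#version 460
layout(set = 1, binding = 0) uniform sampler2D sDepth;
layout(location = 0) out vec4 outColor;

void main() {
    // gl_FragCoord.xy is already in window space; truncating it to
    // ivec2 gives the texel index of the current pixel.
    float sceneDepth = texelFetch(sDepth, ivec2(gl_FragCoord.xy), 0).r;
    if (gl_FragCoord.z > sceneDepth)
        discard; // behind the scene geometry
    outColor = vec4(1.0);
}
```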
I should also add that I am not certain my approach is correct, so I would be glad to hear any other ideas for implementing point rendering.