
In my 3D program, I compute model, view, and projection matrices that I pass to OpenGL in the vertex shader:

// Position of the vertex as seen from the current camera
gl_Position = projection * modelview * vec4(VertexPosition, 1.0);

Given a list of points, I'd like to draw each point's name next to its geometry. To do so, I have this code snippet:

for(auto const& _3Dpoint : model_->getPoints()) {
    Vector3D projected = (projection_* cameraview_.inversedMultiplication(_3Dpoint.second->getPosition()));

    projected.normalize();

    renderText( projected[0] / projected[3],
                projected[1] / projected[3],
                projected[2] / projected[3],
                _3Dpoint.second->getName());
}

I expect my projected coordinates to be expressed in window coordinates, but they actually fall approximately in [-7; 7], and are not even all positive. That is why I normalize. Surprisingly, this works well until I apply a rotation or translation to my view matrix; then my point names no longer stick to my points' geometry. I don't really understand this behavior, because when I apply a transformation to the camera, the transformation should be applied to the final coordinates as well ...

Any ideas?

Quentin Tealrod
  • Why are you using inversedMultiplication? If your vertex is in world space and you multiply it by projection * viewmatrix, you should get screen space in [-1, 1] (if you also divide by the w component)... but why inversedMultiplication? Or did I get the question wrong? – Thomas May 31 '17 at 11:54
  • Historical reasons: the mat4 used to be inverted. Consider it written as follows: Vector3D projected = (projection_ * cameraview_ * _3Dpoint.second->getPosition()); // and assume the model matrix is already applied to _3Dpoint.second->getPosition() – Quentin Tealrod May 31 '17 at 15:13
  • Also, you don't need to normalize the position. If you need the pixel position, compute projection * view * vertexposition; afterwards, if the projection matrix is perspective, divide by the w component. Take the xy components, divide by 2, add vector(0.5, 0.5), and multiply by your screen size (width, height). Then you get pixels in (0 - width, 0 - height). – Thomas Jun 01 '17 at 05:22
  • Also, you should consider switching to QOpenGLWidget, as QGLWidget is deprecated and removed in Qt5.something. You'd then have to draw text yourself, or switch to QML or Qt3D, which look pretty good in current Qt versions. – Bim Jun 01 '17 at 08:46
  • @Thomas: My w coordinate is actually always 1.0, and dividing by w never changes my coordinates... That is why I resorted to the normalization, which stops working when I zoom. When is w supposed to change? – Quentin Tealrod Jun 01 '17 at 11:51
  • @Bim: QOpenGLWidget breaks my picking, which is based on back-buffer writing. I didn't manage to make it work: glReadPixels always gives me the front-buffer value, so picking no longer works. QGLWidget still works with Qt 5.8. – Quentin Tealrod Jun 01 '17 at 11:53
  • @QuentinTealrod The w component only changes when you have a perspective projection matrix; with an orthographic projection matrix your w component will always be 1.0. – Thomas Jun 01 '17 at 12:34
  • @QuentinTealrod I guess you're doing picking the "render id to color buffer and back-project z" way... QOpenGLWidget is FBO-backed: it renders to an FBO, then does compositing. Getting the depth buffer attachment / content is a bit hacky, though. If you want to roll your own class with more control, see my answer [here](https://stackoverflow.com/questions/31323749/easiest-way-for-offscreen-rendering-with-qopenglwidget). – Bim Jun 01 '17 at 17:32

0 Answers