Computing ray origin and direction from Model View Projection matrices for raymarching

When performing raymarching (as well as raycasting and raytracing) in OpenGL fragment shaders, you need to compute each ray's origin and direction.
Here we give a solution for computing them from the Model View Projection matrices in OpenGL, implemented in openFrameworks.

Using it, you can combine raymarching/raycasting/raytracing with forward OpenGL rendering and so obtain hybrid rendering modes.
You can also use it for creating VR applications based on raymarching or hybrid rendering.
(We verified that this approach is correct by building a VR rendering app for the HTC Vive.)

The algorithm works on any OpenGL platform, in any language that can compute a matrix inverse (in our case C++ with the GLM library, which is included in openFrameworks 0.10.1).

The approach is based on a hint by GClements in this discussion on the topic: https://community.khronos.org/t/ray-origin-through-view-and-projection-matrices/72579/4

The solution is based on the following steps:

1. Render a rectangle covering the whole view area.

2. Pass this rendering through custom vertex and fragment shaders.

3. Pass to the vertex shader the "original" ModelViewProjection matrix matrix_original - the one intended to render the rectangle over the whole view area.
Using this matrix, we set the vertex shader's output position gl_Position in homogeneous coordinates. It will be used for rasterizing the rectangle.

4. Compute pos - the 2-dimensional XY position of the rectangle's vertex in normalized device coordinates.
Normalized device coordinates form the cube [-1,1]x[-1,1]x[-1,1], where Z=-1 is the near clip plane and Z=1 is the far clip plane.
So the 4-dimensional vectors vec4(pos,-1,1) and vec4(pos,1,1) are the start and end positions of the desired ray, in normalized homogeneous device coordinates.

5. To obtain the coordinates of these points in model space, we multiply them by the inverted View Projection matrix inverse_matrix, which we also pass to the vertex shader.
As a result we obtain the vectors near_4 and far_4 - the homogeneous coordinates of the ray's endpoints in model space.

6. These two points are passed to the fragment shader and interpolated (they are out parameters of the vertex shader).

7. In the fragment shader, we perform the perspective division of near_4 and far_4 to obtain the 3-dimensional vectors origin and far3.
Then we use origin as the ray's start and the difference of far3 and origin as the direction (by normalizing it, we obtain the ray's direction). A standalone check of steps 4-7 on the CPU is sketched just after this list.
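To make the math concrete, here is a minimal CPU-side sketch of steps 4-7 using only GLM; the camera parameters, aspect ratio and the NDC point are arbitrary assumptions for the demo, not values from the text:

#include <cstdio>
#include <glm/glm.hpp>
#include <glm/gtc/matrix_transform.hpp>

int main() {
    //An arbitrary example camera (assumption for the demo)
    glm::mat4 proj = glm::perspective(glm::radians(60.0f), 800.0f / 600.0f, 0.1f, 100.0f);
    glm::mat4 view = glm::lookAt(glm::vec3(0, 0, 5), glm::vec3(0), glm::vec3(0, 1, 0));
    glm::mat4 inverse_matrix = glm::inverse(proj * view);

    glm::vec2 pos(0.25f, -0.5f);   //some point in normalized device coordinates XY

    //steps 4-5: unproject the near-plane and far-plane points
    glm::vec4 near_4 = inverse_matrix * glm::vec4(pos, -1.0f, 1.0f);
    glm::vec4 far_4  = inverse_matrix * glm::vec4(pos,  1.0f, 1.0f);

    //step 7: perspective division and ray construction
    glm::vec3 origin = glm::vec3(near_4) / near_4.w;
    glm::vec3 far3   = glm::vec3(far_4) / far_4.w;
    glm::vec3 dir    = glm::normalize(far3 - origin);

    printf("origin: (%f, %f, %f)\n", origin.x, origin.y, origin.z);
    printf("dir:    (%f, %f, %f)\n", dir.x, dir.y, dir.z);
    return 0;
}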

C++ code

//Set currentViewProjectionMatrix - the matrix you want to use for rendering the scene,
//for example, in VR:
ofMatrix4x4 currentViewProjectionMatrix = openVR.getCurrentViewProjectionMatrix(nEye);
//dimensions of viewport:
float W = 1512;   //HTC Vive rendering size
float H = 1680;

//Compute inverted matrix
auto viewproj = glm::mat4(currentViewProjectionMatrix);
auto inverse_matrix = glm::inverse(viewproj);

//Get the original matrix for rendering the rectangle over the view area.
//In a normal openFrameworks application, put this line at the very beginning of ofApp::draw, before any ofPushMatrix or camera.begin() call.
auto viewproj_original = ofGetCurrentMatrix(OF_MATRIX_PROJECTION) * ofGetCurrentMatrix(OF_MATRIX_MODELVIEW);

//Render the rectangle, passing it through the shaders
shader.begin();
shader.setUniformMatrix4f("inverse_matrix", inverse_matrix);
shader.setUniformMatrix4f("matrix_original", viewproj_original);

ofDrawRectangle(0, 0, W, H);

shader.end();
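
For a plain (non-VR) openFrameworks application, the whole setup fits into ofApp::draw - a minimal sketch assuming the app has ofEasyCam cam and ofShader shader members, with the shader loaded from the sources below:

void ofApp::draw() {
    //capture the matrix that maps the full-screen rectangle, before any camera transform
    auto viewproj_original = ofGetCurrentMatrix(OF_MATRIX_PROJECTION) * ofGetCurrentMatrix(OF_MATRIX_MODELVIEW);

    //View Projection matrix of the scene camera, and its inverse
    auto inverse_matrix = glm::inverse(cam.getModelViewProjectionMatrix());

    shader.begin();
    shader.setUniformMatrix4f("inverse_matrix", inverse_matrix);
    shader.setUniformMatrix4f("matrix_original", viewproj_original);
    ofDrawRectangle(0, 0, ofGetWidth(), ofGetHeight());
    shader.end();
}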

Vertex shader code


#version 330
precision mediump float;

//input rectangle's vertices
in vec4 position;

//original Model View Projection matrix, used for rendering the rectangle over the whole view area
uniform mat4 matrix_original;       

//inverted Model View Projection matrix of the camera used for rendering the scene - for ray computation
uniform mat4 inverse_matrix;

//coordinates of the ray's start and end in homogeneous model coordinates,
//used further in the fragment shader to compute the origin and direction of the ray
out vec4 near_4;   
out vec4 far_4;


void main(){
    //compute the final rectangle vertex position
    gl_Position = matrix_original * position;

    //get 2D projection of this vertex in normalized device coordinates
    vec2 pos = gl_Position.xy/gl_Position.w;
   
    //compute the ray's start and end by unprojecting these coordinates
    //from the near and far clip planes
    near_4 = inverse_matrix * vec4(pos, -1.0, 1.0);
    far_4 = inverse_matrix * vec4(pos, +1.0, 1.0);

}

Fragment shader code


#version 330
precision mediump float;
out vec4 FragColor;
in vec4 near_4;    //for computing rays in fragment shader
in vec4 far_4;

//------------------- example of a raymarching algorithm, from https://www.shadertoy.com/view/lss3zr
//                      adapted to render just a sphere

//Scene density definition: positive inside a sphere of radius 20 centered at (30,20,0.5)
float scene(vec3 p)
{   
    //Sphere
    return 1.0-length(p-vec3(30,20,0.5))*0.05;
}

void mainImage( out vec4 fragColor, in vec3 org, in vec3 dir )
{
    vec4 color = vec4(0.);   //accumulated color (an out parameter carries no input value)
    const float zMax         = 60.;

    const int nbSample = int(zMax);
    const int nbSampleLight = 6;

    float stepSize     = zMax/float(nbSample);   //'stepSize' avoids shadowing the built-in step()
    float zMaxl         = 20.;
    float stepl         = zMaxl/float(nbSampleLight);
    vec3 p             = org;
    float T            = 1.;
    float absorption   = 100.;
    vec3 sun_direction = normalize( vec3(1.,.0,.0) );
   
    for(int i=0; i<nbSample; i++)
    {
        float density = scene(p);
        if(density>0.)
        {
            float tmp = density / float(nbSample);
            T *= 1. -tmp * absorption;
            if( T <= 0.01)
                break;
             //Light scattering
            float Tl = 1.0;
            for(int j=0; j<nbSampleLight; j++)
            {
                float densityLight = scene( p + normalize(sun_direction)*float(j)*stepl);
                if(densityLight>0.)
                    Tl *= 1. - densityLight * absorption/float(nbSample);
                if (Tl <= 0.01)
                    break;
            }
           
            //Add ambient + light scattering color
            color += vec4(1.)*50.*tmp*T +  vec4(1.,.7,.4,1.)*80.*tmp*T*Tl;
        }
        p += dir*stepSize;
    }   

    fragColor = color;
}


//-----------------------------------------------------------------------------
//Main shader's function
void main( void ) {       
    vec3 origin = near_4.xyz/near_4.w;   //ray's origin
    vec3 far3 = far_4.xyz/far_4.w;
    vec3 dir = far3 - origin;
    dir = normalize(dir);        //ray's direction
    //calling the raymarching algorithm
    vec4 color;
    mainImage(color, origin/20.0, dir);    //Note! we divide origin by 20 - this constant will depend on the coordinates used in your raymarching algorithm
    color.w = 1.0;
    FragColor = color;
}
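
The division of origin by 20 hard-codes the scale difference between the scene's model space and the raymarched volume. A more flexible variant is to pass the scale as a uniform - a sketch, where world_scale is a hypothetical name not used in the code above: declare uniform float world_scale; in the fragment shader, divide by it instead of 20.0, and set it next to the other uniforms:

shader.begin();
shader.setUniformMatrix4f("inverse_matrix", inverse_matrix);
shader.setUniformMatrix4f("matrix_original", viewproj_original);
shader.setUniform1f("world_scale", 20.0f);   //hypothetical uniform replacing the hard-coded constant
ofDrawRectangle(0, 0, W, H);
shader.end();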

