Creating panoramic images with raymarching
If we have a raymarching shader, it's easy to create a panoramic image with it.
Here I will explain how to do it by modifying a GLSL fragment shader which performs raymarching (or raycasting, raytracing).
1) In the vertex shader, generate normalized 2D coordinates pos_normalized at [-1,1]x[-1,1] of the rendered pixel:
//Declare variable which will be interpolated and passed into fragment shader
out vec2 pos_normalized; //position [-1,1]x[-1,1]
//Inside main() shader's function:
gl_Position = modelViewProjectionMatrix * position; //... some normal code
pos_normalized = gl_Position.xy / gl_Position.w; //perspective divide; note: no "vec2" here, or you'd declare a new local variable instead of writing to the output
2) In the fragment shader, add uniforms for the center of the panorama and an optional horizontal rotation of the panorama in degrees; and of course the pos_normalized input:
uniform vec3 head_position = vec3(0,0,0);
uniform float panoramic_render_angle = 0; //rotation of the panoramic image, in degrees
in vec2 pos_normalized; //pixel's position [-1,1]x[-1,1]
3) Also define PI constant:
const float PI = 3.14159265;
4) Inside the shader's code, set the ray's origin and direction with the following code:
vec3 origin = head_position; //ray's origin is a head position
vec3 dir; //ray's direction - we will compute it now
float lat = pos_normalized.y*PI/2;
float lon = pos_normalized.x*PI + panoramic_render_angle * PI/180.0;
dir.x = cos(lat)*cos(lon);
dir.z = cos(lat)*sin(lon);
dir.y = sin(lat);
That's all; now you can pass origin and dir to the rest of your raymarching computations.
In the CPU part of the code, set the uniform head_position to the desired 3D position of the panorama's center in space, and panoramic_render_angle to rotate the resulting panorama.
Creating panoramic video
Following the steps described above, you will render a panoramic image.
Now let's describe how to obtain a video which can be uploaded to YouTube.
Set the desired image size (note the 2:1 aspect ratio, which the equirectangular projection requires):
int W = 8192; //this size is recommended by YouTube
int H = 4096;
1) Render to an ofFbo buffer fbo.
Allocate it with fbo.allocate(W, H, GL_RGB),
then render the image there:
fbo.begin();
shader.begin()... //enable your shader and set its uniforms
ofDrawRectangle(0,0,W,H); //render the scene
shader.end();
fbo.end();
(You can also draw it on the screen to check what's going on:
ofSetColor(255);
fbo.draw(0,0,ofGetWidth(), ofGetHeight());
)
2) Read it into a pixels array:
ofPixels pix;
fbo.readToPixels(pix);
and save it, say, as a TIF image:
static int frame_num = 0;
ofSaveImage(pix, ofToString(frame_num++, 5, '0') + ".tif");
Now, after running the app, you will render an image sequence.
3) Now, in a video editor, convert this image sequence to a video file (MP4 or MOV).
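Alternatively to a video editor, the conversion can be done from the command line with ffmpeg (assuming it is installed); the input pattern matches the zero-padded names produced above:

```shell
# Convert the zero-padded TIF sequence to an H.264 MP4 at 30 fps.
# yuv420p pixel format is needed for broad player compatibility.
ffmpeg -framerate 30 -i %05d.tif -c:v libx264 -pix_fmt yuv420p -crf 18 output.mp4
```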
4) Use the free "Spatial Media Metadata Injector" app to add metadata to the video marking it as a 360° panoramic video.
5) Upload it to YouTube.
(Also, the default video player in Windows can show panoramic videos too.)
A final note about realtime and non-realtime rendering
Rendering 8K images can be very slow. So I prepare the required scene in realtime (with non-panoramic drawing, or panoramic drawing at a smaller image size), record all the needed data to a text file, and then replay it to render hi-quality panoramas in non-realtime (offline) mode.