Posts

Showing posts from May, 2019

Raymarching - a way to liberate generative graphics

I also published this article on the openFrameworks forum. Recently I was so fascinated by working with raymarching technology in oF that I decided to write this detailed post about it. This technology allows rendering non-rigid objects such as clouds, creating dynamic and truly generative volumes without meshes, and, additionally, rendering very simply in VR or as 360-degree panoramic images. It was not so fast to collect all this information, so I am sure it will be useful for anybody who wants to work with it in modern oF/GLSL. We (the art duet Endless Attractions Museum) recently released the VR art project "Night sleep was taken from my eyes, V.2". In the project we ask: how are our memories formed? Is it possible to visualize the amnesia process? What will the storage look like in the future? Wearing the VR HMD, the viewer finds himself flying in a cloudy space. Some clouds contain inside transformed panoramic photos illustrating one day in the life of

Creating panoramic images with raymarching

If we have a raymarching shader, it's easy to create a panoramic image with it. Here I will explain how to do it by modifying a GLSL fragment shader which performs raymarching (raycasting, raytracing).

1) In the vertex shader, generate normalized 2D coordinates pos_normalized in [-1,1]x[-1,1] of the rendered pixel:

    //Declare a variable which will be interpolated and passed into the fragment shader
    out vec2 pos_normalized; //position in [-1,1]x[-1,1]

    //Inside the shader's main() function:
    gl_Position = modelViewProjectionMatrix * position; //... the usual code
    pos_normalized = gl_Position.xy / gl_Position.w;

2) In the fragment shader, add uniforms for passing the center of the panorama and an optional horizontal rotation of the panorama in degrees, and of course the pos_normalized input:

    uniform vec3 head_position = vec3(0,0,0);
    uniform float panoramic_render_angle = 0; //rotation of the panoramic image, in degrees
    in vec2 pos_normalized;
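The excerpt above ends before the fragment-shader part, so here is only a minimal openFrameworks-side sketch of how such a shader could be driven from C++. The uniform names match the declarations above, while the FBO size (4096x2048, a 2:1 equirectangular layout), the shader file name "panorama" and the function names are assumptions, not code from the original post.

    #include "ofMain.h"

    //Typically these would be members of ofApp:
    ofFbo fbo;
    ofShader shader;

    void setupPanorama() {
        fbo.allocate(4096, 2048, GL_RGBA);   //2:1 equirectangular target (assumed size)
        shader.load("panorama");             //loads panorama.vert / panorama.frag
    }

    void renderPanorama(const glm::vec3 &head_pos, float angle_deg) {
        fbo.begin();
        ofClear(0, 0, 0, 255);
        shader.begin();
        shader.setUniform3f("head_position", head_pos);
        shader.setUniform1f("panoramic_render_angle", angle_deg);
        //Draw a rectangle covering the whole FBO; the vertex shader above
        //turns it into pos_normalized in [-1,1]x[-1,1] for every pixel
        ofDrawRectangle(0, 0, fbo.getWidth(), fbo.getHeight());
        shader.end();
        fbo.end();
    }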

Creating many additional 5V and GND outputs on Arduino without soldering

When creating circuits with Arduino, I often need many 5V and GND contacts for connecting devices such as potentiometers. For low-power devices (potentiometers, other sensors, low-power LEDs) we can use the Arduino's digital pins for such a purpose. For example, to output GND on digital pin 3, write the following:

    pinMode(3, OUTPUT);
    digitalWrite(3, LOW);

To output 5V on digital pin 4, write the following:

    pinMode(4, OUTPUT);
    digitalWrite(4, HIGH);

Now we have two additional pins with GND and 5V which we can use for connections! Note: using PWM pins and the analogWrite() function, we can output a signal whose average voltage is any value between 0 and 5V.
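For reference, here is a minimal complete sketch combining the snippets above; the potentiometer wiring (wiper to A0, its other legs to pins 3 and 4) is just an assumed example.

    void setup() {
        pinMode(3, OUTPUT);
        digitalWrite(3, LOW);    //pin 3 now acts as GND
        pinMode(4, OUTPUT);
        digitalWrite(4, HIGH);   //pin 4 now acts as 5V
        Serial.begin(9600);
    }

    void loop() {
        int value = analogRead(A0);  //potentiometer wiper connected to A0
        Serial.println(value);
        delay(50);
    }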

Generating seamlessly repeating 3D textures (repeated voxel objects)

When creating 3D textures (voxel objects), it may be required to make them "seamlessly repeating", which means that if copies of the texture are placed next to each other, the seams between them are smooth (invisible). For example, it's useful for filling the whole 3D space with a repeated texture in a raymarching-based project. Hint: filling the whole 3D space with a repeated 3D texture gives a feeling of repetition in the space and may be desirable. But if you need to eliminate this repeating feeling, for example when rendering clouds using a 3D texture, you may mix two copies of the same texture with different scales (1 and 1.2225) and a shift. Taking the maximal color, max(texture1, texture2), works well for the clouds. Here we explain a method for making any 3D texture (voxel object) seamlessly repeating. Note that the resulting object will be smaller due to this procedure. It's based on the idea of mixing border voxels with a changing mixing value. Normally such a technique is used for creating smooth music loops
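The excerpt ends before the implementation, so below is a hedged C++ sketch of the border-crossfade idea for one axis; the data layout (a dense float volume stored as std::vector<float>, index = x + w*(y + h*z)), the function name and the border parameter are assumptions. It blends the first `border` voxels along X with the last `border` voxels, so the output is (w - border) voxels wide; applying the same function along Y and Z gives a fully seamless volume.

    #include <vector>
    using std::vector;

    vector<float> crossfadeX(const vector<float> &v, int w, int h, int d, int border) {
        int w2 = w - border;                 //the result is smaller, as noted above
        vector<float> out(w2 * h * d);
        for (int z = 0; z < d; z++) {
            for (int y = 0; y < h; y++) {
                for (int x = 0; x < w2; x++) {
                    float a = v[x + w * (y + h * z)];
                    float result = a;
                    if (x < border) {
                        //mixing value grows from ~0 to ~1 across the border region
                        float t = (x + 1) / float(border + 1);
                        float b = v[(x + w2) + w * (y + h * z)]; //voxel from the far end
                        result = t * a + (1.0f - t) * b;
                    }
                    out[x + w2 * (y + h * z)] = result;
                }
            }
        }
        return out;
    }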

Using 3D texture in OpenGL/openFrameworks with programmable pipeline

Here we explain how to work with 3D textures using modern OpenGL (version >= 3.2), using C++ code with openFrameworks. For preparing this text we used several sources:
https://stackoverflow.com/questions/13459770/glsl-sampler3d-in-vertex-shader-texture-appears-blank
https://en.sfml-dev.org/forums/index.php?topic=23871.0
https://community.khronos.org/t/opengl-3-x-glenable-gl-texture-2d-gl-invalid-enum/61405
https://github.com/tiagosr/ofxShadertoy

We will perform the following steps to create and use a 3D texture in openFrameworks 0.10.1:
1. Enable the programmable pipeline
2. Define a function for printing GL errors
3. Create some volume data
4. Create and upload the 3D texture
5. Set the 3D texture to a shader
6. Create the shader files

1. Enable the programmable pipeline
Be sure you are using the programmable pipeline, so in main.cpp's main() use the following code:

    int main() {
        ofGLWindowSettings settings;
        settings.setGLVersion(3, 2);
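Since the excerpt cuts off at step 1, here is a hedged sketch of how steps 3-5 could look in openFrameworks 0.10.1 with raw OpenGL calls; the texture size (64^3), the noise-based volume data, the variable names and the sampler name "tex3d" are assumptions for illustration.

    #include "ofMain.h"

    GLuint tex3d = 0;
    const int N = 64;

    void create3dTexture() {
        //3. Create some volume data: a single-channel float volume filled with noise
        std::vector<float> data(N * N * N);
        for (int z = 0; z < N; z++)
            for (int y = 0; y < N; y++)
                for (int x = 0; x < N; x++)
                    data[x + N * (y + N * z)] = ofNoise(x * 0.1f, y * 0.1f, z * 0.1f);

        //4. Create and upload the 3D texture
        glGenTextures(1, &tex3d);
        glBindTexture(GL_TEXTURE_3D, tex3d);
        glTexParameteri(GL_TEXTURE_3D, GL_TEXTURE_MIN_FILTER, GL_LINEAR);
        glTexParameteri(GL_TEXTURE_3D, GL_TEXTURE_MAG_FILTER, GL_LINEAR);
        glTexParameteri(GL_TEXTURE_3D, GL_TEXTURE_WRAP_S, GL_REPEAT);
        glTexParameteri(GL_TEXTURE_3D, GL_TEXTURE_WRAP_T, GL_REPEAT);
        glTexParameteri(GL_TEXTURE_3D, GL_TEXTURE_WRAP_R, GL_REPEAT);
        glTexImage3D(GL_TEXTURE_3D, 0, GL_R32F, N, N, N, 0, GL_RED, GL_FLOAT, data.data());
        glBindTexture(GL_TEXTURE_3D, 0);
    }

    //5. Setting the 3D texture to a shader (a sampler3D uniform named "tex3d" is assumed):
    //    shader.begin();
    //    glActiveTexture(GL_TEXTURE1);
    //    glBindTexture(GL_TEXTURE_3D, tex3d);
    //    shader.setUniform1i("tex3d", 1);
    //    ... draw ...
    //    shader.end();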

Forward and backward alpha blending for raymarching

When implementing raymarching by scanning a ray and accumulating the scene's colors, which may include semi-transparent colors (for example, a scene with clouds), it's required to properly combine colors using the alpha blending method. Here we discuss how to do it and obtain the corresponding formula. (So, here we assume that objects have "uniform" ambient lighting and some transparency, and all we need is to accumulate their colors.)

The forward alpha blending formula, used "everywhere" - in OpenGL, VJ software, photo and video editors - can be formulated in the following way:

Formula for "forward" alpha blending
----------------------------------
Let C be the current RGB color, c a new RGB color and alpha the new color's opacity (alpha is a float value from 0 to 1). Then the updating rule for C is the following:

    C = (1 - alpha) * C + alpha * c
----------------------------------

So, the more alpha, the more c affects
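As a worked illustration of this rule, here is a small C++/GLM sketch that accumulates ray samples back-to-front using exactly the updating formula above; the function name and the sample arrays are hypothetical.

    #include <glm/glm.hpp>
    #include <vector>

    glm::vec3 blendBackToFront(const std::vector<glm::vec3> &colors,
                               const std::vector<float> &alphas) {
        glm::vec3 C(0.0f);                        //accumulated color
        //iterate from the farthest sample to the nearest one
        for (int i = int(colors.size()) - 1; i >= 0; i--) {
            float a = alphas[i];
            C = (1.0f - a) * C + a * colors[i];   //C = (1 - alpha)*C + alpha*c
        }
        return C;
    }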

Getting maximum possible texture size and GPU memory available

Getting maximum possible texture size

Material is based on the OpenGL Wiki page: https://www.khronos.org/opengl/wiki/Textures_-_more

    int max_tex_size;
    glGetIntegerv(GL_MAX_TEXTURE_SIZE, &max_tex_size);
    cout << "GPU maximal 2D texture size: " << max_tex_size << endl;

For a 3D texture: GL_MAX_3D_TEXTURE_SIZE
For a cubemap texture: GL_MAX_CUBE_MAP_TEXTURE_SIZE

Getting total and available GPU memory for NVidia cards

Material is based on the article "How to Know the Graphics Memory Size and Usage In OpenGL" by JEGX: https://www.geeks3d.com/20100531/programming-tips-how-to-know-the-graphics-memory-size-and-usage-in-opengl/

    #define GL_GPU_MEM_INFO_TOTAL_AVAILABLE_MEM_NVX 0x9048
    #define GL_GPU_MEM_INFO_CURRENT_AVAILABLE_MEM_NVX 0x9049

    GLint total_mem_kb = 0;
    glGetIntegerv(GL_GPU_MEM_INFO_TOTAL_AVAILABLE_MEM_NVX, &total_mem_kb);
    cout << "Nvidia GPU total memory: " << total_mem_kb / 1024
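The excerpt is cut off, so here is a hedged sketch (with assumed variable names) that puts both NVX queries together. These enums are NVidia-specific: on other GPUs glGetIntegerv will simply raise GL_INVALID_ENUM and leave the values at 0.

    GLint total_mem_kb = 0;
    GLint cur_avail_mem_kb = 0;
    glGetIntegerv(GL_GPU_MEM_INFO_TOTAL_AVAILABLE_MEM_NVX, &total_mem_kb);
    glGetIntegerv(GL_GPU_MEM_INFO_CURRENT_AVAILABLE_MEM_NVX, &cur_avail_mem_kb);
    cout << "Nvidia GPU total memory: " << total_mem_kb / 1024 << " MB" << endl;
    cout << "Nvidia GPU available memory: " << cur_avail_mem_kb / 1024 << " MB" << endl;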

Computing ray origin and direction from Model View Projection matrices for raymarching

When performing raymarching (as well as raycasting and raytracing) using fragment shaders in OpenGL, it's required to compute the ray's origin and direction. Here we give a solution for finding them from the Model View Projection matrices in OpenGL, implemented in openFrameworks. Using it, you can combine raymarching/raycasting/raytracing with forward OpenGL rendering and so obtain hybrid rendering modes. Also, you can use it for creating VR applications based on raymarching or hybrid rendering. (Actually, we checked that this approach is correct by creating a VR rendering app for HTC Vive.) The algorithm can be used on any OpenGL platform with any language supporting matrix inversion (in our case it's C++ with the GLM library, included in openFrameworks 0.10.1). The approach is based on a hint by GClements in this discussion on the topic: https://community.khronos.org/t/ray-origin-through-view-and-projection-matrices/7257
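The excerpt stops at the reference, so here is a hedged C++/GLM sketch of the usual unprojection approach suggested in that thread (not necessarily the post's exact code): unproject the pixel's normalized device coordinates at the near and far planes through the inverse of the combined Model View Projection matrix, and take their difference as the ray direction. The function and parameter names are made up for illustration.

    #include <glm/glm.hpp>

    void rayFromMVP(const glm::mat4 &mvp, const glm::vec2 &ndc,
                    glm::vec3 &ray_origin, glm::vec3 &ray_direction) {
        glm::mat4 inv = glm::inverse(mvp);

        glm::vec4 near_p = inv * glm::vec4(ndc.x, ndc.y, -1.0f, 1.0f); //point on the near plane
        glm::vec4 far_p  = inv * glm::vec4(ndc.x, ndc.y,  1.0f, 1.0f); //point on the far plane
        near_p /= near_p.w;                                            //perspective divide
        far_p  /= far_p.w;

        ray_origin    = glm::vec3(near_p);
        ray_direction = glm::normalize(glm::vec3(far_p) - glm::vec3(near_p));
    }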