Posts from the “OpenGL” Category

Convert Sphere Vertices from Model Space to UV Space in GLSL

Vertices of the sphere in model space are expressed in 3D Cartesian coordinates, where each dimension ranges from -1 to 1.

A simple way to convert sphere vertices from model space to UV space is to map theta of a vertex in spherical coordinates to u, and the z value of the vertex in model space to v.

Converting theta of a vertex in spherical coordinates to the u value in UV space

Theta can be calculated using:

float theta = acos(x / sqrt(x * x + y * y));

The calculated theta lies in [0, PI], because positive and negative y values yield the same denominator (the radius), so acos cannot tell them apart. To expand theta to [0, 2*PI], we just need to mirror it when y is negative:

if (y < 0.0) theta = 2.0 * PI - theta;

Now we can convert theta to u:

float u = theta / (2.0 * PI);

The result can be visualized by writing u to the red channel of the fragment. Looking at the sphere from the top, we get the following result:

[Figure: the sphere viewed from the top, with u in the red channel]

Converting the z value of a vertex in model space to the v value in UV space

v can be easily calculated by remapping z from the domain [-1, 1] to [0, 1]:

float v = 0.5 * (z + 1.0);

Visualizing v in the blue channel of the fragment and looking at the sphere from the side:

[Figure: the sphere viewed from the side, with v in the blue channel]
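Putting the two parts together, here is a minimal GLSL sketch of the whole conversion (the function name sphereModelToUV and the PI constant are my own additions; the sketch assumes a unit sphere and ignores the poles, where x and y are both zero):

const float PI = 3.14159265358979;

// Convert a vertex on the unit sphere from model space to UV space.
vec2 sphereModelToUV(vec3 p)
{
    // theta around the z axis, expanded from [0, PI] to [0, 2*PI]
    float theta = acos(p.x / sqrt(p.x * p.x + p.y * p.y));
    if (p.y < 0.0) theta = 2.0 * PI - theta;

    float u = theta / (2.0 * PI);   // angle mapped to [0, 1]
    float v = 0.5 * (p.z + 1.0);    // z remapped from [-1, 1] to [0, 1]
    return vec2(u, v);
}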

Matrix for 2D Polynomial Surface

Given the polynomial coefficients of a 4 * 4 polynomial surface,

f(x,y) = p00 + p10*x + p01*y + p20*x^2 + p11*x*y + p02*y^2 + p30*x^3 + p21*x^2*y + p12*x*y^2 + p03*y^3 + p40*x^4 + p31*x^3*y + p22*x^2*y^2 + p13*x*y^3 + p04*y^4

how to conveniently compute f(x, y) in a shader?

If we just take a dot product of two vectors of 15 elements each, it is easy to make mistakes when composing the vectors. So, as an exercise, I used a matrix to represent the equation for a 2 * 2 polynomial:

f(x,y) = p00 + p10*x + p01*y + p20*x^2 + p11*x*y + p02*y^2

       = [1, y, y^2] [p00, p10, p20
                      p01, p11, 0
                      p02, 0,   0 ] [1, x, x^2]^T

Much cleaner, right? Now we can use a generic matrix to represent an n * n polynomial:

f(x, y) = YPX,

where P is an (n + 1) * (n + 1) matrix of the coefficients:

P = [p_00, p_10, p_20, ............., p_n0
     p_01, p_11, p_21, ......, p_(n-1)1, 0
     p_02, p_12, p_22, ..., p_(n-2)2, 0, 0
     ...
     p_0n, 0, 0, ......................, 0],

Y is a 1 * (n + 1) vector for y values:

Y = [1, y, y^2, ..., y^n],

X is an (n + 1) * 1 vector for x values:

X = [1, x, x^2, ..., x^n]^T.
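To answer the original question in a shader, here is a hedged GLSL sketch for the 2 * 2 case above (evalPoly2 is a hypothetical name; note that GLSL matrix constructors are column-major, so each group of three values below is one column of P):

// f(x, y) = Y * P * X for the 2 * 2 (quadratic) polynomial.
float evalPoly2(float x, float y,
                float p00, float p10, float p01,
                float p20, float p11, float p02)
{
    mat3 P = mat3(p00, p01, p02,   // first column of P
                  p10, p11, 0.0,   // second column
                  p20, 0.0, 0.0);  // third column
    vec3 Y = vec3(1.0, y, y * y);
    vec3 X = vec3(1.0, x, x * x);
    return dot(Y, P * X);
}

For the 4 * 4 surface at the top of this post, P would be 5 * 5, which is larger than GLSL's biggest built-in matrix (mat4), so the coefficients would have to be passed as a uniform array and summed in a loop instead.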

How Does Multisampling Work in the OpenGL Rendering Pipeline

After doing some reading and searching online, I finally got an idea of how multisampling works in the OpenGL pipeline.

The idea is that every primitive is sampled multiple times per pixel. When it comes time to present the final image, all of the samples for a pixel are resolved to determine the pixel's final color.

In the pipeline, multisampling works as follows:

1. The rasterizer generates separate depth and stencil values for each sample of a pixel.

2. When calculating the colors of the samples, the pipeline invokes the fragment program if any of the samples is covered by the primitive. In the fragment program, the color is calculated at the center of the pixel. We now get a single color from the fragment program, and this color is copied to all the samples covered by the primitive.

3. The colors of the samples are stored in a multisampled texture. Each sample may be covered by multiple primitives, but the final multisampled texture only stores the nearest one. Thus, when updating an existing sample, a depth test is executed to determine whether to update it.

4. Final stage: resolve the multisampled texture by averaging the samples of each pixel to generate a single color for that pixel (a GLSL sketch of this resolve step follows below).
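As a rough sketch of doing the resolve manually (colorMS and numSamples are placeholder names), a fragment shader can average the samples of the multisampled texture with texelFetch:

#version 150

uniform sampler2DMS colorMS;   // the multisampled color texture
uniform int numSamples;        // samples stored per pixel

out vec4 fragColor;

void main()
{
    ivec2 coord = ivec2(gl_FragCoord.xy);
    vec4 sum = vec4(0.0);
    // Fetch every sample of this pixel and average them.
    for (int i = 0; i < numSamples; ++i)
        sum += texelFetch(colorMS, coord, i);
    fragColor = sum / float(numSamples);
}

In practice, the same resolve is usually done with glBlitFramebuffer, blitting from a multisampled framebuffer to a single-sampled one.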