Posts from the “Computer Graphics” Category

Atmospheric Scattering Highlights

In short, atmospheric scattering is the process by which light is scattered away as it travels from a light source to a point. The light arriving at the point is the product of the light emitted at the source and the transmittance between the source and the point. Transmittance depends on the average atmospheric density (the optical depth) between the source and the point and on the scattering constants: it is the exponential of the negative product of the optical depth and the scattering constants.
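As a small formula sketch (my notation, not the post's: beta for the scattering constants, D(a, b) for the optical depth between points a and b), this is the relation described above:

$$ L_{\text{point}} = L_{\text{source}} \cdot T(\text{source}, \text{point}), \qquad T(a, b) = \exp\!\big(-\beta \, D(a, b)\big) $$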

Atmospheric scattering is used to simulate the color of the sky. Simulating sky color means integrating atmospheric scattering into the light's travel from the sun to a view point in the atmosphere. Specifically, the sky color seen along a view direction from a view point in the atmosphere is the integral of the light in-scattered toward the view point at each sample on the view ray (the ray that starts at the view point and is cast along the view direction). The light at each sample is the light in-scattered from the sun at that sample. Chaining this process with a ray-marching algorithm, the sky color for a specific view direction from any view point in the atmosphere can be approximated with the following steps:

  1. For each sample on the view ray:
  2. exp(-scattering constants * optical depth between the sun and the sample) -> transmittance between the sun and the sample
  3. exp(-scattering constants * optical depth between the sample and the view point) -> transmittance between the sample and the view point
  4. sun light * transmittance between the sun and the sample -> light arriving at the sample
  5. light arriving at the sample * phase function -> light redirected onto the view ray
  6. atmospheric density at the sample * scattering constants -> scattering constants at the sample
  7. light redirected onto the view ray * scattering constants at the sample -> light scattered by the sample toward the view point
  8. light scattered by the sample toward the view point * transmittance between the sample and the view point -> light arriving at the view point from the sample, i.e. the sky color contributed by the sample
  9. accumulate the sky color from the sample into the final sky color

After the ray-marching algorithm has iterated over all samples along the view direction, the final sky color is the value accumulated in step 9.
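Below is a minimal C++ sketch of that loop, for a single scattering constant and a single color channel. It only illustrates the steps, it is not the original implementation; the density, optical-depth, and phase helpers are simplified placeholders.

```cpp
// Minimal sketch of the ray-marching loop above (single scattering, one channel).
#include <cmath>

struct Vec3 { float x, y, z; };
static Vec3 add(Vec3 a, Vec3 b) { return {a.x + b.x, a.y + b.y, a.z + b.z}; }
static Vec3 mul(Vec3 a, float s) { return {a.x * s, a.y * s, a.z * s}; }

// Placeholder: density falls off exponentially with altitude (y).
static float density(Vec3 p) { return std::exp(-p.y / 8000.0f); }

// Placeholder: optical depth approximated by a short inner ray march.
static float opticalDepth(Vec3 a, Vec3 b) {
    const int N = 8;
    Vec3 step = mul(add(b, mul(a, -1.0f)), 1.0f / N);
    float stepLen = std::sqrt(step.x * step.x + step.y * step.y + step.z * step.z);
    float d = 0.0f;
    for (int i = 0; i < N; ++i)
        d += density(add(a, mul(step, i + 0.5f))) * stepLen;
    return d;
}

// Placeholder isotropic phase function.
static float phase() { return 1.0f / (4.0f * 3.14159265f); }

// Steps 1-9 from the post, for a single color channel.
float skyColor(Vec3 viewPoint, Vec3 viewDir, Vec3 sunPos, float sunLight,
               float scatteringConstant, float marchLength, int numSamples) {
    float color = 0.0f;
    float stepLen = marchLength / numSamples;
    for (int i = 0; i < numSamples; ++i) {                                   // step 1
        Vec3 sample = add(viewPoint, mul(viewDir, (i + 0.5f) * stepLen));
        float tSun  = std::exp(-scatteringConstant * opticalDepth(sunPos, sample));    // step 2
        float tView = std::exp(-scatteringConstant * opticalDepth(sample, viewPoint)); // step 3
        float atSample  = sunLight * tSun;                       // step 4
        float onRay     = atSample * phase();                    // step 5
        float sigma     = density(sample) * scatteringConstant;  // step 6
        float scattered = onRay * sigma * stepLen;               // step 7 (times step length for the integral)
        color += scattered * tView;                              // steps 8 and 9
    }
    return color;
}
```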

Simulate Hand-Drawn Strokes


Figure 1: a C++ program simulating hand-drawn strokes.


Figure 2: hand-drawn stroke reference, from the movie “The Little Prince”.

So, it’s possible to “draw” crappy strokes by programming! Figure 1 is generated by a C++ program I wrote to simulate the strokes in Figure 2.

The idea is simple: define the size of a stroke with a width and a height, then randomly generate the starting point and direction of the stroke within that size. Finally, draw the stroke into an image by rasterizing the line. While drawing the stroke, jitter the pixel being rasterized, and draw extra pixels extending to the sides of the jittered pixel with a random width. The intensities of these pixels are randomized.

Figure 1 was generated by drawing 128 strokes of size 400×50 in an 800×600 image.
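A minimal C++ sketch of the idea (not the original program; the jitter amount, side-width range, and intensity range here are arbitrary choices):

```cpp
#include <cstdint>
#include <cstdlib>
#include <cmath>
#include <vector>

struct Image {
    int w, h;
    std::vector<uint8_t> px;                 // grayscale, white background
    Image(int w_, int h_) : w(w_), h(h_), px(w_ * h_, 255) {}
    void set(int x, int y, uint8_t v) {
        if (x >= 0 && x < w && y >= 0 && y < h) px[y * w + x] = v;
    }
};

static float frand() { return std::rand() / (float)RAND_MAX; }

// Draw one stroke bounded by strokeW x strokeH: random start point and direction,
// step along the line, jitter each rasterized pixel, and widen it sideways by a
// random amount with randomized intensities.
void drawStroke(Image& img, int strokeW, int strokeH) {
    float x = frand() * (img.w - strokeW);
    float y = frand() * (img.h - strokeH);
    float angle = frand() * 2.0f * 3.14159265f;
    float dx = std::cos(angle), dy = std::sin(angle);
    int length = strokeW;                            // treat the stroke width as its length
    for (int i = 0; i < length; ++i) {
        // Jitter the pixel being rasterized.
        int jx = (int)(x + i * dx + (frand() - 0.5f) * 2.0f);
        int jy = (int)(y + i * dy + (frand() - 0.5f) * 2.0f);
        // Stretch sideways (perpendicular to the stroke) with a random width.
        int side = 1 + (int)(frand() * strokeH * 0.2f);
        for (int s = -side; s <= side; ++s) {
            uint8_t intensity = (uint8_t)(frand() * 128);    // randomized darkness
            img.set((int)(jx - s * dy), (int)(jy + s * dx), intensity);
        }
    }
}

int main() {
    Image img(800, 600);
    for (int i = 0; i < 128; ++i)
        drawStroke(img, 400, 50);
    // Writing the image out (e.g. as PGM) is omitted here.
}
```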

Global Illumination in Brain

June 10 > Light scattering effects seem closely related to impression. Reading references:

  • Learning from Failure: a Survey of Promising, Unconventional and Mostly Abandoned Renderers for ‘Dreams PS4’, a Geometrically Dense, Painterly UGC Game. http://advances.realtimerendering.com/s2015/mmalex_siggraph2015_hires_final.pdf

 

May 30 > That may be related to impressionism – understanding the whole image instead of separating it into pixels.

May 19 > While rendering global illumination takes a long time, why don’t we think about the essential effect of global illumination in our brain and only render what matters?

Write GLSL Code Fast in CPU

Since there is no compile-time error checking while writing GLSL shaders, it’s slow to find compilation errors by just reading the code. I found that we can write the GLSL code on the CPU, with the aid of the CPU compiler’s error checking, then move it to the GPU to run. PS: when writing on the CPU, the CPU side needs a copy of the GLSL built-in types and functions, such as vec2 and normalize.
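A minimal sketch of what that CPU-side copy could look like: just enough of vec2 and a couple of built-ins to let a small shader-style function compile in C++. It is not a complete GLSL emulation.

```cpp
// CPU-side stand-ins for a few GLSL built-ins, so shader snippets can be
// type-checked by a C++ compiler before being pasted back into the GLSL source.
#include <cmath>

struct vec2 {
    float x, y;
    vec2(float x_ = 0.0f, float y_ = 0.0f) : x(x_), y(y_) {}
};
inline vec2 operator+(vec2 a, vec2 b) { return vec2(a.x + b.x, a.y + b.y); }
inline vec2 operator*(vec2 a, float s) { return vec2(a.x * s, a.y * s); }
inline float dot(vec2 a, vec2 b) { return a.x * b.x + a.y * b.y; }
inline float length(vec2 a) { return std::sqrt(dot(a, a)); }
inline vec2 normalize(vec2 a) { float l = length(a); return vec2(a.x / l, a.y / l); }

// Example shader-style function that now compiles on the CPU.
inline float verticalAngleCos(vec2 dir, vec2 up) {
    return dot(normalize(dir), normalize(up));
}
```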

Light Bleeding of Soft Shadow Mip-Maps

Light bleeding happens in the umbra area, where the mipmap layers are filtered right away, so the inconsistent filter sizes break the accumulation law of blocked light in the filter areas – the blocked light does not sum to 1.

But the thing is, if we could prevent a fully blocked fragment from being filtered, we would fix light bleeding. Idea: check whether the fragment is fully blocked in each layer that blocks any light. If all such layers block the fragment fully, don’t filter.

The idea works in theory, but when I implemented it, light bleeding didn’t seem to be fixed :_(
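For reference, the check I mean looks roughly like the sketch below. It only illustrates the control flow; layerBlocksAnyLight, layerFullyBlocks, and filteredVisibility are hypothetical placeholders for the real soft-shadow mipmap lookups.

```cpp
// Sketch: skip filtering when the fragment is fully blocked in every layer
// that blocks any light. The helpers are placeholders, not the real lookups.
static bool layerBlocksAnyLight(int layer, float u, float v) {
    (void)layer; (void)u; (void)v;
    return true;   // placeholder: would read the mipmap layer's blocking info
}
static bool layerFullyBlocks(int layer, float u, float v, float fragmentDepth) {
    (void)layer; (void)u; (void)v; (void)fragmentDepth;
    return true;   // placeholder: would compare the fragment depth against the layer
}
static float filteredVisibility(float u, float v, float fragmentDepth) {
    (void)u; (void)v; (void)fragmentDepth;
    return 1.0f;   // placeholder: the normal filtered visibility lookup
}

float visibility(int numLayers, float u, float v, float fragmentDepth) {
    bool anyBlockingLayer = false;
    bool fullyBlockedInAllBlockingLayers = true;
    for (int layer = 0; layer < numLayers; ++layer) {
        if (!layerBlocksAnyLight(layer, u, v))
            continue;                                   // this layer blocks nothing here
        anyBlockingLayer = true;
        if (!layerFullyBlocks(layer, u, v, fragmentDepth)) {
            fullyBlockedInAllBlockingLayers = false;    // partially lit in this layer
            break;
        }
    }
    if (anyBlockingLayer && fullyBlockedInAllBlockingLayers)
        return 0.0f;                                    // umbra: skip filtering entirely
    return filteredVisibility(u, v, fragmentDepth);     // otherwise filter as usual
}
```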

Soft Shadow Mip-Maps vs Percentage-Closer Soft Shadows

So, I wanna present my MS thesis in a better way. My adviser came up with such a beautiful idea that I wasn’t smart enough to fully understand and enjoy until now. It’s artistic and beautiful.

So, what’s wrong with PCSS (Percentage-Closer Soft Shadows)? Blocker estimation. Blockers are estimated by a blocker search over a custom-defined search size. That custom-defined search size is arbitrary and doesn’t make sense: it pulls in wrong texels from the shadow map and counts them as blockers, which results in a wrong average blocker distance and eventually in wrong visibility.

So we fixed the blocker estimation by directly looking up the blocker’s depth from the corresponding texel’s value. That gives an explicit blocker depth. With the explicit blocker depth, we expect a penumbra size proportional to the blocker depth. There is a useful fact: penumbra size is proportional to the texel size of a mipmap layer. That means two things. One, a large penumbra can be filtered with a few large texels, which takes fewer samples than filtering with small texels; fewer samples means less computation. Two, since the blocker depth is proportional to the penumbra size, and the penumbra size is proportional to the texel size, we can find the mipmap layer whose texel size matches the explicit blocker depth. That mipmap layer is the one to filter for visibility, and the computation cost is constant in terms of sample lookups.
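A rough sketch of the mipmap-level selection this implies, assuming the usual PCSS-style penumbra estimate (light size times (receiver depth minus blocker depth) over blocker depth); the names and the exact formula are my illustration, not the thesis’ notation:

```cpp
// Sketch: pick the soft-shadow mipmap level whose texel size matches the
// estimated penumbra width.
#include <algorithm>
#include <cmath>

float chooseMipLevel(float receiverDepth, float blockerDepth, float lightSize,
                     float level0TexelSize, int numLevels) {
    // Penumbra width grows with the receiver-blocker separation (PCSS-style estimate).
    float penumbra = lightSize * (receiverDepth - blockerDepth) / blockerDepth;
    // Each mip level doubles the texel size, so the matching level is a log2.
    float level = std::log2(std::max(penumbra / level0TexelSize, 1.0f));
    return std::min(level, (float)(numLevels - 1));
}
```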

Unit Test in Rendering

Although it’s impossible to directly test the result of a rendering algorithm (the result is an image, and every pixel value carries some bias depending on the algorithm and on the hardware’s numeric accuracy), we can use unit tests for the atomic functions of the rendering algorithm. For example, the transformation functions are testable. Randomness, such as Russian roulette in ray marching, can be visualized so we can see whether the distribution is even, or analyzed with a Fourier transform to see whether the variation is small enough.
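For instance, a tiny test of a transformation function could look like the sketch below, where rotate90 stands in for one of the renderer’s atomic functions and plain assert stands in for a real test framework.

```cpp
// Minimal unit-test sketch for an atomic transformation function.
#include <cassert>
#include <cmath>

struct Vec2 { float x, y; };

// Rotate a 2D point 90 degrees counter-clockwise around the origin.
Vec2 rotate90(Vec2 p) { return {-p.y, p.x}; }

static bool nearlyEqual(float a, float b, float eps = 1e-5f) {
    return std::fabs(a - b) < eps;   // allow for floating-point bias
}

int main() {
    Vec2 r = rotate90({1.0f, 0.0f});
    assert(nearlyEqual(r.x, 0.0f) && nearlyEqual(r.y, 1.0f));

    // Rotating four times should return to the start.
    Vec2 q = rotate90(rotate90(rotate90(rotate90({0.3f, -0.7f}))));
    assert(nearlyEqual(q.x, 0.3f) && nearlyEqual(q.y, -0.7f));
    return 0;
}
```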

Debugging GLSL Shader in Blender

The only (primitive) way is to visualize the value being tested by writing it to one or more fragment-color channels.

The same struggle as two years ago with soft-shadow algorithms in GLSL, but it doesn’t feel like much of a struggle now; I see it as a challenge instead. Contemplate. Test possible steps and guess what’s wrong.

:_(

Space Transformation between Atmospheric Points During Ray Marching

[Figure: points p and p_prime, and the angles theta and theta_prime, in p’s space]

Based on the definition of the coordinate system of a ray-marching point in the atmosphere from a previous article, this article talks about how to transform from one point’s space to another’s during ray marching.

Take a given point p whose coordinate in its own space is (0, r + h). Starting from p, if we move to p_prime along a direction with vertical angle theta (in p’s space), we get p_prime in p’s space. How do we then get p_prime’s coordinate, and the direction, in p_prime’s own space?

  • 1) Compute p_prime’s coordinate in p’s space. Given p = (x, y), the direction (x_d, y_d) = (sin(theta), cos(theta)) (theta is measured from p’s local up axis (0, 1), consistent with how theta_prime is recovered in step 3), and the distance d to move to p_prime, we get p_prime’s coordinate in p’s space by: (x_prime, y_prime) = p + d * direction = (x, y) + d * (x_d, y_d) = (x + d * x_d, y + d * y_d).
  • 2) Transform p_prime’s coordinate from p’s space to p_prime’s space. Get the distance from p_prime (in p’s space) to the origin: distance_to_origin = sqrt(x_prime^2 + y_prime^2). Then p_prime’s coordinate in its own space is: (x_prime_own, y_prime_own) = (0, distance_to_origin).
  • 3) Compute the direction in p_prime’s space. We just need theta_prime as shown in the figure: the angle between the travel direction and p_prime’s local up axis. Since (x_d, y_d) is a unit vector, theta_prime = acos(dot_product((x_d, y_d), (x_prime, y_prime) / distance_to_origin)).

So that’s the basic idea. In practice (i.e. in the GLSL shader), the input is p represented by a normalized altitude and a normalized vertical angle, and the output should be p_prime with its normalized altitude and normalized vertical angle in its own space. To do this, we need to:

  • 4) Compute p’s coordinate and the direction in p’s space, then
  • 5) Follow the above three steps to get p_prime’s coordinate and the direction in p_prime’s space.
  • 6) Compute p_prime’s normalized altitude and the normalized vertical angle with the result from the previous step.

Now let’s do 4) and 6).

  • 4) Compute p’s coordinate and the direction in p’s space.
    • Compute the earth’s relative radius from the ratio of the earth’s radius to the outer atmosphere’s depth, times the normalized outer-atmospheric depth: ratio * 1.0 = 106 (= 6360 / (6420 - 6360); see Simulating the Colors of Sky).
    • Compute p’s coordinate: (x, y) = (0, 106 + normalized-altitude).
    • Compute theta (domain [0, PI]) from the normalized vertical angle (domain [0, 1]): theta = normalized-vertical-angle * PI.
  • 6) Compute p_prime’s normalized altitude and normalized vertical angle.
    • p_prime’s normalized altitude = y_prime_own (from 2)) – earth’s relative radius = y_prime_own – 106.
    • normalized vertical angle = theta_prime / PI (theta_prime is calculated in 3)).

In conclusion, the steps for computing p_prime’s normalized altitude and normalized vertical angle in its own space from p’s normalized altitude and normalized vertical angle are:

4), 1), 2), 3), 6).
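Putting those steps together, a rough C++ sketch (in the spirit of the GLSL-on-CPU note above) could look like this; the names are mine and the earth’s relative radius is the value quoted above, so treat it as an illustration rather than the original shader:

```cpp
#include <cmath>

const float PI = 3.14159265f;
const float EARTH_RELATIVE_RADIUS = 106.0f;   // 6360 / (6420 - 6360), as above

struct PointState {
    float normalizedAltitude;       // altitude in units of the atmosphere depth
    float normalizedVerticalAngle;  // vertical angle / PI, in [0, 1]
};

PointState marchTo(PointState p, float d) {
    // 4) p's coordinate and direction in p's own space.
    float y = EARTH_RELATIVE_RADIUS + p.normalizedAltitude;
    float theta = p.normalizedVerticalAngle * PI;           // measured from local up
    float xd = std::sin(theta), yd = std::cos(theta);

    // 1) p_prime's coordinate in p's space.
    float xPrime = 0.0f + d * xd;
    float yPrime = y + d * yd;

    // 2) p_prime's coordinate in its own space: (0, distance to origin).
    float distToOrigin = std::sqrt(xPrime * xPrime + yPrime * yPrime);

    // 3) theta_prime: angle between the travel direction and p_prime's local up.
    float cosThetaPrime = (xd * xPrime + yd * yPrime) / distToOrigin;
    float thetaPrime = std::acos(cosThetaPrime);

    // 6) Back to normalized altitude and normalized vertical angle.
    PointState pPrime;
    pPrime.normalizedAltitude = distToOrigin - EARTH_RELATIVE_RADIUS;
    pPrime.normalizedVerticalAngle = thetaPrime / PI;
    return pPrime;
}
```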

Simulate the Altitude and Vertical Angle of Eye Rays in the Outer Atmosphere with a Sky Dome

In practice, the sky dome is a hemisphere (left, in orange), and the outer atmosphere is part of a sphere (right, in orange):

[Figure: the sky dome as a hemisphere (left) and the outer atmosphere as part of a sphere (right)]

#1 In practice, what we have is on the left: a sky dome holding vertices of the sky and the position of the eye.

#2 What we want from #1 is the altitude and the vertical angle ‘theta’ of the eye, as shown on the right. So, how do we get the right from the left?

#3 The simple answer to #2 is to take the y value of the eye position in sky-dome space as the altitude, and to take the vertical angle of the eye ray that starts from the eye position (in sky-dome space) toward a sky vertex. This gives the following picture of the sky colors, and raises a problem:

[Figure: the resulting sky colors on the sky dome]

When the eye is located at the sky dome center, with a given field of view (fov), each fragment’s color is obtained by sampling the center of the fragment. This gives detailed sky color when the eye looks towards the apex of the sky dome, and coarse sky color when it looks towards the center of the sky dome, because the ellipse-shaped sky is sampled evenly by same-sized fragments. The color detail of the sky decreases as the eye direction goes from the apex to the center. The same situation applies when the eye is located above the sky dome center.

One solution to enrich the color detail close to the center of the sky dome is to take extra samples for the fragments close to the center, with the number of samples per fragment proportional to the angle between the eye ray and the vertical (Y) axis. However, this extra sampling is optional, as the sky color quality may already be good enough with the regular fragment sampling in GLSL.

#4 With the discussion in #3, we can get the altitude and vertical angle of an eye ray by:

  • #4.1 Transforming the eye position in world space to the sky-dome space
    • #4.1.1 Compute the matrix transforming world-space point to the sky-dome space.
    • #4.1.2 Multiply the eye position by the matrix to get the eye position in the sky-dome space.
  • #4.2 Getting the eye ray in sky-dome space for a sky fragment
    • #4.2.1 Subtract the eye position (in sky-dome space) from the sky fragment’s position (in sky-dome space) to get the eye-ray direction
    • #4.2.2 Normalize the eye ray
  • #4.3 Getting the altitude of the eye ray: the distance between the eye position and the origin of the sky-dome space (in sky-dome space)
  • #4.4 Getting the vertical angle of the eye ray: the arccosine of the dot product of the eye ray (normalized, in sky-dome space) with the vertical axis (eye position in sky-dome space minus the origin of the sky-dome space, normalized).
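A small C++ sketch of #4.1 to #4.4, again in the GLSL-on-CPU style. worldToSkyDome applies the translate-and-scale directly (its matrix form is derived below), and the names are mine:

```cpp
#include <cmath>

struct vec3 { float x, y, z; };
static vec3 sub(vec3 a, vec3 b) { return {a.x - b.x, a.y - b.y, a.z - b.z}; }
static float dot(vec3 a, vec3 b) { return a.x * b.x + a.y * b.y + a.z * b.z; }
static float length(vec3 a) { return std::sqrt(dot(a, a)); }
static vec3 normalize(vec3 a) { float l = length(a); return {a.x / l, a.y / l, a.z / l}; }

// #4.1: world -> sky-dome space is "translate by -domeCenter, then scale by 1/r".
static vec3 worldToSkyDome(vec3 p, vec3 domeCenter, float domeRadius) {
    vec3 t = sub(p, domeCenter);
    return {t.x / domeRadius, t.y / domeRadius, t.z / domeRadius};
}

void eyeRayAltitudeAndAngle(vec3 eyeWorld, vec3 fragWorld,
                            vec3 domeCenter, float domeRadius,
                            float& altitude, float& verticalAngle) {
    vec3 eye  = worldToSkyDome(eyeWorld,  domeCenter, domeRadius);    // #4.1.2
    vec3 frag = worldToSkyDome(fragWorld, domeCenter, domeRadius);
    vec3 eyeRay = normalize(sub(frag, eye));                          // #4.2
    altitude = length(eye);                                           // #4.3: distance to the origin
    // #4.4: vertical axis = eye position minus origin, normalized;
    // fall back to the Y axis if the eye sits exactly at the dome center.
    vec3 up = (length(eye) > 1e-6f) ? normalize(eye) : vec3{0.0f, 1.0f, 0.0f};
    verticalAngle = std::acos(dot(eyeRay, up));
}
```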

So far, the only problem left is computing the matrix as discussed in #4.1.1. The known facts are:

  • Eye position in world space: X_e (x_e, y_e, z_e)
  • Sky dome’s position in world space: X_s (x_s, y_s, z_s)
  • Sky dome’s radius in world space: r

Then the eye position in sky-dome space is (X_e – X_s) / r. The sky-dome space shares the same axis directions as the world space. The result can be obtained by multiplying a scaling matrix S and a translation matrix T, i.e. ST, where:

  • T = [1, 0, 0, -x_s; 0, 1, 0, -y_s; 0, 0, 1, -z_s; 0, 0, 0, 1]
  • S = [1/r, 0, 0, 0; 0, 1/r, 0, 0; 0, 0, 1/r, 0; 0, 0, 0, 1]
  • Hence, ST = [1/r, 0, 0, -x_s/r; 0, 1/r, 0, -y_s/r; 0, 0, 1/r, -z_s/r; 0, 0, 0, 1]

T and S here are written row-major. For GLSL the matrices should be converted to column-major, which gives:

  • T =
    [1, 0, 0, 0,
     0, 1, 0, 0,
     0, 0, 1, 0,
     -x_s, -y_s, -z_s, 1]
  • S =
    [1/r, 0, 0, 0,
     0, 1/r, 0, 0,
     0, 0, 1/r, 0,
     0, 0, 0, 1]
  • ST =
    [1/r, 0, 0, 0,
     0, 1/r, 0, 0,
     0, 0, 1/r, 0,
     -x_s/r, -y_s/r, -z_s/r, 1]
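For completeness, a small sketch of building that column-major ST matrix in code, as a plain float[16] laid out in the memory order a GLSL mat4 expects (e.g. for glUniformMatrix4fv with transpose = GL_FALSE); the function name is mine:

```cpp
#include <cstdio>

// Build the world -> sky-dome matrix ST in column-major order.
void buildWorldToSkyDome(float xs, float ys, float zs, float r, float m[16]) {
    float invR = 1.0f / r;
    float columnMajor[16] = {
        invR, 0.0f, 0.0f, 0.0f,                    // column 0
        0.0f, invR, 0.0f, 0.0f,                    // column 1
        0.0f, 0.0f, invR, 0.0f,                    // column 2
        -xs * invR, -ys * invR, -zs * invR, 1.0f   // column 3: translation
    };
    for (int i = 0; i < 16; ++i) m[i] = columnMajor[i];
}

int main() {
    float m[16];
    buildWorldToSkyDome(10.0f, 0.0f, -5.0f, 2.0f, m);
    // Print column 3 to check the -X_s / r translation part.
    std::printf("%.2f %.2f %.2f %.2f\n", m[12], m[13], m[14], m[15]);
}
```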