Posts from the “Sky Rendering” Category

Draw Strokes in Sky Rendering in GLSL

General framework

  • Pass a list of strokes to the shader as an array
  • For each fragment, compute its stroke force by:
    • Transform the fragment coordinate to the stroke coordinate system (similar to uv space in texture mapping)
    • Find the strokes the fragment belongs to
    • For each stroke the fragment belongs to:
      • Compute the fragment’s distance to the main stroke
      • If the distance is within a certain threshold of the width of the main stroke, randomize the fragment’s stroke force. Otherwise, the fragment has no stroke force.
      • Accumulate the current stroke’s force into the fragment’s total stroke force.
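
A minimal GLSL-style sketch of this framework, assuming a small uniform array of strokes expressed in the stroke coordinate system, a segment-distance helper, and a hash-based randomization (all names are illustrative, not from the original post):

```glsl
#define MAX_STROKES 16

struct Stroke {
    vec2 a;      // start of the main stroke (in stroke space)
    vec2 b;      // end of the main stroke (in stroke space)
    float width; // width threshold of the main stroke
};

uniform Stroke u_strokes[MAX_STROKES];
uniform int u_strokeCount;

// cheap hash used to randomize the stroke force
float rand(vec2 seed)
{
    return fract(sin(dot(seed, vec2(12.9898, 78.233))) * 43758.5453);
}

// distance from point p to the segment ab (the main stroke)
float distToStroke(vec2 p, vec2 a, vec2 b)
{
    vec2 ab = b - a;
    float t = clamp(dot(p - a, ab) / dot(ab, ab), 0.0, 1.0);
    return length(p - (a + t * ab));
}

// accumulate the stroke force of every stroke the fragment belongs to
float strokeForce(vec2 fragCoordInStrokeSpace)
{
    float force = 0.0;
    for (int i = 0; i < MAX_STROKES; ++i) {
        if (i >= u_strokeCount) break;
        float d = distToStroke(fragCoordInStrokeSpace, u_strokes[i].a, u_strokes[i].b);
        if (d < u_strokes[i].width) {
            // within the width threshold: randomized force; otherwise no force
            force += rand(fragCoordInStrokeSpace + float(i));
        }
    }
    return force;
}
```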

Transform the fragment coordinate to the stroke coordinate system (similar to uv system)

https://gypsyshen.wordpress.com/2017/07/07/convert-sphere-vertices-from-model-space-to-uv-space-in-glsl/

 

Color-Emotion References

Blue Color from Apex After Sunset?

[Figure: photo of the sky after sunset (IMG_6088)]

As shown in the above figure, the color of the sky after sunset, from the apex to the center, goes blue, orange, blue. With only the sun as a light source, there shouldn’t be blue at the apex. Because, as shown in the following figure, sunlight is attenuated as the photons’ traveling distance increases (e.g. with the green-blue sun in the figure, the traveling distance increases from a to b), which shifts the color from blue toward red. My question is, why is there still blue at the apex? Are there light sources other than the sun?

[Figure: sketch of sunlight traveling distances a and b through the atmosphere (FullSizeRender)]

Space Transformation between Atmospheric Points During Ray Marching

[Figure: coordinate systems of ray marching points p and p_prime (img_5154)]

Based on the definition of the coordinate system of a ray marching point in the atmosphere from a previous article, this article describes how to transform coordinates from one point’s space to another during ray marching.

For a given point p, the coordinate in its own space is (0, r + h). Starting from p, if we move to p_prime along a direction with vertical angle theta (in p’s space), we get p_prime in p’s space. How do we then get p_prime’s coordinate and the direction in p_prime’s own space?

  • 1) Compute p_prime’s coordinate in p’s space. Given p = (x, y), the direction (x_d, y_d) = (sin(theta), cos(theta)) (theta is measured from the vertical axis), and the distance d to move to p_prime, we can get p_prime’s coordinate in p’s space by: (x_prime, y_prime) = p + d * direction = (x, y) + d * (x_d, y_d) = (x + d * x_d, y + d * y_d).
  • 2) Transform p_prime’s coordinate from p’s space to p_prime’s space. Get the distance from p_prime in p’s space to the origin: distance_to_origin = sqrt(x_prime^2 + y_prime^2). Then p_prime’s coordinate in its own space is: (x_prime_own, y_prime_own) = (0, distance_to_origin).
  • 3) Compute the direction in p_prime’s space. To get the direction in p_prime’s space, we just need theta_prime as shown in the figure. theta_prime can be obtained by theta_prime = acos(dot_product((x_d, y_d), (x_prime, y_prime) / distance_to_origin)), i.e. the angle between the direction and the normalized position of p_prime.
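
Here is a minimal GLSL-style sketch of steps 1) to 3), assuming 2D vectors, theta measured from the vertical (y) axis, and illustrative names of my own:

```glsl
struct PointAndAngle {
    vec2 coord;  // coordinate in the point's own space: (0, distance to origin)
    float theta; // vertical angle in the point's own space
};

// Move from p (in p's space) by distance d along the direction with
// vertical angle theta, and re-express the result in p_prime's own space.
PointAndAngle movePoint(vec2 p, float theta, float d)
{
    vec2 dir = vec2(sin(theta), cos(theta)); // theta measured from the vertical axis

    // 1) p_prime in p's space
    vec2 pPrime = p + d * dir;

    // 2) p_prime in its own space: (0, distance to origin)
    float distToOrigin = length(pPrime);

    // 3) vertical angle in p_prime's space: angle between the direction
    //    and the normalized position of p_prime
    float thetaPrime = acos(dot(dir, pPrime / distToOrigin));

    PointAndAngle result;
    result.coord = vec2(0.0, distToOrigin);
    result.theta = thetaPrime;
    return result;
}
```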

So that’s the basic idea. In practice (i.e. in the GLSL shader), the input is p represented by a normalized altitude and a normalized vertical angle. The output should be p_prime with its normalized altitude and normalized vertical angle in its own space. To do this, we need to:

  • 4) Compute p’s coordinate and the direction in p’s space, then
  • 5) Follow the above three steps to get p_prime’s coordinate and the direction in p_prime’s space.
  • 6) Compute p_prime’s normalized altitude and the normalized vertical angle with the result from the previous step.

Now let’s do 4) and 6).

  • 4) Compute p’s coordinate and the direction in p’s space.
    • Compute earth’s relative radius from the ratio of the earth radius to the outer atmospheric depth, scaled by the normalized outer atmospheric depth: ratio * 1.0 = 106 (= 6360/(6420-6360), see Simulating the Colors of the Sky).
    • Compute p’s coordinate (x, y) = (0, 106 + normalized-altitude).
    • Compute theta (domain [0, PI]) from the normalized vertical angle (domain [0, 1]): theta = normalized-vertical-angle * PI.
  • 6) Compute p_prime’s normalized altitude and the normalized vertical angle.
    • p_prime’s normalized altitude = y_prime_own (i.e. from 2)) – earth’s relative radius = y_prime_own – 106.
    • normalized vertical angle = theta_prime / PI (NOTE: theta_prime is calculated in 3)).

In conclusion, the steps for computing p_prime’s normalized altitude and normalized vertical angle in its own space from p’s normalized altitude and normalized vertical angle are:

4), 1), 2), 3), 6).
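
Putting the whole procedure together, here is a minimal GLSL-style sketch, assuming the relative earth radius of 106 derived above and the movePoint helper sketched earlier (names are illustrative):

```glsl
const float EARTH_RELATIVE_RADIUS = 106.0; // 6360 / (6420 - 6360)
const float PI = 3.14159265;

// Input:  p's normalized altitude [0, 1], normalized vertical angle [0, 1],
//         and the marching distance d in the same relative units.
// Output: p_prime's normalized altitude (x) and normalized vertical angle (y)
//         in p_prime's own space.
vec2 marchToNextPoint(float normAltitude, float normAngle, float d)
{
    // 4) p's coordinate and direction in p's space
    vec2 p = vec2(0.0, EARTH_RELATIVE_RADIUS + normAltitude);
    float theta = normAngle * PI;

    // 1)-3) move to p_prime and re-express it in its own space
    PointAndAngle pPrime = movePoint(p, theta, d);

    // 6) back to normalized altitude and normalized vertical angle
    return vec2(pPrime.coord.y - EARTH_RELATIVE_RADIUS, pPrime.theta / PI);
}
```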

Simulate the Altitude and Vertical Angle of Eye Rays in the Outer Atmosphere with a Sky Dome

Sky dome in practice is a hemisphere (left in orange), and the outer atmosphere is part of a sphere (right in orange):

[Figure: sky dome hemisphere (left) and outer atmosphere sphere (right) (image1)]

#1 In practice, what we have is on the left: a sky dome holding vertices of the sky and the position of the eye.

#2 What we want from #1 is the altitude and vertical angle ‘theta’ of the eye, as shown on the right. So how do we get from the left to the right?

#3 The simple answer to #2 is taking the y value of the eye position in sky-dome space as the altitude, and taking the vertical angle of the eye ray that starts from the eye position (in sky-dome space) toward a sky vertex. This would result in the following graph as the colors of the sky, and raises a problem:

[Figure: resulting sky colors and fragment sampling (img_4998)]

When the eye is located at the sky dome center, with a given field of view (fov), the fragment color is obtained by sampling the center of the fragment. This gives detailed sky color when the eye is looking towards the apex of the sky dome, and coarse sky color when looking towards the center of the sky dome, because the ellipse-shaped sky is sampled evenly by same-sized fragments. The color detail of the sky decreases as the eye direction moves from the apex to the center. The same applies when the eye is located above the sky dome center.

One solution to enrich the color details close to the center of the sky dome is supersampling the fragments close to the center of the sky dome, with the number of samples per fragment proportional to the angle between the eye ray and the vertical axis, i.e. the Y axis. However, this supersampling is optional, as the quality of the sky color can be good enough with the regular per-fragment sampling in GLSL.

#4 With the discussion in #3, we can get the altitude and vertical angle of an eye ray by:

  • #4.1 Transforming the eye position in world space to the sky-dome space
    • #4.1.1 Compute the matrix transforming world-space point to the sky-dome space.
    • #4.1.2 Multiply the eye position by the matrix to get the eye position in the sky-dome space.
  • #4.2 Getting the eye ray in sky-dome space for a sky fragment
    • #4.2.1 Subtract the eye position (in sky-dome space) from the position of the sky fragment (in sky-dome space) to get the eye ray
    • #4.2.2 Normalize the eye ray
  • #4.3 Getting the altitude of the eye ray: the distance between the eye position and the origin of the sky-dome space (in sky-dome space)
  • #4.4 Getting the vertical angle of the eye ray: the arccosine of the dot product of the eye ray (normalized, in sky-dome space) with the vertical axis (eye position in sky-dome space – origin of sky-dome space; normalized, in sky-dome space).
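
A minimal GLSL-style sketch of #4.1 to #4.4, assuming the world-to-sky-dome matrix (derived below) is passed in as a uniform and the sky fragment position is already in sky-dome space (names are my own):

```glsl
uniform mat4 u_worldToSkyDome; // the ST matrix derived below
uniform vec3 u_eyePosWorld;    // eye position in world space

// Returns vec2(altitude, vertical angle) of the eye ray toward one sky fragment.
vec2 eyeRayAltitudeAndAngle(vec3 fragPosSkyDome)
{
    // #4.1 eye position in sky-dome space
    vec3 eyePos = (u_worldToSkyDome * vec4(u_eyePosWorld, 1.0)).xyz;

    // #4.2 eye ray from the eye toward the sky fragment, normalized
    vec3 eyeRay = normalize(fragPosSkyDome - eyePos);

    // #4.3 altitude: distance from the eye to the sky-dome origin
    float altitude = length(eyePos);

    // #4.4 vertical angle: angle between the eye ray and the vertical axis
    //      (eye position minus origin, normalized)
    vec3 up = normalize(eyePos);
    float verticalAngle = acos(dot(eyeRay, up));

    return vec2(altitude, verticalAngle);
}
```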

So far, the only problem left is computing the matrix as discussed in #4.1.1. The known facts are:

  • Eye position in world space: X_e (x_e, y_e, z_e)
  • Sky dome’s position in world space: X_s (x_s, y_s, z_s)
  • Sky dome’s radius in world space: r

Then the eye position in sky-dome space is (X_e – X_s) / r. The sky-dome space shares the same axis directions as the world space. The result can be obtained by multiplying a scaling matrix S and a translation matrix T, i.e. ST, where:

  • T = [1, 0, 0, -x_s; 0, 1, 0, -y_s; 0, 0, 1, -z_s; 0, 0, 0, 1]
  • S = [1/r, 0, 0, 0; 0, 1/r, 0, 0; 0, 0, 1/r, 0; 0, 0, 0, 1]
  • Hence, ST = [1/r, 0, 0, -x_s/r; 0, 1/r, 0, -y_s/r; 0, 0, 1/r, -z_s/r; 0, 0, 0, 1]

T and S in this case are written in row-major form. In GLSL the matrices should be stored in column-major order, which gives:

  • T = [1, 0, 0, 0; 0, 1, 0, 0; 0, 0, 1, 0; -x_s, -y_s, -z_s, 1]
  • S = [1/r, 0, 0, 0; 0, 1/r, 0, 0; 0, 0, 1/r, 0; 0, 0, 0, 1]
  • ST = [1/r, 0, 0, 0; 0, 1/r, 0, 0; 0, 0, 1/r, 0; -x_s/r, -y_s/r, -z_s/r, 1]
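
Since GLSL’s mat4 constructor takes its arguments in column-major order, the combined ST matrix can be built directly. A minimal sketch with illustrative names:

```glsl
// Build the world-to-sky-dome matrix ST, with skyDomePos = X_s and skyDomeRadius = r.
mat4 worldToSkyDome(vec3 skyDomePos, float skyDomeRadius)
{
    float invR = 1.0 / skyDomeRadius;
    return mat4(
        invR, 0.0,  0.0,  0.0,                                                 // column 0
        0.0,  invR, 0.0,  0.0,                                                 // column 1
        0.0,  0.0,  invR, 0.0,                                                 // column 2
        -skyDomePos.x * invR, -skyDomePos.y * invR, -skyDomePos.z * invR, 1.0  // column 3
    );
}
```

Multiplying this matrix with a world-space position in homogeneous coordinates gives (X_e – X_s) / r, as required.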

Altitude of Ray Marching Point for Optical Depth Integration in Atmospheric Rendering

The context of this article is based on the parameter calculation mentioned in the sky rendering article where most concepts stem from O’Neil’s atmospheric rendering article.

[Figure: ray marching samples along a ray from p_0 to p_1 through the atmosphere (fullsizerender-7)]

Optical depth is an integral of the out-scattering along a ray that starts at a given altitude h_0 with a direction deviated from the vertical direction by an angle theta. The integration is done by sampling the ray from its starting point p_0 to its end point p_1, which is the intersection of the ray with the outer atmosphere, i.e. the sky dome. For every sampling point p_i, the altitude needs to be estimated; that is the topic of this article.

Assuming the coordinate system is anchored at the starting point p_0, we can set up the initial coordinate system by placing p_0 right above the earth. So the coordinate of p_0 is (0, r + h_0). For any sampling point p_i, the coordinate is p_0 + i * ray_marching_interval * ray_direction, which produces (x, y). With (x, y), the altitude of p_i is sqrt(x*x + y*y) – r.

Note that the domain of the altitude in this context is [0, 1]. So the earth radius needs to be scaled by the ratio of the earth radius to the atmospheric thickness. According to the Simulating the Colors of the Sky blog, the ratio is 6360/(6420-6360) = 106. So the earth radius used in this context is 106 * 1.0 = 106, where 1.0 is the maximum value of the altitude domain.
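
A minimal GLSL-style sketch of this altitude estimate, assuming the relative earth radius of 106 and illustrative names:

```glsl
const float EARTH_RELATIVE_RADIUS = 106.0; // 6360 / (6420 - 6360)

// Altitude of the i-th ray marching sample, in the [0, 1] domain.
// p0 is (0, EARTH_RELATIVE_RADIUS + h_0); rayDir is the unit ray direction
// deviated from the vertical by theta, i.e. (sin(theta), cos(theta)).
float sampleAltitude(vec2 p0, vec2 rayDir, float rayMarchingInterval, int i)
{
    vec2 p = p0 + float(i) * rayMarchingInterval * rayDir;
    return length(p) - EARTH_RELATIVE_RADIUS;
}
```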

GLSL Sky Shading on Blender

The simplest way is using a sky dome, i.e. a sphere or hemisphere, and coloring it by linearly interpolating two gradient colors according to the fragment’s height, i.e. the z axis. The gradient colors consist of the apex color and the center color.

Here’s the result:

[Figure: sky gradient from linear height interpolation (sky)]

Instead of linear interpolation based on height, using the eye angle above sea level (a) is closer to reality: apex * sin(a) + center * cos(a):

[Figure: sky gradient from angle-based blending (sky-refine)]
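
A minimal GLSL-style fragment sketch of this angle-based blend, assuming z is up (as in the height-based version above) and that the normalized eye-to-fragment direction is available (names are my own):

```glsl
uniform vec3 u_apexColor;
uniform vec3 u_centerColor;

// eyeDir: normalized direction from the eye toward the sky fragment, with z up.
vec3 skyColor(vec3 eyeDir)
{
    float a = asin(clamp(eyeDir.z, 0.0, 1.0)); // eye angle above sea level
    return u_apexColor * sin(a) + u_centerColor * cos(a);
}
```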

Although using the cosine and sine of the eye angle makes a nice blend, it doesn’t take into account the fact that light is scattered based on its traveling distance from the outer atmosphere to the eye.

So far the vertex blending trick has been working well. But when it comes to light scattering, the theory gets a bit more complex. The discretized light transport process goes as follows:

[Figure: discretized in-scattering along the camera ray (fullsizerender-2)]
  • The light along the camera ray is integrated by evaluating the in-scattered light at each sample point p
  • At each point p, calculate the in-scattered light by accounting for the light scattered away on the path from the sun to point p, and from point p to the camera. The phase function is used to attenuate the scattered light towards the camera at the very beginning. Here’s the in-scattering equation:
[Equation image: the in-scattering equation (0257equ01)]

Image Credit: Sean O’Neil, “Accurate Atmospheric Scattering”, GPU Gems 2, which includes the phase function F(theta, g) and the out-scattering function t(P_aP_b, lambda).
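
For reference, a LaTeX sketch of the in-scattering equation as I read it from O’Neil’s chapter (the notation may differ slightly from the original image):

$$ I_V(\lambda) = I_S(\lambda)\, K(\lambda)\, F(\theta, g) \int_{P_a}^{P_b} \exp\big(-t(P P_c, \lambda) - t(P P_a, \lambda)\big)\, ds $$

where I_S is the sunlight intensity, K(lambda) the scattering constant, and the integral runs over the sample points P along the camera ray from P_a to P_b.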

However… I don’t think it’s right to attenuate the entire light by the phase function. The phase function should only be used to attenuate light when the light direction deviates from the direction to the camera:

[Figure: phase function applied only when the light direction deviates from the camera direction (fullsizerender-3)]

Evaluating the two out-scattering integrals and the one in-scattering integral per sample point during rendering is expensive. The number of evaluations per camera ray is N_{in-scattering} * (N_{out-scattering_pp_c} + N_{out-scattering_pp_a}). Considering the wavelengths of the three color channels (red, green, blue) and the two scattering types (Rayleigh and Mie scattering), the computation per camera ray is 2 * 3 * N_{in-scattering} * (N_{out-scattering_pp_c} + N_{out-scattering_pp_a}). To make the rendering result accurate, the values of the Ns have to be high.
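
For example, with 10 in-scattering samples and 10 samples for each of the two out-scattering integrals per sample point, that is 2 * 3 * 10 * (10 + 10) = 1200 integrand evaluations for a single camera ray.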

One way to reduce the scattering evaluation per camera ray is pre-computing and storing the optical depth (the integral part of out-scattering) and the atmospheric density at a specific height in a 2D lookup table, whose dimensions are the altitude and the angle towards the sun, which respectively define the starting point and direction of a ray. Each of the rays reflected in the 2D lookup table starts from a sample point in the atmosphere and exits at a sky vertex, if the ray doesn’t hit any objects.

With the pre-computed 2D lookup table, the optical depths of all rays from a sample point to the sun (pp_c) can be directly found in the table. The optical depths from the sample points to the camera, if the camera is in the atmosphere, need two steps to obtain:

  • If the camera ray towards the sample point doesn’t intersect the ground, the optical depth between the camera and the sample point is the difference between the optical depth from the camera towards the sky vertex and that from the sample point towards the sky vertex.
  • Otherwise, if the camera ray towards the sample point intersects the ground, the optical depth between the camera and the sample point is the difference between the optical depth from the sample point towards the sky vertex and that from the camera towards the sky vertex.
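
A minimal GLSL-style sketch of this two-case lookup, assuming the table is indexed by normalized altitude and normalized vertical angle and stores the optical depth out to the sky vertex (the sampler layout and names are my own):

```glsl
uniform sampler2D u_opticalDepthTable; // x: normalized altitude, y: normalized vertical angle

// optical depth from a point (given altitude and vertical angle) to the sky vertex
float opticalDepthToSky(float normAltitude, float normAngle)
{
    return texture2D(u_opticalDepthTable, vec2(normAltitude, normAngle)).r;
}

// optical depth between the camera and a sample point
float opticalDepthCameraToSample(float camAltitude, float camAngle,
                                 float smpAltitude, float smpAngle,
                                 bool rayHitsGround)
{
    float camToSky = opticalDepthToSky(camAltitude, camAngle);
    float smpToSky = opticalDepthToSky(smpAltitude, smpAngle);

    // above the horizon: camera-to-sky minus sample-to-sky;
    // towards the ground: sample-to-sky minus camera-to-sky
    return rayHitsGround ? (smpToSky - camToSky) : (camToSky - smpToSky);
}
```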

Now if we implement the shader, we need to pre-calculate the 2D lookup table (stored in a texture) and pass the texture to the shader. Given that using a texture in Blender needs a few manual configurations, which I’m trying to avoid, is there a way of squeezing the values in the 2D table into a math equation that takes the altitude and angle and returns the optical depth? Sean O’Neil plotted the curves of the 2D lookup table and found curves fitting the values. That way, the computation per camera ray is reduced from 2 * 3 * N_{in-scattering} * (N_{out-scattering_pp_c} + N_{out-scattering_pp_a}) to 2 * 3 * N_{in-scattering} * (1 + 1). That’s a good approach, but I can’t use the result directly since I modified the in-scattering equation a little bit. So I’ll plot the values in the 2D lookup table using the revised equation and see if there’s a curve that fits the values.

Using Matlab, the fitted surface turns out to be a polynomial in the vertical angle and the altitude. The final result looks like the following:

[Figure: final sunset rendering in GLSL (sunset-glsl)]


References