Posts from the “Sky Rendering” Category

Aesthetic Possibilities of Real-Time Sky Rendering

With a GLSL implementation of atmospheric scattering, it’s easy to change the aesthetics of the sky and see the effect on the fly.

Normal Sky:

[Image: normal sky]

Dreamy Sky with Aesthetic Changes:

  • Sky color: red and yellow
  • Painting layers: two layers, one red sky and one yellow sky, mixed together with an add operation
  • Painting effect: turbulent view directions
  • Sky pattern: big blocks
[Image: dreamy sky]

Lastly, a sunrise animation, created in a few seconds thanks to the real-time performance of GLSL:


Atmospheric Scattering Highlights

In short, atmospheric scattering is the process by which light is scattered away as it travels from a light source to a point. The light arriving at the point is the product of the light leaving the source and the transmittance between the source and the point. Transmittance depends on the accumulated atmospheric density (optical depth) between the source and the point and on the scattering constants: it is the exponential of the negative optical depth multiplied by the scattering constants.
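In symbols, this is the standard formulation (a sketch; the symbol names β, ρ, and D are mine, not from the original post):

```latex
% Transmittance between two points a and b: \beta are the scattering constants,
% \rho is the atmospheric density, and D(a,b) is the optical depth between them.
T(a,b) = \exp\big(-\beta \, D(a,b)\big), \qquad D(a,b) = \int_a^b \rho(s)\, ds
```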

Atmospheric scattering is used to simulate the color of the sky. Simulating the sky color applies atmospheric scattering to the light traveling from the sun to a view point in the atmosphere. Specifically, the sky color along a view direction from a view point is the integral of the in-scattered light at each sample on the view ray (which starts at the view point and is cast along the view direction), attenuated on its way to the view point; the light at each sample is the in-scattered sunlight arriving at that sample. Chaining this process with a ray marching algorithm, the sky color along a specific view direction from any view point in the atmosphere can be approximated with the following steps:

For each sample on the view ray:

  1. exp(-scattering constants * optical depth between sun and sample) -> transmittance between sun and sample
  2. exp(-scattering constants * optical depth between sample and view point) -> transmittance between sample and view point
  3. sun light * transmittance between sun and sample -> light arriving at the sample
  4. light arriving at the sample * phase function -> light redirected onto the view ray
  5. atmospheric density at the sample * scattering constants -> scattering coefficient at the sample
  6. light redirected onto the view ray * scattering coefficient at the sample -> light scattered toward the view point by the sample
  7. light scattered toward the view point * transmittance between sample and view point -> light arriving at the view point from the sample, i.e. the sky color contributed by the sample
  8. accumulate the sample’s contribution into the final sky color

After the ray marching algorithm has iterated over all samples along the view ray, the final sky color is the accumulated result of the last step.
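Here is a minimal GLSL sketch of that loop. The helpers (opticalDepthToSun, opticalDepthBetween, densityAt, phase) and all parameter names are illustrative assumptions, not code from this post:

```glsl
// Assumed helpers, defined elsewhere in the shader (not code from the post).
float opticalDepthToSun(vec3 samplePos, vec3 sunDir);
float opticalDepthBetween(vec3 a, vec3 b);
float densityAt(vec3 samplePos);
float phase(float cosTheta);

// Single-scattering sky color along one view ray.
vec3 skyColor(vec3 viewPoint, vec3 viewDir, vec3 sunDir, vec3 sunLight,
              vec3 beta, float rayLength, int numSamples)
{
    float ds = rayLength / float(numSamples); // segment length per sample
    vec3 color = vec3(0.0);
    for (int i = 0; i < numSamples; ++i) {
        vec3 samplePos = viewPoint + viewDir * (float(i) + 0.5) * ds;

        // Steps 1-2: transmittance from the sun, and back to the view point.
        vec3 tSun  = exp(-beta * opticalDepthToSun(samplePos, sunDir));
        vec3 tView = exp(-beta * opticalDepthBetween(samplePos, viewPoint));

        // Steps 3-4: sunlight arriving at the sample, redirected onto the ray.
        vec3 redirected = sunLight * tSun * phase(dot(viewDir, sunDir));

        // Steps 5-7: scatter at the sample, attenuate on the way to the eye.
        vec3 scattered = redirected * (densityAt(samplePos) * beta) * tView;

        // Step 8: accumulate the sample's contribution.
        color += scattered * ds;
    }
    return color;
}
```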

Draw Strokes in Sky Rendering in GLSL

General framework

  • Pass a list of strokes to the shader as an array
  • For each fragment, compute its stroke force (a GLSL sketch follows this list) by:
    • Transform the fragment coordinate to the stroke coordinate system (similar to uv space in texture mapping)
    • Find the strokes the fragment belongs to
    • For each stroke the fragment belongs to:
      • Compute the fragment’s distance to the main stroke
      • If the distance is within a certain threshold of the main stroke’s width, randomize the fragment’s stroke force. Otherwise, the fragment has no stroke force.
      • Accumulate the stroke force of current stroke to the fragment’s stroke force.
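Below is a minimal GLSL sketch of this framework. The Stroke layout, the half-width threshold, and the rand and distanceToStroke helpers are illustrative assumptions, not code from this post:

```glsl
#define MAX_STROKES 32

struct Stroke {
    vec2 start;   // start point in stroke (uv-like) space
    vec2 dir;     // normalized direction
    float len;    // stroke length
    float width;  // stroke width
};

uniform Stroke strokes[MAX_STROKES]; // the list of strokes passed as an array

float rand(vec2 p) {
    return fract(sin(dot(p, vec2(12.9898, 78.233))) * 43758.5453);
}

// Distance from a point to the main stroke (a line segment).
float distanceToStroke(vec2 uv, Stroke s) {
    float t = clamp(dot(uv - s.start, s.dir), 0.0, s.len);
    return distance(uv, s.start + t * s.dir);
}

// Accumulated stroke force for a fragment already transformed
// into the stroke coordinate system (see the link below).
float strokeForce(vec2 uv) {
    float force = 0.0;
    for (int i = 0; i < MAX_STROKES; ++i) {
        float d = distanceToStroke(uv, strokes[i]);
        // Within the width threshold: a randomized force; otherwise none.
        if (d < strokes[i].width * 0.5)
            force += rand(uv + vec2(float(i)));
    }
    return force;
}
```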

Transform the fragment coordinate to the stroke coordinate system (similar to uv system)

https://gypsyshen.wordpress.com/2017/07/07/convert-sphere-vertices-from-model-space-to-uv-space-in-glsl/
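For reference, a minimal sketch of such a conversion (a standard spherical mapping; the linked post has the original derivation, so treat the details here as assumptions):

```glsl
const float PI = 3.14159265;

// Map a sphere vertex in model space to uv space: longitude -> u, latitude -> v.
vec2 sphereToUV(vec3 modelPos)
{
    vec3 p = normalize(modelPos);
    float u = atan(p.z, p.x) / (2.0 * PI) + 0.5; // longitude in [0, 1]
    float v = acos(p.y) / PI;                    // latitude in [0, 1]
    return vec2(u, v);
}
```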

 

Simulate Hand-Drawn Strokes


Figure 1: a C++ program simulating hand-drawn strokes.


Figure 2: reference hand-drawn strokes, from the movie “The Little Prince”.

So, it’s possible to “draw” crappy strokes by programming! Figure 1 is generated by a C++ program I wrote to simulate the strokes in Figure 2.

The idea is simple: define the size of a stroke by a width and a height, then randomly generate the starting point and direction of the stroke within that size. Finally, draw the stroke into an image by rasterizing the line. While drawing the stroke, jitter each pixel to be rasterized, and draw extra pixels stretching toward the sides of the jittered pixel with a random width. The intensities of these pixels are randomized.

Figure 1 was generated by drawing 128 strokes of size 400×50 in an 800×600 image.
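The original program is CPU-side C++; the following is a rough GLSL reinterpretation of the same jitter-and-stretch idea, evaluating one stroke per pixel instead of rasterizing into an image. All names and constants are illustrative:

```glsl
float hash(vec2 p) {
    return fract(sin(dot(p, vec2(12.9898, 78.233))) * 43758.5453);
}

// Intensity of one jittered stroke at pixel px; start/dir/len define the line.
float strokeIntensity(vec2 px, vec2 start, vec2 dir, float len)
{
    float t = clamp(dot(px - start, dir), 0.0, len); // position along the line
    vec2 normal = vec2(-dir.y, dir.x);
    // Jitter the rasterized pixel sideways, as the C++ program does.
    vec2 jittered = start + t * dir + normal * (hash(vec2(t, 0.0)) - 0.5) * 2.0;
    float halfWidth = 1.0 + 3.0 * hash(vec2(t, 1.0)); // random sideways stretch
    // Randomized intensity inside the stretched width, zero outside.
    return distance(px, jittered) < halfWidth ? hash(px) : 0.0;
}
```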

Color-Emotion References

Blue Color from Apex After Sunset?

[Photo: sky after sunset, from apex to horizon]

As shown in the photo above, the color of the sky after sunset, going from the apex down to the horizon, is blue, then orange, then blue. With only the sun as a light source, there shouldn’t be blue at the apex: as shown in the figure below, sunlight shifts from blue toward red as a photon’s traveling distance through the atmosphere increases (e.g., for the green-blue sun in the diagram, the traveling distance increases from a to b). My question is, why is there still blue at the apex? Are there light sources other than the sun?

[Diagram: photon traveling distances a and b through the atmosphere]

Space Transformation between Atmospheric Points During Ray Marching

[Diagram: points p and p_prime with angles theta and theta_prime]

Based on the definition of the coordinate system of a ray-marching point in the atmosphere from a previous article, this article discusses how to transform from one point’s space to another’s during ray marching.

Consider a point p whose coordinate in its own space is (0, r + h). Starting from p, if we move along a direction with vertical angle theta (in p’s space) for some distance, we reach p_prime, expressed in p’s space. How do we then get p_prime’s coordinate, and the direction, in p_prime’s space?

  • 1) Compute p_prime’s coordinate in p’s space. Given p = (x, y), the direction (x_d, y_d) = (cos(theta), sin(theta)), and the distance d to move to p_prime, p_prime’s coordinate in p’s space is (x_prime, y_prime) = p + d * direction = (x + d * x_d, y + d * y_d).
  • 2) Transform p_prime’s coordinate from p’s space to p_prime’s space. Get the distance from p_prime (in p’s space) to the origin: distance_to_origin = sqrt(x_prime^2 + y_prime^2). Then p_prime’s coordinate in its own space is (x_prime_own, y_prime_own) = (0, distance_to_origin).
  • 3) Compute the direction in p_prime’s space. We just need theta_prime, as shown in the figure. Since (x_d, y_d) is a unit vector, theta_prime = acos(dot_product((x_d, y_d), (x_prime, y_prime) / distance_to_origin)).

So that’s the basic idea. In practice (i.e. in a GLSL shader), the input is p represented by a normalized altitude and a normalized vertical angle, and the output should be p_prime with its normalized altitude and normalized vertical angle in its own space. To do this, we need to:

  • 4) Compute p’s coordinate and the direction in p’s space, then
  • 5) Follow the above three steps to get p_prime’s coordinate and the direction in p_prime’s space.
  • 6) Compute p_prime’s normalized altitude and the normalized vertical angle with the result from the previous step.

Now let’s do 4) and 6).

  • 4) Compute p’s coordinate and the direction in p’s space.
    • Compute earth’s relative radius from the ratio of the earth radius to the outer atmosphere’s depth, times the normalized atmospheric depth: ratio * 1.0 = 106 (= 6360/(6420-6360); see Simulating the Colors of Sky).
    • Compute p’s coordinate: (x, y) = (0, 106 + normalized-altitude).
    • Compute theta (domain [0, PI]) from the normalized vertical angle (domain [0, 1]): theta = normalized-vertical-angle * PI.
  • 6) Compute p_prime’s normalized altitude and normalized vertical angle.
    • p_prime’s normalized altitude = y_prime_own (from 2)) – earth’s relative radius = y_prime_own – 106.
    • normalized vertical angle = theta_prime / PI (where theta_prime comes from 3)).

In conclusion, the steps for computing p_prime’s normalized altitude and normalized vertical angle in its own space from p’s normalized altitude and normalized vertical angle are:

4), 1), 2), 3), 6).
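Putting it together, here is a minimal GLSL sketch of 4), 1)-3), and 6). The function and parameter names are mine; the constant 106.0 follows the ratio computed above:

```glsl
const float PI = 3.14159265;
// Earth's relative radius, per the ratio above: 6360 / (6420 - 6360).
const float EARTH_RELATIVE_RADIUS = 106.0;

// in:  p's normalized altitude and vertical angle, and the distance d to move.
// out: p_prime's normalized altitude (x) and vertical angle (y) in its own space.
vec2 marchPoint(float normAltitude, float normAngle, float d)
{
    // 4) p's coordinate and direction in p's space.
    float theta = normAngle * PI;
    vec2 p = vec2(0.0, EARTH_RELATIVE_RADIUS + normAltitude);
    vec2 dir = vec2(cos(theta), sin(theta));

    // 1) p_prime's coordinate in p's space.
    vec2 pPrime = p + d * dir;

    // 2) p_prime in its own space is (0, distance to origin).
    float distToOrigin = length(pPrime);

    // 3) theta_prime: angle between the direction and p_prime's vertical axis.
    float thetaPrime = acos(dot(dir, pPrime / distToOrigin));

    // 6) normalize the results.
    return vec2(distToOrigin - EARTH_RELATIVE_RADIUS, thetaPrime / PI);
}
```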

Simulate the Altitude and Vertical Angle of Eye Rays in the Outer Atmosphere with a Sky Dome

In practice the sky dome is a hemisphere (left, in orange), and the outer atmosphere is part of a sphere (right, in orange):

[Diagram: sky dome (left) and outer atmosphere (right)]

#1 In practice, what we have is on the left: a sky dome holding vertices of the sky and the position of the eye.

#2 What we want from #1 is the altitude and the vertical angle ‘theta’ of the eye, as shown on the right. So, how do we get the right from the left?

#3 The simple answer to #2 is to take the y value of the eye position in sky-dome space as the altitude, and to take the vertical angle of the eye ray that starts from the eye position (in sky-dome space) toward a sky vertex. This results in the following graph of sky colors, and raises a problem:

[Diagram: sky colors sampled across the dome]

When the eye is located at the sky dome’s center, with a given field of view (fov), each fragment’s color is obtained by sampling the center of the fragment. This gives detailed sky color when the eye looks toward the apex of the sky dome, and coarse sky color when it looks toward the center of the dome’s base, because the ellipse-shaped sky is sampled evenly by same-sized fragments. The color detail of the sky decreases as the eye direction moves from the apex toward the base. The same applies when the eye is located above the sky dome’s center.

One way to enrich the color detail close to the center of the sky dome is to supersample the fragments near it, with the number of samples per fragment proportional to the angle between the eye ray and the vertical (Y) axis. However, this supersampling is optional, as the quality of the sky color can be good enough with GLSL’s regular per-fragment sampling.

#4 With the discussion in #3, we can get the altitude and vertical angle of an eye ray as follows (a GLSL sketch follows this list):

  • #4.1 Transforming the eye position in world space to the sky-dome space
    • #4.1.1 Compute the matrix transforming world-space point to the sky-dome space.
    • #4.1.2 Multiply the eye position by the matrix to get the eye position in the sky-dome space.
  • #4.2 Getting the eye ray in sky-dome space for a sky fragment
    • #4.2.1 Subtract the eye position (in sky-dome space) from the sky fragment’s position (in sky-dome space)
    • #4.2.2 Normalize the eye ray
  • #4.3 Getting the altitude of the eye ray: the distance between the eye position and the origin of the sky-dome space (in sky-dome space)
  • #4.4 Getting the vertical angle of the eye ray: the arccosine of the dot product of the eye ray (normalized, in sky-dome space) with the vertical axis (the eye position minus the origin of sky-dome space; normalized).
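Here is a minimal GLSL sketch of #4.1–#4.4, assuming the world-to-sky-dome matrix from #4.1.1 is available (all names are illustrative, and the eye is assumed not to sit exactly at the dome’s origin):

```glsl
// Altitude (x) and vertical angle (y) of the eye ray for one sky fragment.
vec2 eyeRayAltitudeAngle(vec3 eyePosWorld, vec3 fragPosDome, mat4 worldToDome)
{
    // #4.1 eye position transformed into sky-dome space.
    vec3 eyePosDome = (worldToDome * vec4(eyePosWorld, 1.0)).xyz;
    // #4.2 eye ray: fragment position minus eye position, normalized.
    vec3 eyeRay = normalize(fragPosDome - eyePosDome);
    // #4.3 altitude: distance from the eye to the dome-space origin.
    float altitude = length(eyePosDome);
    // #4.4 vertical angle: angle between the eye ray and the vertical axis.
    vec3 verticalAxis = normalize(eyePosDome); // eye not at the origin
    float theta = acos(dot(eyeRay, verticalAxis));
    return vec2(altitude, theta);
}
```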

So far, the only problem left is computing the matrix as discussed in #4.1.1. The known facts are:

  • Eye position in world space: X_e (x_e, y_e, z_e)
  • Sky dome’s position in world space: X_s (x_s, y_s, z_s)
  • Sky dome’s radius in world space: r

Then the eye position in sky-dome space is (X_e – X_s) / r. The sky-dome space shares the same axis directions as world space. The result can be expressed as the product of a scaling matrix S and a translation matrix T, i.e. ST, where:

  • T = [1, 0, 0, -x_s; 0, 1, 0, -y_s; 0, 0, 1, -z_s; 0, 0, 0, 1]
  • S = [1/r, 0, 0, 0; 0, 1/r, 0, 0; 0, 0, 1/r, 0; 0, 0, 0, 1]
  • Hence, ST = [1/r, 0, 0, -x_s/r; 0, 1/r, 0, -y_s/r; 0, 0, 1/r, -z_s/r; 0, 0, 0, 1]

T and S above are written in row-major form. In GLSL the matrices should be stored in column-major order, which gives:

  • T =
    [ 1, 0, 0, 0,
      0, 1, 0, 0,
      0, 0, 1, 0,
      -x_s, -y_s, -z_s, 1 ]
  • S =
    [ 1/r, 0, 0, 0,
      0, 1/r, 0, 0,
      0, 0, 1/r, 0,
      0, 0, 0, 1 ]
  • ST =
    [ 1/r, 0, 0, 0,
      0, 1/r, 0, 0,
      0, 0, 1/r, 0,
      -x_s/r, -y_s/r, -z_s/r, 1 ]
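A minimal GLSL sketch of building this matrix with the column-major mat4 constructor (the uniform names are illustrative):

```glsl
uniform vec3 skyDomePos;     // X_s: sky dome's position in world space
uniform float skyDomeRadius; // r: sky dome's radius in world space

mat4 worldToSkyDome()
{
    float s = 1.0 / skyDomeRadius;
    // mat4 takes one column per group of four values; the last column
    // carries the translation -X_s / r.
    return mat4(s,   0.0, 0.0, 0.0,
                0.0, s,   0.0, 0.0,
                0.0, 0.0, s,   0.0,
                -skyDomePos.x * s, -skyDomePos.y * s, -skyDomePos.z * s, 1.0);
}
```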