Posts from the “Unrealistic Rendering” Category

Stroke Function

How do we calculate the colors of the pixels of a stroke, given a pixel's position on the canvas?

From experience with real painting, strokes have a generally randomized nature determined by the paper and the painting tool. On top of this basic randomization, the appearance varies with the density and size of the randomization, as a result of the different painting forces driven by the painter's emotion.

So, to roughly simulate aesthetic-driven strokes connected to our heart / emotion / mind, we can use a basic randomization as the raw model of the stroke, then modify the density and size of that basic randomization to reflect emotion:

  • Stroke equation: randomization equation with configurable density and size
  • Randomization equation: noise function with evenly distributed density and size
  • Configure density: apply a density equation to the noise function
  • Configure size: apply a size equation to the noise function

Stroke equation:

Stroke = density equation * size equation * basic randomization
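As one concrete reading of this product, here is a minimal C++ sketch. The particular density and size equations (a falloff around the stroke's spine and a linear force scale) are placeholder choices of mine, not the post's:

```cpp
#include <cmath>
#include <cstdlib>

// Basic randomization: uniform noise in [0, 1].
double basicRandomization() {
    return std::rand() / static_cast<double>(RAND_MAX);
}

// Density equation (placeholder): concentrate the randomization near the
// stroke spine (y = 0); a stronger force widens the dense region.
double densityEq(double y, double force) {
    return std::exp(-y * y / (force + 1e-6));
}

// Size equation (placeholder): a stronger force enlarges the randomization.
double sizeEq(double force) {
    return 0.5 + force;
}

// Stroke = density equation * size equation * basic randomization.
double strokePixel(double y, double force) {
    return densityEq(y, force) * sizeEq(force) * basicRandomization();
}
```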

An illustration of what the stroke equation produces: in the following image, the top stroke is drawn with strong force, and hence corresponds to high density and large size. The middle and bottom strokes are drawn with light force, and are close to the look produced by basic randomization with identity density and size.

[Image: strokes drawn with strong force (top) and light force (middle and bottom)]

Lastly.. Happy Thursday! Nov 16, 2017


Simulate Hand-Drawn Strokes


Figure 1: a C++ program simulating hand-drawn strokes.


Figure 2: reference of hand-drawn strokes, from the movie “The Little Prince”.

So, it’s possible to “draw” crappy strokes by programming! Figure 1 is generated by a C++ program I wrote to simulate the strokes in Figure 2.

The idea is simple: define the size of a stroke with a width and a height, then randomly generate the starting point and direction of the stroke within that size. Finally, draw the stroke into an image by rasterizing the line. While drawing the stroke, jitter the pixel to be rasterized, and draw more pixels stretching toward both sides of the jittered pixel with a random width. The intensities of these pixels are randomized.

Figure 1 is generated by drawing 128 strokes of size 400×50 in an 800×600 image.
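Here is a minimal sketch of that procedure. It is a reconstruction from the description above, not the original program; the jitter amounts, the side-width range, and the PGM output are placeholder choices of mine:

```cpp
#include <algorithm>
#include <cstdint>
#include <cstdlib>
#include <fstream>
#include <random>
#include <vector>

int main() {
    const int W = 800, H = 600;            // image size
    const int strokeW = 400, strokeH = 50; // stroke bounding size
    std::vector<uint8_t> img(W * H, 255);  // white canvas

    std::mt19937 rng(42);
    std::uniform_real_distribution<float> u01(0.f, 1.f);

    for (int s = 0; s < 128; ++s) {
        // Random starting point; keep the stroke's box inside the image.
        float x = u01(rng) * (W - strokeW);
        float y = strokeH + u01(rng) * (H - 2 * strokeH);
        // Random direction such that the stroke stays inside its 400x50 box.
        float dy = (u01(rng) - 0.5f) * strokeH / strokeW;

        for (int i = 0; i < strokeW; ++i) {
            // Jitter the pixel to be rasterized.
            int px = static_cast<int>(x + i + (u01(rng) - 0.5f) * 2.f);
            int py = static_cast<int>(y + i * dy + (u01(rng) - 0.5f) * 2.f);
            // Stretch extra pixels to both sides with a random width.
            int half = 1 + static_cast<int>(u01(rng) * 3.f);
            for (int k = -half; k <= half; ++k) {
                int qy = py + k;
                if (px < 0 || px >= W || qy < 0 || qy >= H) continue;
                // Randomized intensity, fading toward the stroke's sides.
                float t = u01(rng) * (1.f - std::abs(k) / float(half + 1));
                uint8_t ink = static_cast<uint8_t>(255 * (1.f - t));
                img[qy * W + px] = std::min(img[qy * W + px], ink);
            }
        }
    }

    // Write a binary PGM so the result can be inspected.
    std::ofstream out("strokes.pgm", std::ios::binary);
    out << "P5\n" << W << " " << H << "\n255\n";
    out.write(reinterpret_cast<const char*>(img.data()), img.size());
}
```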

GLSL Sky Shading on Blender

The simplest way is to use a sky dome, i.e. a sphere or hemisphere, and to color it by linearly interpolating two gradient colors according to the fragment’s height, i.e. its z coordinate. The two gradient colors are the apex color and the center color.
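As a sketch, written in C++ for illustration (in Blender the same math would sit in the GLSL fragment shader; the names and the dome-height normalization are my assumptions):

```cpp
#include <algorithm>

struct Color { float r, g, b; };

// Blend the apex and center colors by normalized fragment height.
Color skyGradient(float fragZ, float domeHeight, Color apex, Color center) {
    // Normalized height of the fragment on the dome, clamped to [0, 1].
    float t = std::clamp(fragZ / domeHeight, 0.0f, 1.0f);
    return { apex.r * t + center.r * (1.f - t),
             apex.g * t + center.g * (1.f - t),
             apex.b * t + center.b * (1.f - t) };
}
```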

Here’s the result:

[Image: sky dome colored by linear height-based interpolation]

Instead of interpolating linearly by height, blending by the eye ray’s elevation angle above sea level (a) looks closer to reality: apex * sin(a) + center * cos(a):

[Image: sky dome colored by the angle-based blend]
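In the same style, the angle-based blend; treating a as radians is my parameterization:

```cpp
#include <cmath>

struct Color { float r, g, b; };  // as in the previous sketch

// a: the eye ray's elevation angle above sea level, in radians.
Color skyByAngle(float a, Color apex, Color center) {
    float s = std::sin(a), c = std::cos(a);
    return { apex.r * s + center.r * c,
             apex.g * s + center.g * c,
             apex.b * s + center.b * c };
}
```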

Although blending with the sine and cosine of the eye angle looks nice, it doesn’t take into account the fact that light is scattered according to the distance it travels from the outer atmosphere to the eye.

So far the vertex blending trick has been working well. But when it comes to light scattering, the theory gets a bit more complex. The discretized light transport process goes as follows:

[Diagram: in-scattering along the camera ray through the atmosphere]
  • The light along the camera ray is integrated by evaluating the in-scattered light at each sample point p
  • At each point p, calculate the in-scattered light by accounting for the light scattered away on the way from the sun to point p, and from point p to the camera. The phase function is used, once up front, to attenuate the scattered light toward the camera. Here’s the in-scattering equation (a numeric sketch follows the image credit):
I_v(lambda) = I_s(lambda) * K(lambda) * F(theta, g) * ∫_{P_a}^{P_b} exp(-h / H_0) * exp(-t(PP_c, lambda) - t(PP_a, lambda)) ds

Image credit: Sean O’Neil, “Accurate Atmospheric Scattering”, GPU Gems 2, which also defines the phase function F(theta, g) and the out-scattering function t(P_aP_b, lambda).
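To make the process concrete, here is a self-contained numeric sketch of the discretized integration. The flat-atmosphere geometry, the overhead sun, the constants, and the Rayleigh phase function standing in for F(theta, g) are all simplifying assumptions of mine:

```cpp
#include <cmath>
#include <cstdio>

struct Vec3 { float x, y, z; };

static Vec3 add(Vec3 a, Vec3 b) { return {a.x + b.x, a.y + b.y, a.z + b.z}; }
static Vec3 sub(Vec3 a, Vec3 b) { return {a.x - b.x, a.y - b.y, a.z - b.z}; }
static Vec3 mul(Vec3 a, float s) { return {a.x * s, a.y * s, a.z * s}; }
static float len(Vec3 a) { return std::sqrt(a.x*a.x + a.y*a.y + a.z*a.z); }

const float H0  = 0.25f;   // density scale height (assumed, normalized units)
const float TOP = 1.0f;    // atmosphere top height (assumed)
const float K   = 0.0025f; // scattering constant, one wavelength (assumed)
const float PI  = 3.14159265f;

// Atmospheric density; flat-atmosphere assumption: falls off with z only.
float density(Vec3 p) { return std::exp(-p.z / H0); }

// Out-scattering t(ab, lambda) ~ 4*pi*K * sum(density * ds) over n samples.
float opticalDepth(Vec3 a, Vec3 b, int n) {
    Vec3 step = mul(sub(b, a), 1.0f / n);
    float ds = len(step), sum = 0.f;
    for (int i = 0; i < n; ++i)
        sum += density(add(a, mul(step, i + 0.5f))) * ds;
    return 4.f * PI * K * sum;
}

// Rayleigh phase function; O'Neil's F(theta, g) generalizes this with g.
float phase(float cosTheta) { return 0.75f * (1.f + cosTheta * cosTheta); }

// In-scattered light along camera -> skyPoint for one wavelength channel,
// with the sun straight overhead to keep the geometry trivial.
float inScatter(Vec3 camera, Vec3 skyPoint, int nIn, int nOut) {
    Vec3 step = mul(sub(skyPoint, camera), 1.0f / nIn);
    float ds = len(step), sum = 0.f;
    for (int i = 0; i < nIn; ++i) {
        Vec3 p = add(camera, mul(step, i + 0.5f));
        Vec3 sunExit = { p.x, p.y, TOP };             // ray p -> sun exits here
        float tSun = opticalDepth(p, sunExit, nOut);  // t(PP_c)
        float tCam = opticalDepth(p, camera, nOut);   // t(PP_a)
        sum += density(p) * std::exp(-tSun - tCam) * ds;
    }
    float cosTheta = step.z / ds;      // view direction vs. sun at (0, 0, 1)
    return K * phase(cosTheta) * sum;  // times sun intensity I_s, omitted
}

int main() {
    Vec3 cam{0.f, 0.f, 0.01f}, sky{5.f, 0.f, TOP};
    std::printf("in-scatter: %f\n", inScatter(cam, sky, 10, 10));
}
```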

However… I don’t think it’s right to attenuate the entire light by the phase function. The phase function should only attenuate the light where the light direction deviates from the direction to the camera:

[Diagram: applying the phase function per sample point]
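Building on the helpers from the previous sketch, here is the modification as I read it: evaluate the phase term per sample point, from the angle between the sun direction and the direction from p to the camera, instead of once for the whole ray:

```cpp
// Variant of inScatter() above: the phase term moves inside the loop.
float inScatterModified(Vec3 camera, Vec3 skyPoint, int nIn, int nOut) {
    Vec3 step = mul(sub(skyPoint, camera), 1.0f / nIn);
    float ds = len(step), sum = 0.f;
    for (int i = 0; i < nIn; ++i) {
        Vec3 p = add(camera, mul(step, i + 0.5f));
        Vec3 toCam = sub(camera, p);
        float cosTheta = toCam.z / len(toCam);        // sun is at (0, 0, 1)
        Vec3 sunExit = { p.x, p.y, TOP };
        float tSun = opticalDepth(p, sunExit, nOut);  // t(PP_c)
        float tCam = opticalDepth(p, camera, nOut);   // t(PP_a)
        sum += phase(cosTheta) * density(p) * std::exp(-tSun - tCam) * ds;
    }
    return K * sum;  // the phase now lives inside the integral
}
```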

Evaluating the two out-scattering integrals inside the in-scattering integral at every sample point during rendering is expensive. The number of integrand evaluations per camera ray is N_{in-scattering} * (N_{out-scattering_pp_c} + N_{out-scattering_pp_a}). Factoring in the wavelengths of the three color channels (red, green, and blue) and the two kinds of scattering (Rayleigh and Mie), the computation per camera ray becomes 2 * 3 * N_{in-scattering} * (N_{out-scattering_pp_c} + N_{out-scattering_pp_a}). Even with just 10 samples per integral, that is 2 * 3 * 10 * (10 + 10) = 1200 evaluations per camera ray, and to make the rendering accurate the Ns have to go much higher.

One way to save scattering evaluations per camera ray is to pre-compute the optical depth (the integral part of out-scattering) and the atmospheric density at a given height, and store them in a 2D lookup table whose dimensions are altitude and angle toward the sun, which respectively define the starting point and direction of a ray. Each ray represented in the 2D lookup table starts from a sample point in the atmosphere and exits at the sky vertex, provided the ray doesn’t hit any objects.

With the pre-computed 2D lookup table, the optical depth of any ray from a sample point to the sun (pp_c) can be found directly in the table. The optical depth from a sample point to the camera, when the camera is inside the atmosphere, takes two steps to obtain (see the sketch after this list):

  • If the camera ray toward the sample point doesn’t intersect the ground, the optical depth between the camera and the sample point is the depth from the camera to the sky vertex minus the depth from the sample point to the sky vertex.
  • Otherwise, if the camera ray toward the sample point intersects the ground, the optical depth between the camera and the sample point is the depth from the sample point to the sky vertex minus the depth from the camera to the sky vertex.
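A sketch of how the table and the two-step subtraction could look; the resolution, the index mapping, and the [0, 1] altitude / [0, pi] angle parameterization are placeholder choices of mine:

```cpp
#include <algorithm>
#include <vector>

// 2D lookup table of optical depths from (altitude, angle) out to the sky,
// filled by a pre-pass that integrates the out-scattering over the grid.
struct DepthTable {
    int nAlt, nAng;
    std::vector<float> data;

    float at(float altitude, float angle) const {
        int i = std::clamp(static_cast<int>(altitude * nAlt), 0, nAlt - 1);
        int j = std::clamp(static_cast<int>(angle / 3.14159265f * nAng),
                           0, nAng - 1);
        return data[i * nAng + j];
    }
};

// Sample point p to the sun (pp_c): one direct lookup, since every table
// entry is a ray that already ends at the sky.
float depthToSun(const DepthTable& t, float altP, float sunAngle) {
    return t.at(altP, sunAngle);
}

// Camera to sample point p along the camera ray, camera inside the
// atmosphere: the two-step subtraction from the list above. rayAngle is
// the angle of the upward-pointing version of the ray.
float depthCameraToP(const DepthTable& t, float altCam, float altP,
                     float rayAngle, bool rayHitsGround) {
    float camToSky = t.at(altCam, rayAngle);
    float pToSky   = t.at(altP, rayAngle);
    // Same ray, two start points: subtracting removes the shared tail.
    return rayHitsGround ? (pToSky - camToSky) : (camToSky - pToSky);
}
```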

Now, to implement the shader, we’d need to pre-calculate the 2D lookup table (stored in a texture) and pass the texture to the shader. Given that using a texture in Blender needs some manual configuration, which I’m trying to avoid, is there a way to squeeze the values in the 2D table into a math equation that takes the altitude and angle and returns the optical depth? Sean O’Neil plotted the curves of the 2D lookup table and found curves fitting the values. That way, the computation per camera ray is reduced from 2 * 3 * N_{in-scattering} * (N_{out-scattering_pp_c} + N_{out-scattering_pp_a}) to 2 * 3 * N_{in-scattering} * (1 + 1). That’s a good approach, but I can’t reuse his result since I modified the in-scattering equation a little, so I’ll plot the values of the 2D lookup table using the revised equation and see if there’s a curve that fits them.

Using MATLAB, the fitted surface turns out to be a polynomial in the vertical angle and the altitude. The final result looks like the following:

[Image: sunset rendered with the GLSL sky shader in Blender]
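In the shader, the fitted surface then replaces the table lookup with a direct evaluation. A sketch with dummy coefficients; the real ones come out of the MATLAB surface fit:

```cpp
// Bivariate polynomial in altitude and vertical angle, standing in for the
// 2D lookup table. The coefficients below are placeholders, not fit results.
float opticalDepthFit(float altitude, float angle) {
    const float c[6] = { 1.0f, -0.5f, 0.3f, 0.1f, -0.2f, 0.05f };  // dummy
    return c[0]
         + c[1] * altitude + c[2] * angle
         + c[3] * altitude * altitude
         + c[4] * altitude * angle
         + c[5] * angle * angle;
}
```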


References

Sean O’Neil. “Accurate Atmospheric Scattering”. In GPU Gems 2, Chapter 16. Addison-Wesley, 2005.