Posts from the “Computer Graphics” Category

Thoughts on Global Illumination

Global illumination is about gathering the photon distribution. The photon distribution is largely a result of geometry and participating media, which yield different amounts of photons in volumes and on surfaces.

Is it possible to transform global illumination into a different space in which the photon distribution is easy to compute, then transform the result back to the 3D domain?

Is there a mathematical model that captures the characteristics of global illumination and is efficient to compute? 🙂

Or, what are the characteristics of global illumination? They are the result of light transport across participating media and surfaces.

TODO:

  • Understand light transport, from a different perspective than space-time – rather, from the perspective of the universe!
  • Simplify light transport to capture its essential characteristics
  • Understand geometry
  • Simplify geometry representation: is it possible to represent it in a different domain?
  • Understand participating media
  • Simplify participating media: is it possible to represent it in a different domain?

Ideas:

  • Grind down geometry representation to see how it looks in a different space

Light transport is characterized by scientific efforts to capture its statistics. For example, that light travels in straight lines is a conclusion drawn from the statistics of photon transport. Is it possible to abstract the light-transport mechanism?

GLSL: Two Shader Programs in Three Viewports


Steps to split the sunrise rendering above from one viewport into three viewports, to compare three components of the sunrise – Rayleigh only, Mie only, and Rayleigh plus Mie:

  • In the fragment shaders of the skydome and the area light, declare three outputs
  • On the CPU, configure three textures for a framebuffer, each bound to a color attachment [1]. The three color attachments receive the values of the three outputs from the fragment shaders [2]
  • Render the skydome and the area light into the framebuffer, so the two fragment shaders write to the three textures
  • Render the three textures to three viewports on the screen [2]
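The shader side of the first step can be sketched as follows (output and input names are illustrative assumptions, not the original code):

```glsl
#version 330 core

// One output per color attachment, mapped via layout(location = N).
layout(location = 0) out vec4 rayleighColor;   // -> GL_COLOR_ATTACHMENT0
layout(location = 1) out vec4 mieColor;        // -> GL_COLOR_ATTACHMENT1
layout(location = 2) out vec4 combinedColor;   // -> GL_COLOR_ATTACHMENT2

in vec3 rayleigh;  // assumed: per-fragment Rayleigh term from earlier stages
in vec3 mie;       // assumed: per-fragment Mie term

void main() {
    rayleighColor = vec4(rayleigh, 1.0);
    mieColor      = vec4(mie, 1.0);
    combinedColor = vec4(rayleigh + mie, 1.0);
}
```

On the CPU side, glDrawBuffers with the three color attachments maps these outputs to the framebuffer’s three textures [1].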

References

[1] Fragment Shader Output Buffers. https://www.khronos.org/opengl/wiki/Fragment_Shader#Output_buffers

[2] Render To Texture. http://www.opengl-tutorial.org/intermediate-tutorials/tutorial-14-render-to-texture

Real-Time Global Illumination

Path tracing is too expensive; radiosity seems like the right way.

Questions to step forward:

  • Is there a way to transform the geometry representation into a different form, so that light hits the geometry in a more computationally efficient way to get the energy on that surface?
    • Transform the surface to a different space
    • Get the energy to that surface

Next:

  • Do a thorough evaluation of global illumination
  • Understand radiosity
  • Understand geometry representation
    • Check “Geometric Algebra for Computer Science” to find a transformed geometric representation in a different space than 3D space, such that the energy (light) received by the geometry stays invariant or commutative in both spaces, yet is more computationally efficient to compute in the “different space”. If such a transformed geometric representation can be found, computer graphics would not need the expensive path tracer for global illumination

Note of An Image Synthesizer

Ken Perlin. An Image Synthesizer.

The paper proposes the initial algorithm of Perlin Noise, and a few variations of the algorithm to simulate a variety of kinds of randomness in nature.

The technique produces the organic randomness that appears in nature. The paper proposes an algorithm that produces a random look whose frequency content is limited to a narrow band, making the randomness uniformly distributed. This statistical character of the randomness remains unchanged across domains, because the algorithm satisfies statistical invariance under rotation and translation.

The most beautiful part of the paper is the description of Perlin Noise: each point of a regular lattice is associated with a stochastic gradient; the gradient, combined with the offset from the lattice point to the sample point, yields a stochastic value, and the stochastic values and gradients of neighboring lattice points are interpolated to fill the areas between them. NOTE: the stochastic gradients assigned to the lattice points determine the generated values, and the distribution of those gradients determines the uniformity of the generated values.
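The lattice-gradient construction can be sketched in C++ – a minimal 2D variant with a hypothetical hash-based gradient (Perlin’s original uses a permutation table instead):

```cpp
#include <cassert>
#include <cmath>
#include <cstdint>

// Hash a lattice point to a pseudo-random unit gradient direction
// (the hash constants here are arbitrary assumptions, not Perlin's).
static void gradient(int ix, int iy, float& gx, float& gy) {
    uint32_t h = (uint32_t)ix * 374761393u + (uint32_t)iy * 668265263u;
    h = (h ^ (h >> 13)) * 1274126177u;
    float angle = (h & 0xffff) * (6.28318530718f / 65536.0f);
    gx = std::cos(angle);
    gy = std::sin(angle);
}

// Smoothstep-like fade curve and linear interpolation.
static float fade(float t) { return t * t * t * (t * (t * 6 - 15) + 10); }
static float lerp(float a, float b, float t) { return a + t * (b - a); }

// Stochastic value of lattice point (ix, iy) for sample (x, y):
// the dot product of the gradient with the offset to the sample.
static float dotGrad(int ix, int iy, float x, float y) {
    float gx, gy;
    gradient(ix, iy, gx, gy);
    return gx * (x - ix) + gy * (y - iy);
}

// 2D Perlin noise: interpolate the four surrounding lattice values.
float perlin(float x, float y) {
    int x0 = (int)std::floor(x), y0 = (int)std::floor(y);
    float u = fade(x - x0), v = fade(y - y0);
    float n00 = dotGrad(x0,     y0,     x, y);
    float n10 = dotGrad(x0 + 1, y0,     x, y);
    float n01 = dotGrad(x0,     y0 + 1, x, y);
    float n11 = dotGrad(x0 + 1, y0 + 1, x, y);
    return lerp(lerp(n00, n10, u), lerp(n01, n11, u), v);
}
```

Note that the noise is exactly zero at lattice points, since the offset to the sample is zero there – the randomness lives entirely in the interpolated regions between lattice points.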

Note of Simulating and Analyzing Jackson Pollock’s Painting

Sangwon Lee, Sven C. Olsen, Bruce Gooch. Simulating and Analyzing Jackson Pollock’s Painting

The paper devises a system that lets the user create Pollock-like paintings. A painting-material simulator lets the user paint without real materials, and real-time feedback on the fractal properties of the ongoing painting gives the user an awareness of how similar the fractal dimension of the ongoing painting is to those of Pollock’s paintings. That is, the user can create a Pollock-like painting by keeping the similarity high.

The paper points out that fractal dimension is incapable of distinguishing fractal from non-fractal images, and proposes a new metric, uniformity, to alleviate the limitation. Uniformity indicates the similarity between the fractal dimension of a subregion and that of the entire painting. But in what way does uniformity distinguish fractal from non-fractal images better than fractal dimension does?
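The fractal dimension in this kind of analysis is usually estimated by box counting – count the boxes a pattern touches at several box sizes and fit the slope on a log-log plot. A minimal sketch (box sizes and the binary-image format are my assumptions, not the paper’s exact procedure):

```cpp
#include <cassert>
#include <cmath>
#include <vector>

// Counts how many s-by-s boxes contain at least one "painted" pixel.
static int countBoxes(const std::vector<std::vector<int>>& img, int s) {
    int h = img.size(), w = img[0].size(), n = 0;
    for (int by = 0; by < h; by += s)
        for (int bx = 0; bx < w; bx += s) {
            bool hit = false;
            for (int y = by; y < by + s && y < h && !hit; ++y)
                for (int x = bx; x < bx + s && x < w; ++x)
                    if (img[y][x]) { hit = true; break; }
            if (hit) ++n;
        }
    return n;
}

// Least-squares slope of log(count) versus log(1/s) over a few box
// sizes: the box-counting estimate of the fractal dimension.
double boxCountingDimension(const std::vector<std::vector<int>>& img) {
    std::vector<int> sizes = {1, 2, 4, 8, 16};
    double sx = 0, sy = 0, sxx = 0, sxy = 0;
    int n = sizes.size();
    for (int s : sizes) {
        double x = std::log(1.0 / s);
        double y = std::log((double)countBoxes(img, s));
        sx += x; sy += y; sxx += x * x; sxy += x * y;
    }
    return (n * sxy - sx * sy) / (n * sxx - sx * sx);
}
```

A sanity check on the estimator: a fully painted region touches (side/s)² boxes at every scale, so its estimated dimension is 2, as expected for a plane-filling pattern.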

Notes of Empathic Painting: Interactive Stylization Through Observed Emotional State

Maria Shugrina, Margrit Betke, John Collomosse. Empathic Painting: Interactive Stylization Through Observed Emotional State

Emotion is mainly expressed through a combination of color and stroke. The paper analyzes color, stroke denotation style, and region turbulence, and transforms them into emotions. This can be found in Section 4.2, Rendering Parameters:

Express Emotion in Region Turbulence

  • Section 4.2.1 Region Turbulence

Express Emotion in Color

  • Section 4.2.2 Tonal Variation

Express Emotion in Stroke Denotation Style

  • Section 4.2.3 Stroke Denotation Style

Notes of The Art of Journey

Characters are segmented to make them possible to animate.

Scary faces come from a question: “what if the character you control is on the dark side?”

Principles of expressing emotions using landscape composition:

  • Openness, happiness: if the fog is pushed away from the player, the view becomes clear, which makes the player feel happier; on the other hand, if the fog is close to the player, the blurry view makes the player feel scared
  • Blue sky is used to reward the player at the end of the game. Green skies were used earlier as a random alternative to the blue sky, according to the artist
  • Dreamy and high quality light is used to make the player feel magical at the end of the game

Book link:

https://www.amazon.com/Art-Journey-Matthew-Nava/dp/0985902213

Atmospheric Scattering Highlights

In short, atmospheric scattering is the process by which light is scattered away while it travels from a light source to a point. The light arriving at the point is the product of the light at the light source and the transmittance between the light source and the point. Transmittance depends on the average atmospheric density (optical depth) between the light source and the point and on the scattering constants: it is the exponential of the negative product of the optical depth and the scattering constants.

Atmospheric scattering is used to simulate the sky color. Simulating the sky color applies atmospheric scattering to the light traveling from the sun to any view point in the atmosphere. Specifically, the sky color in any view direction from a view point in the atmosphere is the integral of the light in-scattered toward the view point at each sample on the view ray (which starts at the view point and is cast toward the view direction). The light at each sample is the sunlight in-scattered at the sample. Chaining the process with a ray-marching algorithm, the sky color in a specific view direction from any view point in the atmosphere can be approximated in the following steps:

For each sample on the view ray:

  1. exp(−scattering constants × optical depth between the sun and the sample) -> transmittance between the sun and the sample
  2. exp(−scattering constants × optical depth between the sample and the view point) -> transmittance between the sample and the view point
  3. sunlight × transmittance between the sun and the sample -> light arriving at the sample
  4. light arriving at the sample × phase function -> light scattered toward the view point
  5. atmospheric density at the sample × scattering constants -> scattering coefficient at the sample
  6. light scattered toward the view point × scattering coefficient at the sample -> light out-scattered by the sample
  7. light out-scattered by the sample × transmittance between the sample and the view point -> light arriving at the view point from the sample, i.e. the sky color contributed by the sample
  8. accumulate the sky color contributed by the sample into the final sky color

After all samples on the view ray are iterated by the ray-marching algorithm, the final sky color is the value accumulated in the last step.
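The ray-marching loop above can be sketched in C++. This is a toy scalar version along a vertical view ray, with a hypothetical exponential density profile and a constant phase-function value; a real implementation works per wavelength and ray-marches the optical-depth integrals too:

```cpp
#include <cassert>
#include <cmath>

// Toy exponential atmosphere: density falls off with height h in meters
// (the 8000 m scale height is an assumption for this sketch).
static double density(double h) { return std::exp(-h / 8000.0); }

// Optical depth of a segment, approximated by the average of the
// endpoint densities times the segment length.
static double opticalDepth(double h0, double h1, double dist) {
    return 0.5 * (density(h0) + density(h1)) * dist;
}

// Single-scattering ray march following the steps above. `beta` is the
// scattering constant, `phase` a constant phase-function value,
// `sunLight` the light at the sun, `sunDist` the sun's height.
double skyColor(double rayLen, int nSamples,
                double beta, double phase, double sunLight,
                double sunDist) {
    double color = 0.0;
    double step = rayLen / nSamples;
    for (int i = 0; i < nSamples; ++i) {
        double h = (i + 0.5) * step;  // sample height on the view ray
        // transmittance sun -> sample and sample -> view point
        double tSun  = std::exp(-beta * opticalDepth(h, sunDist, sunDist - h));
        double tView = std::exp(-beta * opticalDepth(0.0, h, h));
        double atSample   = sunLight * tSun;           // light arriving at sample
        double scattered  = atSample * phase;          // toward the view point
        double coeff      = density(h) * beta;         // scattering coefficient
        double outScatter = scattered * coeff * step;  // sample's contribution
        color += outScatter * tView;                   // attenuate to view point
    }
    return color;
}
```

Since every transmittance factor is at most 1, the accumulated sky color is always a positive value strictly below the original sunlight, matching the intuition that scattering only removes energy along each path.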

Simulate Hand-Drawn Strokes

stroke

Figure 1: a C++ program simulating hand-drawn strokes.

sky

Figure 2: reference of hand-drawn strokes. From movie “The Little Prince”

So, it’s possible to “draw” crappy strokes by programming! Figure 1 is generated by a C++ program I wrote to simulate the strokes in Figure 2.

The idea is simple: define the size of a stroke by a width and a height, then randomly generate the starting point and direction of the stroke within that size. Finally, draw the stroke into an image by rasterizing the line. While drawing the stroke, jitter each pixel to be rasterized, and draw extra pixels stretching to the sides of the jittered pixel with a random width. The intensities of these pixels are also randomized.

Figure 1 is generated by drawing 128 strokes of size 400×50 in an image of size 800×600.
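A minimal sketch of one such stroke (a horizontal stroke only; the jitter ranges and intensities are arbitrary choices, not the exact parameters of my program):

```cpp
#include <cassert>
#include <cstdlib>
#include <vector>

// Draws one jittered "hand-drawn" stroke into a grayscale image.
// For each step along the line, the rasterized pixel is jittered
// vertically, thickened by a random half-width, and given a
// randomized intensity.
void drawStroke(std::vector<std::vector<unsigned char>>& img,
                int x0, int y0, int length) {
    int h = img.size(), w = img[0].size();
    for (int i = 0; i < length; ++i) {
        int x = x0 + i;                      // march along a horizontal line
        int y = y0 + std::rand() % 5 - 2;    // jitter the rasterized pixel
        int halfWidth = 1 + std::rand() % 3; // random stroke thickness
        for (int dy = -halfWidth; dy <= halfWidth; ++dy) {
            int yy = y + dy;
            if (x >= 0 && x < w && yy >= 0 && yy < h)
                img[yy][x] = 128 + std::rand() % 128; // randomized intensity
        }
    }
}
```

Calling this 128 times with random start points and directions reproduces the spirit of Figure 1.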

Global Illumination in Brain

June 10 > Light scattering effects seem closely related to impression. Reading references:

  • Learning from Failure: a Survey of Promising, Unconventional and Mostly Abandoned Renderers for ‘Dreams PS4’, a Geometrically Dense, Painterly UGC Game. http://advances.realtimerendering.com/s2015/mmalex_siggraph2015_hires_final.pdf

 

May 30 > That may be related to impressionism – understanding the whole image instead of separating the image into pixels.

May 19 > While rendering global illumination takes a long time, why don’t we think about the essential effect of global illumination in our brain and only render what matters?