### Posts from the “Computer Graphics” Category

The explainer video is https://www.youtube.com/watch?v=TZJLtujW6FY, and the paper is https://disney-animation.s3.amazonaws.com/uploads/production/publication_asset/122/asset/pman.pdf

The idea is for the 3D model to carry its 3D Silhouette Ribbons, so that the artist’s 2D drawing can be mapped to them. When the model moves, the 3D Silhouette Ribbons follow and carry the corresponding 2D drawing with them.

The process seems to be simplified through 2D. First, the 3D Silhouette Ribbons and a motion field in 2D screen space are precomputed. Then the artist draws lines, and the computer maps them to the 3D Silhouette Ribbons. Motion is directed by the 2D motion field in screen space: the 3D Silhouette Ribbons, projected to the screen, are moved by the 2D motion field. During a move, the 2D drawing of a 3D Silhouette Ribbon is moved by <> (don’t understand :_( ). The artist is responsible for drawing new lines when hidden 3D Silhouette Ribbons appear on screen.
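The screen-space part of that pipeline, as I understand it, can be sketched as follows. This assumes the precomputed 2D motion field is stored as a per-pixel velocity grid; all names here are my own, not the paper’s:

```python
import numpy as np

def advect_stroke(points, motion_field, dt=1.0):
    """Move 2D stroke points by a precomputed screen-space motion field.

    points:       (N, 2) array of stroke positions in pixel coordinates.
    motion_field: (H, W, 2) array giving a 2D velocity per pixel
                  (an assumption about how the field is stored).
    """
    h, w, _ = motion_field.shape
    # Sample the field with nearest-neighbor lookup; a real system
    # would likely interpolate bilinearly.
    ix = np.clip(points[:, 0].astype(int), 0, w - 1)
    iy = np.clip(points[:, 1].astype(int), 0, h - 1)
    return points + dt * motion_field[iy, ix]

# Toy example: a constant field moving everything right by 2 pixels.
field = np.zeros((64, 64, 2))
field[..., 0] = 2.0
stroke = np.array([[10.0, 20.0], [11.0, 21.0]])
moved = advect_stroke(stroke, field)
```

The hand-drawn lines ride along with the field each frame, which is how I read “the 2D drawing … is moved” in the step above.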

Next:

Figure out this thing:

Thinking about GI again. Found a paper online for understanding the core of global illumination, light scattering:

Lars Øgendal. Light Scattering Demystified: Theory and Practice. https://www.nbi.dk/~ogendal/personal/lho/lightscattering_theory_and_practice.pdf

• quantitative light scattering computation: uses complex numbers, simply and elegantly; see page 28, Section 2.6, “Scattering from one small, composite object”
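The complex-number idea in that section can be illustrated with a minimal sketch: each small sub-scatterer contributes a complex amplitude with phase q · r, and the composite object’s intensity is the squared magnitude of the coherent sum. This is a generic sketch of the idea, not the book’s exact derivation:

```python
import numpy as np

def composite_intensity(positions, q):
    """Scattered intensity from small sub-scatterers at `positions`.

    Each scatterer contributes a unit complex amplitude with phase
    q . r_j; summing the complex amplitudes gives interference
    automatically. q is the scattering vector, positions is (N, 3).
    """
    phases = positions @ q                 # phase q . r_j per scatterer
    amplitude = np.exp(1j * phases).sum()  # coherent sum of amplitudes
    return np.abs(amplitude) ** 2          # intensity = |sum|^2

# Two scatterers half a wavelength apart interfere destructively.
q = np.array([2 * np.pi, 0.0, 0.0])
pts = np.array([[0.0, 0.0, 0.0], [0.5, 0.0, 0.0]])
I = composite_intensity(pts, q)  # near zero
```

The elegance is that phase bookkeeping disappears: adding complex exponentials and taking one magnitude squared replaces explicit interference terms.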

Based on Ian Failes’s article on the technique behind ‘Klaus’ [1], the tool SPA Studios built for “Klaus” gives artists a lot of control to surpass the limitations of 2D hand-drawn animation. That control is designed from the artists’ perspective, and because of that, it cultivates the artistic creation process.

Lighting makes the 2D drawing look 3D. This is because the lighting is physically based, similar to how Pixar lights its films. The artist specifies areas for different types of light in the 2D drawing, and the tool then automatically applies 3D lighting to the specified areas.

Following are some thoughts on the article:

The quest to do something different in 2D

? The article lacks technical detail on the limitations of 2D hand-drawn animation.

It looks like the limitation is intuitive artistic control.

A production solution

The solution seems to be a tracking system for hand-drawn lines. I guess the tracking system helps convert the snapshot of the film (the hand-drawn lines) into the final film. It keeps artistic details consistent, both for finalizing details within a static frame and for moving details between frames.

The process before lighting

• Artist: storyboard with Storyboard Pro from Toon Boom
• Artist: remove lines based on artistic sense

The lighting breakthrough

Lighting (automatic, or theoretically could be left to the artist): break down the lighting of a scene in a convincing way, the same way concept artists do every day.

Texturing: level of grain

Comp: texturing to corresponding lighting

References

[1] Ian Failes. Here’s What Made the 2D Animation in ‘Klaus’ Look ‘3D’. Before & Afters. https://beforesandafters.com/2019/11/14/heres-what-made-the-2d-animation-in-klaus-look-3d/

Global illumination is about gathering the photon distribution. The photon distribution is largely a result of geometry and participating media, which produce different amounts of photons in volumes and on surfaces.

Is it possible to transform global illumination into a different space in which the photon distribution is easy to compute, and then transform the result back to the 3D domain?

Is there a math model to capture the characteristics of global illumination, which is efficient to compute? 🙂

Or, what are the characteristics of global illumination? They are the result of light transport through participating media and across surfaces.

TODO:

• Understand light transport, from a different perspective than space-time – rather, from the perspective of the universe!
• Simplify light transport to capture essential characteristics of light transport
• Understand geometry
• Simplify geometry representation: is it possible to represent it in a different domain?
• Understand participating media
• Simplify participating media: is it possible to represent it in a different domain?

Ideas:

• Grind down geometry representation to see how it looks in a different space

Light transport is understood through scientific efforts to capture its statistics. For example, that light travels in straight lines is a conclusion drawn from the statistics of photon transport. Is it possible to abstract the light transport mechanism itself?

Steps to change the sunrise rendering above from one viewport to three viewports, to compare the three elements of the sunrise: Rayleigh only, Mie only, and Rayleigh plus Mie:

• In the fragment shaders of the skydome and the area light, set three outputs
• On the CPU, configure three textures for a framebuffer, each of which is rendered from a color attachment [1]. The three color attachments take the values of the three outputs from the fragment shaders [2]
• Using the framebuffer, render the two fragment shaders of the skydome and the area light into the three textures
• Render the three textures to three viewports on the screen [2]
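The three-output decomposition can be illustrated offline with NumPy: compute the Rayleigh and Mie terms separately per pixel and keep three buffers, mirroring the three color attachments. This is a conceptual sketch, not GL code; I assume the standard Rayleigh phase function and a Henyey-Greenstein approximation for Mie, which may differ from the shaders in question:

```python
import numpy as np

def rayleigh_phase(cos_t):
    # Standard Rayleigh phase function.
    return 3.0 / (16.0 * np.pi) * (1.0 + cos_t ** 2)

def hg_phase(cos_t, g=0.76):
    # Henyey-Greenstein phase function, often used to approximate Mie
    # scattering; g = 0.76 is a common aerosol choice (an assumption).
    return (1.0 - g * g) / (
        4.0 * np.pi * (1.0 + g * g - 2.0 * g * cos_t) ** 1.5)

# One row of view directions across the skydome; cos_t is the cosine
# of the angle between the view ray and the sun direction.
cos_t = np.linspace(-1.0, 1.0, 256)

# Three buffers, like the three color attachments of one framebuffer.
buf_rayleigh = rayleigh_phase(cos_t)
buf_mie      = hg_phase(cos_t)
buf_combined = buf_rayleigh + buf_mie
```

In the actual render, each buffer would land in its own color attachment via a separate fragment-shader output, then be blitted to its own viewport.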

### References

[2] Render To Texture. http://www.opengl-tutorial.org/intermediate-tutorials/tutorial-14-render-to-texture

Path tracing is too expensive; radiosity seems like the right way.

Questions to step forward:

• Is there a way to transform the geometry representation into a different form, so that light hits the geometry in a more computationally efficient way to get the energy on that surface?
• Transform the surface to a different space
• Get the energy to that surface
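For reference, the classical radiosity formulation behind this note solves B = E + ρ F B, where F is the form-factor matrix. A minimal iterative sketch with a made-up two-patch scene (patch values and form factors are invented for illustration):

```python
import numpy as np

def solve_radiosity(E, rho, F, iters=100):
    """Iteratively solve B = E + rho * (F @ B) (Jacobi-style).

    E:   (N,) emitted radiosity per patch
    rho: (N,) diffuse reflectance per patch
    F:   (N, N) form-factor matrix; F[i, j] is the fraction of energy
         leaving patch i that arrives at patch j
    """
    B = E.copy()
    for _ in range(iters):
        B = E + rho * (F @ B)
    return B

# Two facing patches: one emitter, one purely reflective receiver.
E   = np.array([1.0, 0.0])
rho = np.array([0.0, 0.5])
F   = np.array([[0.0, 0.2],
                [0.2, 0.0]])
B = solve_radiosity(E, rho, F)  # converges to [1.0, 0.1]
```

The expensive part in practice is computing F, which depends on visibility between every pair of patches; that is where a cheaper transformed geometry representation would have to pay off.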

Next:

• Do a thorough evaluation of global illumination
• Understand geometry representation
• Check “Geometric Algebra for Computer Science” to find a transformed geometric representation in a space other than 3D space, such that the energy (light) received by the geometry stays invariant between the two spaces, yet is cheaper to compute in the transformed space. If such a representation can be found, computer graphics would not need an expensive path tracer for global illumination

Ken Perlin. An Image Synthesizer.

The paper proposes the original Perlin Noise algorithm, plus a few variations of the algorithm that simulate a variety of kinds of randomness found in nature.

The technique produces the kind of organic randomness that appears in nature. The algorithm generates a random look whose frequency content is limited to a narrow band, which makes its statistics uniform. This statistical character remains unchanged across varying domains, because the algorithm is statistically invariant under rotation and translation.

The most beautiful part of the paper is the description of Perlin Noise: a regular lattice of points, each associated with a stochastic gradient; the noise value at an arbitrary point is interpolated from the contributions of the gradients at the surrounding lattice points. NOTE: the stochastic gradients assigned across the lattice determine the generated values, and the distribution of those gradients determines the uniformity of the generated values.
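A minimal 1D gradient-noise sketch of that description: random gradients sit on integer lattice points, and values between lattice points are smoothly interpolated from the gradients’ contributions. This is a 1D simplification of the paper’s 3D algorithm, not Perlin’s exact implementation:

```python
import numpy as np

rng = np.random.default_rng(0)
# One pseudo-random gradient per integer lattice point.
gradients = rng.uniform(-1.0, 1.0, size=256)

def fade(t):
    # Perlin's original smooth interpolant, 3t^2 - 2t^3.
    return t * t * (3.0 - 2.0 * t)

def noise1d(x):
    i = int(np.floor(x)) % 255
    t = x - np.floor(x)
    g0 = gradients[i] * t              # contribution of left lattice point
    g1 = gradients[i + 1] * (t - 1.0)  # contribution of right lattice point
    return g0 + fade(t) * (g1 - g0)    # smooth blend between contributions
```

Note that the value is exactly zero at every lattice point; all the visible variation comes from the gradients, which is what gives the noise its band-limited character.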

Sangwon Lee, Sven C. Olsen, Bruce Gooch. Simulating and Analyzing Jackson Pollock’s Paintings

The paper devises a system that lets the user create Pollock-like paintings with a painting-material simulator, so the user can paint without real materials, together with real-time feedback on the fractal properties of the painting in progress. The feedback makes the user aware of how similar the fractal dimension of the ongoing painting is to those of Pollock’s paintings; by keeping that similarity high, the user can produce a Pollock-like painting.

The paper points out that fractal dimension alone cannot distinguish fractal from non-fractal images, and proposes a new metric, uniformity, to alleviate the limitation. Uniformity indicates the similarity between the fractal dimension of a subregion and that of the entire painting. But in what way does uniformity distinguish fractal from non-fractal images better than fractal dimension does?
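Fractal dimension of a drip painting is typically estimated by box counting, and uniformity can then be sketched as comparing a subregion’s dimension to the whole image’s. This is my own minimal reading of the metric, not the paper’s exact definition:

```python
import numpy as np

def box_count_dimension(img, sizes=(2, 4, 8, 16, 32)):
    """Estimate the fractal dimension of a binary image by box counting."""
    counts = []
    for s in sizes:
        h, w = img.shape
        # Count boxes of side s containing at least one "ink" pixel.
        boxes = img[: h - h % s, : w - w % s].reshape(h // s, s, w // s, s)
        counts.append(np.count_nonzero(boxes.any(axis=(1, 3))))
    # Slope of log(count) vs log(1/size) estimates the dimension.
    coeffs = np.polyfit(np.log(1.0 / np.array(sizes)), np.log(counts), 1)
    return coeffs[0]

def uniformity(img, d_whole):
    """Rough uniformity: gap between a subregion's dimension and the whole's
    (my simplification; the paper compares many subregions)."""
    d_sub = box_count_dimension(img[:32, :32], sizes=(2, 4, 8, 16))
    return abs(d_sub - d_whole)

# Sanity check: a fully filled square has dimension 2, and any
# subregion matches, so uniformity is maximal (gap of 0).
img = np.ones((64, 64), dtype=bool)
d = box_count_dimension(img)   # close to 2.0
u = uniformity(img, d)         # close to 0.0
```

The intuition for the metric: a true fractal looks statistically the same at every location, so subregion dimensions cluster tightly around the global one; a non-fractal image with the same global dimension need not have that property.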

Maria Shugrina, Margrit Betke, John Collomosse. Empathic Painting: Interactive Stylization through Observed Emotional State

Emotion is mainly expressed through a combination of color and stroke. The paper analyzes color, stroke denotation style, and region turbulence, and maps them to emotions. This can be found in Section 4.2, Rendering Parameters:

Express Emotion in Region Turbulence

• Section 4.2.1 Region Turbulence

Express Emotion in Color

• Section 4.2.2 Tonal Variation

Express Emotion in Stroke Denotation Style

• Section 4.2.3 Stroke Denotation Style

Characters are segmented to make them possible to animate.

Scary faces come from the question, “What if the character you control is on the dark side?”

Principles of expressing emotions using landscape composition:

• Openness, happiness: if the fog is pushed away from the player, the view becomes clear, which makes the player feel happier; on the other hand, if the fog is close to the player, the blurry view makes the player feel scared
• Blue sky is used to reward the player at the end of the game. Green skies were used earlier as a random alternative to the blue sky, according to the artist
• Dreamy, high-quality light is used to make the player feel a sense of magic at the end of the game