An Appealing Business Model for Animation Films

The typical business model for making an animation film is hit-driven: it starts with investment before production and ends with returns from cinema release [1]. One problem with this model is the uncertainty of returns against a high investment, driven mainly by the high budget and long production process, which involves a great deal of labor in design, texturing, modeling, etc. The uncertainty comes from two sources: the uncertain quality of the story, due to insufficient time and budget for story development, and the high cost in time and effort of an uncontrollable production process.

If we could replace the manual labor of the production process with technology, the production period could shrink from 3 or 4 years to less than 2 years, and the budget could be cut in half. With a fast production turnaround, the studio gains control over iteratively modifying the film during production as the story changes, and so has more flexibility and spends less time converting story changes into production. With technology handling the labor-intensive work, quality control and fast iteration give the studio more certainty of succeeding with a movie. Investors would also be more confident investing in a movie with a small budget.

Now the question is: what exactly is the labor-intensive work at animation production studios? If we could identify that work, then devise solutions for it, shortening the production period would be within reach.

Is it possible to attract investment for an automatic production pipeline if we build an MVP of the pipeline and test it on animated shorts with animation studios, to prove the pipeline's efficiency (short production time, high quality, and low budget compared with previous movies)?

To bring this approach to an animation studio, would it help if I worked with the studio to design a specific automated production pipeline for each movie?


[1] Redesign Business Model of Animation Films

Perception and 3D Rendering

How would it look if we blended human perception into 3D rendering? What if we configured the parameters of physically based rendering based on human perception? A random thought from my evening walk in moist, chilly weather among autumn foliage.

无邪 Innocent

Your smile is like a fairy tale

Your innocent smiling eyes

feel like a fairy tale

as difficult as it could be, to chase a dream

Like a crystal morning dew

It’s your clear soul

All these, feel so beautiful

The chasing feels like a poem

When you set your feet on the desert of a dream

That feeling warms your heart like home

All along the road

There is a clear spring

It is your eyes





Lasting Dream

Hmm... it was a dream from a long time ago. It never disappeared from my heart, however much time has passed since I had it. It's so clear, always. So I started making that dream real through an animated short, because I love watching animations. I have no professional animation skills, but I'm doing it anyway.



Our imagination is built on our inner emotions and our perception of the real world. Whatever we see in the physical world is distorted in our minds by our mood, emotions, and so on.

Given this, any human creative work (animation films, photography, etc.) is the result of blending our inner emotions with the real world.

It would be interesting to visualize our perception. This could be done by applying emotion and mood to physically based 3D rendering.

This technology could be applied to …?

Short Anime

Just like literature, poems, and quotes, a short anime can have a large impact. But it has to be the perfect combination of story, art, and sound to intrigue and inspire the audience's whole mind. I'm writing this to correct the bias against anime in a lot of people's minds.

Atmospheric Scattering Highlights

In short, atmospheric scattering is the process of light being scattered away as it travels from a light source to a point. The light arriving at the point is the light at the source multiplied by the transmittance between the source and the point. Transmittance depends on the average atmospheric density (optical depth) between the source and the point and on the scattering constants: it is the exponential of the negative of the optical depth multiplied by the scattering constants.
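The transmittance relation above can be sketched in a few lines. This is a minimal single-wavelength illustration of the Beer-Lambert attenuation just described; the function name and the numeric values are hypothetical, not from a real atmosphere model.

```python
import math

def transmittance(optical_depth, scattering_constant):
    # Beer-Lambert law: light is attenuated by the exponential of the
    # negative of (optical depth * scattering constant).
    return math.exp(-optical_depth * scattering_constant)

# Light arriving at a point = light at the source * transmittance.
light_at_source = 1.0
light_at_point = light_at_source * transmittance(optical_depth=2.0,
                                                 scattering_constant=0.5)
```

With zero optical depth the transmittance is 1 (no attenuation), and it decays toward 0 as the path through the atmosphere gets denser or longer.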

Atmospheric scattering is used to simulate sky color. Simulating sky color integrates atmospheric scattering over the path light travels from the sun to a view point in the atmosphere. Specifically, the sky color in any view direction from a view point is the integral, over each sample on the view ray (which starts at the view point and is cast along the view direction), of the light in-scattered toward the view point. The light at each sample is the light in-scattered from the sun at that sample. Chaining this process with a ray-marching algorithm, the sky color for a specific view direction from any view point in the atmosphere can be approximated with the following steps:

  1. For each sample on the view ray:
  2. exp(-scattering constants * optical depth between sun and sample) -> transmittance between sun and sample
  3. exp(-scattering constants * optical depth between sample and view point) -> transmittance between sample and view point
  4. sun light * transmittance between sun and sample -> light arriving at the sample
  5. light arriving at the sample * phase function -> light redirected along the view ray
  6. atmospheric density at the sample * scattering constants -> scattering coefficient at the sample
  7. light redirected along the view ray * scattering coefficient at the sample -> light scattered toward the viewer at the sample
  8. light scattered toward the viewer * transmittance between sample and view point -> light arriving at the view point from the sample, i.e. the sample's sky-color contribution
  9. accumulate the sample's contribution into the final sky color

After the ray-marching algorithm has iterated over all samples along the view direction, the final sky color is the accumulated result of step 9.
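The nine steps above can be sketched as a single-wavelength ray marcher. This is a simplified illustration, not a production sky model: the density profile, the scattering constant, the sun intensity, and the ray length are all assumed placeholder values, and real implementations work per RGB wavelength with separate Rayleigh and Mie terms.

```python
import math

def sky_color(view_point, view_dir, sun_dir, num_samples=16):
    """Approximate single-scattering sky color along one view ray.

    view_point: (x, y, z) position of the eye in the atmosphere.
    view_dir, sun_dir: unit direction vectors.
    """
    SCATTERING = 0.05   # assumed scattering constant (per unit length)
    SUN_LIGHT = 20.0    # assumed sun intensity entering the atmosphere
    RAY_LENGTH = 100.0  # assumed distance marched along each ray
    step = RAY_LENGTH / num_samples

    def density(p):
        # Hypothetical exponential falloff of density with height p[1].
        return math.exp(-max(p[1], 0.0) / 8.0)

    def optical_depth(a, b, steps=8):
        # Average density along the segment a->b, times its length.
        total = 0.0
        for i in range(steps):
            t = (i + 0.5) / steps
            p = tuple(a[k] + (b[k] - a[k]) * t for k in range(3))
            total += density(p)
        return total / steps * math.dist(a, b)

    def phase(cos_theta):
        # Rayleigh phase function.
        return 3.0 / (16.0 * math.pi) * (1.0 + cos_theta * cos_theta)

    cos_theta = sum(v * s for v, s in zip(view_dir, sun_dir))
    color = 0.0
    for i in range(num_samples):
        t = (i + 0.5) * step
        sample = tuple(view_point[k] + view_dir[k] * t for k in range(3))
        # Step 2: transmittance from the sun to the sample.
        sun_entry = tuple(sample[k] + sun_dir[k] * RAY_LENGTH
                          for k in range(3))
        t_sun = math.exp(-SCATTERING * optical_depth(sun_entry, sample))
        # Step 3: transmittance from the sample back to the view point.
        t_view = math.exp(-SCATTERING * optical_depth(sample, view_point))
        # Steps 4-7: light scattered toward the viewer at this sample.
        in_scattered = (SUN_LIGHT * t_sun * phase(cos_theta)
                        * density(sample) * SCATTERING)
        # Steps 8-9: attenuate back to the eye and accumulate.
        color += in_scattered * t_view * step
    return color
```

The `* step` in the accumulation turns the per-sample sum into a Riemann approximation of the integral along the view ray; increasing `num_samples` refines it at linear cost.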