As the cost of computing power drops, computer animation is coming within the grasp of more and more people. Although it is becoming conceivable for an individual to carry out all phases of movie production, few people are interested in, or talented at, every one of those phases. Some people have a story they want to tell, but can't choose their own clothes very well, let alone choose effective costumes for their cast.
What's needed is a framework in which creative contributions can be swapped in and out independently. As with version control systems for computer software source code, it should be possible to have multiple variations (or branches) available simultaneously for comparison.
These are some aspects of movie production that should be allowed to vary independently:
overall story
the breaking up of the story into scenes
scene selection and order
overall mood and architectural style
how the characters move around within scenes
character design
costumes
lighting
It should be possible to package aspects of a production for re-use in other productions, e.g. there should be a large library of human, animal and alien models to draw on for characters.

Think about models as if they were typographical characters. Just as text can be emboldened or italicised, character models should have parameters for plumpness, health, hair pattern and colour, skin tone, etc. Some parameters, such as "health", would control multiple model parameters.

It should also be possible to package common modifications in such a way that they can be manipulated independently and applied to a greater or lesser degree to characters. For instance, let's say some of the characters on a show come down with a disease; they get purple spots in a certain pattern, and appear increasingly aged as the disease progresses. Abstracting the disease as a pattern that can be moved from character to character and even to other movies is powerful.
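One way the parameterised-model idea above might be sketched in code: a character with low-level appearance parameters, a meta-parameter ("health") that drives several of them at once, and a packaged "disease" modification that can be applied to any character to a greater or lesser degree. All names and formulas here are illustrative assumptions, not part of any real animation system.

```python
from dataclasses import dataclass

@dataclass
class Character:
    """A character model with low-level appearance parameters in [0, 1]."""
    name: str
    plumpness: float = 0.5
    skin_tone: float = 0.5
    posture: float = 1.0       # 1.0 = upright, 0.0 = stooped
    spot_density: float = 0.0  # purple spots, for the disease example

    def set_health(self, health: float) -> None:
        """A meta-parameter: one 'health' value drives several low-level ones."""
        self.plumpness = 0.3 + 0.4 * health
        self.posture = health
        self.skin_tone = 0.4 + 0.2 * health

def apply_disease(character: Character, progression: float) -> None:
    """A packaged modification, applied to a greater or lesser degree.

    `progression` in [0, 1] scales the effect, so the same abstraction can be
    moved from character to character, or even to another production.
    """
    character.spot_density = progression             # purple spots spread
    character.posture *= (1.0 - 0.5 * progression)   # appears increasingly aged

alice = Character("Alice")
alice.set_health(0.9)
apply_disease(alice, 0.5)
```

The point of the sketch is the separation: the disease knows nothing about Alice in particular, and Alice's model knows nothing about the disease, so either can be swapped out independently.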
In the same way, complex series of actions should also be abstracted. Some examples of actions are:
character applies geisha makeup (thus self-modifying)
character moves sheets aside and gets out of bed
character eats an oyster
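An abstracted action like those above might be represented as a named, reusable sequence of steps that transform scene state, so the same action can be performed by different characters in different scenes. This is a minimal sketch under that assumption; the `Action` class and the dictionary-based state are hypothetical, not a real animation API.

```python
from typing import Callable, List

class Action:
    """A named, reusable series of steps; each step mutates scene state."""
    def __init__(self, name: str, steps: List[Callable[[dict], None]]):
        self.name = name
        self.steps = steps

    def perform(self, state: dict) -> None:
        for step in self.steps:
            step(state)

# "character moves sheets aside and gets out of bed" as a packaged action
get_out_of_bed = Action("get out of bed", [
    lambda s: s.update(sheets="aside"),
    lambda s: s.update(position="standing"),
])

state = {"sheets": "covering", "position": "lying"}
get_out_of_bed.perform(state)
```

Because the action is just data plus steps, it can be stored in a library and applied to any character whose state exposes the expected fields.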
One of the central goals of computational semiotics is to provide an author with the tools to create messages that have desired effects in the minds of the messages' viewers.
Over the last few years it has seemed that the goal of computer graphics, as seen in games and movies, has been to create increasingly realistic portrayals of alternative realities. I would like to shift the focus to creating increasingly vivid emotions and ideas in the viewer's mind.
Currently, computers create moving pictures of solid objects, e.g. in video games, by conceiving of each object as a group of simple solids. Each pixel on the screen is generated by following a ray from the viewer's eye through that pixel to the first solid it hits, then determining the appropriate colour for the pixel from the shape's orientation and its relationship to light sources.
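The per-pixel process described above can be sketched concretely: follow a ray to the nearest solid (here, a sphere) and shade the pixel from the surface's orientation toward a light. This is a bare-bones illustration of the idea, not production rendering code; the function names are made up for the example.

```python
import math

def ray_sphere_hit(origin, direction, center, radius):
    """Distance t along a unit-length ray to the first sphere hit, or None."""
    ox, oy, oz = (origin[i] - center[i] for i in range(3))
    b = 2 * (direction[0] * ox + direction[1] * oy + direction[2] * oz)
    c = ox * ox + oy * oy + oz * oz - radius * radius
    disc = b * b - 4 * c  # quadratic discriminant (a = 1 for a unit ray)
    if disc < 0:
        return None       # the ray misses the solid entirely
    t = (-b - math.sqrt(disc)) / 2
    return t if t > 0 else None

def shade(normal, light_dir):
    """Pixel brightness from the surface's orientation toward the light."""
    return max(0.0, sum(n * l for n, l in zip(normal, light_dir)))

# One pixel: a ray from the eye straight ahead hits a sphere 5 units away.
t = ray_sphere_hit((0, 0, 0), (0, 0, -1), (0, 0, -5), 1.0)
brightness = shade((0, 0, 1), (0, 0, 1))
```

Repeating this for every pixel on the screen, against every solid in the scene, is exactly the passive per-pixel approach the next paragraph contrasts with an animator's line-based one.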
This approach generates progressively better results as processors become more powerful and each object can be modelled with more polygons. However, at the time of this writing, a skilled animator can still create images that are in some ways more vivid and attractive than ray-traced images. What if the computer used the same process as a human animator to produce images? Shapes would be thought of in terms of their boundary lines, which would be drawn actively in the scene, instead of passively considering each pixel. (It might be desirable to generate some pixel elements one way, some the other.)
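One common way to find a shape's boundary lines, which this paragraph's animator-like approach could start from, is silhouette-edge detection: an edge is a boundary stroke when exactly one of its two adjacent faces looks toward the viewer. The sketch below assumes a simple mesh representation (edges mapped to adjacent face indices); the names are hypothetical.

```python
def facing_viewer(face_normal, view_dir):
    """A face is front-facing when its normal points back toward the viewer."""
    return sum(n * v for n, v in zip(face_normal, view_dir)) < 0

def silhouette_edges(edges, face_normals, view_dir):
    """Pick out the edges an animator would draw as boundary lines.

    `edges` maps an edge to the pair of indices of its adjacent faces.
    An edge is on the silhouette when one face is front-facing and the
    other is back-facing.
    """
    strokes = []
    for edge, (f1, f2) in edges.items():
        front1 = facing_viewer(face_normals[f1], view_dir)
        front2 = facing_viewer(face_normals[f2], view_dir)
        if front1 != front2:
            strokes.append(edge)
    return strokes

# A tiny two-face example: the shared edge lies on the silhouette because
# one face looks toward the camera and the other looks away.
edges = {("a", "b"): (0, 1)}
normals = [(0, 0, -1), (0, 0, 1)]  # back-facing, front-facing
strokes = silhouette_edges(edges, normals, view_dir=(0, 0, -1))
```

The resulting strokes could then be drawn actively as lines, while other pixel elements are still generated by the per-pixel method, matching the mixed approach the parenthetical suggests.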