Making of Orion Tear

From the Blender3D wiki.



Orion Tear

by Rogério Perdiz



For those of you who follow the BlenderArt Magazine, I'm sure that at least one of the characters in the picture isn't a complete stranger to you. That's Orion, who was presented for the first time in issue #4, Character Design. Since that issue, almost up to the present day, he has been working as one of the main actors of my 10-minute short animation, Orion Tear!

Orion Tear: Orion Tear is my first short animation movie. It was made with only Blender, Gimp, Inkscape, one workstation and me, over 19 months and 7 days of non-stop, dawn-till-dusk work. Although all the graphical part of the movie is done, the sound is still being developed by Telmo Cavaleiro, a local musician and sound EFX enthusiast, in his free time. Its release date is still undefined, but everything points to late Winter/early Spring 2008.

Origin: Ever since my eyes glazed over watching the full-motion videos of a video game called Final Fantasy 8, from the former company SquareSoft, I knew that was the type of thing I wanted to do with my life. Considering that, at the time, I found the task of drawing a square with a ruler extremely hard... I also knew that I was in trouble!

Two years after that, I found an article in a Portuguese videogames magazine describing the wonders of a revolutionary 3D application especially created for game development, called Blender 2.0. That precious 1 MB of pure creative power came on the magazine's CD, so I said to myself: «Well, I don't care about making games, but, what the hell; this is my chance to discover if I can do movies!»

And guess what? From that day on, I continually improved... until the day some guys made an open movie called Elephants Dream (ED) with the very same software I'd learned to love so much.

At the time I still had a lot to learn about animation movie making, but due to the open source spirit of ED, there was now a lot of learning material available... and also, as Aristotle said, «The things we have to learn before we can do them, we learn by doing them.» So on 10-04-2006 the production of Orion Tear began. Primary project purpose: learn.

The article:

In this article I'll try to describe the steps that I took to accomplish this Dantesque feat, my difficulties, my thoughts, my solutions and everything else I can fit in, so stay tuned... if you can!


Sketches/Conceptual Drawings:

Although it doesn't seem that way, this was probably one of the most difficult tasks of the project, or at least one of the most time-consuming. Everything needs to be created from nothing.

For Orion (img. 1-1) I had one basic idea. I wanted him to have the same body proportions as Zidane from Final Fantasy 9 and the look of my favorite character designer's (Tetsuya Nomura) characters... mixed with my own drawing style.

For Dark, I didn't know what to do, but I knew what I didn't want to do... and that would be the classical Angel of Death look. Luckily, at the time of creation I saw one of Sergio Leone's western movies, and there you go: a dagger in a western pistol holster belt. I also had, and still have, a cell phone with two blinking, glowing red lights on the side, and there you have the eyes. The belt around the neck was a mistake. I accidentally moved the belt's layer up... and I thought to myself: «Hey! It kind of looks cool.» The rest came from my natural character design style; I like gloves and asymmetrical clothing.

The sceneries simply evolved. I started to model them right away.

I wanted something that had kind of a natural feel with a subtle fantasy look. Basically, I started from the look of a real beach near my house, and then mixed it with some valley type of place. But something was missing; it simply wasn't catchy enough. So one weekend I grabbed my bicycle and went for a ride, without a fixed destination, hoping to find something to improve it. I still don't know how, but I lost myself, and while trying to find my way back I found instead a very old windmill. At first I almost completely ignored it and kept on my way, still looking for something for my scenery, but after a while I started to think:

«Darn!!! I haven't seen one of those mills working for decades... funny to find one today... Hum! A mill... Mill!?»

I immediately squeezed the brake handles into a sudden 180° braking stop, returned as fast as my bike allowed, took a bunch of pictures, and after finding my way home I modeled it, and there you have it.

After that I made some very rough sketches of the final scenery layout, with mills as the theme (img. 1-2), and then modeled a much too detailed version that turned out to be impossible to use... more on that in the next chapters.


Note: Throughout this article, by scenery I will be referring just to the main Mills scenery (img. 2-3). The movie has 3 sceneries: one is in the mountains (img. 6-1) and another one is a surprise!

Modeling Characters:

My modeling skills at the time were already pretty good, so, because I had defined all the details during the conceptual phase, I just loaded the images into the Blender viewports and started to model.

I started Orion with his head, eyes and hair; then the body, and finally the clothes. By the way, the Orion model that appears in issue #4 had to be 80% remodeled due to problems in rigging; more on that in the rigging chapter.

For the body and shoes I used the point-by-point technique, meaning I positioned all the points in space, based on the hand drawings, and only then joined them into faces.

The clothes were initially made from several NURBS circles that I joined to form tubes (sadly I lost the screenshots of that, and this is kind of difficult to explain). For example: basically, a shirt is only a subdivided tube that starts at one hand and ends at the other.

Then, converting the NURBS tubes to meshes, I could delete the faces that didn't matter, for example the ones where the body and neck pass through. After some adjusting and loop cutting, what you get is a shirt with very good mesh topology that deforms pretty well.



Modeling the Scenery:

The scenery was my first problem. The first version was much too detailed, with lots of unnecessary polygons. I really didn't know the limitations of my workstation, so I made tons of stuff until eventually everything moved excruciatingly slowly. I thought: «well, it's slow, but it should do the trick»! But it didn't. I had generated 2 million polygons. Happy and full of joy with myself, I pressed Render, and guess what? Right! One hour of render time, which doesn't work very well for animation using only one PC; besides that, the Windows OS has an annoying 1.5 GB memory limitation, and from some polygon-rich angles it didn't render at all.

So, full of sadness at my ignorance, I removed all the stuff that didn't matter, meaning everything that is never visible, like the bottoms or the insides of the models, simplified them to the maximum and ended up with 150,000 polygons (img. 2-3) that rendered in 5 minutes.
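To put those per-frame times in perspective, here is a quick back-of-the-envelope sketch (the 10-hour overnight render window is my assumption, purely for the arithmetic):

```python
# Rough feasibility check: how many frames fit into one overnight render
# session, before and after the polygon cut. The 10-hour window length
# is an assumption for illustration.

def frames_per_night(minutes_per_frame, night_hours=10):
    """Number of whole frames that fit into one overnight render window."""
    return int(night_hours * 60 // minutes_per_frame)

high_poly = frames_per_night(60)  # ~1 hour per frame before simplification
low_poly = frames_per_night(5)    # ~5 minutes per frame afterwards
print(high_poly, low_poly)        # 10 vs 120 frames per night
```

At ten frames a night, a single second of animation would take days; the simplified set made the project feasible on one machine.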

But the funny part is that the high-poly scenery wasn't a waste of time at all, because I rendered all the textures from it and applied them to the low-poly models. More on that in the next chapter...



Texturing:

Well, texturing was a lot of fun! After I had modeled something I would usually texture it right away, and only then pass to the next model and repeat the operation.

I didn't follow any specific method, besides my own method that still applies today. First, I unwrap* the model and save the layout. Then I'll go out and take pictures of the real thing if it's easily available: things like concrete, wood, skies, cloth, etc. Then, normally, I'll first use them as reference and try to replicate them by painting in Gimp or with Blender procedurals.

For example: for the Orion jacket (img. 3-1) I first took photos of a real jacket; then in Gimp, based on the real jacket photos and on the UV unwrap layout, I painted all the stitches and recognizable jeans details, as well as all the stone-washed parts.

Although it already looked cool, it mostly looked like what it was: painted. For it to look real, I took one small square sample of the real jacket's denim texture and duplicated it a bunch of times until the whole unwrap layout was covered, making sure at the same time that it looked seamless.

After that I multiplied the painting on top of the texture, and the final result is what you see. All the clothes were textured this way.
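The multiply step can be sketched in plain Python as a per-pixel version of Gimp's Multiply layer mode (illustrative only; the real work was done on full images in Gimp):

```python
def multiply_blend(painted, tiled):
    """Gimp-style Multiply blend: out = a * b / 255, per channel.
    Dark painted details (stitches, seams, stone-wash shading) darken
    the tiled fabric; pure white areas of the painting leave it untouched."""
    return tuple(a * b // 255 for a, b in zip(painted, tiled))

# White paint over denim: the fabric texture shows through unchanged.
print(multiply_blend((255, 255, 255), (180, 160, 150)))  # (180, 160, 150)
# Dark stitch paint over denim: the fabric is darkened where painted.
print(multiply_blend((60, 60, 60), (180, 160, 150)))
```

This is why the tiled sample and the painting combine so naturally: multiply preserves the fabric grain wherever the painting is light.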

About the scenery:

Most of the textures of the low-poly set are renderings of the high-poly set, enhanced in Gimp :) For example, the house (img. 3-2): from the high-poly house I rendered one wall with only Ambient Occlusion (still no baking at the time :P), then in Gimp I added texture details, repeating the same procedure for all the walls, the roof, etc., and then used it on the low-poly model.

Almost everything was painted in Gimp, afterwards using a texture sample like I did for the clothes.

One thing that didn't quite work out as I expected was the far, far away mountains (img. 2-3). They are made of one gigantic matte painting applied to a tube that surrounds the scenery. If you focus on them you will notice that when the camera moves, a really bad, unwanted distortion effect occurs. After some research I learned that this is a known issue in the world of 3D CGI; very large textures used that way tend to cause that effect.

Lots of people ask me how I made the sky. Well, the sky is just 3 Blender procedural cloud textures mixed together, and the rest is a compositing trick. I also did a lot of testing to try to get some movement in the clouds, but it wasn't working at all, so I moved on and left them static.

Most of the wood textures are procedurals based on the ones in the Blender Materials Library v1.01 by Zsolt Stefan, enhanced by me in Gimp and UV-mapped onto the objects.

*UV Unwrapping (from the Blender wiki): During the UV unwrapping process, you tell Blender to map the faces of your object to a flat image in the UV/Image Editor window.



Rigging:

This was the troublesome one. When I started I didn't know anything about rigging, but thanks to Bassam Kurdali I was able to learn everything!

Bassam made a fully rigged character called Mancandy and shared it with everyone. Considering that, at the time, it was the most complex and at the same time easiest-to-use rig I had ever seen, I didn't have any doubt... But I didn't want to just import it into my characters; I wanted to learn, so I spent 2 weeks reverse engineering it and creating the first Orion rig. Well, after 2 weeks I had Orion rigged, but I still didn't really know how to make a rig, so everything was a bit clunky. It was enough for walk cycles and basic movements, but I wanted them to be able to compete with Jackie Chan!

So I started another rig, this time for Dark, expanding my research to the Emo and Proog rigs from ED. After another week of disassembling bones and trying to figure it all out, I suddenly started to see the logic of the thing, and progressively everything began to make sense. Another 3 weeks passed until I successfully completed the Dark rig. After that I returned to Orion, and because the first Orion mesh topology was pretty bad for deforming, I had to remodel almost everything and remake the rig. After another month I had them completely ready for action.

I will describe now the rigging procedure used for Orion Tear:

The first step is valid for all types of characters:

  • We make the bone armature (img. 4-1);
  • It's necessary to define which vertices are influenced by each bone; we do that by assigning vertex groups and using weight painting for fine-tuning (img. 4-2);
  • Corrective shape keys are added on top of everything to ensure proper deformation, mainly in places like joints and the pelvis.

The second step refers to the facial expressions:

  • A reasonable number of expression shape keys are created;
  • Control bones are added (img. 4-3);
  • The shape keys are set to be driven by the control bones.
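The driver setup boils down to mapping a control bone's transform to a shape key influence, which blends the mesh between its basis and the expression shape. A minimal sketch of that blend, outside Blender (the names are illustrative, not Blender API):

```python
def drive_shape_key(basis, key, influence):
    """Blend each vertex between the basis mesh and a shape key.
    influence is the driven value (0.0 = basis, 1.0 = full expression);
    in Blender it would be derived from a control bone's location."""
    influence = max(0.0, min(1.0, influence))  # clamp, like the key slider
    return [tuple(b + influence * (k - b) for b, k in zip(vb, vk))
            for vb, vk in zip(basis, key)]

basis = [(0.0, 0.0, 0.0), (1.0, 0.0, 0.0)]  # neutral mouth corners
smile = [(0.0, 0.5, 0.0), (1.0, 0.5, 0.0)]  # "smile" shape key targets
print(drive_shape_key(basis, smile, 0.5))   # halfway to a smile
```

Sliding the control bone animates the influence, so facial animation becomes posing bones rather than editing key values directly.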

The third step refers to everything related to dynamics, like cloth and hair movement. For the clothes:

  • I made a low-poly version of the jacket, which handles the softbody simulation;
  • I added Empties, one for each vertex of the softbody mesh, and made each one vertex-parented to its corresponding vertex;
  • Another armature, constrained to the body armature, was required. The tips of that armature's bones coincide with the Empties' locations and have a one-bone IK chain constraint targeting them;
  • A softbody deflector is required (the orange objects in img. 4-1) to prevent the cloth from intersecting the body. I parented each deflector to the respective bone.


Basically, this makes the Empties follow the softbody movement, the bones follow the Empties, and the cloth mesh follow the bones, resulting in an almost real-time cloth simulation system.
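That chain can be sketched as one update step per frame (plain Python; the function and names are illustrative, not Blender API):

```python
def update_cloth_rig(softbody_verts, bone_roots):
    """One frame of the chain: softbody vertices -> Empties -> bone tips.
    Each Empty is vertex-parented, so it simply copies its vertex;
    each bone has a one-bone IK constraint targeting its Empty, so its
    tip snaps onto the Empty; the render cloth is skinned to the bones."""
    empties = list(softbody_verts)          # vertex parenting: copy positions
    bone_tips = empties                     # IK pulls each tip onto its Empty
    bone_dirs = [tuple(t - r for t, r in zip(tip, root))
                 for tip, root in zip(bone_tips, bone_roots)]
    return bone_tips, bone_dirs

# One softbody vertex lifted by the simulation drags its bone with it.
tips, dirs = update_cloth_rig([(0.0, 0.0, 1.2)], [(0.0, 0.0, 0.0)])
print(tips, dirs)
```

The payoff of the indirection is control: because the render cloth is driven by bones, any bad simulation frame can be fixed by keyframing the Empties by hand.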

The hair is a simple softbody with vertex-painted influence control. That's why it isn't good; in fact I don't like the hair at all... not the modeling, the texture, nor the movement; I simply don't like it. But after spending so much time on it I didn't have the courage to redo it... and now some people tell me that they love the hair, so hehe!


Animation:

Of all the tasks of 3D animation movie making, this one is my favorite.

I had made some animations in the past, but nothing with rigged characters. So I made the scenes in order of animation difficulty... I started with the «easy ones», where the characters hardly move, and ended with the fight scene, where you can see them jumping, fighting, rolling, falling, doing stunts, etc.

Although, the ones I thought would be easy were in fact extremely hard. Just because someone doesn't move, that doesn't mean they are frozen! I had to create tension movement, meaning the little muscle-relaxation movements that we tend to make when we stand still. Also, the wind in the clothes' sleeves was keyframe animated. Orion's jacket and Dark's hood use a combination of dynamics simulation, with force field deflectors, and hand animation. Most of the time I had to select the previously mentioned Empties and keyframe them manually to correct their unrealistic movement.

The standard method I used, and still use, is:

  • Make a little crappy storyboard with the movements for that particular scene;
  • Go outside and film the reference moves;
  • Import the reference film into the Blender sequencer and make sure to have it in a little square preview window;
  • Based on the reference video, set the basic poses;
  • Work on the timing until everything looks reasonably natural;
  • Fine-tune the animation;
  • Add the secondary movements: sleeves, accessories, etc.;
  • Run the simulation for the hair and cloth dynamics;
  • Correct the dynamics by keyframe animating the Empties;
  • Make small adjustments.


The fight scene:

The most difficult scene of this film must have been the fight scene (img. 5-3). It has 2190 frames (1:28 minutes) of continuous action without cuts. This is where the Non-Linear Animation (NLA) features of Blender came in handy.
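The stated duration checks out against the frame count if we assume the PAL rate of 25 fps (the article doesn't state the frame rate, so that part is an assumption):

```python
def frames_to_mmss(frames, fps=25):
    """Convert a frame count to a m:ss string, assuming a given frame rate."""
    seconds = frames / fps
    return f"{int(seconds // 60)}:{round(seconds % 60):02d}"

print(frames_to_mmss(2190))  # 2190 / 25 = 87.6 s -> "1:28"
```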

Thanks to it, I was able to make 11 individual action sequences (6 for Dark and 5 for Orion) and join them seamlessly in the NLA, making it look like one big single-take action. The camera was animated in only one continuous action, seamlessly completing the illusion.

It took me a long time to do, and an awfully longer time to render. This one scene uses all the tricks I learned in all the previous scenes; more on that in the next chapter...


Compositing:

For me, compositing is all about converting crappy renderings into stunning final images by whatever tricks you can come up with ;)

I really don't have any pre-established technique; this kind of thing I do almost by instinct, I guess. Some people could call it luck... but that would mean I had luck 70 times (the number of scenes)!

The first thing needed is layers to mix. Because I couldn't render everything at the same time, nor really wanted to, most of the time I had 5 base layers to work with: scenery (to simplify things a bit, the set, which is composed of more than 10 layers, renders to just one), shadows, ambient occlusion, characters and volumetric effects.

All the layers were rendered to OpenEXR, which allows better color control, and after compositing the scenes were re-rendered to uncompressed QuickTime, which allowed better hard drive usage and organization.
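A single pixel of such a pass combine might be sketched like this (a simplification of typical node setups, not the film's actual node tree):

```python
def composite_pixel(scenery, ao, shadow, char_rgb, char_alpha):
    """One pixel of a simplified pass combine: multiply the scenery by
    its ambient-occlusion and shadow passes, then alpha-over the
    character render on top. All values are floats in 0..1, as in OpenEXR."""
    base = tuple(c * ao * shadow for c in scenery)
    return tuple(char_alpha * f + (1.0 - char_alpha) * b
                 for f, b in zip(char_rgb, base))

# A pixel fully covered by the character: the character color wins.
print(composite_pixel((0.8, 0.7, 0.6), 0.9, 1.0, (0.2, 0.2, 0.2), 1.0))
```

Keeping each pass separate is what makes the per-scene mood tweaks possible: darkening a shadow pass or brightening AO never requires re-rendering geometry.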



Lighting:

The scenery is rendered with Ambient Occlusion. It has very «few» lamps: three fill lamps at the sides; one Sun lamp that coincides with a Spot lamp for the sun's volumetric halos; one lamp above and another below... plus a few along the river to try to simulate some Global Illumination (they are pretty much useless, but I kept them, I don't know why).

The characters use a Spot Lamp Dome* plus the sun lamp. The center of the dome coincides with the characters' position and follows them wherever they go, resulting in a reasonable render-time/quality global illumination look. No Ambient Occlusion is used for the characters. For some scenes I used an elaborate and very time-consuming technique for creating raytraced soft shadows (img. 6-2), one that no one would notice if I didn't mention it now, and that I abandoned in later scenes due to the impracticality of the thing!

First I rendered all the characters with a shadow-buffer spot lamp dome, and then rendered the skin and the shirt with a raytraced-shadows** spot lamp dome, but all in separate layers: color, spec, shadows, etc. Then I reassembled them in the compositor, this time with the shadows blurred. Three or four scenes use this method!

By the way, I made two versions of the scenery: one for rendering and another, with much less detail, for animation reference. The characters never set foot on the real scenery. The camera had to be imported from the characters file into the scenery file for every scene.

The biggest challenge was keeping the lighting and the mood consistent from scene to scene, because most of it is just compositing work and needed to be done for every scene. The scenery had an average per-frame render time of 5 minutes and the characters 7 minutes. Orion Tear took about 5 months to render, fortunately most of it during the night. Everything was rendered with the Blender internal renderer.
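The five-month figure is plausible from the per-frame times alone. Assuming 25 fps and that both passes were rendered for every frame of the 10-minute film (both assumptions are mine):

```python
# Rough total render time under my assumptions: 25 fps, and both the
# scenery pass and the character pass rendered for every frame.
minutes_per_frame = 5 + 7       # scenery pass + character pass
frames = 10 * 60 * 25           # a 10-minute short at 25 fps
total_days = frames * minutes_per_frame / 60 / 24
print(round(total_days))        # 125 days of pure render time
```

That is already over four months of continuous rendering, before counting re-renders and the frames that needed fixing.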

* Refers to the top half of an Icosphere mesh that has a Spot Lamp parented to it and the DupliVerts plus Rot options turned on, resulting in one spot lamp per vertex, pointing inwards, towards the center of the dome.

** Raytraced shadows were very sharp-edged. The current Blender CVS version doesn't have that problem anymore.
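The dome from the first footnote can be sketched numerically: place lamps on the upper half of a sphere and aim each one back at the center, which is what DupliVerts with the Rot option did automatically (plain Python, illustrative only):

```python
import math

def lamp_dome(n_rings=3, n_segments=8, radius=10.0):
    """Vertex positions on the top half of a sphere, plus the inward unit
    direction each duplicated spot lamp would face (toward the center)."""
    lamps = []
    for i in range(1, n_rings + 1):
        phi = (math.pi / 2) * i / n_rings        # 0 = pole, pi/2 = equator
        for j in range(n_segments):
            theta = 2 * math.pi * j / n_segments
            x = radius * math.sin(phi) * math.cos(theta)
            y = radius * math.sin(phi) * math.sin(theta)
            z = radius * math.cos(phi)
            direction = (-x / radius, -y / radius, -z / radius)  # unit inward
            lamps.append(((x, y, z), direction))
    return lamps

dome = lamp_dome()
print(len(dome))  # 24 lamps on the half-dome
```

Many weak, overlapping spot lamps from above approximate the soft, directionless light of true global illumination at a fraction of the render cost.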


Conclusion:

Well, the primary purpose of this project was 100% accomplished. I learned a lot... but I really don't want to make something like this again, alone I mean! ... and I don't recommend anyone try it.

This is just my opinion at the time of writing... If you also want to learn 3D animation movie making by making movies, you probably shouldn't make anything longer than 1-minute movies, and you should focus on only one particular task in each movie, but, who knows...

I could write hundreds of pages about Orion Tear, but I guess this article reasonably summarizes most of it, so this is the end of the ride with me. I hope you like Orion Tear; for more news, keep an eye out!






About myself:

I'm the guy who made Orion Tear. I've been using Blender since version 2.0 and hope to be a better animation movie maker in the future ;)


image:Making_of_orion_tear8.png image:Making_of_orion_tear9.png image:Making_of_orion_tear10.png image:Making_of_orion_tear11.png
