Wrap-up

Sooo – I realize I haven’t posted here for over a month. It’s now a week after the project was finished, or, as finished as it got. I thought it was time the result was presented here on the blog, my missing three weeks were explained, and my thoughts on the whole project were put down in text. It will be a pretty lengthy post.

But first, the movie:

You’ll notice a couple of shots in the middle are rendered/playblasted placeholders, and there are more than a few glitches here and there. We hope to patch things up in the summer; currently, we’re all busy with our internships. (Mine’s at Fido, by the way.)

.

So, then, for the rest. I will start with some afterthoughts, continue from the very beginning – now with the benefit of hindsight – then fill in the three weeks of silence and finish it all off with some general thoughts. I will try to describe my workflow in full, discuss what worked and what didn’t, point out our more elegant solutions, and explain the reasoning behind choices that led to less elegant ones. I mean to be thorough, mostly to remind myself of the lessons learned, but also a bit in hope that it might be helpful to others. Simply put: it will be an exhaustive survey of the project.

.

Now, my first plan for my own specialization project was to just delve deep into sculpting and resurface, eight weeks later, with a small set of elaborate statues set amidst graceful architecture, then quickly supply them with spotless but simple shaders and some tastefully sparse lighting. This plan changed a bit when my solo project turned into our team effort, though not substantially so. But what I actually did was this:

Week 1: Character concepts.

Week 2: Environment block-in, reference-searching, research for other stuff.

Week 3: Architecture inspired by Alhambra.

Week 4: Scrapped Alhambra, now freestyling cheese-architecture.

Week 5: Scrapped the cheese-archs; now basing everything on Bada Bagh instead.

Week 6: Finished modelling.

Week 7: Lighting.

Week 8: Rendering.

(Week 9: More desperate rendering.)

Having twice thrown away almost all the progress I had made, I actually spent a grand total of two weeks on the work I had originally planned that actually wound up in the final result. The rest of the time was spent responding to the needs of making a film.

That said, I’ll take it from the top.

.

Story

Week One of the project was in fact preceded by a few weeks during which we discussed what the story would be. We considered basing it on well-known stories, like Don Quixote, or making something trailer-like. We ranged from bombastic pretentiousness to inane comedy. In the end, we discarded what we’d worked on a few days before kickoff, and settled on something that was intentionally somewhat bland. We weren’t storytellers, we decided, and so wanted to focus on the aspects of creating a movie, not making a good movie.

In retrospect, it seems like we could have come up with something more interesting. A longer, more proper planning phase was needed – I’ll return to that – even if it would have taken time away from the production phase. A curious thing about the story we settled on was that neither of the characters was to be either “good” or “bad” – both were to occupy a narrow position of semi-villainousness, just likeable enough not to be disliked, but not so likeable that either was perceived as the good guy. And, much as we talked about quality over quantity, we somehow still wound up with a rather large list of requirements for the 30+ different shots.

So: I now think we’d have produced a more interesting, and more consistent, result had we 1) stuck to a pre-defined concept with clearer character distinctions, and 2) toned down the narrative continuity in favour of a few highly polished shots inside a 3) way, way more limited space. And, 4) allowed ourselves the time to plan this.

.

Modelling

Apart from a short stint concepting the characters and looking into hair dynamics for driving joint chains, I pretty soon started doing just that: modelling. Now, we had an early, vague notion of the spaces needed for the story; a simple mockup was made, which the animators then used to find interesting angles, and which was then refined into a more detailed block-in environment focused on laying out the surfaces needed for the animation.

Now, the problem with our modelling was that we lacked direction. A week was spent blocking in the scene, then finding references for the environment, making moodboards, and such. But the only concepts we did for the environment – a paltry handful or so – were, somehow, cast aside as being too “simple”, and not really referred to later in production, except for colours. The only storyboards that ever existed were Maya-renders of the block-in environment.

We – Oskar and I – knew we wanted to focus on sculpting and spend less time on texturing and shading. (Which is, really, the reverse of most productions.) Further, much as we looked, we didn’t really find any references from CG productions that suited us – they all seemed either too cartoony or too realistic. So, we had neither particularly direct CG references, nor detailed concepts, nor storyboards in which to quickly test the concepts we came up with. It didn’t quite seem so unplanned while we were working on it, of course, but with the benefit of hindsight, one can say that it pretty much was. The lesson of a proper planning stage was to be driven home, repeatedly, during the project.

.

Week Three: Alhambra and a fledgling pipeline

So, when I began modelling our chosen test area (the entrance), I basically just found a cool pillar from Alhambra and started to copy that. This was done enthusiastically and thoroughly in zBrush: we even made patterns and such in geometry. (A small pipeline was developed for that: from sculpting things by hand, or stamping them out as brush alphas, we moved on to using drag-rectangle masks to place our patterns and then Deformers (offset, for instance) to extrude the shapes. Finally, and giving the best results, I began using zApplink and a high-ish resolution zBrush document to paint my alphas in Photoshop, turn them into polypaint information, and then mask by polypaint intensity.) The results were pretty nice. They were also time-consuming and absurdly high-res.

But, once we had sculpted and stamped to our hearts’ content, the question arose of how to get all that information into Maya. The first plan was to export a moderately low-res mesh and use tangent-space normal maps for the details. But we weren’t particularly pleased with the visual results, the moderately low-res meshes were pretty high-res anyhow, and we kind of wanted to incorporate Decimation Master into our workflow. This because it would give lower-res and more accurate models, and rid us of the need to worry about topology when building our basemeshes in Maya. We wanted to find a workflow which let us spend as much time as possible sculpting, and which automated most of the rest of the process.

The problem with Decimation Master was that, well, without us looking around for too long, there didn’t seem to be any simple way to create, within zBrush, a normal map for a decimated mesh carrying the details of a high-res mesh of a different topology. zBrush seemed to prefer making maps for the same subtool, but between different levels. That meant projecting high-res details onto a subdivided decimated mesh, which required a bit of cleanup and worked poorly for areas we had lightly stamped with patterns. And, besides: tangent-space normal maps looked horrible on decimated meshes.

They looked less horrible if one separated all the faces in the UV map into separate shells, which lets the normal map smooth over edges more properly, and worked alright-ish if one displaced the model a bit, too. When we got that far, we sighed and just displaced the original basemeshes instead. Now, we would figure out a workflow eventually, but along with that sigh, which came from this zBrush-to-Maya mess, came also an acceptance of the fact that we’d never be able to sculpt everything with that kind of detail, and that we maybe shouldn’t go for this art direction anyhow, as it seemed to get a bit… cluttered.

.

And so, after a week’s modelling, we chucked it all and started over. It was the first step in us (me) realizing that the needs of the film required some adaptation; until then, I had mostly just looked at wonderful references and wanted to copy them. But the most awesome architecture in the world might completely ruin the composition of one shot and utterly distract the viewer in all the rest. So: it was a good decision because of that reasoning. It was also a poor decision, because it wasn’t followed through with a proper analysis of what went wrong.

Some of what had gone wrong I’ve already mentioned: lack of concept art and overview of the art direction as it would be implemented in the different shots. But we mixed this up with a second problem: that the work took too long and that the pipeline was unfinished. I now suspect that the art direction wasn’t all that off, and that we probably just needed to sort out our storyboards to see that it would have worked: the problem lay in the time, which would have been less of a problem had we just sorted out the pipeline. But, we decided to change both art direction and pipeline.

.

Week Four: freestyling and final pipeline

So began the second week of modelling, which saw a whole lot of work getting done. I drew on some of my favourite themes from Alhambra, freestyled much of the rest, and had soon modelled a substantial portion of the scene. I even did the four statues, and found time to make a concept sculpt of Yesim’s head. Our pipeline was hammered out to include xNormal for the creation of object-space normal maps. This, we found, required the least cleanup and gave very good results. The pipeline, in full, went like this:

Carefree creation of basemeshes in Maya and unworried sculpting in zBrush. Topology and all that was of no concern.

Many props – such as pillars – were made out of multiple subtools, and were needed in a couple of variations, more or less ruined. The way I made these was to first do the whole version; then duplicate it, mask the areas I wanted to preserve, smooth/flatten the areas that were to have fallen away, and then give those surfaces a bit of stony roughness. That is to say: if I had a pillar, and wanted a version of that pillar with really just the very base left, I simply smoothed and flattened until I had compressed all the (millions of) polygons of the upper portion of the pillar into the cracked surface. The other way would have been to create new basemeshes and project the detail back on; a more economical way of doing it, but – or so I found – slower.

Anyhow, once I had made as many degenerated versions of a set piece as I wanted, I arranged all the pieces, giving them some space between each other. This was then decimated, merged into one subtool (just to simplify things, I kept things that were to go into the same atlas map together) and automatically UV-mapped by UV Master with as little interference as possible. Some cleanup and sorting of UVs was done using GoZ to quickly send meshes back and forth between Maya and zBrush.

Next, everything was exported and brought into xNormal. Now, the reason I arranged all the pieces at a distance from each other was so as not to confuse xNormal with intersecting or almost-touching surfaces of different pieces. Occlusion and cavity maps were also baked, and were put together with some yellowish colours to form the base of our simple shading.

About now, after another week, it became undeniable: if things had seemed cluttered before, they certainly were now. The new pillars and their architecture lacked any basis in our references, except for a particular detail I loved in Alhambra – the mocarabe, or stalactite formations – which had been scaled up out of all proportion in this new style. They were a step in a more cartoony direction, but simply didn’t work out.

So I started over. Again.

.

Week Five and Six: Bada Bagh and priorities

Now I turned to Bada Bagh for inspiration, and found cleaner lines which would work better in our shots, yet still much of the flowery decoration I wanted. With a rather robust pipeline backing me up, there’s rather little to say. With an acceptable art direction finally nailed down, it was just a question of doing it all, which went smoothly enough.

There was a certain change of plans around now, though – until then, the story had included a goat for comic relief. The goat was scrapped; however, we (for some reason) decided to incorporate goats into the architecture. Goat statues, reliefs with goats carved into them; the circlet became goat-y, and the last treasure came to be presided over by a giant golden goat. Why, one might ask? Perhaps to preserve our original working title, “Goat Darnit”. Certainly, I can’t recall any good reason for it, nor do I think the goat theme was carried through particularly well in the final result.

The lesson there might be one of letting go of your darlings; when the goat character was scrapped, there was no reason to incorporate goats into the movie any more. (Just as my Alhambra stalactites were a burden later on, too.) It isn’t the only example of something surviving past its usefulness; certain pieces of architecture were added at one point, being then needed for a shot’s composition (or for the layout to just make sense) – but lost their usefulness after a while. A row of pillars was removed a week from rendering because they weren’t just unneeded: they were making things worse.

Around this time in the project, the notion of having dailies had largely fallen away. We were too busy, it seemed – indeed, I was so busy I didn’t write another post on this blog until now. With everyone having their own schedule and being well aware of what was needed, and everyone sitting next to each other and so knowing where everyone else was, there seemed to be no point. However, I now think that dailies were a tool that could have been put to good use; for instance, had the goat reliefs been discussed with the whole group (instead of just us two environment modellers) they would, no doubt, have been scrapped. By allowing ourselves to be absorbed by our individual tasks, we lost (some) perspective on the big picture.

On a similar note, one can point out that, while extra time was spent on the obvious, close-to-camera details, rather little time was spent looking through the shot cameras until we came to the lighting phase. As modellers, Oskar and I mostly set about modelling the whole scene, and though concessions were made to the distance of the camera from the objects being modelled, we sort of operated under the assumption that the composition was fixed and things should just be refined – not removed, or moved, or added to. That is to say, again: once some headway had been made with the models, we should have taken the time to review how they worked in the movie and discuss how the composition in the shots could be improved.

Anyhow, after two weeks of modelling we were about to start shading when it became clear that Daniel would need more time on the characters, and that the task of lighting all the 36(ish) shots was unassigned.

.

Week Seven: Lighting and the first few shots

Not much needs to be said about the lighting process. I managed three to five shots a day; Oskar soon had to chip in. The effect was, of course, that the shading and finalizing of the environment was put on hold, but the lighting was the priority.

There are, no doubt, plenty of theories on proper lighting; we mostly just worked on gut-feeling and attempted rimlights. We started out by doing paintovers in Photoshop to find and convey our ideas, and then went on to try and replicate it in 3D. (The results invariably differed wildly.) Placeholders for grass were placed as needed, to guide the creation of vegetation later, and we shuffled around the geometry of the scene on a per-shot basis to find more pleasing results. Simple clouds were made of planes to indicate how the composition in the shot would work.

We planned to use final gather for rendering, but tried to use it as a supportive effect, not as the sole basis for the lighting setup. A simple blue dome was set around the scene, though it was never given an HDR image, as I had originally planned. That seemed superfluous at the time, but would maybe have given more pleasing results.
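For what it’s worth, here’s a rough sketch in Maya Python of that kind of dome setup – the names, colour values and final gather toggle are my after-the-fact reconstruction, not the actual scene, and it assumes the mental ray plugin is available:

import maya.cmds as cmds

cmds.loadPlugin('Mayatomr', quiet=True)

# A big inward-facing sphere with a flat blue surface shader (values made up).
dome = cmds.polySphere(radius=5000, name='skyDome')[0]
cmds.polyNormal(dome, normalMode=0)  # flip normals so the inside faces the scene

sky = cmds.shadingNode('surfaceShader', asShader=True, name='skyBlue')
cmds.setAttr(sky + '.outColor', 0.35, 0.5, 0.9, type='double3')
cmds.select(dome)
cmds.hyperShade(assign=sky)

cmds.setAttr('miDefaultOptions.finalGather', 1)  # enable FG in the mr globals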

Towards the end of the week, I finished the first two shots, so they could be rendered. I had to make some clouds; they are simply slightly modified versions of the cloud preset in Maya. A repurposed fluid effect from an earlier project was used for the wisp of cloud whipping by in the first shot. Some small stones were made and scattered about, and a bare minimum of texture work was done to support the shot. We wanted unobtrusive textures; enough not to make the environment seem entirely untextured, but nothing that required much time to set up or distracted from the clean, sculpted features.

Pretty late in the process, Daniel suggested we give everything a bit of subsurface scattering, by simply plugging a misss_fast shader into the incandescence slot of our shaders. This brightened things up considerably; and since we rendered 32-bit (later 16-bit) exr files, and made sure to include an incandescence pass, the sss effect could be controlled entirely in Nuke.
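As a minimal sketch of that trick in Maya Python – ‘stoneBlinn’ is a hypothetical environment shader, and the exact node and plug names may vary between Maya versions:

import maya.cmds as cmds

cmds.loadPlugin('Mayatomr', quiet=True)
# One of the misss_fast nodes, piped into the blinn's incandescence slot.
sss = cmds.shadingNode('misss_fast_simple_maya', asShader=True, name='envSSS')
cmds.connectAttr(sss + '.outValue', 'stoneBlinn.incandescence')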

.

Week Eight: Rendering – the theory

Now, for rendering, we wanted three layers: the Main, in which everything had its standard shaders, the characters had blinns, final gather was computed, and so on. An SSS layer was made for the characters, so that the strength of the sss effect could be controlled in post. And, finally, we set up an AO layer, as the built-in mental ray render pass does not support normal maps.

All shaders were set up manually, so that every shader with a normal map had a corresponding AO shader; normal maps were not needed for the use-background shaders (or black surface shaders) used to mask things out in the SSS layer, but versions had to be made for shaders that used displacements. Ideally, we ought to have found a way to generate these shaders automatically; we settled for at least automating the process of assigning them, with some simple scripts made by copying the echoed commands from Maya’s script editor.
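The scripts themselves are long gone, but they amounted to something like this sketch – it assumes a fooSG → fooAO_SG naming convention (entirely hypothetical), that the AO render layer already exists, and legacy render layers, where assigning a material while a layer is current records a per-layer override:

import maya.cmds as cmds

cmds.editRenderLayerGlobals(currentRenderLayer='AO')
for sg in cmds.ls(type='shadingEngine'):
    ao_sg = sg.replace('SG', 'AO_SG')  # hypothetical naming convention
    if ao_sg == sg or not cmds.objExists(ao_sg):
        continue
    members = cmds.sets(sg, query=True) or []
    if members:
        # Reassign everything using this shader to its AO counterpart.
        cmds.sets(members, edit=True, forceElement=ao_sg)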

Now, in the Main layer, I added in the usual renderpasses: reflections, specularity, diffuse, depth, and so on. Also, I set up a handful of light-contribution maps and diffuse-passes, so that the effect of particular lights could be controlled in post. Finally, a whole bunch of matte-passes were made to give control over particular objects in the scene; the characters, the characters’ eyes, the grass, the ground, and so on.

We actually set up the matte passes in the objects’ original scenes, so that they were referenced into the renderscenes. (The referencing system worked like this: a handful of environment area scenes were referenced into a main environment scene, which was referenced into the renderscene. The character scenes (with rigging and shading) were referenced into an animation scene, which was referenced into the renderscene for a particular shot.)
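In Maya Python terms, a renderscene boiled down to little more than this (file names hypothetical):

import maya.cmds as cmds

# The area scenes already live inside environment_main.ma as references,
# so the renderscene itself only pulls in the environment and the animation.
cmds.file('scenes/environment_main.ma', reference=True, namespace='ENV')
cmds.file('scenes/shot12_anim.ma', reference=True, namespace='ANIM')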

Anyway, that way, we only had to make the passes once and then simply associate all the referenced-in contribution maps with the renderscene’s Main layer and assign the matte passes to that layer. They then automatically add themselves to their respective contribution maps.

Finally, in order for it all to work properly with our Backburner-controlled network rendering, I had to make sure that the renderscene and its project were shared over the network, that only files from our rendermaster machine were used for rendering, and that all file paths were either relative or pointed to the network location of the file. This was surprisingly tricky; for one thing, Maya’s file reference system uses absolute paths for referenced scenes. So, what I did was set up an environment variable in each renderslave’s Maya.env file, called “renderProject”, and give it the UNC path to our rendermaster. That is:

renderProject = \\mastercomputer\projectfolder_shared_over_the_network\

That way, I could replace the relevant string in each reference’s file path with a variable, so as to avoid giving absolute paths. The variable could then be changed if a different computer was to act as rendermaster – preferable to having to go into each scene and change each reference’s path by hand.
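A sketch of that repathing pass, in Maya Python – the local prefix is made up, and Maya expands $renderProject from the environment when the scene is opened:

import maya.cmds as cmds

OLD_PREFIX = 'D:/work/greed/'  # hypothetical local project path
for ref_node in cmds.ls(type='reference'):
    if 'sharedReferenceNode' in ref_node:
        continue
    path = cmds.referenceQuery(ref_node, filename=True, withoutCopyNumber=True)
    if path.startswith(OLD_PREFIX):
        # Reload the reference from the variable-based path instead.
        cmds.file(path.replace(OLD_PREFIX, '$renderProject/'), loadReference=ref_node)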

I used File Texture Manager to keep track of textures and make sure their paths were set correctly; also, I used it to set up .map-files for all textures. Finally, all dynamics had to be cached.
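The .map conversion can also be batched outside Maya with mental ray’s imf_copy utility; a sketch with hypothetical paths, where -p writes a filtered, mip-mapped pyramid:

import os
import subprocess

TEX_DIR = r'\\mastercomputer\projectfolder_shared_over_the_network\sourceimages'
for name in os.listdir(TEX_DIR):
    base, ext = os.path.splitext(name)
    if ext.lower() in ('.tga', '.tif', '.jpg'):
        src = os.path.join(TEX_DIR, name)
        # imf_copy ships with mental ray for Maya
        subprocess.call(['imf_copy', '-p', src, os.path.join(TEX_DIR, base + '.map')])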

Once everything had been rendered, I started the tree in Nuke with the Main file, which was somewhere around 40 MB, with a dozen standard passes, a handful of light-diffuses, and a score or more of matte passes. Now, to save time and whatnot, I simply used the passes embedded in the exr to control the composite masterBeauty pass. So that, if I found the specularity too strong, I would create a Merge node, set its operation to From, and pipe the spec pass into the A input and the masterBeauty into the B. That way, I could tone the effect down, or remove it completely, make some changes and colour corrections to that particular feature, and then add it back in again.
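In Nuke Python, that subtract-grade-add-back pattern looks something like this sketch (file and layer names hypothetical):

import nuke

beauty = nuke.nodes.Read(file='renders/shot01_main.%04d.exr')
spec = nuke.nodes.Shuffle(inputs=[beauty])
spec['in'].setValue('specular')  # pull the spec pass out of the multichannel exr
removed = nuke.nodes.Merge2(operation='from', inputs=[beauty, spec])  # B minus A
graded = nuke.nodes.Grade(inputs=[spec])  # tweak the isolated pass here
result = nuke.nodes.Merge2(operation='plus', inputs=[removed, graded])  # add it back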

Along with the light-diffuse passes and the multitude of matte passes, one could then do some pretty thorough colour correcting. The only thing requiring particular attention was the characters’ sss, which was brought in separately, blended on top of the Main layer’s diffuse pass, given specularity and facing-ratio-controlled reflections, and then put back in on top of the masterBeauty. Also, still frames of clouds were rendered out and put on 3D planes in Nuke, along with the cameras from Maya, to replace the background. After that, it was just a question of rendering it all out.

That was the theory.

.

Week Eight and Nine: Rendering – the mess

When the theoretical system coincided with actuality, setting up a renderscene took no more than half an hour. The problem was: the theoretical system became full-fledged only in the very last few days of rendering. By then, a week and a half had been spent doing more or less manual set-ups for the renderscenes, and in those last, desperate days, there was no way of taking a step back and organizing things – the last frames were rendered less than two hours before our final presentation. And there were a whole lot of interesting little quirks particular to every renderscene.

Now, the reason the theory wasn’t allowed time to be developed from the very beginning – apart from, well, lack of time – was that the project hovered on what seemed to be the very edge of what could, possibly, be set up manually. So that, when the question of “Just Doing It The Way We Know” versus “Spending Time Finding A Smarter, Quicker Way To Do It” was raised, we went with alternative A.

In the first few shots, in fact, all shaders were assigned manually – so that Daniel was needed to make sure the characters got the correct sss shaders and whatnot. We soon got tired of that, and so made some scripts to automate at least that job. No scripts were ever made for the environment; I made a base-renderscene and set up the environment’s shaders in the three layers there. That was then used – you guessed it – as a base for all renderscenes to come, with the animationscenes referenced into it.

About when we’d gotten that far, Tomas set up almost all the renderscenes’ renderlayers and shader assignments. The problem was that things changed halfway through that process. The characters got some new shaders, and some existing ones were renamed. Work continued on the environment – Daniel chipped in and placed lots of little stones, Oskar finished some larger props that had, somehow, been left unfinished; none of the treasures were done; the tree wasn’t even begun. Also, rendersettings were changed.

All that meant that the setup Tomas had made, though splendid when made, needed cleanup. With Oskar and me doing our particular light setups per scene, we sometimes had to hide particular pieces of geometry or move them around, so things invariably got lost in translation. And attempts to optimize and reduce rendertimes – which grew from fifteen minutes per frame to forty to sixty in some shots – were squeezed in here and there, leading to new workflows, leading to more changes.

For instance, it seemed importing the grass-scene into the renderscene saved a little bit of time, so I did that for a few shots. However, I had such problems getting the attribute maps carried across, I eventually abandoned that idea. For the longest while I wanted to render the grass separately, or put the sss on the environment in the SSS-layer; things that maybe could have saved time in the long run, but which would have taken further set-up.

We started out with our six workstations and a handful of extra renderslaves; thanks to the kindness of others, we wound up with thirty in the end. However, as these were also people’s workstations, they would be turned on or off intermittently; because of the task system in Backburner, that sometimes led to frames being unnecessarily rerendered. Also, sometimes a rendercomputer would encounter some error when opening a renderscene, but return a task-finished message; several times we found whole sections missing where a malfunctioning rendercomputer had gone through several tasks and “finished” them. Because the “Just Doing It” vs “Do It Smarter” equation had the added “Keep The Renderfarm Busy” factor going for it, I had little time to check the whole rendered sequences – instead I simply checked a few frames here and there to make sure nothing had gone wrong. That way, I obviously didn’t notice the frames that weren’t even there.
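The check we should have scripted is trivial in hindsight; something like this (folder and naming hypothetical):

import os
import re

def missing_frames(folder, prefix, first, last):
    # Collect the frame numbers actually on disk for prefix.####.exr files.
    pattern = re.compile(re.escape(prefix) + r'\.(\d+)\.exr$')
    found = set()
    for name in os.listdir(folder):
        match = pattern.match(name)
        if match:
            found.add(int(match.group(1)))
    return sorted(set(range(first, last + 1)) - found)

print(missing_frames(r'\\mastercomputer\renders\shot07', 'shot07_main', 1, 120))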

And when things were rerendered, or missing frames were filled in, one sometimes found that a renderscene had broken down in one’s absence – for instance, the first few renderscenes had to be remade almost entirely, as the characters had new shaders, the environment was finished, and new settings were used to render. One would occasionally try to adapt to the settings then in use, but find that this led to slight variations that showed up as distracting flicker.

Further: the animators had, in the animation-scenes, created clusters to fix what the skinning and blendshapes could not; it seems that most of these deformers were lost in rendering. We should have used geometry caches instead of the animationscenes; it would have been more lightweight and less prone to errors.
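Something along these lines, as a sketch (the namespace, path and frame range are made up):

import maya.cmds as cmds

# Bake every character mesh to a geometry cache, so the renderscene can
# reference light, deformer-free geometry instead of the full animation rig.
for shape in cmds.ls('CHAR:*Shape', type='mesh'):
    cmds.cacheFile(fileName=shape.replace(':', '_'),
                   directory=r'\\mastercomputer\caches\shot07',
                   points=shape,
                   startTime=1, endTime=120,
                   format='OneFile', worldSpace=True)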

.

So: Murphy’s law was proven, again and again. Things that could go wrong, did. Of course, it all almost worked, save for the four or five shots that had the tree in them – the tree was not finalized until the very last minute, and those were the last shots to be rendered. Since we were working on other things at the same time, it took a while before we discovered that the rendertime per frame was two to three hours. And by then, there simply was no time to fix it.

I’d like to bring up the notion of dailies again here, and the importance of sharing the workload so that certain crucial tasks don’t hinge on one individual. In this rendering business, I was that individual – but I failed utterly to share the load, or inform others of my workflow, until the very last days. While the project structure of course requires division of labour, and we all wanted to specialize in our particular areas, the execution of tasks vital to the project’s completion should not happen without insight from the other members of the team. That’s my new theory, at least. The regular sharing of information offered by dailies can allow people to perform in other roles than originally intended, when required to do so.

The case in point here is the five missing shots: the renderscenes that would not render weren’t actually set up by me, but by Oskar and Daniel; they were, however, set up in the system I had devised. When that system failed, or when particular points of it were misunderstood or not communicated, they lacked the necessary information and experience – which I had acquired over the past week and a half – to resolve the problem. Earlier, a whole night had gone by without any rendering because I had missed pointing out the most important UNC path for the Backburner rendering: one has to open the renderscene from the network, not from within the local computer’s folders. (Even if it’s the exact same file, the path sent to Backburner is different, and only the UNC path can be found by the network renderslaves.)

Wheeew! That was long-winded. A few last words, then: almost all compositing was in fact done by the animators – Björn, Tomas and Hannes. I did a few shots – and acted as consultant – and Oskar comped one or two. Thought I’d mention that.

.

Conclusion

Congratulations, you’ve just read (or scrolled through) nine pages of text. It might seem that a whole lot of it was spent detailing just how things went wrong; but I think that’s the part that is interesting, and that I want to convey to others. Everyone knows how things should be done – long, thorough planning-phase and whatnot. And while this project certainly didn’t crash and burn, it could have gone much smoother; and it is by pointing out the errors made, the effects they had and how they could (possibly) have been avoided, that I hope to helpfully inform.

So, I thought I’d review some of the general conclusions I’ve already mentioned, and sprinkle in some additional thoughts.

.

First, as I returned to again and again: planning is paramount. One wants to make sure one has really taken the Quality over Quantity thing to heart. It feels a whole lot better spending some time finding the Smarter and Quicker way than rushing ahead and doing things the old familiar way.

Storyboards are a great tool – to try out the art direction, to plan the modelling, to define the lighting and plan the compositing. For us, the two weeks of scrapped modelling could have been well spent just doing storyboards, which would have given a more consistent and well-planned result in the end. Our efforts would have been more focused on the needs of the particular shots, instead of – as we did it – a more general effort expended on the entire scene.

We had planned to try out a full pipeline much earlier in the project and to deliver a final shot weeks before everything was to be done; for one reason or another – the characters were tweaked until the last week, for instance – this was not done. It should have been done, and rigorously so, so that general solutions could have been found to automate the process later on, and the pitfalls of the system detected.

Dailies, if used to simply say “I did this yesterday, and today I will do this”, are rather useless. If used to thoroughly discuss the discoveries made, one’s workflow and one’s priorities – well, then they sound a whole lot more interesting. Setting aside some time each day to take a step back and actively review each other’s work, and to be made privy to others’ workflows, will help maintain perspective, help analyse just what has gone wrong, and distribute responsibility more evenly.

Finally, this: I found that making the environments for this short required a very different mindset than making a small scene for a private project would have. The roles assumed in the group project were defined more by the project than by the theoretical role itself. The importance of openness to critique and discussion can hardly be overstated. I don’t think we got it quite right on this first try. But I think we’ll be better at it next time.

Greed Cliffe WIP from Stefan Berglund on Vimeo: http://vimeo.com/21716557
Posted in Environments, General

Jean Francois ready for rendering

Not much time left now. I just finished the last of the work on Jean Francois. Since it’s been quite a while since my last post, I’ll try to cover as much ground as possible now.

A lot has happened in the last few days. All the texture work has been finished, all maps have been baked out, and the shaders have been tweaked and finalized. Since our art style is quite stylized and cartoony, all textures were hand-painted, sometimes using photo reference as an initial guide (mostly for the skin parts).

After finishing the color maps I exported these into Photoshop to work on my subsurface maps. I decided to use mental ray’s subsurface-scattering shader misss_fast_skin_maya for all the skin shading. This shader uses three channels to build up the layers of skin. Epidermal is the top layer of skin, where I plug in a cool, desaturated version of the color map. The subdermal layer gets an overly saturated, reddish map to represent the fleshy tones underneath the skin. Finally, the back-scattering channel gets a simple map to decide where the back-scattering effect should be most prominent. Spots like the nose or the ears often tend to have a red, translucent kind of glow, since light travels through them. Since we wanted to go with a somewhat exaggerated sss effect, I tried to make this shader a bit more prominent than usual when rendering.

The clothes received some very simple blinn shading. The stylized look demanded an almost flat shading here. Some simple metal shading was done using the mia_material_x_passes shader for mental ray.

Displacement and normal maps were baked out from zBrush using the Multi Map Exporter plugin. The displacements were used up to subdivision level 4 in Maya, using the mental ray approximation editor to compute the subdivision. The rest of the details, subdivision levels 4-7, were captured by exporting normal maps with level 4 as the base in zBrush. This way we wouldn’t end up with an unnecessarily high number of polygons at render time, relying on normal maps to create all the detail that had no direct impact on the silhouette.
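The arithmetic behind that cutoff is simple: each subdivision step roughly quadruples the face count, so the numbers run away fast. A quick illustration in Python (the base count is made up):

base_faces = 5000
for level in range(1, 8):
    print(level, base_faces * 4 ** level)
# Level 4 lands around 1.3 million faces; levels 5-7 would be roughly
# 5, 20 and 82 million - hence baking that range into normal maps.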

My final part was then to decide how to set up all the character-specific render layers in Maya. Some shaders, such as the sss shaders, do not support rendering with passes in mental ray. This made it necessary to split these shaders into a separate render layer, masking out all intersecting and covering geometry and creating layer overrides to make sure only the sss effect would be rendered. Ambient occlusion was also split into a layer of its own. There is a preset for ambient occlusion among Maya’s pass presets, but it usually gives somewhat poorer results, so we decided to set this layer up manually. The final render layer held all the other passes needed for a good comp in Nuke. More about that in a little while.
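As a sketch of that sss layer in Maya Python – object and layer names are hypothetical, and it assumes legacy render layers, where assigning while a layer is current keeps the assignment local to it:

import maya.cmds as cmds

cmds.createRenderLayer(['CHAR:body', 'ENV:set_geo'], name='SSS', makeCurrent=True)
black = cmds.shadingNode('surfaceShader', asShader=True, name='blackMask')
cmds.setAttr(black + '.outColor', 0, 0, 0, type='double3')
# Everything that can cover the character is masked out in black, so only
# the character's sss contribution survives into this layer's render.
cmds.select('ENV:set_geo')
cmds.hyperShade(assign=black)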

Lastly, and a bit sadly, I have come to the conclusion that the animated displacement maps for the facial animation, which I started to research earlier on, cannot be fitted into my remaining schedule. Had this part been planned and researched a bit more thoroughly at an earlier stage, I might have had time to make it happen.

Posted in Characters

Milestone 3 Playblast

Playblast version of our short. Our first rendered shot will pop up here soon.

Some tweaking on the animation is still left to do. We’ll try to fix as much as possible on each shot right before render. Also, Stefan and I will go over each camera prior to render and tweak some lenses and angles to get the most out of the lighting and composition.

http://vimeo.com/20992219

Posted in Animation, General

Animation polish

http://vimeo.com/20739939

Here are the animations I’ve polished so far.

The second one here (where J-F comes up and sees the treasure) has been changed. The original block-in simply didn’t work well with the skinned character. Shoulders and elbows were all over the place, and since we’re short on time, the corrective blendshapes needed for that particular animation are of low priority right now. That said, I think this is a better choice regardless. It’s snappier: he sees the treasure quicker, which leaves more time for anticipation in the beginning and more reaction time at the end. “Less is more”, “Kill your darlings”, “KISS”, etc.

The plan now is to get the rest of the shots polished before milestone 3 next Monday. After that, I’ll try to do a second polish on as many shots as I can. We don’t have much time left, but some shots will definitely need a second going-over at the very least.

Posted in Animation

animation polish and additional td duties

I’ve had a lot on my plate these past two weeks or so: straightening out a lot of technical issues with skinning, rigging and blendshapes (corrective and ordinary), providing a bit of support for Tomas and Daniel, and getting some animation done in between. Still, things are more or less going in the right direction. Yesim is getting updated blendshapes; there’s some unfortunate pushing going on in the gums, but it looks like this will be an easy fix by just painting blendshape weight maps. I set the eyelids up with sculpt deformers to make the eyeballs affect them a little; however, this seems to cause some problems in combination with blendshapes, so they’re scrapped for the moment.

J-F is going to get his face rig soon as well. I ran into some trouble, though, while trying to split his blendshape meshes into left and right sides (once again using the excellent abTwoFace script from crumbly). I’m confused as to what could be causing this, since splitting Yesim’s meshes went smoothly enough; I haven’t had time to look into it yet, but I hope I’ll get it sorted out tomorrow.

As for the animation, shot 12 got quite a makeover. There were a lot of mood changes pressed into those 4 seconds, which came across as just… well, she felt weird and unnatural. So I decided to keep basically just the first and the last pose and worked it out from there. The tiara is still missing in this shot since it wasn’t done when I started on it. The other shots have stayed pretty much true to the blocking.

Posted in General

Jean-Francois has feelings too!

I just finished the facial blendshapes for Jean-Francois, and having them done I quickly realize I’ll have to go back and exaggerate everything a bit. Not a big thing, though, as I’m starting to feel a bit more secure in my workflow. Everything is UV-mapped as well, and the next step is to fix the corrective blends for Jean-Francois’ body. This might take some time, though, as he seems to break quite easily with the current skinning – but that’s what you get when you choose to do bulky characters, I guess…

I also finished sculpting the clothes for both characters. I got some help from a great artist called Selwy – http://www.selwy.com/2009/zbrush-clothes-tutorial/


Posted in Characters

getting closer

As my esteemed colleague Stefan said earlier, we are now almost done with all of the big sculpting, though there may be some tweaking left to do. The textures are, however, not done, so bear with us a while longer.

Right now I’m sculpting the ground areas that have not yet been done (the ground under the dome, the ground to the right of the terrace and the ground under the entrance area). That should be done tomorrow; then I will spruce things up with some grass and general foliage in the coming days. We’ll also work on some lighting to better help us with the texturing.
This last week I’ve been finalizing the terrace, sculpting the dome, and making the king’s crown (the first treasure).

As you can see, the crown is not yet shaded. It will be gold, while the horns and the eyes (which I have not yet added) will be faceted emeralds.

Oh, and now seems as good a time as any to outline our prop creation process. First we create geometry with edge spacing that is good for sculpting in zBrush (quads that are as square as possible). Then we sculpt the props in zBrush. Then we use Decimation Master to decimate the meshes and UV Master to bake the UVs. Then we take the models into xNormal, where we use the decimated geometry as the low-res meshes (the meshes that receive the projected normals) and the high-res meshes exported from zBrush as source meshes to create object-space normal maps. We also use xNormal for ambient occlusion maps and cavity maps, and use them as our base texture. Finally we overlay the object-space normal maps with a detail normal map created with the Photoshop plug-in nDo.

Right now all our textures are just a flat color with an ambient and cavity overlay, but depending on camera angles/lighting conditions we will add further detail as needed. The process will be more thoroughly explained in the “how to” document.

Posted in Environments

Progressing along.

Well, things move along, if ever so slowly. Most everything’s modeled, at least the big shapes. I hope to do a bunch of smaller stones and pieces of rubble later, but next up is finishing our rather simple texturing and shading. Anyhow, some WIP shots:

Posted in Environments

Nearing the end of skinning J-F. It’s unfortunately taken quite a bit longer than we anticipated, as dealing with very wide arms and legs was more challenging than I originally thought.

The image shows how I work when I skin. On the main monitor I’ve got the work area, where I do all of the painting and modifying with the object I’m working on isolated. On the right-hand monitor I’ve got one window showing the same view without isolating the object, so I can see the changes I’m making, and the component editor, to see how much the vertices are affected and by which joints. There’s also a third window in case I need to watch an object from another side, or for general reference.

Posted in Animation, Characters, General

Blendshapes, research, uv-mapping and sculpting

The last few days have been quite rewarding, mostly because I’ve been working with quite a few things I previously wasn’t all that familiar with. I have done some simple blendshapes before, but now I got a good chance to take it a bit further. For the facial blends, I decided to go all out and work entirely in zBrush. Before I started out, I began reading up on how to solve the problem of connecting and blending displacement maps together with blendshapes. After some looking around I stumbled upon a character TD named John Patrick who had an awesome description on his site: http://canyourigit.com/correctiveDisplacements.php

Basically, you create your blendshapes in zBrush and export a displacement map for every blend, plus one for the base pose. You then build a small network in Maya’s Hypershade that takes the difference between the base disp map and the maps for the blends, and adds it on top of the base map. The blending is then linked to the blendshapes so that they work simultaneously.
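A minimal Maya Python sketch of that network, as I understand it – the node, map and blendshape names are hypothetical, and I’m using the maps’ alpha as the displacement signal:

import maya.cmds as cmds

# displacement = baseDisp + (blendDisp - baseDisp) * blendshapeWeight
base = cmds.shadingNode('file', asTexture=True, name='baseDispMap')
blend = cmds.shadingNode('file', asTexture=True, name='smileDispMap')

diff = cmds.shadingNode('plusMinusAverage', asUtility=True, name='dispDiff')
cmds.setAttr(diff + '.operation', 2)  # 2 = subtract
cmds.connectAttr(blend + '.outAlpha', diff + '.input1D[0]')
cmds.connectAttr(base + '.outAlpha', diff + '.input1D[1]')

scale = cmds.shadingNode('multiplyDivide', asUtility=True, name='dispScale')
cmds.connectAttr(diff + '.output1D', scale + '.input1X')
cmds.connectAttr('blendShape1.smile', scale + '.input2X')  # the target's weight

total = cmds.shadingNode('plusMinusAverage', asUtility=True, name='dispSum')
cmds.connectAttr(base + '.outAlpha', total + '.input1D[0]')
cmds.connectAttr(scale + '.outputX', total + '.input1D[1]')

disp = cmds.shadingNode('displacementShader', asShader=True, name='faceDisp')
cmds.connectAttr(total + '.output1D', disp + '.displacement')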

Sculpting the blends in zBrush was very intuitive, and a big advantage was being able to create symmetrical shapes. These will later be split up into two halves in Maya. I found the book “Stop Staring” by Jason Osipa to be extremely helpful; it basically tells you everything you need to know about modelling, rigging, skinning and blendshaping a face.

The layer feature in zBrush comes in very handy when making blends. First and foremost, you create a base layer to start out from, and then a new layer for every new pose. You can then “tween” the layers using a slider to check out the blendshapes and see how they react to each other.

I’m also done with the UV-mapping of Yesim. It all went quite smoothly, and I decided to split her up into three maps, mostly to make sure the face got a map of its own, since it will need quite a few maps to match the blendshapes. The rest of the skin was then placed together with some other things on one map, just to minimize the number of sss shaders I will need later on.

Currently I’m dealing with the corrective blendshapes for Yesim’s body. These are needed to fix some deformation issues that couldn’t be solved with skinning. This generally works really well; I’m using the Comet Pose Deformer script for Maya, which allows one to create blendshapes without having to duplicate geometry and hook it up manually. The only problem I’ve run into – and it is actually quite a large one – is that after creating a few blendshapes, the calculation time for the script goes through the roof. Currently I’m stuck waiting 30 minutes for every blendshape I create. I will have to find a way to solve or work around this.

Apart from that, what’s left now is blendshapes for Jean-Francois’ face, his UVs and corrective blends. Then I’ll just have to texture paint everything, bake my maps, set up the shaders in Maya, get the corrective displacements going for the blendshapes, create a backpack for Jean-Francois and make some eyes…

Too bad I really don’t have time for more sculpting – it’s a lot of fun.

Time for a final burst to get everything done now!

Posted in Characters