Wrap-up

Sooo – I realize I haven’t posted here for over a month. It’s now a week after the project was finished, or, as finished as it got. I thought it was time the result was presented here on the blog, my missing three weeks were explained, and my thoughts on the whole project were put down in text. It will be a pretty lengthy post.

But first, the movie – Greed Cliffe WIP: http://vimeo.com/21716557

You’ll notice a couple of shots in the middle are rendered/playblasted placeholders, and there are more than a few glitches here and there. We hope to patch things up in the summer; currently, we’re all busy with our internships. (Mine’s at Fido, by the way.-)

.

So, then, for the rest. I will start with some afterthoughts, continue from the very beginning – now with the benefit of hindsight – then fill in the three weeks of silence and finish it all off with some general thoughts. I will try to describe my workflow in full, discuss what worked and what didn’t, point out our more elegant solutions, and explain the reasoning behind choices that led to less elegant ones. I mean to be thorough, mostly to remind myself of the lessons learned, but also a bit in hope that it might be helpful to others. Simply put: it will be an exhaustive survey of the project.

.

Now, my first plan with my own specialization project was to just delve deep down into sculpting and resurface, eight weeks later, with a small set of elaborate statues set amidst graceful architecture, quickly supply them with spotless but simple shaders, and some tastefully sparse lighting. This plan was changed a bit when my solo project turned into our team effort, though not substantially so. But what I actually did was this:

Week 1: Character concepts.

Week 2: Environment block-in, reference-searching, research for other stuff.

Week 3: Architecture inspired by Alhambra.

Week 4: Scrapped Alhambra, now freestyling cheese-architecture.

Week 5: Scrapped the cheese-archs; now basing everything on Bada Bagh instead.

Week 6: Finished modelling.

Week 7: Lighting.

Week 8: Rendering.

(Week 9: More desperate rendering.)

What with twice throwing away almost all the progress I had made, I spent a grand total of two weeks doing the work I had originally planned that actually wound up in the final result. The rest of the time was spent responding to the needs of making a film.

That said, I’ll take it from the top.

.

Story

Week One of the project was in fact preceded by a few weeks during which we discussed what the story would be. We considered basing it on well-known stories, like Don Quixote, or making something trailer-like. We ranged from bombastic pretentiousness to inane comedy. In the end, we discarded what we’d worked on a few days before kickoff, and settled on something that was intentionally somewhat bland. We weren’t storytellers, we decided, and so wanted to focus on the aspects of creating a movie, not making a good movie.

In retrospect, it seems like we could have come up with something more interesting. A longer, more proper planning-phase was needed – I’ll return to that – even if it would have taken away time from the production-phase. A curious thing with the story we settled on was that neither of the characters was to be either “good” or “bad” – they were both to occupy a narrow position of semi-villainousness, just likeable enough not to be disliked, but not so much that either character was perceived as being the good guy. And, much as we talked about quality over quantity, somehow, we still wound up with a rather large list of requirements for the 30+ different shots.

So: I now think we’d have produced a more interesting, and more consistent, result had we 1) stuck to a pre-defined concept with clearer character distinctions, and 2) toned down the narrative continuity in favour of a few highly polished shots inside a 3) way, way more limited space. And, 4) allowed ourselves the time to plan this.

.

Modelling

Apart from a short stint concepting the characters and looking into controlling hair-dynamics-driven joint-chains, I pretty soon started doing just that: modelling. Now, we had an early, vague notion of the spaces needed for the story; a simple mockup was made, which the animators then used to find interesting angles, and which was then refined into a more detailed block-in environment focused on laying out the surfaces needed for the animation.

Now, the problem with our modelling was that we lacked direction. A week was spent blocking in the scene, then finding references for the environment, making moodboards, and such. But the only concepts we did for the environment – a paltry handful or so – were, somehow, cast aside as being too “simple”, and not really referred to later in production, except for colours. The only storyboards that ever existed were Maya-renders of the block-in environment.

We – Oskar and I – knew we wanted to focus on sculpting, and spend less time on texturing and shading. (Which is, really, the reverse of most productions.) Further, much as we looked, we didn’t really find any references from CG productions that suited us – they all seemed either too cartoony or too realistic. So, we had neither particularly direct CG references, nor detailed concepts, nor storyboards in which to quickly test the concepts we came up with. It didn’t quite seem so unplanned while we were working on it, of course, but with the benefit of hindsight, one can say that it pretty much was. The lesson of a proper planning stage was to be driven home, repeatedly, during the project.

.

Week Three: Alhambra and a fledgling pipeline

So, when I began modelling our chosen test-area (the entrance), I basically just found a cool pillar from Alhambra and started to copy it. This was done enthusiastically and thoroughly in zBrush: we even made patterns and such in geometry. (A small pipeline was developed for that: from sculpting things by hand, or stamping them out as brush-alphas, we moved on to using drag-rectangle masks to place our patterns and then using Deformers (offset, for instance) to extrude the shapes. Finally, and giving the best results, I began using zApplink, and a high-ish resolution zBrush document, to paint my alphas in Photoshop, turn them into polypaint information, and then mask by polypaint intensity.) The results were pretty nice. They were also time-consuming and absurdly highres.

But, once we had sculpted and stamped to our hearts’ content, the question arose as to how to get all that information into Maya. The first plan was to export a moderately lowres mesh and use tangent-space normal maps for the details. But we weren’t particularly pleased with the results visually, the moderately lowres meshes were pretty highres anyhow, and we kind of wanted to incorporate Decimationmaster into our workflow. This was because it would give lower-res yet more accurate models, and rid us of the need to worry about topology when building our basemeshes in Maya. We wanted to find a workflow that let us spend as much time as possible sculpting, and that automated most of the rest of the process.

The problem with Decimationmaster was that, well, without us looking around for too long, there didn’t seem to be any simple way to, within zBrush, create a normalmap for a decimated mesh with the details of a highres mesh of a different topology. zBrush seemed to prefer making maps for the same subtool, but between different levels. Which meant projecting highres details on a subdivided decimated mesh; which required a bit of cleanup, and which worked poorly for areas we had lightly stamped with patterns. And, besides: tangent-space normal maps looked horrible on decimated meshes.

They looked less horrible if one separated all the faces in the UV map into separate shells, which let the normalmap smooth over edges more properly, and they worked alright-ish if one displaced the model a bit, too. When we got that far, we sighed and just displaced the original basemeshes instead. Now, we would figure out a workflow eventually, but along with that sigh, which came from this zBrush-to-Maya mess, came also an acceptance of the fact that we’d never be able to sculpt everything with that kind of detail, and that we maybe shouldn’t go for this art direction anyhow, as it seemed to get a bit… cluttered.

.

And so, after a week’s modelling, we chucked it all and started over. It was the first step in us (me) realizing that the needs of the film required some adapting to; until then, I had mostly just looked at wonderful references and wanted to copy them. But the most awesome architecture in the world might completely ruin the composition of a shot, and utterly distract the viewer in all the rest. So: it was a good decision, for that reason. It was also a poor decision, because it wasn’t followed through with a proper analysis of what had gone wrong.

Some of what had gone wrong I’ve already mentioned: lack of concept art and overview of the art direction as it would be implemented in the different shots. But we mixed this up with a second problem: that the work took too long and that the pipeline was unfinished. I now suspect that the art direction wasn’t all that off, and that we probably just needed to sort out our storyboards to see that it would have worked: the problem lay in the time, which would have been less of a problem had we just sorted out the pipeline. But, we decided to change both art direction and pipeline.

.

Week Four: freestyling and final pipeline

So began the second week of modelling, which saw a whole lot of work getting done. I drew on some of my favourite themes from Alhambra, then freestyled much of the rest, and had soon modelled a substantial portion of the scene. I even did the four statues, and found time to make a concept-sculpt of Yesim’s head. Our pipeline was hammered out to include xNormal for the creation of object-space normalmaps. This, we found, required the least clean-up and gave very good results. The pipeline, in full, went like this:

Carefree creation of basemeshes in Maya and unworried sculpting in zBrush. Topology and all that was of no concern.

Many props – such as pillars – were made out of multiple subtools, and were needed in a couple of variations, more or less ruined. The way I made these was to first do the whole version; then duplicate it, mask the areas I wanted to preserve, smooth/flatten the areas that were to have fallen away, and then give those surfaces a bit of stony roughness. That is to say: if I had a pillar, and wanted a version of that pillar with really just the very base left, I simply smoothed and flattened until I had compressed all the (millions of) polygons of the upper portion of the pillar into the cracked surface. The other way would have been to create new basemeshes and project the detail back on; a more economical way of doing it, but – or so I found – slower.

Anyhow, once I had made as many degenerated versions of a set piece as I wanted, I arranged all the pieces, giving them some space between each other. This was then decimated, then merged into one subtool (just to simplify things, I kept things that were to go into the same atlas-map together) and automatically UV-mapped by UVMaster with as little interference as possible. Some cleanup and sorting of UVs was done using GoZ to quickly send meshes back and forth between Maya and zBrush.

Next, everything was exported and imported into xNormal. Now, the reason I arranged all the pieces at a distance from each other was so as not to confuse xNormal with intersecting or almost-touching surfaces of different pieces. Occlusion and cavity maps were also baked and were put together with some yellowish colours to form the base of our simple shading.

About now, after another week, it became undeniable: if things had seemed cluttered before, they certainly were so now. The new pillars and their architecture lacked any base in our references, except for a particular detail I loved in Alhambra – the mocarabe, or stalactite-formations – which had been scaled up out of proportion in this new mode. They were a step in a more cartoony direction, but simply didn’t work out.

So I started over. Again.

.

Week Five and Six: Bada Bagh and priorities

Now I turned to Bada Bagh for inspiration, and found cleaner lines that would work better in our shots, yet still much of the flowery decoration I wanted. With a rather robust pipeline backing me up, there’s rather little to say. With an acceptable art direction finally nailed down, it was just a question of doing it all, which went smoothly enough.

There was a certain change of plans around now, though – until then, the story had included a goat for comic relief. The goat was scrapped, but we (for some reason) decided to incorporate goats into the architecture instead. Goat-statues, reliefs with goats carved into them; the circlet became goat-y and the last treasure wound up presided over by a giant golden goat. Why, one might ask? Perhaps to preserve our original working title, “Goat Darnit”. Certainly, I can’t recall any good reason for it, nor do I think the goat-theme was carried through particularly well in the final result.

The lesson there might be one of letting go of your darlings; when the goat-character was scrapped, there was no reason to keep goats in the movie any more. (Just as my Alhambra stalactites became a burden later on, too.) It isn’t the only example of something surviving past its usefulness; certain pieces of architecture were added at one point, needed then for a shot’s composition (or for the layout to just make sense) – but lost their usefulness after a while. A row of pillars was removed a week from rendering because they weren’t just unneeded: they were actively making things worse.

Around this time in the project, the notion of having dailies had largely fallen away. We were too busy, it seemed – indeed, I was so busy I didn’t write another post on this blog until now. What with everyone having their own schedule and being well aware of what was needed, and everyone sitting next to each other and so knowing where everyone else was, there seemed to be no point. However, I now think that dailies were a tool that could have been put to good use; for instance, had the goat-reliefs been discussed with the whole group (instead of just us two environment modellers) they would, no doubt, have been scrapped. By allowing ourselves to be absorbed by our individual tasks, we lost (some) perspective of the big picture.

On a similar note, one can point out that, while extra time was spent on the obvious, close-to-camera details, rather little time was spent looking through the shot-cameras until we came to the lighting phase. As modellers, Oskar and I mostly set about modelling the whole scene, and though concessions were made to the distance of the camera from the objects being modelled, we sort of operated under the assumption that the composition was fixed and things should just be refined; not removed, or moved, or added to. That is to say, again: once some headway had been made with the models, we should have taken the time to review how they worked in the movie and discuss how the composition in shots could be improved.

Anyhow, after two weeks of modelling we were about to start shading when it became clear that Daniel would need more time on the characters, and that the task of lighting all the 36(ish) shots was unassigned.

.

Week Seven: Lighting and the first few shots

Not much needs to be said about the lighting process. I managed three to five shots a day; Oskar soon had to chip in. The effect was, of course, that the shading and finalizing of the environment were put on hold, but the lighting was the priority.

There are, no doubt, plenty of theories on proper lighting; we mostly just worked on gut feeling and attempted rimlights. We started out by doing paintovers in Photoshop to find and convey our ideas, and then went on to try and replicate them in 3D. (The results invariably differed wildly.) Placeholder grass was put in as needed, to guide the creation of vegetation later, and we shuffled the geometry of the scene around on a per-shot basis to find more pleasing results. Simple clouds were made out of planes to indicate how the composition in the shot would work.

We planned to use final gather for rendering, but tried to use it as a supportive effect, not as the sole basis for the lighting setup. A simple blue dome was set around the scene, though it was never given an HDR image, as I had originally planned. It seemed superfluous at the time, but might have given more pleasing results.

Towards the end of the week, I finished the first two shots, so they could be rendered. I had to make some clouds; they are simply slightly modified versions of the cloud-preset in Maya. A repurposed fluid effect from an earlier project was used for the wisp of cloud whipping by in the first shot. Some small stones were made and placed out, and a bare minimum of texture-work was done to support the shot. We wanted unobtrusive textures; enough to not make the environment seem entirely untextured, but nothing that required much time to set up or distracted from the clean, sculpted features.

Pretty late in the process, Daniel suggested we give everything a bit of subsurface scattering by simply plugging an mi_sss_fast shader into the incandescence slot of our shaders. This brightened things up considerably; however, since we were rendering 32-bit (later 16-bit) exr files and made sure to include an incandescence pass, the sss effect could be controlled entirely in Nuke.
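For the curious, the hook-up amounts to a single connection. A minimal sketch in Python, assuming the mental ray plug-in is loaded – the node type is Maya’s fast-SSS shader, and the shader names here are made up for illustration:

import maya.cmds as cmds

# create a fast-SSS shader and let it drive the blinn's incandescence,
# so the scatter contribution ends up in the incandescence pass
sss = cmds.createNode('misss_fast_simple_maya', name='charSkin_sss')  # hypothetical name
cmds.connectAttr(sss + '.outValue', 'charSkin_blinn.incandescence', force=True)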

.

Week Eight: Rendering – the theory

Now, for rendering, we wanted three layers: the Main layer, in which everything had its standard shaders and the characters had blinns, final gather was computed, and so on; an SSS layer for the characters, so that the strength of the sss effect could be controlled in post; and, finally, an AO layer, since the mental ray ambient-occlusion renderpass does not take normalmaps into account.
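In Maya terms the layer setup was nothing fancier than this (a sketch – the namespaces and object patterns are made up, and in practice the layers were built by hand in the UI):

import maya.cmds as cmds

# gather the referenced geometry (hypothetical namespaces)
env_geo = cmds.ls('env:*', type='transform')
char_geo = cmds.ls('anim:*', type='transform')

# one layer each for the beauty, the character SSS and the ambient occlusion
cmds.createRenderLayer(env_geo + char_geo, name='Main', makeCurrent=False)
cmds.createRenderLayer(env_geo + char_geo, name='SSS', makeCurrent=False)
cmds.createRenderLayer(env_geo + char_geo, name='AO', makeCurrent=False)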

All shaders were set up manually, so that every shader with a normalmap had a corresponding AO shader; normalmaps were not needed for the use-background shaders (or black surface shaders) used to mask things out in the SSS layer, but versions did have to be made for shaders that used displacements. Ideally, we ought to have found a way to generate these shaders automatically; we settled for at least automating the process of assigning them, with some simple scripts made by copying the echoed commands from Maya’s script editor.
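The assignment scripts were no more sophisticated than something along these lines – a sketch, assuming shading groups named “<shader>SG” with “<shader>_AOSG” counterparts (our real scripts were literally pasted-together script-editor echoes):

import maya.cmds as cmds

def assign_ao_shaders(suffix='_AO'):
    """Reassign every shading group's members to its AO counterpart, if one exists."""
    # we did the assignments with the AO layer active, so they applied there
    cmds.editRenderLayerGlobals(currentRenderLayer='AO')
    for sg in cmds.ls(type='shadingEngine'):
        ao_sg = sg.replace('SG', suffix + 'SG')
        if ao_sg == sg or not cmds.objExists(ao_sg):
            continue  # no AO version set up for this shader
        members = cmds.sets(sg, query=True) or []
        if members:
            cmds.sets(members, edit=True, forceElement=ao_sg)

assign_ao_shaders()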

Now, in the Main layer, I added the usual renderpasses: reflections, specularity, diffuse, depth, and so on. I also set up a handful of light-contribution maps and diffuse passes, so that the effect of particular lights could be controlled in post. Finally, a whole bunch of matte passes were made to give control over particular objects in the scene: the characters, the characters’ eyes, the grass, the ground, and so on.

We actually set up the matte-passes in the objects’ original scenes, so that they were referenced into the renderscenes. (The referencing system worked like this: a handful of environment area-scenes were referenced into a main environment scene, which was referenced into the renderscene. The character scenes (with rigging and shading) were referenced into an animation scene, which was referenced into the renderscene for a particular shot.)
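In script form, building a shot’s renderscene boils down to something like this (a sketch with made-up file names and namespaces – in reality the references were of course created through the UI):

import maya.cmds as cmds

# start a fresh renderscene for one shot and pull in the shared pieces
cmds.file(new=True, force=True)
cmds.file('scenes/env/main_environment.ma', reference=True, namespace='env')
cmds.file('scenes/anim/shot_010_anim.ma', reference=True, namespace='anim')

# save it as that shot's renderscene
cmds.file(rename='scenes/render/shot_010_render.ma')
cmds.file(save=True, type='mayaAscii')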

Anyway, that way, we only had to make the passes once; then we simply associated all the in-referenced contribution maps with the renderscene’s Main layer and assigned the matte passes to that layer, whereupon they automatically added themselves to their respective contribution maps.

Finally, in order for it all to work properly with our Backburner-controlled network rendering, I had to make sure that the renderscene and its project were shared over the network, that only files from our rendermaster machine were used for rendering, and that all file paths were either relative or pointed to the network location of the file. This was surprisingly tricky; for one thing, Maya’s file-reference system uses absolute paths for referenced scenes. So, what I did was set up an environment variable in each renderslave’s Maya.env file, called “renderProject”, and gave it the UNC path to our rendermaster. That is:

renderProject = \\mastercomputer\projectfolder_shared_over_the_network\

That way, I could replace the relevant string in each reference’s file path with a variable, so as to avoid giving absolute paths. The variable could then be changed if a different computer was to act as rendermaster – preferable to having to go into each scene and change each reference’s path by hand.
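The repathing itself can be scripted; a rough sketch of the idea, with the root path simplified and the handling of unloaded references glossed over:

import maya.cmds as cmds

OLD_ROOT = '//mastercomputer/projectfolder_shared_over_the_network'  # as Maya reports it, with forward slashes

for ref_node in cmds.ls(type='reference'):
    if 'sharedReferenceNode' in ref_node:
        continue  # skip Maya's internal shared reference node
    path = cmds.referenceQuery(ref_node, filename=True, withoutCopyNumber=True)
    if path.startswith(OLD_ROOT):
        # re-point the reference at the same file, addressed through the variable
        cmds.file('$renderProject' + path[len(OLD_ROOT):], loadReference=ref_node)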

I used File Texture Manager to keep track of textures and make sure their paths were set correctly; also, I used it to set up .map-files for all textures. Finally, all dynamics had to be cached.

Once everything had been rendered, I started the tree in Nuke with the Main file, which was somewhere around 40 MB, with a dozen standard passes, a handful of light-diffuses, and a score or more of matte passes. Now, to save time and whatnot, I simply used the passes embedded in the exr to control the composited masterBeauty pass. So, if I found that the specularity was too strong, I would create a Merge node, set its operation to From, pipe the spec pass into the A input and the masterBeauty into the B. That way, I could tone the effect down, or remove it completely, make some changes and colour-correct that particular feature, and then add it back in again.
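In Nuke’s Python terms, that subtract-grade-add-back pattern looks roughly like this – a sketch with made-up pass and file names, assuming the masterBeauty sits in the exr’s rgba:

import nuke

main = nuke.nodes.Read(file='renders/shot_010/main.%04d.exr')

# pull the specular pass out of the multichannel exr
spec = nuke.nodes.Shuffle(inputs=[main])
spec['in'].setValue('specular')

# a Merge set to 'from' computes B minus A: beauty minus spec
no_spec = nuke.nodes.Merge2(inputs=[main, spec], operation='from')

# grade the isolated pass, then add it back on top
graded = nuke.nodes.Grade(inputs=[spec])
graded['white'].setValue(0.6)  # e.g. tone the specular down
result = nuke.nodes.Merge2(inputs=[no_spec, graded], operation='plus')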

Along with the light-diffuse passes and the multitude of matte passes, one could then do some pretty thorough colour-correcting. The only thing requiring particular attention was the characters’ sss, which was brought in separately, blended on top of the Main layer’s diffuse pass, given specularity and facing-ratio-controlled reflections, and then put back on top of the masterBeauty. Also, still frames of clouds were rendered out and put on 3D planes in Nuke, along with the cameras from Maya, to replace the background. After that, it was just a question of rendering it all out.

That was the theory.

.

Week Eight and Nine: Rendering – the mess

When the theoretical system coincided with actuality, setting up a renderscene took no more than half an hour. The problem was: the theoretical system became full-fledged only in the very last few days of rendering. By then, a week and a half had been spent doing more or less manual set-ups for the renderscenes, and in those last, desperate days, there was no way of taking a step back and organizing things – the last frames were rendered less than two hours before our final presentation. And there were a whole lot of interesting little quirks particular to every renderscene.

Now, the reason the theory wasn’t allowed time to be developed from the very beginning – apart from, well, lack of time – was that the project hovered on what seemed to be the very edge of what could, possibly, be set up manually. So that, when the question of “Just Doing It The Way We Know” versus “Spending Time Finding A Smarter, Quicker Way To Do It” was raised, we went with alternative A.

In the first few shots, in fact, all shaders were assigned manually – so that Daniel was needed to make sure the characters got the correct sss shaders and whatnot. We soon got tired of that, and so made some scripts to automate at least that job. No scripts were ever made for the environment; I made a base-renderscene and set up the environment’s shaders in the three layers there. That was then used – you guessed it – as a base for all renderscenes to come, with the animationscenes referenced into it.

About when we’d gotten that far, Tomas set up almost all of the renderscenes’ renderlayers and shader assignments. The problem was that things changed halfway through that process. The characters got some new shaders, and some existing ones were renamed. Work continued on the environment – Daniel chipped in and placed out lots of little stones, Oskar finished some larger props that had somehow been left unfinished; none of the treasures were done; the tree wasn’t even begun. Also, the rendersettings were changed.

All that meant that the setup Tomas had made, though splendid when made, needed cleanup. What with Oskar and me doing our particular light-setups per scene – sometimes having to hide particular pieces of geometry or move them around – things invariably got lost in translation. And attempts to optimize and reduce rendertimes – which grew from fifteen minutes per frame to forty or sixty in some shots – were squeezed in here and there, leading to new workflows, leading to more changes.

For instance, it seemed that importing the grass scene into the renderscene saved a little bit of time, so I did that for a few shots. However, I had such problems getting the attribute maps carried across that I eventually abandoned the idea. For the longest while I wanted to render the grass separately, or put the sss on the environment in the SSS layer – things that might have saved time in the long run, but which would have required further set-up.

We started out with our six workstations and a handful of extra renderslaves; thanks to the kindness of others, we wound up with thirty in the end. However, as these were also other people’s workstations, they would be turned on or off intermittently; because of the task-system in Backburner, that sometimes led to frames being unnecessarily re-rendered. Also, rendercomputers would sometimes encounter an error when opening a renderscene but still return a task-finished message; several times we found whole sections missing where a malfunctioning rendercomputer had gone through several tasks and “finished” them. Because the “Just Doing It” vs “Do It Smarter” equation had the added “Keep The Renderfarm Busy” factor going for it, I had little time to check whole rendered sequences – instead I simply checked a few frames here and there to make sure nothing had gone wrong. That way, I obviously didn’t notice the frames that weren’t even there.
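A ten-line sanity check would have caught those holes. Something along these lines – hypothetical paths, and assuming frame-numbered exr names:

import os
import re

def missing_frames(directory, first, last):
    """Return the frame numbers in [first, last] that have no rendered file on disk."""
    found = set()
    for name in os.listdir(directory):
        match = re.search(r'\.(\d+)\.exr$', name)
        if match:
            found.add(int(match.group(1)))
    return [frame for frame in range(first, last + 1) if frame not in found]

print(missing_frames('renders/shot_010/main', 1, 120))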

And when things were re-rendered, or missing frames were filled in, one sometimes found that a renderscene had broken down in one’s absence – for instance, the first few renderscenes had to be remade almost entirely, as the characters had new shaders, the environment had been finished, and new settings were used to render. One would occasionally try to adapt to the settings then in use, only to find that this led to slight variations that amounted to nothing but disturbing flicker.

Further: the animators had, in the animation scenes, created clusters to fix what the skinning and blendshapes could not; it seems most of these deformers were lost in rendering. We should have used geometry caches instead of referencing the animation scenes; it would have been more lightweight and less prone to errors.
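Exporting such caches is not much work in itself. A sketch from memory, with hypothetical namespaces and paths, and leaving out the separate step of attaching the caches to the render meshes:

import maya.cmds as cmds

start = cmds.playbackOptions(query=True, minTime=True)
end = cmds.playbackOptions(query=True, maxTime=True)

# write one world-space geometry cache per character mesh for the shot's range
for shape in cmds.ls('anim:*', type='mesh', noIntermediate=True):
    cmds.cacheFile(fileName=shape.replace(':', '_'),
                   directory='cache/geo/shot_010',
                   points=shape,
                   startTime=start, endTime=end,
                   format='OneFile', worldSpace=True)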

.

So: Murphy’s law was proven, again and again. Things that could go wrong, did. Of course, it all almost worked, save for the four or five shots that had the tree in them – the tree was not finalized until the very last minute, and those were the last shots to be rendered. Since we were working on other things at the same time, it took a while before we discovered that the rendertime per frame was two to three hours. And by then, there simply was no time to fix it.

I’d like to bring up the notion of dailies again here, and the importance of sharing the workload so that crucial tasks don’t hinge on one individual. In this rendering business, I was that individual – but I failed utterly to share the load, or to inform others of my workflow, until the very last days. While the project structure of course requires a division of labour, and we all wanted to specialize in our particular areas, the execution of tasks vital to the project’s completion should not happen without insight from the other members of the team. That’s my new theory, at least. The regular sharing of information offered by dailies can allow people to step into other roles than originally intended, when required to do so.

The case in point here is the five missing shots: the renderscenes that would not render weren’t actually set up by me, but by Oskar and Daniel; they were, however, set up in the system I had devised. When that system failed, or when particular points of it were misunderstood or not communicated, they lacked the information and experience I had acquired over the past week and a half to resolve the problem. Earlier, a whole night had gone by without any rendering because I had failed to point out the most important UNC path for the Backburner rendering: one has to open the renderscene from the network, not from the local computer’s folders. (Even if it’s the exact same file, the path sent to Backburner is different, and only the UNC path can be found by the network renderslaves.)

Wheeew! That was long-winded. A few last words, then: almost all compositing was in fact done by the animators – Björn, Tomas and Hannes. I did a few shots – and acted as a consultant – and Oskar comped one or two. Thought I’d mention that.

.

Conclusion

Congratulations, you’ve just read (or scrolled through) nine pages of text. It might seem that a whole lot of it was spent detailing just how things went wrong; but I think that’s the part that is interesting, and that I want to convey to others. Everyone knows how things should be done – long, thorough planning-phase and whatnot. And while this project certainly didn’t crash and burn, it could have gone much smoother; and it is by pointing out the errors made, the effects they had and how they could (possibly) have been avoided, that I hope to helpfully inform.

So, I thought I’d review some of the general conclusions I’ve already mentioned, and sprinkle in some additional thoughts.

.

First, as I returned to again and again: planning is paramount. You want to make sure you’ve really taken the Quality over Quantity thing to heart. It feels a whole lot better to spend some time finding the Smarter and Quicker way than to rush ahead and do things the old familiar way.

Storyboards are a great tool – to try out the art direction, to plan the modelling, to define the lighting and plan the compositing. For us, the two weeks of scrapped modelling could have been well spent just doing storyboards, which would have given a more consistent and well-planned result in the end. Our efforts would have been more focused on the needs of the particular shots, instead of – as we did it – a more general effort expended on the entire scene.

We had planned to try out a full pipeline much earlier in the project and to deliver a final shot weeks before everything was due; for one reason or another – the characters were tweaked until the last week, for instance – this never happened. It should have been done, and rigorously so, so that general solutions could have been found to automate the process later on, and the pitfalls of the system detected.

Dailies, if used to simply say “I did this yesterday, and today I will do this” are rather useless. If used to thoroughly discuss the discoveries made, one’s workflow and one’s priorities – well, then it sounds a whole lot more interesting. Setting aside some time each day to take a step back and actively review each others’ work, and to be made privy to others’ workflows, will help maintain perspective, help analyse just what has gone wrong, and distribute responsibility more evenly.

Finally, this: I found that making the environments for this short required a very different kind of mindset than making a small scene for a private project would have. The roles we assumed in the group project were defined more by the project itself than by the theoretical role. The importance of openness to critique and discussion can hardly be overstated. I don’t think we got it quite right on this first try. But I think we’ll be better at it next time.
