and a BIG thanks to everyone who attended the gradshow, you guys were brilliant!
Last post before the gradshow, and with a fully rendered sequence, which for me was very thrilling because it's the first time I've seen him in his entirety, audio, beard and all.
But first, let's see what's been going on since the last post. WELL, I finished the rig for all the post-animations. And it looks like this.
Included in the rig is a very dynamic control of the tongue with blendshapes and joints, plus a "drift" jaw, which is something I invented to cope with lip drag (when you open your mouth, your teeth move before your lower lip starts moving), pupil-dilation controls, sticky lips and some wobble controls for the hat, shoulders etc.
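A "drift" jaw like this can be approximated with driven keys: the jaw joint (teeth) follows the jaw control immediately, while the lower-lip joint only starts following after a few degrees. A minimal MEL sketch, where jaw_ctrl and lowerLip_jnt are made-up placeholder names:

```mel
// teeth move right away; the lip has a small dead zone before it follows
setDrivenKeyframe -currentDriver jaw_ctrl.rotateX -driverValue 0  -value 0  lowerLip_jnt.rotateX;
setDrivenKeyframe -currentDriver jaw_ctrl.rotateX -driverValue 5  -value 0  lowerLip_jnt.rotateX;
setDrivenKeyframe -currentDriver jaw_ctrl.rotateX -driverValue 30 -value 25 lowerLip_jnt.rotateX;
```

Smoothing the in-between tangents in the graph editor then gives the lip a soft catch-up rather than a hard pop.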
When recording with Faceshift, it has a tendency to never quite close the character's mouth completely; there is also no tongue capture, and in general I had to tweak a lot of things to make him pronounce the words better. But that's all well and good, because that's what I set out to do in the first place!
Also, I finished up the last bit of shading work. I will never ever look-dev my textures on a black background again, especially when the good folks at http://www.hdrlabs.com/sibl/archive.html support you with free HDR textures for IBL lighting. A good test of any texture is to see how it handles different lighting scenarios, and it's important not to develop for one specific scene. Imagine your director asking for a daylight shot of the swamp monster that you've been texturing in almost pitch black.
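A quick way to cycle a texture through different lighting scenarios is to point an IBL node at the various sIBL-archive HDRs. A MEL sketch, where the HDR path is just an example; normally you'd create the node via Render Settings > Environment > Image Based Lighting:

```mel
// scripted equivalent of Create IBL, roughly:
string $ibl = `createNode mentalrayIblShape -name "lookdevIblShape"`;
setAttr -type "string" ($ibl + ".texture") "sourceimages/env/studio.hdr";
// swap the .texture path between a few different HDRs and re-render
// before calling the texture finished
```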
Apart from that, I've just been touching up lots of small things: setting up my different scenes for rendering on multiple computers and making sure that all comps render the same way and at the same quality (you'd be amazed at what Maya can do when you move scenes to a new workstation).
Next up is increasing the quality and the FG settings. I might also change the lighting slightly.
The beard gods must have punished me since last time; it was a struggle to get the beard to work properly.
To start with, the gorgeous lighting I'm using is called native IBL lighting. It's a lot like "Emit Light", except much better and MUCH faster. You activate it by creating an IBL and then adding some string options via the script editor.
You create the new string options and set them like this:
Name: environment lighting mode
Name: environment lighting quality
Value: 0.3 (can be bumped up but increases render time)
Name: environment lighting shadow
Name: environment lighting scale
Value: 1.0 (think of this as the light intensity)
Name: light relative scale (this option forces physically accurate lighting on non-BSDF shaders)
More on this here: http://elementalray.wordpress.com/2013/01/18/the-native-builtin-ibl/
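Put together as a script, the setup looks roughly like the MEL below. The option names come from the list above; the values for mode, shadow and light relative scale are not given above, so the ones in the sketch are assumptions to verify against the linked elementalray post:

```mel
// helper: append one string option to the mental ray globals
proc addStringOpt(string $name, string $value, string $type)
{
    int $i = `getAttr -size miDefaultOptions.stringOptions`;
    setAttr -type "string" ("miDefaultOptions.stringOptions[" + $i + "].name") $name;
    setAttr -type "string" ("miDefaultOptions.stringOptions[" + $i + "].value") $value;
    setAttr -type "string" ("miDefaultOptions.stringOptions[" + $i + "].type") $type;
}

addStringOpt("environment lighting mode", "light", "string");  // assumed value
addStringOpt("environment lighting quality", "0.3", "scalar");
addStringOpt("environment lighting shadow", "on", "string");   // assumed value
addStringOpt("environment lighting scale", "1.0", "scalar");   // light intensity
addStringOpt("light relative scale", "0.318", "scalar");       // assumed: 1/pi
```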
And so all is dandy! Except that, apparently, Maya Fur does not render that well (or fast) with IBL setups. NOW! If you set the Fur Render Settings to Volume instead of the default Primitives, you get a beard that looks like frost, doesn't really look good at all and registers poorly with the direct lighting from the IBL.
IF you set Fur Render to Primitives you get something decent, but if you want multiple layers of fur it's a no-go. Apparently this has been a long-running bug in Maya; my guess is that it has something to do with the depth sorting of multiple strands of fur. The result is random black 4×4-pixel spots flickering on every frame.
SO, a great guy called "Puppet" (http://www.puppet.su/download/shaders_p_e.shtml) has made a custom shader called p_HairTK which works like a DREAM and is really good. It just takes a little while to set up, and it works with GI and FG, indirect lighting and all kinds of direct lighting. EXCEPT native IBL light emission. So I had a beautiful beard but couldn't keep the light setup which I'd put on a pedestal by now.
But I found a way. After Maya 2012, Autodesk introduced a hidden way of overriding their otherwise hardcoded fur/hair shaders. You do this by selecting the transform node of your FurFeedback and typing:
addAttr -ln "miExportMaterial" -at bool -defaultValue 1;
addAttr -ln "miMaterial" -at message;
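With those attributes added, the custom shader still has to be wired into the new slot. A sketch, assuming the p_HairTK shading group is called p_HairTKSG and the FurFeedback transform is FurFeedback1 (whether the slot wants the shading group or the shader node itself may depend on your Maya version):

```mel
// pipe the custom hair shader into the override slot on the FurFeedback transform
connectAttr -force p_HairTKSG.message FurFeedback1.miMaterial;
```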
Now I’m doing eyelashes, doing my best not to make him look pretty!
Forgot to blog earlier about this week's progress, so in compensation I'm doing a much bigger post than usual, with some nice findings!
This week I've devoted a lot of time to look-devving my textures to make sure they're up to snuff and on an even level from clothes to face. You always remember the thing that pops the most, so I have to make sure that my stuff is either equally good or equally bad, rather than a good-looking face and badly shaded clothing.
A wise man once said: "You had better put more stuff in the scene before you decide that your other stuff is finished."
What this means is that what I earlier thought was "finished" was not so finished. Adding more textures and more detailed surfaces to an image always either enhances or suppresses the previously "detailed" stuff.
So I finished my shading and clothing detail. In the process I came across a difficulty with embroidery and post-painting displaced geo: it's damn near impossible to get a non-smudgy result. You have to constantly render out your displaced geo with your diffuse to make sure every fiber aligns the way you want. So I had to come up with a nicer way to work from texture to geo, instead of baking down sculpt detail and post-painting the diffuse. Luckily, ZBrush is an amazing piece of software, with its NoiseMaker plugin.
Here’s what I had to do. Instead of sculpting my detail from scratch, I started out with the diffuse.
I made a tileable pattern and a tileable fabric alpha, brought them into NoiseMaker, set it to UV and adjusted my depth accordingly. Then I baked it down to layers and used the morph target to paint back depth and lost detail, wear and tear etc. And voila!
Apart from this, and spending a lot of time waiting on look-dev renders, I also researched Maya Fur to give him some peach fuzz and a sleazy-looking beard.
To do this I've been using Maya's fur system and custom-making things like baldness maps, length maps etc. These are essentially just black-and-white maps that decide where he has more or less hair. (This is the baldness map: white is beard, black is no beard.)
He’s starting to look reaaally sleazy and for some reason I’m starting to recognize myself in his looks. They say people always do self-portraits and I sure hope this is not mine.
Next up is his long flimsy hair that is also going to be dynamic.
(Other than that, I'm looking into how many computers I'll be able to render on. I've kept a pretty strict file structure, which lets me import the workspace onto other computers with confidence that everything will render as it should.)
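One thing that helps when moving a workspace between machines is keeping every path relative to the project. A MEL sketch that sets the project and flags absolute file-texture paths (the project path is an example):

```mel
setProject "D:/projects/gradfilm";
// list file textures whose paths are absolute and would break on another machine
string $f;
for ($f in `ls -type file`) {
    string $p = `getAttr ($f + ".fileTextureName")`;
    if (`match "^/" $p` != "" || `match "^[A-Za-z]:" $p` != "")
        print ($f + " has an absolute path: " + $p + "\n");
}
```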
Sorry for the post-delay, this is what happens when you get really into your work!
Draft render of senior Transylvania as he’s now known.
No post-editing whatsoever, just a good base to start from!
Corrective animation time and also testing hair this week.
I also had to shave off some of the beard, because Faceshift was going crazy with what I assume were some crazy infrared light-bounce shenanigans in my (previously great) beard.
It’s one of those sacrifices that one has to make, luckily it worked fine without removing it all!
(could not find the maker of this comic strip but kudos to him!)
Posting has been slow because I've been doing many minor things over the last week, such as fixing new eyeballs due to "tri-nests" in the middle of the cornea, sorting out some edge-flow issues in the animation, and things of that nature.
The last few days, however, I've been working on something interesting: manipulating translucency in soft tissue (gums) and refraction in crystal-like structures (teeth).
The solution for both of these was to bump up the SSS weight and severely decrease the SSS radius.
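On an SSS skin shader this boils down to two setAttr calls; a sketch where the node name and attribute names are placeholders for whatever SSS shader you are using, not exact mental ray names:

```mel
// more scattering, much tighter radius = pearly gums / crystal-ish teeth
// ("gumsSSS", frontSSSWeight and frontSSSRadius are placeholder names)
setAttr "gumsSSS.frontSSSWeight" 0.9;   // bump the SSS weight up
setAttr "gumsSSS.frontSSSRadius" 0.2;   // severely decrease the SSS radius
```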
In the end I'm pretty happy with the result. They have a pearl-like finish and crystal-like refractions.
Also, I finished the corrective rig, with blendshapes for the neck and blendshapes plus a rig for the tongue.
Faceshift will give me a really good foundation to work with, but it only gives me the "outer" shapes, meaning facial expressions and eye direction. The teeth and tongue will have to be animated manually, so I prepared some different blendshapes with a highly interactive rig that can really TWIST the tongue (excuse the pun).
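Manual tongue shapes like these are just a blendShape node plus some keyable attributes on a control. A MEL sketch with made-up mesh and control names:

```mel
// targets first, base mesh last
blendShape -name tongueShapes -frontOfChain tongueCurl tongueTwistL tongueTwistR tongueBase;
// expose one of the shapes on the animator's control
addAttr -ln "curl" -at double -min 0 -max 1 -keyable 1 tongue_ctrl;
connectAttr -force tongue_ctrl.curl tongueShapes.tongueCurl;
```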
I'm preparing a test-render sequence with teeth, eyes, tongue etc. for the beginning of next week.
Tomorrow is prep day for that test render. I went from testing one frame at 5:30 min to 2:20 min after changing basically nothing but a gloss value.
Something is weird about the scene and I shall sniff it out!
Next time you’ll probably see a very fleshy face with some slimy interiors put in!