This was the render we got this morning. The biggest issues have been dealt with, such as the strange skin weights and the strongly saturated lights on the legs.
An AO pass for the path will be added tomorrow.
We have the final milestone on Monday, but we still have a few days after that until the graduation show. The textures are finally coming together, but I still have to give some extra loving to the gloss map, which does not do what it should. Making custom passes is a little tricky.
The rig is coming along, and hopefully there will be some time for extra tweaks next week.
Yesterday I worked on the motion vector pass.
What a bitch. And today I got to see the resulting render, and... frame 29 is blank. So I googled the problem; most posts were about people wanting to insert blank frames, not about removing them. The ones that did bring up blank frames only talked about the problem, not so much about the solution, and of course most threads had been closed after more than 12 months of inactivity.
So I went back to Maya and tried a render there; same result. This made me think that something was wrong with the camera, not the batch render. And lo and behold, on frame 29 the camera looks away from the Huldra... like there's something more interesting to look at over there, out in blank Maya space. Stupid camera! Something must have happened with that specific keyframe. Easy fix; I'm glad the problem wasn't bigger than that.
This project has left less time for me to improve my animating than I thought; as it is now, I'm constantly jumping between animating and working on the muscle deformers. I've added the blendshapes Sandra sculpted into the rig. Everything should be deforming correctly sometime this week, and then I'll really have to get cracking on the animation.
I was wondering about techniques when animating; I've heard some people make the major poses and then slowly break them down until they've basically made a pose for every frame. What I've always done is make the key poses and let Maya generate most of my in-betweens, only correcting them where they were off.
Without further ado – the animation progress so far.
Working with pre-filmed footage was extremely hard – any time I found that I wanted to hold a pose, I had to go through the rest of the animation, cut some animation on another part, and try to fit it to the best of my ability.
After I fixed the normal map, the lighting didn't match the way it did before. So it's back to light tweaking.
I found an HDR image with similar lighting, and with a little editing in Photoshop I made an image-based lighting setup that I could use as a reference. I won't use final gather in the final render because I think it gives too little control when I want to edit the light.
I used to render with gamma 0.454 because of the gamma difference between Maya and Nuke. But after a render today, when I had fixed some lights, I still thought the gamma was somewhat off, and I decided to change back to gamma 1 and instead set the default output profile to linear sRGB. The difference between these two is extremely small, but in the end one was better than the other.
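For reference, the 0.454 figure is just 1/2.2 – the inverse of the standard 2.2 display gamma – while the linear sRGB output profile uses the slightly different piecewise sRGB curve. A small sketch (plain Python, not Maya code; values normalized to 0–1) of why the two outputs look almost identical:

```python
def gamma_encode(v, gamma=2.2):
    """Simple power-law encode - what rendering at 'gamma 0.454' amounts to."""
    return v ** (1.0 / gamma)

def srgb_encode(v):
    """The piecewise sRGB transfer curve used by a linear sRGB output profile."""
    if v <= 0.0031308:
        return 12.92 * v
    return 1.055 * v ** (1.0 / 2.4) - 0.055

# Mid-gray is roughly where the two curves differ the most,
# and even there the gap is well under 1%:
# abs(gamma_encode(0.5) - srgb_encode(0.5)) < 0.01
```

So the "extremely small" difference is expected – the curves only really diverge near black, where the sRGB profile is linear instead of a pure power law.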
Because I used the misss_fast_skin shader, there weren't any passes where I could render out the subsurface scattering layers, so I had to create my own passes with writeToColorBuffer. Then there was the problem with the shadow passes: this particular shader doesn't support them. The solution was to connect the skin shader to a mia_material_x_passes. That shader didn't have a specular slider but a gloss slider, so I had to make yet another writeToColorBuffer. The result was nice and all was well. But when I rendered out the passes, this happened to the direct irradiance. After some testing I found that the gloss map was to blame: where there was no gloss/spec on the map (mainly the parts with twigs and bark), the direct irradiance was completely black. So when I added the overall layer, the bark ended up almost completely black.
For this problem I made an easy fix: I made the map gray-to-white instead of black-to-white. This fixed the black holes in the direct irradiance. Then I used a ColorCorrect in Nuke to amp up the gamma on the gloss/spec map back to its original contrast, and used the overall layer (where I had the bark, moss and twigs) to cut out the parts of the gloss map that I wanted to have zero spec. It's an easy fix, and I would like to know how to solve the problem properly in Maya, but for now this will do.
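The remap itself is trivial; here's a sketch in plain Python (0–1 values; note that the exact inverse is a lift-and-gain, which a gamma adjustment in the comp can only approximate):

```python
def lift_to_gray(v):
    """Remap a 0-1 gloss value into 0.5-1, so no pixel is pure black at render time."""
    return 0.5 + 0.5 * v

def restore_contrast(v):
    """Exact inverse of lift_to_gray, used in the comp to get the original range back."""
    return (v - 0.5) * 2.0
```

Since `restore_contrast(lift_to_gray(x)) == x` for any value in range, nothing is lost in the round trip; the overall layer then masks out the areas that should stay at zero spec.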
This is what it looks like right now. There's some problem with the bark on the legs: it looks very blue, and I think it has something to do with the rest of the skin maps lacking color in that specific area. Will fix that later today.
I've been having a problem with seams on the normal map. So far:
- I tried flipping the green channel, the red channel, and both the red and green channels.
- I tried painting over the seams in ZBrush.
- I changed the map's color space to linear RGB.
- Even if I use a plain one-color normal map, the problem is still there.
- The normals are all smoothed and pointing the right way; there are no double faces.
- If I disconnect the normal map completely the seams disappear, so there's no problem with the rest of the texture maps.
- In the viewport, with the normal map on, I can't see any seams.
- I've tried deleting history.
- I've made a new bump node.
- I made new lights and changed the shadow settings.
- I've tried new normal maps from ZBrush, xNormal and Mudbox, but the problem is still there.
- The UV shells have enough room between them.
After much debate I decided that maybe I could fix the problem in post. But after finding more strange seams in the normal map, fixing it in post was no longer an option.
Fourth day with the same problem, and I finally found the source: the vertex normals are not pointing in the right direction along the UV seams. This makes the renderer think the faces have a different angle than they actually do. So I tried different things with Set Vertex Normal, Set to Face, Average Normals and Unlock Normals until the vertex normals had a better direction. Then I exported the new OBJ into Mudbox together with the high poly, and when I baked the normal map I unchecked the smooth UVs option. And voilà, a working normal map!
As one can see, it's been some time since my last update, so I thought it was about time I explained what I've been up to!
Since MS2 I've noticed that the spine didn't quite articulate as expected, due to faulty rotation orders. This was a major problem, so I redid the spine (it's for the better, I promise). With both the hip and chest controls having new rotation orders, and lots of feedback on how little of the screen the Huldra occupied, I went back into blocking in the animation once again.
As she has jumped down and started pacing toward the camera, I thought a flock of birds might catch her attention and get her to change direction. I'm trying to get the timing and spacing better for the walk when her back is toward the camera – or actually for the whole shot, as I want her to linger more as the birds catch her attention and also right before the defensive pose.
I never wanted her to be run over by the camera, but I couldn't make up my mind about which direction she should run off to – I think I'll settle for the direction I have so far.
And in other news, I modeled most of the muscle deform objects for the legs and arms. I tried different ways to get the abdominal muscles to move the way I want them to, which I'm still having a hard time with, but toward the end of the week I expect to have some rough muscle weight paints down. With only roughly three weeks left of the project, I'm starting to feel the heat.
Today I connected the blendshapes to the Huldra. Before I did, I had to add some more edge loops around the eyes – by order from Andreas – to make rigging the eyes easier. And as I predicted, this caused problems when connecting the blendshapes: the vertices created by the edge loop tool got different names. I searched for a solution but found nothing. Originally I had planned to use Transfer Attributes to change the names of the vertices, but there is no such option. So I came up with a different solution.
First I separated the head from the original mesh and made as many copies of it as there were blendshapes. Then I positioned them close to their respective blendshapes and used Transfer Attributes, transferring vertex position and vertex normal with the sample space set to UV. This changed each duplicated head into the same shape as the imported blendshape. Then I just deleted history on the new heads, made the blendshape connection to the old head, and reattached the head with Combine. And voilà, blendshapes done. Of course there was a lot of trial and error, but now it works like a charm. I also have displacement maps for the blendshapes, but I'll connect those some other time this week.
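The reason the UV sample space sidesteps the renamed vertices can be shown with a toy version in plain Python (an illustration only, not Maya code): each target vertex is matched to the source vertex with the nearest UV coordinate, so vertex names and order never enter into it.

```python
def transfer_positions_by_uv(source, target_uvs):
    """Toy Transfer Attributes with sample space set to UV.

    source: list of ((u, v), position) pairs from the shaped mesh.
    target_uvs: UV coordinates of the mesh being reshaped.
    Returns one position per target UV, matched by nearest UV.
    """
    def nearest(uv):
        return min(source,
                   key=lambda s: (s[0][0] - uv[0]) ** 2 + (s[0][1] - uv[1]) ** 2)
    return [nearest(uv)[1] for uv in target_uvs]

# The target lists its vertices in a different order than the source,
# but the UV match still pairs them up correctly:
shaped = [((0.0, 0.0), (0, 0, 0)), ((1.0, 0.0), (5, 0, 2))]
print(transfer_positions_by_uv(shaped, [(1.0, 0.0), (0.0, 0.0)]))
# → [(5, 0, 2), (0, 0, 0)]
```

This is why it works as long as all the heads share the same UV layout, even after the edge loop tool has shuffled the vertex names.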
I tested a render in the light scene with the blendshapes on. There is still work to be done on the textures, and the normal maps are somewhat strange – I have to render a new one. Then of course there's the AO, and the eyes still have to be textured and have no shine yet. But we'll get there.
Here is a video of what we have so far. Most key poses are in place, but they will be reviewed, as we've gotten some comments on the size of the Huldra which we will consider. With about half the project time left, we still have to do the following:
- Deform rig
- More detailed environment to get shadow passes
We spent Monday shooting some animation reference with Erik Öhman, which I found invaluable when starting the blocking. I also noticed some minor bugs in the rig that I was quick to eliminate – I'm kinda glad I've learned that rigs are actually repairable, as long as one keeps the labyrinth of connections in mind.
And as of Wednesday, we figured we might want to extend the clip to fit a fade in/out – so I was back in MatchMover trying to add frames to the sequence, which proved to be pointless. I ended up redoing the whole track in NukeX, which turned out to be more accurate, faster and easier to get right. I've now managed to get the same scaling in the scene as before, with just a slight offset in translation, but as I've only started blocking in poses, I thought it was well worth it. The workflow was a bit different, and I had some problems getting that amazing track into the actual Maya scene until I found a Nuke plug-in to help me.
Now on to a quick explanation of the rig. It has a joint hierarchy driven by other chains, which I control with control objects. I implemented IK/FK switching on the arms and spine, but opted to only add FK to the legs if I saw a need for it, so for the time being the legs are all IK. The hoof is group-based with driven keys for control, and the hands are all FK. As the scale of the scene is nailed down, with Sandra having done some quick skin-shading tests, I didn't implement any scaling function, so the only function my master control has is initial placement of the whole character.
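For what it's worth, the core of an IK/FK switch is just a per-channel blend between the two driving chains. A conceptual sketch in plain Python (in Maya this would typically be a blendColors or pairBlend node per joint; note that linearly blending Euler angles like this can flip for large rotations, so real rigs often blend more carefully):

```python
def ikfk_blend(fk_rotation, ik_rotation, switch):
    """Blend FK and IK joint rotations per axis.

    switch = 0.0 gives pure FK, 1.0 gives pure IK, and anything
    in between is a linear mix (what the switch attribute drives).
    """
    return tuple(fk * (1.0 - switch) + ik * switch
                 for fk, ik in zip(fk_rotation, ik_rotation))

print(ikfk_blend((0.0, 0.0, 0.0), (90.0, 0.0, 0.0), 0.5))  # → (45.0, 0.0, 0.0)
```

The driven joint chain then takes the blended result, which is why the visible skeleton can follow either chain seamlessly.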
The driven joints have a split mesh of the Huldra parented to them, to let me get a feel for what the animation is going to look like. The mesh of the skeleton is also parented to the corresponding joints, so I can quickly spot problems if, say, the radius/ulna twist were to stop working for some reason. This skeleton will later be used to drive the skin mesh together with Maya Muscle deformers.
Last but not least, as we're going to have a more close-up shot of the Huldra cowering toward the end of the shot, we're going to implement a blendshape-based facial rig. As the character has quite a human face, we saw adding life to it as a high priority.
So the blendshapes are done. If we notice that they're not enough we'll add some more, but this will do for now.
When I tested the blendshapes with displacement maps, where I had removed the faces that would not be deformed, I separated the head from the body and duplicated it, THEN made the blendshapes from the copied mesh in Maya, imported them into ZBrush, made displacement maps, and then hooked them all up to the original mesh in Maya. NOW, a question: is it possible to export a bunch of meshes from ZBrush into Maya, somehow remove the faces that won't be blended, and still connect the blendshapes to the original mesh WITHOUT Maya complaining about missing faces? If someone knows whether I've just painted myself into a corner, it would be nice to hear. If someone has a solution, that would be nice to know too. Maybe I can transfer vertex names from the original mesh to the blendshape ones – they all have the same UVs.
Will do some tests later today. Now, dinner!
So I have started on the blendshapes. The face won't be doing any talking and will only be visible for a few seconds, so we decided on a limited number of blendshapes. So far I've made mouth left, mouth right, lower lip up, and brows down, left and right. This is what she looks like with all of them turned on at once. A beautiful sight, haha XD
Will be posting a blendshape chart once they're done.
There has also been some slight editing of the legs after some more feedback from Dave Grasso, so a new low poly has to be imported into Maya with a new displacement map.
The high poly is done and I've been working mostly on the retopo this week. I had some difficulty with her back because the twigs are so close together. I could have made the twigs into a separate mesh, but I thought it would be easier to blend the two if it were all one mesh. Anyhow, I'm finally done; the UV map is also done and ready for the making of the displacement map, normal maps and so on.
Because I'm using ZBrush, I have to project the high poly back onto the new mesh. I think it's a little strange that ZBrush doesn't let you use two meshes to make a displacement map. This is probably the only advantage of using Mudbox... I do like ZBrush.
OK, I just couldn't keep myself from doing it, so I spent the day improving the spine rig.
I figured I'd also publish a clip featuring the improved motion – not that I'm going to show the previous one! I was reminded of something I read in a forum post while researching solutions for a good spine rig for the project; it went something like: riggers have a tendency to over-complicate their rigs, while animators just want them to work. Starting to wonder if this makes me more of a rigger or an animator.
Anywho – I'm predicting I'll have the brand new spine up and running in the actual rig by tomorrow lunch. And fear not – an explanation of how the whole rig was done will probably be written toward the end of the project.
I'm a few days behind schedule, but at last the rig is done. By done I mean I only have some minor "cosmetic" bugs left: I wanted the IK control object for the arms to have the same orientation as the wrist, as the rotation of the control doesn't matter at all. And while posing the skeleton, I noticed I had missed adding a driven key so that I could pivot the hind leg from the hoof.
Now I’m either going to split a low poly mesh to use for animation or I might make a quick skin-bind and later jump into making the muscle deform objects.
And also – I'm not entirely happy with how the spine rig works. It was much more of a challenge to create a somewhat anatomically correct spine rig when it came to IK, but I think I have an idea of how to fix it. It might just take three driving IK joint chains instead of two: one for the lumbar section, one for the thoracic, and one for the neck, to better achieve the agility of the lumbar and cervical vertebrae. For the time being it's only separated at the cervical-thoracic intersection, which has made the chest pretty hard to keep from intersecting the spine.
I thought it would be in its place to say that Andreas and I have found ourselves some mentors for our project.
Andreas found a mentor for animation/rigging: Tomas Tjernberg from Stiller Studios, a former student at gscept.
And I found two guys from the DeviantArt community who work with modeling in the game and film industries, and they were kind enough to help with some feedback on the character:
David Grasso http://goblin-bones.deviantart.com/ a modeller in the film industry.
And Jon Troy Nickel http://hyperdivine.deviantart.com/ a modeller in the game industry.
/Guys, if you're reading this, I can't give you enough credit! It REALLY means a lot, thanks!
They've both given me good pointers on the anatomy, and I feel like now is a good time to start on some detailing. So I started with some bark on the back (1. with help texture and sculpt, 2. without texture) – ZBrush's project texture is very neat (3.). I also started on some pores in the face, and some detailing on the branches.
I'm hoping that the sculpt will be ready for retopologising by the end of the weekend.
The scene has been tracked as of last Tuesday; I thought it would be about time to show it, in a tiny format – 640×360 if I'm not mistaken.
The skeletal mesh I'll be using to help deform our Huldra is done. I managed to write myself scripts to automate the creation of controls and also the "no-flip knee" as seen in The Art of Rigging. I just had a bit of a problem achieving the radius/ulna twist.
Also, take a look at our Huldra anatomy – the skeleton is pretty final. Most parts were downloaded for free and tweaked to fit our female huldra. The lower "hind legs" were modelled by Sandra.