Visual Portraits – Visual Effects in Live Footage

The final shot & breakdown! The result of two months of hard work.

Fix It In Post

Blending CG & Camera Projection
At first I thought I would create an alpha to use as a matte in Nuke, but instead found that an animated Bezier with a feathered edge gave much more control over how much of the edge I wanted to remove in specific frames. An alpha would have been unnecessary work.

I connected the Bezier to the merge node of the jaw render and the footage – that way all nodes from the jaw were affected, giving me a blend between them. Similarly, I used Beziers to lighten certain parts of the face, so I could blend the tones of the mesh into the real footage without destroying the blended edges.
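For anyone wanting to recreate this, here is a minimal Nuke Python sketch of the wiring (the node names are placeholders, not the names from my actual script):

import nuke

footage = nuke.toNode('Footage')       # the original plate
jaw_render = nuke.toNode('JawRender')  # the rendered CG jaw
bezier = nuke.toNode('EdgeBezier')     # animated, feathered roto shape

# Merge the jaw over the plate; the Bezier in the mask input
# limits the blend to the area inside the feathered shape
merge = nuke.nodes.Merge2(operation='over')
merge.setInput(0, footage)     # B: the plate
merge.setInput(1, jaw_render)  # A: the CG jaw
merge.setInput(2, bezier)      # mask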

I had a hard time getting the camera projection to work, though the real problem was actually time, which I did not have. I used a lot of tutorial tips for reference when creating the setup, and tried many different approaches. Finally I got the projection to work, and it looked excellent! …Until my actor turned his head. The rotation obviously did not come through properly from the camera track. I tried using points in his face that would not move or be affected by facial expressions, and the trackers seemed to move correctly. So the problem might have been that I didn't set the right preference values for the camera in the tracker, and I might have missed a rotation option somewhere. But since I didn't have time to find a solution, I had to use an easier method for now. I will, however, look into this again after the presentations – it would be a shame to have gotten this far without using it! I can truly recommend camera projection for painting/putting effects onto your footage (when it works :)) – it saves time, is more efficient, and makes it easier to see where on the footage/UV you're making changes.

For the projection I first created a scene connected to the camera track, using the original footage as source.
I added a Project3D node and connected it to the node I wanted the camera to project onto. I used my imported base mesh in the 3d view and aligned it to the projection using a point cloud. Since I had a base mesh built to fit my actor, it was an easy setup. I connected everything to a ScanlineRender and changed the projection mode to UV, which gave me a frame with UV reference over the footage that I could paint on in Photoshop.
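Roughly, the setup corresponds to this Nuke Python sketch (node names are placeholders, and the input indices are assumptions from my setup – check the input labels in your version of Nuke):

import nuke

plate = nuke.toNode('Footage')       # projection source
cam = nuke.toNode('TrackedCamera')   # camera from the 3d track
mesh = nuke.toNode('BaseMesh')       # imported base mesh (ReadGeo)

# Project the plate through the tracked camera
proj = nuke.nodes.Project3D()
proj.setInput(0, plate)  # image input
proj.setInput(1, cam)    # camera input - verify which arrow is labelled 'cam'

# Apply the projection to the mesh as its material
mat = nuke.nodes.ApplyMaterial()
mat.setInput(0, mesh)    # geometry
mat.setInput(1, proj)    # 'mat' input takes the projected plate

scene = nuke.nodes.Scene(inputs=[mat])

# UV projection mode renders a paintable UV-space frame of the footage
render = nuke.nodes.ScanlineRender(projection_mode='uv')
render.setInput(1, scene)  # scn input
render.setInput(2, cam)    # cam input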

By creating a second scene similar to the first, I could use this setup to project my changes back onto the footage and still have a moving image. This was achieved by connecting an alpha to the image as well, removing everything but my changes.

Ms3 Delivery

Ms3 delivery. This is not final, however, and will be updated during the next (and final) week.
PostMortem
Powerpoint

First Comp

Cutting it close to the deadline now. I still have a lot of work to do, but hopefully I will have time for everything I set out to do. Remaining work includes more comp, veins and bruises, and maybe a background, or adding depth by using fog.

This is just a first comp test – I have not yet adjusted specularity, colours or the final color scheme, nor the mask I will use to blend the mesh into the footage.

The renders were imported into Nuke, and I used Copy nodes on all the diffuse passes to transfer the alpha from the matte passes, so that the merged image stays on top of the original footage without removing it.

I merged the new node system with the merge node from the original footage and the eye-replace node tree.
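In Nuke Python the per-pass wiring looks roughly like this (the pass names are placeholders – a sketch of the idea, not my exact script):

import nuke

footage = nuke.toNode('Footage')
diffuse = nuke.toNode('DiffusePass')  # a rendered diffuse pass
matte = nuke.toNode('MattePass')      # the matte render holding the alpha

# Copy the alpha channel from the matte pass onto the diffuse pass
copy = nuke.nodes.Copy(inputs=[diffuse, matte])  # B = diffuse, A = matte (check the labels)
copy['from0'].setValue('rgba.alpha')
copy['to0'].setValue('rgba.alpha')

# Merge over the plate; the copied alpha keeps the plate visible around the mesh
merge = nuke.nodes.Merge2(operation='over')
merge.setInput(0, footage)
merge.setInput(1, copy)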

Nuke – Keylight

To key out the greenscreen I used Nuke's Keylight node. I found keying in Nuke quite different from keying in other software, such as Fusion. Keylight seems more powerful, and you can get some really good results even with a very simple key, although I have learnt that there is never a one-key solution.

I tried different techniques for the greenscreen, mainly because it is a great way to learn the node, as well as to find out which method works best for my shot.

I started by keying out the head and the body separately using Beziers, creating an alpha for each part and later merging the two alphas for a good result. This process was quite time-consuming, however – a lot of tweaking was needed on both keys to get a decent result.

So I tried another, much simpler approach – and the results were actually better. It became evident that a well-lit greenscreen makes the keying process much easier, and looking back, I am glad I took the time to carefully light the greenscreen before filming.

I connected the Keylight's Source input to the original footage, and used the color picker in Screen Color to sample an area close to his neck. This removed most of the greenscreen around the actor. The Clip Black slider was then used to remove even more of the background. You can check the state of your key by changing the view mode to “Status”, which gives you a grayscale image showing the status of your key. The goal is of course a solid black-and-white “alpha” image – the black being the greenscreen you want to remove, the white being the mask of the actor, the part you want to keep. Finding a balance between removing the greenscreen and not cutting into the actual actor is quite tricky and requires some fine-tuning.

The result was OK for a first key, so I added another Keylight node and changed the first Keylight's view mode to Intermediate, since I wanted to combine the two.
The same process was used for this Keylight. To add the alpha created by Keylight 1, I set Source Alpha in the Inside Mask tab to “Add to inside mask”, and changed the view mode to Composite, which gives you an overview of your current image. I still had some greenscreen left in the corner of the footage, so I used a Bezier to create a quick garbage mask. The Bezier was animated so that it never cut into the actor while still cutting out the greenscreen in the corner. This Bezier was connected to the OutM input of the Keylight, and the OutM Component under “Crops” then needs to be changed to inverted alpha – which cuts out everything outside the mask.

To get a smoother edge around the actor I used an EdgeBlur connected to the Keylight. This gives a more natural result that hopefully blends better with the background.
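As a rough sketch of the whole chain in Nuke Python (the screen colour and clip value below are made-up numbers – sample your own plate, and note that knob names can differ between Keylight versions):

import nuke

plate = nuke.toNode('Footage')

# First key: sample the screen colour, then tighten the blacks
key1 = nuke.nodes.Keylight(inputs=[plate])
key1['screenColour'].setValue([0.1, 0.6, 0.1])  # placeholder - pick a colour near the neck
key1['clipBlack'].setValue(0.05)                # pushes leftover green to solid black

# Second key stacked on the first (set key1's view mode to Intermediate in the UI)
key2 = nuke.nodes.Keylight(inputs=[key1])

# Soften the matte edge so the actor sits better on the background
edge = nuke.nodes.EdgeBlur(inputs=[key2])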

Drool

To enhance the gory mouth effect, I quickly modeled some drool in Maya to add to the mesh. A mia_material_x shader with a glass preset is used to get the wet feel, refractions and transparency. The settings had to be altered somewhat – by adding an alpha to the transparency – so that it would not be completely transparent.

Furthermore, I created a very simple animation rig and animated the drool to make it more realistic. The rig consists simply of joints and IK handles.
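A minimal Maya Python sketch of that kind of rig (joint positions and the mesh name are placeholders, and binding with a skinCluster is just one way to hook the mesh up):

import maya.cmds as cmds

# Build a short joint chain down the drool strand
cmds.select(clear=True)
joints = [cmds.joint(position=(0, -i * 0.5, 0)) for i in range(5)]

# One IK handle from root to tip drives the whole strand
handle, effector = cmds.ikHandle(startJoint=joints[0], endEffector=joints[-1])

# Bind the drool mesh to the chain ('droolMesh' is a placeholder name)
cmds.skinCluster(joints[0], 'droolMesh')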

Maps & Lighting the Scene

Displacement/Normal Maps – After changing the export settings I managed to export displacement and normal maps for each object in Zbrush, using the ZPlugin MULTI MAP EXPORTER.
I selected a subtool and set the map and export preferences. There is a plugin for merging all subtools as well, which can give you one map for the entire mesh at once. However, I didn't get that plugin to work, so I simply merged all the maps later in Photoshop.
For some reason Zbrush flips the maps by default, so I used the “Flip V” option to prevent this.
When I connected the finished displacement map to the low-res mesh in Maya, I got an interesting result – the mesh became drastically bloated. This, I learned, is due to Maya's interpretation of displacements from Zbrush: Zbrush treats black as no change and white as extreme change, while Maya treats black as extreme downward, white as extreme upward, and gray as no change.
To fix this, I changed the Alpha Gain and Alpha Offset settings under the Color Balance section of the displacement file node.
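The usual recipe for a mid-gray-based map is to set the offset to minus half the gain, so that 0.5 maps to zero displacement. In Maya Python that comes down to this ('dispFile' and the gain value are placeholders):

import maya.cmds as cmds

gain = 2.2  # depends on the scale the map was exported at - tweak per mesh
cmds.setAttr('dispFile.alphaGain', gain)           # 'dispFile' = your displacement file node
cmds.setAttr('dispFile.alphaOffset', -gain / 2.0)  # recenters mid-gray to "no change"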

The specular map is done in Photoshop using the diffuse as reference.
To export the texture map from Zbrush, I chose a subtool and ticked Clone Texture in the Texture Map tab. This clones your texture into your texture library, from where you can easily export it into Photoshop. I had to merge all the texture maps into one in Photoshop as well, like the displacements.

Once I had connected all the maps to the mesh in Maya, I needed to light the mesh so that the edges fit the original footage, making it easier to blend in post. It was of course a great advantage to have set up the original greenscreen lighting myself, because I could just mimic the studio lighting in Maya to get decent results. The match gets somewhat worse when the actor moves further away from the camera – that light was hard to mimic, which was something I had not thought about. I used 3 point lights and 2 area lights to mimic the studio setup.
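As a rough Maya Python sketch of that kind of setup (all positions and intensities below are placeholders – match them to your own reference):

import maya.cmds as cmds

# Three point lights, roughly where the studio lamps stood
for pos in [(5, 6, 5), (-5, 6, 5), (0, 6, -5)]:
    cmds.pointLight(position=pos, intensity=0.8)

# Two area lights for the softer fill
for pos in [(3, 4, 4), (-3, 4, 4)]:
    shape = cmds.createNode('areaLight')
    transform = cmds.listRelatives(shape, parent=True)[0]
    cmds.xform(transform, translation=pos)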

Zbrush Texturing

The texture as it looks right now; I will go in and do some final touches and changes later (e.g. to the teeth), depending on how it looks when integrated with the footage.

A set of my cozy Lightbox-toolbar textures used to get the gory, torn flesh feel:

To use projection textures I turned off Zadd, switched to the RGB channel and enabled Colorize in the Polypaint menu. Then I simply used the textures in the Lightbox and painted them directly onto the subtools. In some areas where I wanted darker or lighter effects, I painted by hand. I also used “New From Polypaint” under the Texture Map tab to get the textures onto the right UV map.

The modeling process: I imported my low-res mesh into Zbrush and divided it into subtools using the “Groups Split” option. Every subtool has a division level of 5–7 for a good, smooth result. I found the alpha brushes essential during the sculpting for making small details.
Furthermore, I tried exporting a displacement map directly from Zbrush, but it seems to result in a bugged map, so I will try using Xnormal instead and see if that works better.

Eye Replacement Test

A lot of parallel work going on at the moment!
Here is a short clip demonstrating the matched rotations of the eyes using trackers and 3d spheres in Nuke. (The track has not yet been completely stabilized + there are some problems with the dot-removal roto, ..will fix)

Since I want spherical distortions of the 2d images I am using for the eye effects, I used Nuke's tracking and 3d system. This is all thanks to a lot of research and tutorials, since I did not have much prior experience with Nuke's 3d system.

In order to get a working 3d scene in Nuke, I began by putting in a Scene, Camera, Constant and ScanlineRender node – the ScanlineRender taking care of the objects, and the Constant taking care of the format. I connected these to my footage, and then created two spheres onto which I mapped the 2d images. A Reformat node was also used to set the image size, so that any images I choose to use won't behave strangely.
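In Nuke Python the base setup looks roughly like this (one eye shown; node names are placeholders, and node classes are as in the version of Nuke I am using):

import nuke

footage = nuke.toNode('Footage')
eye_img = nuke.toNode('EyeImage')  # the 2d pupil image

# Keep any source image at a predictable size before mapping it
reformat = nuke.nodes.Reformat(inputs=[eye_img])

sphere = nuke.nodes.Sphere(inputs=[reformat])  # the image maps onto the sphere
scene = nuke.nodes.Scene(inputs=[sphere])
cam = nuke.nodes.Camera2()

render = nuke.nodes.ScanlineRender()
render.setInput(0, nuke.nodes.Constant())  # bg: the Constant sets the output format
render.setInput(1, scene)
render.setInput(2, cam)

# The rendered sphere is then merged over the footage
merge = nuke.nodes.Merge2(operation='over', inputs=[footage, render])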

To make the sphere rotate and translate correctly with the eye, I first tracked 3 points on the eye: the left corner, the right corner and the pupil. Some frames had to be tracked manually, like the blink and the head turn, since the track points disappear. 3 points are needed because the tracker translations are converted into a rotation by calculation.

I used this expression, which calculates the rotation in Nuke terms from the 3 trackers:

degrees(atan( (parent.Tracker3.track1.y - (parent.Tracker3.track2.y + (parent.Tracker3.track3.y-parent.Tracker3.track2.y)/2 ))/((parent.Tracker3.track3.x-parent.Tracker3.track2.x)/2)))
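To make the geometry explicit, here is the same arithmetic in plain Python: track1 is the pupil, track2 and track3 the two eye corners. The pupil's height above the midpoint of the corner line, divided by half the eye width, is the slope that atan turns into the rotation angle:

import math

def eye_rotation_deg(pupil, corner_l, corner_r):
    # Midpoint height of the two eye-corner trackers
    mid_y = corner_l[1] + (corner_r[1] - corner_l[1]) / 2.0
    half_width = (corner_r[0] - corner_l[0]) / 2.0
    # Slope of the pupil relative to the corner line, as an angle in degrees
    return math.degrees(math.atan((pupil[1] - mid_y) / half_width))

print(eye_rotation_deg((120.0, 88.0), (100.0, 80.0), (140.0, 80.0)))  # ~21.8 degrees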

Using the pupil tracker, the track translation was dragged onto the sphere's rotate y in the properties window. The expression above was then added into rotate y, which makes the sphere rotate – but in the wrong direction.
Nuke automatically puts the image on the back side of the sphere (away from the cam) and tries to stretch the image over the entire sphere – it becomes distorted. This was solved by changing the u + v extent parameters in the sphere's properties, and by adding a TransformGeo node, where you can simply transform/rotate/scale the geometry. The TransformGeo node is then connected to the Sphere to give it a correct pivot.
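For reference, the knob tweaks in Nuke Python (knob names and the extent values as in my setup – check yours, since they may differ):

import nuke

sphere = nuke.toNode('Sphere1')

# Limit how much of the sphere the image is stretched over
sphere['u_extent'].setValue(90)
sphere['v_extent'].setValue(90)

# TransformGeo gives direct translate/rotate/scale control and a usable pivot
tgeo = nuke.nodes.TransformGeo(inputs=[sphere])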

To stabilize the track I used the Curve Editor. The left and right trackers were filtered to a smoother movement. The right tracker's keys were also moved or flattened in places where the sphere jumped around a lot due to the tracking. The right tracker was used because it was the easiest to track during the whole shot.

I then used a Bezier to roto the eye, and animated it using the most stable track's translate x+y and center x+y. The image was also offset, and I needed to insert an expression to get it into the right place – simply using the same numbers already in the translation, only swapping the - and + values.

The 2d image (sphere) was now rotating correctly with the eye. To handle the scaling that occurs throughout the shot, I simply put in a Transform node with an animated scale value. On top of this there is a lot of color correction and exposure work. However, I am thinking of using this “pupil effect” only in the last frames, as a “finale”. In the rest of the clip I am planning to use entirely black eyes, like the concept. To do that I only have to remove the pupil effect (I already have a black image behind it) and add some reflection.

A lot of work for a 3-second clip, but I want to learn how to use Nuke as much as possible…
Phew.

Zbrush Modeling

I've been working on the mouth mesh in Zbrush for a couple of days, and I feel I am closing in on a final result. It took some time getting used to the interface and controls, since I have only worked in Mudbox before, which is rather different. Any feedback on the modeling would be highly appreciated, since this is the first detailed organic sculpt I am working on outside of Maya.

The next step is to add a gory flesh/meat texture to this, which I think I will do directly in Zbrush. This will save me time that I will need later in the project, since I am planning to do some z-depth effects, re-lighting etc. in Nuke.

Other effects, such as the wounds, I am planning to do directly in Nuke – I have found the research about mesh projection in Nuke instructive, and would at least like to give it a go.