This is how the final result, “The Assembly Line”, ended up. I did the compositing in Nuke and added fog and dust particles in After Effects. For the depth pass I rendered out to .exr and used a ShuffleCopy node connected to a ZBlur node. It’s a good idea to render the z-pass at 2x the actual size and then rescale it; otherwise there can be jagged edges around the plane of focus. Also, if objects are blurring over the in-focus area, just check the “occlusion” box. For the reflections I ended up using cube maps, which rendered much faster than real raytraced reflections.
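The core idea behind a z-blur can be sketched in a few lines of plain Python. This is only an illustration of the depth-to-blur mapping, not Nuke’s actual ZBlur implementation, and all names here are made up:

```python
def blur_radius(depth, focus_depth, max_radius=8.0, falloff=0.1):
    """Map a z-pass depth value to a blur radius (a crude circle of confusion).

    Pixels at the focal depth stay sharp; the radius grows with distance
    from focus and is clamped to max_radius.
    """
    return min(max_radius, abs(depth - focus_depth) * falloff)
```

Rendering the z-pass at 2x and downscaling effectively antialiases this depth-to-radius mapping at depth discontinuities, which is why the edges around the focus plane come out cleaner.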
A busy week. I’ve textured the blocked-in parts and looked up some information about shaders. The shaders I’m going to use are mia materials. These shaders can’t be viewed in the viewport, so I assigned Blinns while I was texturing and then swapped them later on. The reason I chose mia materials is that they have really good options for creating believable materials: they are physically accurate, with many optimizations and performance enhancers. Although they come with presets, it’s good to know how to tweak them. For my project I’m going to mimic five different materials: stone/concrete, shiny metal, matte metal, paper/cardboard, and wood.
For a stone material the diffuse roughness parameter should be rather high; the more porous the stone is, the higher this value should be. Reflection glossiness should obviously be low, and you can even use refl_hl_only (highlights only). The reflectivity should be around 0.5–0.6 with brdf_fresnel off, brdf_0_degree_refl at 0.2, and brdf_90_degree_refl at 1.0.
For the wood I use a reflection glossiness of 0.5, 16 reflection samples, and a reflectivity of 0.75.
For the metal materials I checked the “metal material” (refl_is_metal) box, which allows a nice blending between diffuse shading and glossy reflections.
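To keep the material recipes above in one place, here is a small Python reference table. The keys mirror the mia_material attribute names used above and the values are the ones from the text; the paper/cardboard material gets no entry since its settings weren’t specified:

```python
# Reference table for the mia_material settings described above.
# Values are taken from the text; 0.55 stands in for "around 0.5-0.6".
MATERIALS = {
    "stone": {
        "diffuse_roughness": "high (higher for more porous stone)",
        "refl_gloss": "low",
        "reflectivity": 0.55,
        "brdf_fresnel": False,
        "brdf_0_degree_refl": 0.2,
        "brdf_90_degree_refl": 1.0,
    },
    "wood": {
        "refl_gloss": 0.5,
        "refl_samples": 16,
        "reflectivity": 0.75,
    },
    "metal": {
        # Blends diffuse shading with glossy reflections.
        "refl_is_metal": True,
    },
}
```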
Reflections are very render-intensive, so when it comes to optimizing the shader, this is the part you should consider tweaking. Tweaking the reflection falloff can not only give better results (objects, say, 50 meters away won’t reflect), it will also give you faster render times. For my indoor scene I’m also going to use refl_falloff_color, as it tends to be useful for indoor scenes.
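The falloff idea can be sketched in plain Python. Note that mia_material actually fades the reflection toward a falloff color rather than straight to zero; this simplified sketch (made-up names) just fades the contribution out linearly:

```python
def refl_falloff_weight(hit_distance, falloff_dist):
    """Linear fade of the reflection contribution with distance.

    Full strength at distance 0, zero at or beyond falloff_dist.
    Rays whose weight reaches 0 can be terminated early, which is
    where the render-time saving comes from.
    """
    if hit_distance >= falloff_dist:
        return 0.0
    return 1.0 - hit_distance / falloff_dist
```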
During this week I’ve also trimmed the geometry by around 150,000 tris. I still have some issues with the rendering (which was actually my biggest fear when I started this project); at times it won’t render at all! But I hope to sort that out soon.
I will also go a little more in depth on the lighting, as I tested lights both with and without FG:
I used both area lights and spot lights when I played around with the no-FG setup. My workflow without FG was to start lighting in Maya Software with depth map shadows (filter size 7) for efficiency, and then switch to mental ray with raytraced shadows. When placing the bounce lights it’s good to know that warm colors tend to bounce more. An area light placed in a window should also have an opposite area light at half the intensity. The benefit of this workflow is that you can place lights really fast and get a quick result. But in complex realistic scenes I felt the result turned out unsatisfactory, and placing bounce lights in a big scene actually took longer than using FG. That is why I went with FG in the end.

Also, for a realistic scene it’s very important to remember to set the decay to quadratic. If the default (linear, or no decay) is used, the light won’t decay properly. For linear decay, intensity = power / radius. For quadratic decay, intensity = power / (radius * radius), which means it requires a high intensity setting, but the falloff will be physically correct. This can be improved further by using a blackbody node, which gives you the option to choose the right light temperature in Kelvin.
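The decay formulas above as a quick numeric sketch (helper names are my own), showing why quadratic decay needs a much higher power setting to reach the same intensity at a distance:

```python
def linear_intensity(power, radius):
    # Linear decay: intensity = power / radius
    return power / radius

def quadratic_intensity(power, radius):
    # Quadratic (physically correct) decay: intensity = power / radius^2
    return power / (radius * radius)

# To light a point 10 units away at intensity 1.0:
# linear decay needs power 10, quadratic decay needs power 100.
```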
Using the mental ray exposure node: with this node you can basically composite directly in Maya. You can change the saturation, create a vignette, etc. It’s also important to gamma-correct if you get washed-out colors. Computers work mathematically linearly and our eyes do not, so rendered images may appear in the wrong color space. The workflow goes like this: convert textures to linear space, render at gamma 1.0 to a floating-point file format, then convert back to gamma 2.2 for display.
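The linear workflow above boils down to two conversions, sketched here with a simple pure-power gamma of 2.2 (real sRGB uses a slightly different piecewise curve, so this is an approximation):

```python
def to_linear(value, gamma=2.2):
    """De-gamma a display-referred texture value (0-1) into linear space."""
    return value ** gamma

def to_display(value, gamma=2.2):
    """Encode a linear render value (0-1) back to gamma 2.2 for display."""
    return value ** (1.0 / gamma)
```

Textures go through `to_linear` before rendering, the render itself stays at gamma 1.0, and the final image goes through `to_display`; skipping the first step is what produces the washed-out look.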
For the optimization of the texturing, I first blocked in the objects that are going to share the same UV map. This gives me information about how high-res the different maps should be. Then I used Cas to arrange the maps with the “ ” button. To use it, you first have to combine the objects and then press the button. After that I separated the objects again and started duplicating the objects that I wanted in the same UV spot, for optimization.
The UV mapping has been taking a bit longer than expected, although I’ve come up with a fairly good strategy using the Cas auto UV mapper together with Maya’s own “unfold selected UVs” and the “smooth UV tool” in the UV Texture Editor. The workflow goes like this: first I make a planar mapping on the object, just to make sure that all the edges on the object stick together. Then I cut the object where I want the final seams to go and hit the “ABF++” button, which is a type of unwrapping tool in Cas (be sure you’ve selected the object in object mode when using “ABF++”). Finally I use Maya’s unfold tool and/or the smooth UV tool for the finishing touch.

There are also Align U and Align V buttons in Cas, where you can align a shell in either U or V by selecting an edge and hitting one of the buttons (you can also easily rotate the shell by quickly clicking “Align U”, “Align V”, “Align U”, and so on). For a complex shape, I tend to cut it up into simple pieces first, and then use the UV Texture Editor to find where the most appropriate seams should go, by selecting edges, sewing them together, and repeating the unfolding process.

Remember that if you get some weird UV unfolding results, you can just select portions of the shell and hit “unfold selected UVs”; this actually works in most cases, unless you have forgotten to cut all the edges properly. In that case, use Cas’s “Detect UV Seam Edges”, which highlights all the seam edges for you and lets you see where the missing edge might be.
By combining the objects that will share the same texture map and clicking “Quick fix ratio and Layout UVs”, you get the shells laid out at the correct size relative to each other, ready for a UV snapshot! The only drawback of the plugin is that, for some annoying reason, it sets the undo limit to 7 every time it’s loaded.
Currently I’m UV mapping, and I’ve been using a great UV-mapping plugin for Maya called Cas auto UV mapper. It’s very useful on complex geometry and has some nice features that can really speed things up. It’s also easy to use, and fully integrated into Maya.
Here is some research I’ve done on the lighting. As the scene is such a large indoor scene, not using FG is a problem: it would take lots of lights acting as bounce lights, which in itself would eventually increase render times. Therefore I’m still not sure that’s the way to go. I’ve found a new approach, where FG and portal lights are used together.
Portal lights are a way of lighting interior scenes where the main light source comes through a window. This setup generally creates problems with FG, as there is so little information to calculate the rays from. FG only calculates on the direct light, so in such a case the settings need both a high FG ray count and a high FG point density, which increases render times. Even then, the result may still not be very convincing. The solution is a shader called portal light: an area light placed at the window that uses the attributes of a mia_physicalsky (which you’ll have to create) to get the correct intensity and color from the outside sky.
What the portal light actually does is block the FG rays and convert the light from beyond the window into direct light, which gives high-quality area shadows with no interpolation issues. The portal light concentrates the FG rays, so it doesn’t have to send lots of them around the room. And since the FG rays see a well-lit room rather than a black one, you can get away with fewer of them. It also gives an extra bounce of light, because the light from the window is direct. The portal light can be used with GI as well, as it can also shoot photons.
Area Light Shape –> Custom Shaders –> Light Shader –> mia_portal_light =for FG
Area Light Shape –> Custom Shaders –> Photon Emitter –> mia_portal_light =for GI
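A rough sketch of how the FG hookup above could be scripted with maya.cmds. This only runs inside Maya with the mental ray plug-in loaded, and the attribute name `miLightShader` is my reading of the “Custom Shaders → Light Shader” path above, so treat it as a sketch rather than a verified recipe:

```python
# Sketch only - runs inside Maya with mental ray loaded, not in standalone Python.
import maya.cmds as cmds

# Area light to be placed at the window (position/scale it by hand afterwards).
light_shape = cmds.createNode('areaLight')

# mia_portal_light plugged into the light's custom Light Shader slot,
# i.e. Area Light Shape -> Custom Shaders -> Light Shader (for FG).
portal = cmds.createNode('mia_portal_light')
cmds.connectAttr(portal + '.message', light_shape + '.miLightShader')

# For GI, the same node would also go into the Photon Emitter slot.
```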
Another problem I’ve been struggling with is light fog in mental ray; apparently mental ray doesn’t work very well with it. When rendered in mental ray, the light fog just goes through the wall. The only solution I’ve found is to render the light fog as a separate layer in Maya Software.
Below is a quick light test of the environment using no FG and my own bounce lights; the pictures with the purple ball show lighting with and without portal lights. The last picture shows the environment with a final gather test. With portal lights you can achieve good results with FG even though the environment is dark.
This week I have started with the lighting, and done a lot of research on the subject. This is how the light is going to be set up:
I have chosen not to use GI, as my scene is already heavy as it is. I feel it will be faster, and much more educational, if I place all the lights myself. This approach also gives me much more control over the lighting, since when GI is used, small changes in the lighting influence the whole surroundings.
I’ve also found some ways to optimize the scene; it’s about flushing memory.
The basic ways to turn on flushable resources include:
Placeholders – Reuse memory for source geometry
Approximation – Reuse memory with fine approximation
Texture cache – Reuse memory with tiled texture maps
BSP acceleration – Reuse memory with large BSP
More about memory leaks concerning mental ray can be read here:
At this Monday’s milestone meeting I got feedback that there should be some movement in the scene to make it more exciting. Although I don’t want to focus on animation in this project, I admit that some simple movement could be just enough to improve the final result. I will have to figure out what works best: particles, or some simple animation (e.g. a moving fan).
The following week I will start lighting and begin UV mapping. At first I will just keep the grey shaders on, and then add a few colored ones at various key positions. From there I can slowly build up the right color spectrum for the scene. With this approach I will know where to focus the texturing, and where some procedural texturing will be enough - a crucial call on this tight time schedule!
This week I’ve been modeling the environment and the vehicle. I have also researched high-poly modeling for production and shading, tested post effects, and looked up some UV-mapping techniques. I did some AO tests on the environment for lighting, and to get an indication of the render time. I may add more details to the scene later on if needed, as I start UV mapping and lighting. I also changed the previz, as it focused too much on close-ups when the goal is to show off an environment.