
Render - Examples II.

Color keying - Environment render - Lens flare - Ambient occlusion - Airlight - Color finishing

In this tutorial, let's look at some of the other effects that can be used when rendering in CaZaBa.

Color keying

When working on large projects that contain many complex models with numerous vertices and faces, processing the image is tedious, and fixing imperfections often means repeating the long render process. It is therefore preferable to split the work on each shot into several parts, which are then composited sequentially into the resulting image. Eliminating a discrepancy in the scene then means redoing only that portion of the frame, which saves valuable time. For example, you can first prepare a video with the foreground or background scene for a shot and, once it is perfected, use color keying to incorporate additional scene objects into the image. Likewise, you can add new objects to footage received from a different source.

In CaZaBa, this can be done as follows. Suppose we have a video with a shot of a city and we want to add objects to it so that they hide behind the houses in the footage. In the Animation - AVI stream background... menu, set File path to our video with the city view. Use the Animation and Material parameters to choose the animation in which the video should be projected and a material that is not used on any object in the scene, for example Material 30. The Offset parameter is the number of animation frames to hold the video before it starts playing. The Repeat checkbox plays the movie in a loop, and Key color with Tolerance set which video color should become transparent; for our present purpose, leave these values as they are. Exit the window by pressing the Close button.

In the scene, in the Cameras / Lights tab on the left panel, create a camera and set it to one of the viewports using the arrow in the top left corner of the viewport - View - Camera... If the camera is not selected in the scene, select it. Open the Material Palette on the toolbar, scroll down and add Material 30 (into which we have directed the AVI stream) to the camera by pressing the Add button. You can then close the material palette and check the Background projection checkbox in the camera settings. In the viewport where we set the camera view, we should now see the video assigned to Material 30.

It is now up to the user to adjust the camera movement and the other embedded objects in the scene to the video content. If you want one of the objects in the scene to hide behind a house in the video, for example, place in the foreground an object that matches the shape of that house. Through the model's base color or its material, assign it a color that no other model contains and that we will key out later, for example bright green. Also animate this object appropriately to match the movement of the house in the video.

Tip: The AVI stream in the background may have a different image aspect ratio than the viewport that displays it. Scene models and video may then appear disproportionate. This is easily corrected by setting a matching output resolution for rendering; during modelling, it is handy, for example, to drag the borders between viewports to adjust the viewport aspect ratio to match the image.

If we have set the video as the camera background and matched our scene to the content of this video, we can open Render settings by pressing its button on the toolbar. Here scroll down to the Postprocesses section, where we will use the first subsection, Background. The Scene vs. background filter refers to the background's distance from the camera. With the Depends on depth option checked, the background is placed in the scene image according to the Depth parameter in the camera properties, which defaults to 1.0 (infinity); a value between 0.0 and 1.0 places it closer to the scene. The other option, Depends on distance, is driven by the camera's Distance parameter, given in space units. We choose Depends on depth. Next, since we are inserting the scene into the video, select Key the scene in the Color keying subsection. This replaces the set Key color on scene objects with the video image. Set the Key color to the same bright green we applied to the bodies used for concealment, so that they give way to the background video content. You can now try to render the result.
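
The keying step itself can be pictured with a minimal sketch (plain Python; the Euclidean color distance and all names below are assumptions for illustration, not CaZaBa's actual code):

```python
def color_distance(a, b):
    """Euclidean distance between two RGB triples (0-255 per channel)."""
    return sum((ca - cb) ** 2 for ca, cb in zip(a, b)) ** 0.5

def key_scene(scene, video, key_color=(0, 255, 0), tolerance=80.0):
    """Replace scene pixels within `tolerance` of `key_color` (the GUI's
    Key color / Tolerance) with the corresponding background video pixels."""
    return [[video_px if color_distance(scene_px, key_color) <= tolerance
             else scene_px
             for scene_px, video_px in zip(scene_row, video_row)]
            for scene_row, video_row in zip(scene, video)]

# Tiny 1x2 frame: the bright green pixel gives way to the video, the red stays.
scene = [[(0, 255, 0), (200, 30, 30)]]
video = [[(90, 90, 120), (90, 90, 120)]]
print(key_scene(scene, video))  # [[(90, 90, 120), (200, 30, 30)]]
```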

Tip: Just as when working with a green / blue screen in a real studio, keyed objects need additional illumination so that the key color tolerance is sufficient for proper filtering. Otherwise, an unnatural green smudge may suddenly appear in the resulting image due to a hue that differs too much from the set key color.

Tip: You can also replace the keyed color in the background video by switching to Key background. However, the quality of the supplied video is very important here, so the result may not always be ideal.

Environment render

One of the effects that enhances the realistic impression of a scene is the Environment render. This is the rendering of the body's environment onto its surface according to certain geometric rules, so that the result looks like a mirror reflection of the surrounding scene.

For the effect to work properly, we must begin by setting up the body's material. Open the Materials palette on the toolbar. In the material you want to reflect like a mirror, press the More... button to open the Material properties window with additional parameters. Among them is the Mirror parameter (right at the bottom of the window). It is set to 0.0 by default, i.e. no mirror reflection. Setting a higher value makes the material show partly its diffuse (base) color and partly the mirror reflection, up to 1.0, which specifies full mirroring of the surroundings without any noticeable diffuse component.
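
How the Mirror value mixes the two components can be pictured with a one-line blend (an assumed linear mix, not taken from CaZaBa's sources):

```python
def shade(diffuse, reflection, mirror=0.0):
    """Per-channel blend: mirror = 0.0 gives the pure diffuse (base) color,
    1.0 gives the pure mirrored environment color."""
    return tuple((1.0 - mirror) * d + mirror * r
                 for d, r in zip(diffuse, reflection))

print(shade((180, 40, 40), (90, 110, 200), mirror=0.5))  # (135.0, 75.0, 120.0)
```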

Tip: The value of the Mirror parameter takes effect only after you assign the material to the model with the Add button, so it can be applied with a different value in each keyframe of the body. This way the mirror reflection property can be animated, just like the basic light components Ambient, Diffuse, etc.

If the material is properly set on the model, we can open Render settings by clicking its toolbar button. Scroll down a little to the section beginning with the Environment render checkbox. In the default render settings this effect is turned off, so enable it by checking the box. The input fields of three parameters become active, and their function is simple. To obtain an image of the surroundings, the program renders the scene from the model's position in all directions. The environment image is captured at the resolution given by Image resolution; unless you need super-sharp reflections, a resolution around the preset 1024 pixels is usually sufficient. Front plane Z then indicates the distance of the closest body, and Back plane Z the distance of the farthest body, that will be captured in the mirror reflection. Both parameters are given in space units.

Tip: For reflections to appear realistic, the program must pre-render the surroundings for each body separately. Therefore, as the number of models in the scene increases, the processing time of one full animation frame grows considerably. This can be partially compensated by merging bodies with the same material (convert to General geometry with Ctrl+G, go to the element edit tab and use the Attach model... button). Of course, this can only be done when the bodies do not need to animate individually, when the error of an incorrect mirror-reflection viewpoint doesn't matter, etc.

Lens flare

This effect simulates the reflection of bright light sources in the camera lens, which sometimes occurs when working with a real camera. Light sources placed in the scene do not enter this effect by themselves; only the specular reflections of their rays on the models, and the bodies with Emissive lighting set up, do. The image of these two groups of glow sources is simply flipped through the focus (the center of the screen) and further processed, so that the resulting Lens flare reflections have the same shape as the glow source. This is an image postprocess effect, so its settings are found in the Postprocesses section of the Render settings.

The first seven parameters cover the basic effect settings. Threshold sets the glow intensity required to take part in the effect; the higher the value, the stronger a glow must be to contribute. Contrast and Brightness determine the overall image properties of the Lens flare before it is embedded in the resulting image. Depending on the overall colors of the scene, the visibility of the effect in the final image may vary, so the Weight of lens flare in image parameter fine-tunes how visible the effect is. Number of samples per lens specifies the number of pixels across which the lens flare is blurred, and Blur factor is a multiplier of the number of samples in case the user wants even more blur. We skipped the first parameter, Number of lenses, which is the number of imaginary camera lenses and essentially determines how many reflections one glow source creates. CaZaBa allows a maximum of 8 lenses / reflections, whose properties are set individually in the following Lens settings.

In the drop-down of the Lens parameter, select the specific lens to edit. If you pick any of them, you will see that they are already preset for an easier start. CaZaBa processes the lenses by their order number, always from lowest to highest; for example, with Number of lenses = 4, the effect uses the settings of Lens 1 to Lens 4. The Artifact type offers two forms of glare: Disc, a full copy of the glow source, and Halo, just the glow source's outline. In a real Lens flare, various interferences may break a glare up into rays pointing toward the center; this can be turned on / off individually for each lens with the Light rays checkbox.

In the real Lens flare phenomenon, the individual reflections form on the source - focus axis, usually with larger ones at the edge of the image and shrinking sizes toward the center. The Artifact shrinkage parameter specifies the reduction ratio of the base glare image and thus its convergence to the center of the screen. The larger the shrinkage value, the closer to the center of the image the reflection is drawn, and by setting a different value for each lens we create the aforementioned series of reflections. In practice, individual reflections can appear on the source - focus line in a different order, which is determined by the last parameter, Behind the center. When checked, the flare is placed in the order source - focus - flare; when unchecked, the order is source - flare - focus.
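
How the artifacts line up can be pictured with a small sketch (plain Python; the linear relation between shrinkage and position is an assumption for illustration, not CaZaBa's published formula):

```python
def artifact_positions(source, center, shrinkages, behind_center):
    """Place one flare artifact per lens on the source - focus axis.

    source, center -- (x, y) screen coordinates of glow source and focus
    shrinkages     -- Artifact shrinkage per lens (larger = nearer the center)
    behind_center  -- per-lens flag: mirror the artifact past the focus
    """
    dx, dy = source[0] - center[0], source[1] - center[1]
    positions = []
    for shrink, behind in zip(shrinkages, behind_center):
        t = 1.0 / (1.0 + shrink)   # assumed: more shrinkage pulls toward the focus
        if behind:
            t = -t                 # source - focus - flare ordering
        positions.append((center[0] + t * dx, center[1] + t * dy))
    return positions

print(artifact_positions((800, 200), (640, 360),
                         [0.5, 1.0, 2.0], [False, False, True]))
```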

Ambient occlusion

When calculating lighting, you may notice that the shading of small details on the bodies (bump mapping, various cracks, etc.) is evident only where the area is directly illuminated. Once the model, or part of it, is in shadow or faces away from the light sources, the shading of the details disappears. This is because the shading calculation needs to know the orientation of the incident rays of diffuse light; in the shade there is only ambient, omnidirectional light, and the diffuse component is missing. Shading without direct illumination therefore needs to be solved by a diffuse-independent method: Ambient occlusion.

Ambient occlusion, in other words surround coverage, is a method that, based on the screen's depth map, evaluates which pixels are enclosed by obstacles and which are not. Pixels whose Z coordinate is deeper than most other pixels around them are surrounded by obstacles and are painted darker. Conversely, pixels with a Z coordinate smaller than most surrounding pixels have no obstacles around them and are therefore colored lighter. The resulting map is then used to shade the final image.
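
A minimal sketch of this idea in plain Python, with illustrative names (it skips the normal map described below and assumes a larger Z means deeper):

```python
import random

def ambient_occlusion(depth, x, y, num_samples=16, radius=4, bias=0.02):
    """Estimate how enclosed pixel (x, y) is, from a depth map (rows of Z).

    A sampled neighbour counts as an obstacle when it is closer to the
    camera (smaller Z) than this pixel by more than `bias`; the returned
    occlusion in 0..1 would then darken the pixel.
    """
    h, w = len(depth), len(depth[0])
    occluders = 0
    for _ in range(num_samples):
        sx = min(max(x + random.randint(-radius, radius), 0), w - 1)
        sy = min(max(y + random.randint(-radius, radius), 0), h - 1)
        if depth[y][x] - depth[sy][sx] > bias:  # neighbour sticks out in front
            occluders += 1
    return occluders / num_samples

# The pixel at the bottom of a small "pit" comes out strongly occluded.
depth = [[1.0, 1.0, 1.0],
         [1.0, 2.0, 1.0],
         [1.0, 1.0, 1.0]]
print(ambient_occlusion(depth, 1, 1))
```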

Ambient occlusion settings can be found in the Render settings and are easy to handle. The algorithm treats the image's depth map as a surface, so a normal map derived from the depth map must additionally be produced; this is the same as generating a normal texture from a bump texture in the material settings. Hence there are three parameters: Picture normal depth scale, which specifies the size of the depth map differences, and Image normal inversion X(Y), which lets you flip the sign of the normal vectors in the X or Y axis if the result happens to be processed upside down. Normally, the preset values are sufficient.
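
As a rough illustration of that derivation, here is a finite-difference sketch in plain Python; the exact formula CaZaBa uses is not published, so the scale and inversion handling below are assumptions:

```python
def normal_from_depth(depth, x, y, depth_scale=1.0, invert_x=False, invert_y=False):
    """Approximate the surface normal at (x, y) from neighbouring depths."""
    h, w = len(depth), len(depth[0])
    # Central differences, clamped at the image border.
    dzdx = (depth[y][min(x + 1, w - 1)] - depth[y][max(x - 1, 0)]) * depth_scale
    dzdy = (depth[min(y + 1, h - 1)][x] - depth[max(y - 1, 0)][x]) * depth_scale
    if invert_x:
        dzdx = -dzdx
    if invert_y:
        dzdy = -dzdy
    # The normal of the surface z = f(x, y) is (-df/dx, -df/dy, 1), normalized.
    length = (dzdx ** 2 + dzdy ** 2 + 1.0) ** 0.5
    return (-dzdx / length, -dzdy / length, 1.0 / length)

depth = [[0.0, 0.1, 0.2],
         [0.0, 0.1, 0.2],
         [0.0, 0.1, 0.2]]          # a plane tilted along X
print(normal_from_depth(depth, 1, 1))
```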

The next three parameters test each pixel using Number of samples within the Radius of samples and Samples depth scale, looking for obstacles in the close neighborhood. Inaccuracies and noise normally appear in the Ambient occlusion map; their strength can be controlled with the Noise reduction parameter and the remaining three parameters. Filter iteration is the number of times the noise filter is repeated on the Ambient occlusion map, which is then further adjusted according to Brightness and Contrast.

Tip: If we are rendering models whose material contains an alpha texture, it is also necessary to check the Test the alpha in the depth map box in the Alpha render section of the render settings. Only then will the render respect the alpha texture's shapes; otherwise (when unchecked) it would be limited to the actual geometric shapes of the bodies. If an alpha texture with semi-transparent areas is used, Threshold of transparency affects their testing: the higher the threshold, the lighter the shades of the alpha texture that will be treated as transparent.

Tip: Ambient occlusion is processed for the overall scene geometry, but if ambient occlusion textures are also used in the materials, you can obtain even more detailed results.

Airlight

Airlight, sometimes also called Volumetric shadows or Volumetric light, simulates the scattering of diffuse light in a particle environment such as dust, fog and the like. When a model casts a shadow in this environment, dark bars - a volumetric shadow - appear between it and its shadow, an effect sometimes nicknamed "God rays".

All the needed settings can be found in the Airlight section of the Render settings. The effect works by toning the body's volumetric shadow toward Space color (double-click the color square) and highlighting the result with Brightness and Contrast. The Threshold parameter specifies the level from which the shadow darkens and the rest of the space becomes lighter, making the effect more visible.
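
As a rough picture of what Threshold, Contrast and Brightness do here, a sketch with an assumed linear mapping (not CaZaBa's actual formula):

```python
def airlight_tone(intensity, threshold=0.5, contrast=2.0, brightness=0.0):
    """Map a light-shaft intensity in 0..1 (1.0 = fully lit air, 0.0 = full
    volumetric shadow): values under the threshold darken, values above it
    lighten, before the result is tinted with the Space color."""
    value = (intensity - threshold) * contrast + 0.5 + brightness
    return min(max(value, 0.0), 1.0)

for i in (0.2, 0.5, 0.8):
    print(i, "->", round(airlight_tone(i), 3))  # 0.2 darkens, 0.8 lightens
```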

Tip: Airlight settings depend strongly on the scene and on the location of the camera and light source in it. Looking directly into the light source, the setting may be around Contrast = 3.0, Brightness = 0.0 and Threshold = 0.590, while looking in the opposite direction it will be approximately Contrast = 2.0, Brightness = 0.0 and Threshold = 0.150. Of course, you can fine-tune everything at your own discretion.

Color finishing

After all effects have been processed, you can finalize the resulting image in the Color finishing section. For example, you can control selective color by checking the Use a selective color checkbox. Selective color (double-click the color square) and Selective color - tolerance select a specific hue. The Color saturation parameter sets the color intensity of the image: 0.0 = black and white, 1.0 = original image colors. If Selective color to keep is checked, Color saturation changes only those portions of the image that differ from the chosen Selective color. Conversely, with Selective color to transform, the saturation changes only in areas matching the Selective color and the rest of the image remains unchanged.
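
The selection logic can be sketched as follows (plain Python; the luminance-based desaturation and all names are illustrative assumptions):

```python
def desaturate(px, saturation):
    """Blend a pixel toward its luminance; 1.0 keeps the original color."""
    gray = 0.299 * px[0] + 0.587 * px[1] + 0.114 * px[2]
    return tuple(gray + (c - gray) * saturation for c in px)

def selective_saturation(px, sel_color, tolerance, saturation, keep=True):
    """keep=True  ~ "Selective color to keep":      matching pixels stay intact.
    keep=False ~ "Selective color to transform": only matching pixels change."""
    dist = sum((a - b) ** 2 for a, b in zip(px, sel_color)) ** 0.5
    matches = dist <= tolerance
    return px if matches == keep else desaturate(px, saturation)

red, sky = (200, 30, 30), (120, 150, 230)
print(selective_saturation(sky, red, 60.0, 0.0))  # desaturated to gray
print(selective_saturation(red, red, 60.0, 0.0))  # kept as-is
```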

The following parameters, Color tone, Contrast and Brightness, allow you to fine-tune the color cast and image lightness. Finally, you can of course convert the entire image to a color negative by checking the Invert checkbox. The last two parameters then simulate color signal desynchronization, i.e. decomposition of the individual RGB layers. The first, Color Distortion - strength, determines the amount of separation in pixels, and the second, Color Distortion - angle, sets the angle along which the layers spread - horizontally, vertically, or any other.
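
A rough sketch of such an RGB layer decomposition (plain Python; the symmetric shift of the red and blue layers is an assumption for illustration):

```python
import math

def color_distortion(image, strength=2, angle_deg=0.0):
    """Pull the R and B layers apart by `strength` pixels along `angle_deg`
    (the G layer stays put), clamping look-ups at the image border."""
    h, w = len(image), len(image[0])
    dx = int(round(math.cos(math.radians(angle_deg)) * strength))
    dy = int(round(math.sin(math.radians(angle_deg)) * strength))

    def clamp(v, hi):
        return min(max(v, 0), hi)

    out = []
    for y in range(h):
        row = []
        for x in range(w):
            r = image[clamp(y + dy, h - 1)][clamp(x + dx, w - 1)][0]
            g = image[y][x][1]
            b = image[clamp(y - dy, h - 1)][clamp(x - dx, w - 1)][2]
            row.append((r, g, b))
        out.append(row)
    return out

# Color fringes appear at the black / white edges of a 1x3 test strip.
img = [[(255, 255, 255), (0, 0, 0), (255, 255, 255)]]
print(color_distortion(img, strength=1, angle_deg=0.0))
```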
