Volumetric Lighting

In the 3D renders below I have taken a 3D image that I had previously created and tried to improve it by adopting the techniques discussed in this blog. The main focus was to improve photorealism and experiment with volumetric lighting.

When comparing the two images it’s astonishing to think that I was pleased with my original render. It goes to show just how much I’ve learnt since starting the MA.

As mentioned above, I was using this 3D image as a vehicle for experimenting with volumetric lighting. In the improved image I have applied many of the theories already discussed in this blog, such as adding noise and depth of field; there is now some dirt at the bottom of the wall and dust on the shelf. The biggest difference, however, is the addition of atmospheric dust. You can now see visible light penetrating the right-hand side of the image, obscuring the model’s rear leg and creating streaks of light and shadow as it hits the top of the model’s shell.

What’s even more important in the improved render is the consideration that’s been given to the narrative. The diagonal lines, camera angle, and use of warm and cold lighting help to achieve what I had originally intended. It seems that the study of character development and cinematography has had as much impact on the improved render as the study of photorealism in 3D and CGI.

What does it take to make a 3D / CGI Object look Photorealistic?

Through my own experiments I have observed that an understanding of light is fundamental to achieving photorealism, and it seems that many other 3D practitioners share this opinion.

Within the 3D environment, three aspects of light need to closely replicate their real-world counterparts:

  1. Light needs to illuminate the surface of a 3D object in a realistic way.
  2. The 3D object needs to prevent light from hitting other objects in the scene and create shadows.
  3. Light needs to bounce off the 3D models in the form of reflections.

1. Light needs to illuminate the surface of a 3D object in a realistic way

HDRI

High Dynamic Range Images are a useful tool for creating photorealistic 3D renders as they create a natural light source and can also be used as an environment that appears within reflections. When using HDR Images, it’s useful to first light a scene by applying textures that have a solid colour of 50% grey to the 3D models. This allows the observer to adjust the exposure of the surrounding environment, replicating the light that was present when the HDRI was captured, before creating other textures.

Warm / Cold Lighting

When creating artificial lights, one should observe the colour temperature of those lights. Quite often, a warm (orange/yellow) light is used as a key light, and a cold (blue) light is used as a fill light. This is because our eyes are familiar with seeing the sun cast shadows whilst the blue sky casts soft light over the shadowed areas.

Colour Temperature

A 3D artist in search of photorealism should create lights that have the same colour temperature as their real-world counterparts. The following is a small selection of real-world light sources and their colour temperatures.

  • Candle flame: approximately 1900 K
  • 100-watt household bulb: approximately 2865 K
  • Daylight: approximately 5600 K

If a 3D artist in pursuit of photorealism were creating a directional light intended to emulate the sun, then the virtual light’s colour temperature should match that of its real-world counterpart, i.e. approximately 5600 K (depending on the time of day, etc.).
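Many renderers accept a colour temperature directly, but if a light only takes an RGB colour, a black-body approximation can be used to convert between the two. The sketch below is a minimal Python version based on Tanner Helland’s widely circulated curve fit; the constants are an approximation rather than a colorimetric standard, and the example outputs are only rough guides.

```python
import math

def kelvin_to_rgb(temp_k):
    """Approximate sRGB colour of a light at the given colour temperature.
    Based on Tanner Helland's published curve fit; valid ~1000 K to 40000 K."""
    t = max(1000.0, min(40000.0, temp_k)) / 100.0
    r = 255.0 if t <= 66 else 329.698727446 * (t - 60) ** -0.1332047592
    g = (99.4708025861 * math.log(t) - 161.1195681661 if t <= 66
         else 288.1221695283 * (t - 60) ** -0.0755148492)
    b = 255.0 if t >= 66 else (0.0 if t <= 19
         else 138.5177312231 * math.log(t - 10) - 305.0447927307)
    clamp = lambda x: int(max(0.0, min(255.0, x)))
    return clamp(r), clamp(g), clamp(b)

print(kelvin_to_rgb(1900))   # candle flame: a deep orange (~255, 131, 0)
print(kelvin_to_rgb(5600))   # daylight: near white, slightly warm (~255, 239, 225)
```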

Volumetric Light

Visible shafts or beams of light within a 3D scene are created using volumetric lighting. A common use of volumetric lighting is to replicate a key light penetrating a dusty environment (more on dusty environments later).
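Under the hood, most volumetric effects boil down to marching along each camera ray and accumulating the light scattered towards the camera by particles along the way, attenuated by how much of the medium the light has already passed through (Beer-Lambert absorption). The sketch below is a deliberately simplified, renderer-agnostic illustration of that idea; in_light_shaft stands in for whatever visibility test (shadow map, spotlight cone) the renderer would actually use, and all the values are placeholders.

```python
import math

def march_volume(ray_origin, ray_dir, steps=64, step_size=0.1,
                 sigma_absorb=0.3, sigma_scatter=0.2, light_intensity=5.0,
                 in_light_shaft=lambda p: True):
    """Tiny ray-marching sketch: accumulate light scattered towards the camera
    by dust along one ray, attenuated by Beer-Lambert transmittance."""
    transmittance = 1.0
    radiance = 0.0
    sigma_t = sigma_absorb + sigma_scatter              # total extinction
    for i in range(steps):
        # Sample point at the middle of this step along the ray.
        p = [o + ray_dir[k] * step_size * (i + 0.5) for k, o in enumerate(ray_origin)]
        if in_light_shaft(p):                           # is this sample inside the beam?
            radiance += light_intensity * sigma_scatter * transmittance * step_size
        transmittance *= math.exp(-sigma_t * step_size)
    return radiance                                     # brightness added to this pixel
```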

IES Lighting

Even better than trying to match the correct colour temperature, an architectural 3D artist in search of photorealism should almost certainly make use of IES (Illuminating Engineering Society of North America) lights wherever possible. The IES have created a standard that allows manufacturers to record the characteristics of the lights they make, such as colour temperature, falloff and the shape of the visible beam. These measurements are saved in a text file and made publicly available. 3D software (such as Cinema 4D) can then use these files to replicate a real-world light exactly.
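For reference, an IES photometric file is just plain text: a block of keyword lines, a TILT line, and then a stream of numbers describing the measurement angles and candela values. The sketch below is a rough, simplified reader for the common case (TILT=NONE); the field layout follows the LM-63 convention as I understand it, so treat it as an illustration rather than a complete parser.

```python
def read_ies_candela(path):
    """Rough reader for the numeric block of an IES (LM-63) photometric file.
    Assumes TILT=NONE and ignores the keyword header entirely."""
    with open(path) as f:
        lines = f.read().splitlines()
    # Skip the keyword header (e.g. [MANUFAC], [TEST]) up to the TILT line.
    tilt = next(i for i, line in enumerate(lines) if line.upper().startswith("TILT"))
    numbers = " ".join(lines[tilt + 1:]).split()
    n_vert = int(float(numbers[3]))    # number of vertical measurement angles
    n_horiz = int(float(numbers[4]))   # number of horizontal measurement angles
    # Ten values on the first numeric line and three on the second precede the angles.
    vertical = [float(v) for v in numbers[13:13 + n_vert]]
    horizontal = [float(v) for v in numbers[13 + n_vert:13 + n_vert + n_horiz]]
    start = 13 + n_vert + n_horiz
    candela = [float(v) for v in numbers[start:start + n_vert * n_horiz]]
    return vertical, horizontal, candela
```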

2. The 3D object needs to prevent light from hitting other objects in the scene and create shadows

Define Spatial Relationships

“When asking the audience to accept a scene that would otherwise strain its credibility, convincing shadow interaction can add an important piece of reality to help sell the illusion. If a production is supposed to be completely photorealistic, a single element such as a missing shadow could be all it takes to make your work look ‘wrong’ to the audience. Shadows serve the interest of adding realism and believability, even if there is no other reason for them in the composition” (Birn, )

Hard/Soft Shadows

When creating shadows, it is important to think about the source of light creating those shadows. A large light source that encompasses the entire scene would create a soft and even shadow, whereas a small distant source of light would create hard shadows. In nature, the sun casts hard shadows whereas the sky casts soft shadows.
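The geometry behind this is simple similar triangles: the width of a shadow’s soft edge (the penumbra) grows with the size of the light and with the distance between the occluder and the surface, and shrinks as the light moves further away. A quick sketch of the relationship, with made-up example measurements:

```python
def penumbra_width(light_diameter, light_to_occluder, occluder_to_surface):
    """Similar-triangles estimate of the soft edge (penumbra) of a shadow.
    A large or nearby light gives a wide, soft edge; a small or distant light
    (like the sun) gives an almost hard edge. All distances in metres."""
    return light_diameter * occluder_to_surface / light_to_occluder

# A 1 m softbox half a metre from an object, casting onto a wall 2 m behind it:
print(penumbra_width(1.0, 0.5, 2.0))          # ~4 m of falloff: a very soft shadow
# The sun (~1.39e9 m across, ~1.5e11 m away), same object and wall:
print(penumbra_width(1.39e9, 1.5e11, 2.0))    # ~0.02 m: an almost hard edge
```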

Shadow Cookies

A cookie is used in the cinema to cast a shadow with a predefined shape. For example, if you wanted an actor to look like he was in a forest, you might cut the shapes of tree branches out of cardboard and place them between the key light and the actor. This would cast shadows that look like trees into the scene.

In 3D, if an artist is trying to composite a 3D object into a live scene, such as a photograph or video, shadow cookies cast over both the 3D model and the original scene can help to bind the two media together, making it difficult for the audience to distinguish between them.

3. Light needs to bounce off the 3D models in the form of reflections

Surface Texture

Although this project hasn’t gone into great depth regarding surface textures, they are extremely important when trying to achieve photorealism. All objects, except perhaps a black hole, reflect some amount of light, but each has different reflection properties. For example, a chrome lamp will have a very hard reflection, whereas a wooden picture frame will have a much softer reflection.

Fresnel

In addition to how hard or soft a reflection is, the amount of reflection on most objects will change depending on the angle you look at it. This is achieved in the 3D environment with the use of a Fresnel layer applied to the texture.
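In most renderers this behaviour is what a Fresnel layer (or a physically based material) provides. The underlying relationship can be sketched with Schlick’s approximation of the Fresnel equations; this isn’t how the layer is set up in Cinema 4D, just an illustration of the curve it applies, and the index of refraction of 1.5 below is a typical glass/plastic value chosen for the example.

```python
def schlick_fresnel(cos_theta, ior=1.5):
    """Schlick's approximation of Fresnel reflectance for a dielectric.
    cos_theta is the cosine of the angle between the view direction and the
    surface normal; ior ~1.5 is typical of glass- or plastic-like materials."""
    r0 = ((1.0 - ior) / (1.0 + ior)) ** 2      # reflectance at normal incidence
    return r0 + (1.0 - r0) * (1.0 - cos_theta) ** 5

print(schlick_fresnel(1.0))   # looking straight down at the surface: ~0.04 (4%)
print(schlick_fresnel(0.1))   # grazing angle: ~0.61 (strong reflection)
```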

4. Other things to Consider

Colour

When creating textures for 3D models that will appear within a HDRI environment, it is helpful to use colours that match the hue and saturation of the HDR environment. Once a 3D image has been rendered, Hue and Saturation adjustments applied to the entire image help to blend the two media together.
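As a rough illustration of that grading step, the sketch below uses Pillow to pull a composited render towards the backplate: it knocks the saturation back slightly and blends in a small amount of the photograph’s average colour so both media share the same cast. The file names and amounts are placeholders, not part of any particular workflow.

```python
from PIL import Image, ImageEnhance, ImageStat

render = Image.open("render.png").convert("RGB")        # the composited 3D render
backplate = Image.open("backplate.png").convert("RGB")  # the background photograph

# Take the saturation of the whole composite down a touch.
graded = ImageEnhance.Color(render).enhance(0.85)

# Blend a hint of the backplate's average colour over everything, so the CG and
# the photograph end up with the same overall hue and cast.
avg = tuple(int(c) for c in ImageStat.Stat(backplate).mean)
cast = Image.new("RGB", render.size, avg)
graded = Image.blend(graded, cast, 0.06)
graded.save("render_graded.png")
```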

Camera Artefacts

One problem with 3D renders is that everything produced is beautifully clean and sharp, as if it had been photographed with an extremely superior lens and sensor. In order to fool the human eye into believing something was captured with a camera, some of the unwanted by-products of cheaper lenses need to be replicated. This includes over/under exposure, chromatic aberration, noise and grain, vignetting and silvering. Stylistic choices such as depth of field should also be observed. And when shooting film or animation, other artefacts such as motion blur should also be added.
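The sketch below shows roughly how a few of these by-products can be layered over a finished render in post, using numpy and Pillow. The file names and the strength of each effect are placeholders, and the chromatic aberration here is the crudest possible version: the red channel is simply enlarged by a fraction of a percent so it fringes towards the edges.

```python
import numpy as np
from PIL import Image

img = np.asarray(Image.open("render.png").convert("RGB")).astype(np.float32) / 255.0
h, w, _ = img.shape

# Chromatic aberration: enlarge the red channel by ~0.3% so it fringes at the edges.
w2, h2 = int(w * 1.003), int(h * 1.003)
red = Image.fromarray((img[..., 0] * 255).astype(np.uint8)).resize((w2, h2))
left, top = (w2 - w) // 2, (h2 - h) // 2
img[..., 0] = np.asarray(red.crop((left, top, left + w, top + h))).astype(np.float32) / 255.0

# Vignetting: darken towards the corners with a radial falloff.
yy, xx = np.mgrid[0:h, 0:w]
r = np.sqrt(((xx - w / 2) / (w / 2)) ** 2 + ((yy - h / 2) / (h / 2)) ** 2)
img *= (1.0 - 0.35 * np.clip(r, 0, 1) ** 2)[..., None]

# Noise / grain: a touch of Gaussian noise over the whole frame.
img += np.random.normal(0.0, 0.015, img.shape)

Image.fromarray((np.clip(img, 0, 1) * 255).astype(np.uint8)).save("render_camera.png")
```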

Edges

It has also been observed that in the 3D environment it is possible to create perfectly square edges which, if magnified an infinite number of times, would remain perfectly square. In the real world this is less often the case as edges tend to be worn and/or rounded. To make 3D objects appear real, hard/square edges should be avoided.

Randomness / Chaos

In addition to avoiding square edges and perfectly clean, crisp renders, some of the random chaos of the real world should also be introduced. For example, rather than using an algorithm to create a brick wall that is perfectly straight and where every brick is exactly the same shape and size, there should be some variation, as in the sketch below. In addition to this, dirt should be added to the scene.
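As a small illustration of that kind of controlled randomness, this sketch lays out a brick wall where every brick is nudged and resized by a tiny random amount. The dimensions and jitter values are arbitrary, and the output is just a list of placements that scene-building code could consume.

```python
import random

def jittered_brick_wall(rows=8, cols=12, brick_w=0.21, brick_h=0.065,
                        jitter=0.004, seed=1):
    """Lay out a brick wall with small random variation rather than a perfect
    grid. Returns (x, y, width, height) tuples for placing brick geometry."""
    random.seed(seed)
    bricks = []
    for row in range(rows):
        offset = (brick_w / 2) if row % 2 else 0.0      # running-bond offset
        for col in range(cols):
            x = col * brick_w + offset + random.uniform(-jitter, jitter)
            y = row * brick_h + random.uniform(-jitter, jitter)
            w = brick_w * random.uniform(0.97, 1.0)     # no two bricks identical
            h = brick_h * random.uniform(0.97, 1.0)
            bricks.append((x, y, w, h))
    return bricks
```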

Dirt

When creating dirt with a photorealistic effect, you should paint dirt onto a model by hand. Birn (p229) correctly states that you should “choose dirt maps that add specific, motivated detail to your objects. Think through the story behind all of the stains and imperfections on a surface – something has to cause any dirt, scratches, or stains that you would see”.

Dirt should not only be present on the surface of models, but should also be present floating in the environment in the form of dust, steam, or similar. Volumetric lighting is a good way to achieve this.

Perception

The final thing to consider is human perception.

Scale

It’s possible, although I haven’t as yet been able to prove or disprove this theory, that scale plays an important role in creating an illusion of photorealism. We know that if we see a 60 foot gorilla on the screen, it is most likely computer generated rather than a real photograph.

Unreal

It appears that it is easier to fool the human brain into believing something is real if the brain has fewer points of reference. Take, for example, a human hand: creating a 3D hand that an audience believes is real is extremely difficult, as it is something we spend a great deal of time looking at and accordingly we have extensive points of reference. If, on the other hand, I were to create a 3D model and say it was a newly discovered creature found deep in the ocean, it would be easier to fool the mind into believing it was real, as the brain has fewer points of reference.

That said though, it is still important to look to the real world for influence and reference when creating something that is fictional.

Improving on this Research

It’s difficult to find a way to improve upon this research. As has already been said, it appears that many other practitioners already share my view, and any experiments that I have conducted myself are simply reinventing the wheel. Is it possible that all of the problems have already been overcome?

At present, I don’t feel I have explored the subject deeply enough to be able to offer any new insight that hasn’t already been discussed elsewhere.

What I propose to do now is produce some 3D renders that illustrate all of the points that I have made above. These renders will then be presented for assessment as a ‘body of work’. It is hoped that whilst creating some new renders, problems might arise that haven’t already been tackled. However, I expect that this is more likely to happen if I approach a novel situation that other practitioners haven’t yet tried to create in 3D. This could be fun 😀

I also need to streamline this process: although I can build 3D models, it takes me a great deal of time. In light of this, I might try to create some scenes with simple geometry, such as a piece of jewellery or a planet, or perhaps take a scene that I have created previously and try to make it more photorealistic.

Summary of Findings to Date

Before commencing with any further experimentation, today has been put aside for summarising my findings thus far.

In my initial experiment with the grey spheres I observed that an understanding of light is fundamental to achieving photorealism. Within the 3D environment, three aspects of light need to closely replicate their real-world counterparts:

  • Light needs to illuminate the surface of a 3D object in a realistic way.
  • The 3D object needs to prevent light from hitting other objects in the scene and create shadows.
  • Light needs to bounce off the 3D models in the form of reflections.

High Dynamic Range Images are a useful tool for creating realistic lighting as they create a natural light source and can also be used as an environment that appears within reflections. When using HDR Images, it’s useful to first light a scene by applying textures that have a solid colour of 50% grey to the 3D models. This allows the observer to adjust the exposure of the surrounding environment, replicating the light that was present when the HDRI was captured, before creating other textures.

When creating textures for 3D models it is important to use Fresnel reflections, as this type of reflection more closely replicates how light bounces in the real world. To understand what Fresnel reflections are, imagine looking at a body of water on a sunny day: if you look across the surface of the water, you see a lot of reflection and little of what is beneath the surface. If, on the other hand, you look down at the water from above, you can see what is below the surface and less of the reflection. This property of light needs to be emulated in the 3D environment.

In addition to this, discounting black holes, everything in the real world has a reflection, although some objects reflect very little. With this in mind, everything in the 3D environment should have some amount of reflection.

When creating textures for 3D models that will appear within a HDRI environment, it is also helpful to use colours that match the hue and saturation of the HDR environment. Once a 3D image has been rendered, Hue and Saturation adjustments applied to the entire image help to blend the two media together.

One problem with 3D renders is that everything produced is beautifully clean and sharp, as if it had been photographed with an extremely superior lens and sensor. In order to fool the human eye into believing something was captured with a camera, some of the unwanted by-products of cheaper lenses need to be replicated. This includes chromatic aberration, noise and grain, vignetting and silvering. When shooting film or animation, other artefacts such as motion blur should also be added.

It has also been observed that in the 3D environment it is possible to create perfectly square edges which, if magnified an infinite number of times, would remain perfectly square. In the real world this is less often the case as edges tend to be worn and/or rounded. To make 3D objects appear real, hard/square edges should be avoided.

It’s possible, although I haven’t as yet been able to prove or disprove this theory, that scale plays an important role in creating the illusion. We know that if we see a 60 foot gorilla on the screen, it is most likely computer generated rather than a real photograph. Similarly, in this animation of a spider, the illusion of realism is ruined because we know that spiders are not this big.

Spider Animation

Finally, it appears that it is easier to fool the human brain into believing something is real if the brain has fewer points of reference. Take, for example, a human hand: creating a 3D hand that an audience believes is real is extremely difficult, as it is something we spend a great deal of time looking at and accordingly we have extensive points of reference. If, on the other hand, I were to create a 3D model and say it was a newly discovered creature found deep in the ocean, it would be easier to fool the mind into believing it was real, as the brain has fewer points of reference.

Taking account of all this, what’s next?

Although I’d very much like to continue developing the antagonist, as can be seen in the project plan, module deadlines dictate that I begin researching the findings of other practitioners and comparing their results to my own.

Before this is done, I will first address my objectives for satisfying the learning outcomes to ensure that they are all being achieved.

Adding Artifacts to Animation

In my previous post I demonstrated that Replicating Digital Imaging Artifacts helped to achieve photorealism.

To further this research I have been experimenting with replicating artifacts found in digital film, such as grain and motion blur, and adding them to 3D animations.

The examples below show a single frame from an animation, one with the added artifacts and one without.

When looking at a single frame, the motion blur effect seems out of place, but when watching the animation, where the motion blur of the original camera movement becomes apparent, it is clear that the replicated blur helps to create the illusion of realism. In contrast, in the animation without the added motion blur, the sharpness of the CG elements stands out from the noisy, blurred background and begins to hinder the illusion of realism.

Animation with Artifacts

Animation without Artifacts

As well as motion blur, other artifacts have been added according to previous findings such as chromatic aberration, vignetting, and noise.
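Of these, the motion blur was the only effect that needed per-frame treatment. A very rough way to fake it in post is to average several copies of each frame offset along the direction of camera movement, as in the sketch below; the frame names, offsets and step count are placeholders, and wrap-around at the frame edges is ignored for simplicity.

```python
import numpy as np
from PIL import Image

def motion_blur(frame, steps=8, dx=2.0, dy=0.5):
    """Rough motion-blur sketch: average several copies of the frame offset
    along the (dx, dy) motion direction, like a long shutter on a moving camera."""
    arr = np.asarray(frame.convert("RGB")).astype(np.float32)
    acc = np.zeros_like(arr)
    for i in range(steps):
        shifted = np.roll(arr, int(round(dx * i)), axis=1)   # horizontal offset
        shifted = np.roll(shifted, int(round(dy * i)), axis=0)  # vertical offset
        acc += shifted
    return Image.fromarray((acc / steps).astype(np.uint8))

# Frame name is a placeholder for however the animation frames are written out.
motion_blur(Image.open("frame_0042.png")).save("frame_0042_blurred.png")
```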

The effect of sound will be discussed in a future post.

Replicating Digital Imaging Artifacts

Throughout this process of trying to emulate reality I have speculated that it is not the real world that should be emulated, but realism as found in a photograph. To this end I have attempted to replicate some of the artifacts that occur in digital photography in order to improve my previous best attempt at replicating reality.

The image ‘Without Post-Processing’ shows the previous attempt at achieving photorealism.

In the image ‘Camera Artefacts’ I have added some chromatic aberration (the red fringe on the edges of the balls), scaled the image up and down, compressed and decompressed it, used run-length encoding to degrade the quality of the image, and added some noise. All of these effects would occur naturally when working with digital photographs.

In addition to this I have also desaturated the image and adjusted the colour balance to make the highlights a little more blue. The benefit of this is that all of these changes affect the entire image, not just the 3D elements. As both the background and the 3D elements have been affected by the same processing, it becomes more difficult for the eye to distinguish between the two forms of media.
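A sketch of that degrade-and-grade pass is shown below; the file names and settings are placeholders, and the compression step is approximated here with a low-quality JPEG round-trip rather than the run-length encoding step mentioned above.

```python
import io
from PIL import Image, ImageEnhance

img = Image.open("composite.png").convert("RGB")
w, h = img.size

# Scale down and back up, then round-trip through heavy JPEG compression.
img = img.resize((w // 2, h // 2)).resize((w, h))
buf = io.BytesIO()
img.save(buf, format="JPEG", quality=35)
buf.seek(0)
img = Image.open(buf).convert("RGB")

# Desaturate slightly, then nudge the highlights towards blue.
img = ImageEnhance.Color(img).enhance(0.9)
r, g, b = img.split()
b = b.point(lambda v: min(255, int(v + 25 * (v / 255.0) ** 2)))  # brightest blues lifted most
Image.merge("RGB", (r, g, b)).save("composite_degraded.png")
```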

For reference, the original photograph has been included also.

Whilst there is not as much noise or chromatic aberration in the reference image, when comparing the two artificial images, it is the image with the added artifacts that is most convincing.

Does Scale Affect Photorealism?

I’m afraid the jury’s still out on this one.

I was hoping that the larger frog would look far less believable than the smaller one but this wasn’t the case.

My feeling is that neither frog looks photorealistic enough to enable a judgement to be made. At present, it is still clear that the smaller frog is fake. If I could first make the smaller frog look real and then increase its scale, my guess is that the scale would then inform the viewer that the image was a fake.

Achieving Photorealism with Hue and Saturation

In a previous post I said that the cartoon snails fit into their environment better than the frog because the saturation (vibrancy) of their colours more closely matches the surrounding environment.

The intention of today’s experiment was to identify whether this is fact or myth. To this aim, two images have been produced: in one image, the 3D element utilises the colours of its environment; in the second image, the colour of the 3D element is in strong contrast to the environment.

Looking at these two images I was pleased to see that my initial thoughts about colour are true. My opinion is that it is far easier to believe the green frog might have been photographed, whereas the red frog has clearly been added digitally.

It is important, then, that when trying to achieve photorealism, the colour and saturation of a 3D object should match those of the environment.

Still not convinced? Try adjusting the hue and saturation for yourself. If you find a setting that makes the frog look more believable, I’d be delighted to hear about it.