Books on Photorealism in 3D and CGI

If you’ve been following my blog and would like to know more about creating photorealistic 3D CGI renders, you can go straight to the source with these books from Amazon:


Crafting 3D Photorealism: Lighting Workflows In 3ds Max, Mental Ray and V-Ray

Light for Visual Artists: Understanding & Using Light in Art & Design

Color and Light: A Guide for the Realist Painter

Digital Lighting and Rendering

Photorealism: You Can Do It

Digital Texturing and Painting

Elemental Magic, Volume II: The Technique of Special Effects Animation: 2 (Animation Masters Title)

The HDRI Handbook 2.0: High Dynamic Range Imaging for Photographers and CG Artists

Physically Based Rendering: From Theory To Implementation

What does it take to make a 3D / CGI Object look Photorealistic?

Through my own experiments I have observed that an understanding of light is fundamental to achieving photorealism, and it seems that many other 3D practitioners share this opinion.

Within the 3D environment, three aspects of light need to closely replicate their real-world counterparts:

  1. Light needs to illuminate the surface of a 3D object in a realistic way.
  2. The 3D object needs to prevent light from hitting other objects in the scene, creating shadows.
  3. Light needs to bounce off the 3D models in the form of reflections.

1. Light needs to illuminate the surface of a 3D object in a realistic way

HDRI

High Dynamic Range Images are a useful tool for creating photorealistic 3D renders as they create a natural light source and can also be used as an environment that appears within reflections. When using HDR images, it’s useful to first light a scene by applying textures that have a solid colour of 50% grey to the 3D models. This allows the artist to adjust the exposure of the surrounding environment, replicating the light that was present when the HDRI was captured, before creating other textures.

Warm / Cold Lighting

When creating artificial lights, one should observe the colour temperature of those lights. Quite often, a warm (orange/yellow) light is used as a key light, and a cold (blue) light is used as a fill light. This is because our eyes are familiar with seeing the sun cast shadows whilst the blue sky casts soft light over the shadowed areas.

Colour Temperature

A 3D artist in search of photorealism should create lights that have the same colour temperature as their real-world counterparts. The following is a small selection of real-world lights and their colour temperatures:

  • Candle flame: 1900 K
  • 100-watt household bulb: 2865 K
  • Daylight: 5600 K

If a 3D artist in pursuit of photorealism were creating a directional light intended to emulate the sun, then the virtual light’s colour temperature should match that of its real-world counterpart, i.e. approximately 5600 K (depending on the time of day etc.).
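As a rough illustration, here’s a small Python sketch that turns a colour temperature into an RGB tint for a virtual light. It uses the well-known Tanner Helland curve-fit approximation; the constants are his published fit, and it’s only good for tinting lights, not for rigorous colorimetry:

```python
import math

def kelvin_to_rgb(kelvin):
    """Approximate RGB tint for a black-body colour temperature (Tanner Helland fit)."""
    t = kelvin / 100.0
    r = 255 if t <= 66 else 329.698727446 * (t - 60) ** -0.1332047592
    if t <= 66:
        g = 99.4708025861 * math.log(t) - 161.1195681661
    else:
        g = 288.1221695283 * (t - 60) ** -0.0755148492
    if t >= 66:
        b = 255
    elif t <= 19:
        b = 0
    else:
        b = 138.5177312231 * math.log(t - 10) - 305.0447927307
    clamp = lambda v: max(0, min(255, int(round(v))))
    return clamp(r), clamp(g), clamp(b)

print(kelvin_to_rgb(1900))   # candle flame -> strongly orange
print(kelvin_to_rgb(5600))   # daylight -> near white
```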

Volumetric Light

Any visible shaft of light within a 3D scene is created using volumetric lighting. A common use of volumetric lighting is to replicate a key light penetrating a dusty environment (More on dusty environments later).

IES Lighting

Better still than matching colour temperatures by hand, an architectural 3D artist in search of photorealism should almost certainly make use of IES (Illuminating Engineering Society of North America) lights wherever possible. The IES has created a standard that allows manufacturers to record the characteristics of the lights they make, such as colour temperature, falloff and visible light. These measurements are saved in a text file and made publicly available. 3D software (such as Cinema 4D) can then use these files to replicate a real-world light exactly.

2. The 3D object needs to prevent light from hitting other objects in the scene, creating shadows

Define Spatial Relationships

“When asking the audience to accept a scene that would otherwise strain its credibility, convincing shadow interaction can add an important piece of reality to help sell the illusion. If a production is supposed to be completely photorealistic, a single element such as a missing shadow could be all it takes to make your work look ‘wrong’ to the audience. Shadows serve the interest of adding realism and believability, even if there is no other reason for them in the composition” (Birn).

Hard/Soft Shadows

When creating shadows, it is important to think about the source of light creating those shadows. A large light source that encompasses the entire scene would create a soft and even shadow, whereas a small distant source of light would create hard shadows. In nature, the sun casts hard shadows whereas the sky casts soft shadows.
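The geometry behind this is just similar triangles: the penumbra (the soft edge of a shadow) scales with the light’s diameter and with how far the surface sits behind the occluder. A minimal sketch, with illustrative names and example numbers of my own:

```python
def penumbra_width(light_diameter, light_to_occluder, occluder_to_surface):
    """Approximate penumbra width cast by a disc-shaped light (similar triangles)."""
    return light_diameter * occluder_to_surface / light_to_occluder

# A 1 m softbox 2 m from an object, with a wall 0.5 m behind it: a soft ~25 cm edge.
print(penumbra_width(1.0, 2.0, 0.5))            # 0.25
# The sun is enormous but ~108 solar diameters away, so its shadows stay hard:
print(penumbra_width(1.39e9, 1.496e11, 1.0))    # ~0.0093 m per metre behind the occluder
```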

Shadow Cookies

A cookie is used in cinema to cast a shadow with a predefined shape. For example, if you wanted an actor to look like he was in a forest, you might cut the shapes of tree branches out of cardboard and place them between the key light and the actor. This would cast shadows that look like trees into the scene.

In 3D, if an artist is trying to composite a 3D object into a live scene, such as a photograph or video, shadow cookies cast over both the 3D model and the original scene can help to bind the two media together, making it difficult for the audience to distinguish between them.

3. Light needs to bounce off the 3D models in the form of reflections

Surface Texture

Although this project hasn’t gone into great depth regarding surface textures, they are nonetheless extremely important when trying to achieve photorealism. All objects, except perhaps a black hole, have some amount of reflection; however, each has differing reflection properties. For example, a chrome lamp will have a very hard, sharp reflection, whereas a wooden picture frame will have a much softer one.

Fresnel

In addition to how hard or soft a reflection is, the amount of reflection on most objects will change depending on the angle from which you view them. This is achieved in the 3D environment with the use of a Fresnel layer applied to the texture.
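Most renderers approximate this falloff with Schlick’s approximation to the Fresnel equations, which is simple enough to sketch in a few lines of Python. `f0` below is the reflectance when viewing a surface head-on (roughly 0.04 for dielectrics such as wood or plastic); the numbers are illustrative:

```python
import math

def schlick_fresnel(cos_theta, f0=0.04):
    """Fraction of light reflected at an angle theta from the surface normal."""
    return f0 + (1.0 - f0) * (1.0 - cos_theta) ** 5

for deg in (0, 45, 75, 89):
    print(deg, round(schlick_fresnel(math.cos(math.radians(deg))), 3))
# Reflection stays near 4% face-on but climbs towards 100% at grazing angles.
```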

4. Other Things to Consider

Colour

When creating textures for 3D models that will appear within a HDRI environment, it is helpful to use colours that match the hue and saturation of the HDR environment. Once a 3D image has been rendered, Hue and Saturation adjustments applied to the entire image help to blend the two media together.

Camera Artefacts

One problem with 3D renders is that everything produced is beautifully clean and sharp, as if it had been photographed with a flawless lens and sensor. In order to fool the human eye into believing something was captured with a camera, some of the unwanted by-products of cheaper lenses need to be replicated. This includes over/under-exposure, chromatic aberration, noise and grain, vignetting and silvering. Stylistic choices such as depth of field should also be observed, and when shooting film or animation, other artefacts such as motion blur should be added.
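If you’re compositing in post, several of these artefacts are cheap to fake. Here’s a hedged numpy/Pillow sketch that adds vignetting, a touch of chromatic aberration and grain to a render; the file names and strengths are placeholders of mine, not a recipe:

```python
import numpy as np
from PIL import Image

# Load an RGB render as floats in [0, 1].
img = np.asarray(Image.open("render.png").convert("RGB")).astype(np.float32) / 255.0
h, w = img.shape[:2]

# Vignetting: darken towards the corners with a radial falloff.
yy, xx = np.mgrid[0:h, 0:w]
r = np.sqrt(((xx - w / 2) / (w / 2)) ** 2 + ((yy - h / 2) / (h / 2)) ** 2)
img *= (1.0 - 0.3 * np.clip(r, 0, 1) ** 2)[..., None]

# Chromatic aberration: nudge the red channel sideways a couple of pixels.
img[..., 0] = np.roll(img[..., 0], 2, axis=1)

# Grain: low-amplitude Gaussian noise.
img += np.random.normal(0.0, 0.01, img.shape).astype(np.float32)

Image.fromarray((np.clip(img, 0, 1) * 255).astype(np.uint8)).save("render_artefacts.png")
```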

Edges

It has also been observed that in the 3D environment it is possible to create perfectly square edges which, if magnified an infinite number of times, would remain perfectly square. In the real world this is rarely the case, as edges tend to be worn and/or rounded. To make 3D objects appear real, perfectly hard, square edges should be avoided.

Randomness / Chaos

In addition to avoiding square edges and perfectly clean, crisp renders, some of the random chaos of the real world should also be introduced. For example, rather than using an algorithm to create a brick wall that is perfectly straight, where every brick is exactly the same shape and size, there should be some variation, as in the sketch below. In addition to this, dirt should be added into a scene.
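As a rough illustration of what I mean by variation, here’s a Python sketch that lays out bricks with a little jitter in position, size and tint rather than tiling identical copies; all the numbers are arbitrary examples:

```python
import random

def brick_row(y, brick_w=0.21, brick_h=0.065, count=10, jitter=0.004):
    """One course of bricks with small random offsets (dimensions in metres)."""
    bricks = []
    for i in range(count):
        x = i * (brick_w + 0.01)                       # nominal 1 cm mortar gap
        bricks.append({
            "pos": (x + random.uniform(-jitter, jitter),
                    y + random.uniform(-jitter, jitter)),
            "size": (brick_w * random.uniform(0.97, 1.03),
                     brick_h * random.uniform(0.97, 1.03)),
            "tint": random.uniform(0.9, 1.1),          # per-brick colour variation
        })
    return bricks

wall = [brick_row(row * 0.075) for row in range(8)]    # eight slightly uneven courses
```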

Dirt

When creating dirt for a photorealistic effect, you should paint the dirt onto a model by hand. Birn (p. 229) correctly states that you should “choose dirt maps that add specific, motivated detail to your objects. Think through the story behind all of the stains and imperfections on a surface – something has to cause any dirt, scratches, or stains that you would see”.

Dirt should not only be present on the surface of models, but should also be present floating in the environment in the form of dust, steam, or similar. Volumetric lighting is a good way to achieve this.

Perception

The final thing to consider is human perception.

Scale

It’s possible, although I haven’t yet been able to prove or disprove this theory, that scale plays an important role in creating an illusion of photorealism. We know that if we see a 60-foot gorilla on the screen, it is most likely computer-generated rather than a real photograph.

Unreal

It appears that it is easier to fool the human brain into believing something is real if the brain has fewer points of reference. Take, for example, a human hand: creating a 3D hand that an audience believes is real is extremely difficult, as it is something we spend a great deal of time looking at, and accordingly we have extensive points of reference. If, on the other hand, I were to create a 3D model and say it was a newly discovered creature found deep in the ocean, it would be easier to fool the mind into believing it was real, as the brain has fewer points of reference.

That said, it is still important to look to the real world for influence and reference when creating something fictional.

Improving on this Research

It’s difficult to find a way to improve upon this research. As has already been said, it appears that many other practitioners share my view, and any experiments that I have conducted myself simply reinvent the wheel. Is it possible that all of these problems have already been overcome?

At present, I don’t feel I have explored the subject deeply enough to be able to offer any new insight that hasn’t already been discussed elsewhere.

What I propose to do now is produce some 3D renders that illustrate all of the points that I have made above. These renders will then be presented for assessment as a ‘body of work’. It is hoped that whilst creating some new renders, problems might arise that haven’t already been tackled. However, I expect that this is more likely to happen if I approach a novel situation that other practitioners haven’t yet tried to create in 3D. This could be fun 😀

As for streamlining this process: although I can build 3D models, it takes me a great deal of time. In light of this, I might try to create some scenes with simple geometry, such as a piece of jewellery or a planet, or perhaps take a scene that I have created previously and try to make it more photorealistic.

Summary of Findings to Date

Before commencing with any further experimentation, today has been put aside for summarising my findings thus far.

In my initial experiment with the grey spheres I observed that an understanding of light is fundamental to achieving photorealism. Within the 3D environment, three aspects of light need to closely replicate their real-world counterparts:

  • Light needs to illuminate the surface of a 3D object in a realistic way.
  • The 3D object needs to prevent light from hitting other objects in the scene, creating shadows.
  • Light needs to bounce off the 3D models in the form of reflections.

High Dynamic Range Images are a useful tool for creating realistic lighting as they create a natural light source and can also be used as an environment that appears within reflections. When using HDR images, it’s useful to first light a scene by applying textures that have a solid colour of 50% grey to the 3D models. This allows the artist to adjust the exposure of the surrounding environment, replicating the light that was present when the HDRI was captured, before creating other textures.

When creating textures for 3D models it is important to use Fresnel reflections, as this type of reflection more closely replicates how light bounces in the real world. To understand what Fresnel reflections are, imagine looking at a body of water on a sunny day: if you look across the surface of the water, you see a lot of reflection and little of what is beneath the surface. If, on the other hand, you look down at the water from above, you can see what is below the surface and less of the reflections. This property of light needs to be emulated in the 3D environment.

In addition to this, discounting black holes, everything in the real world has a reflection, although some objects reflect very little. With this in mind, everything in the 3D environment should have some amount of reflection.

When creating textures for 3D models that will appear within a HDRI environment, it is also helpful to use colours that match the hue and saturation of the HDR environment. Once a 3D image has been rendered, Hue and Saturation adjustments applied to the entire image help to blend the two media together.

One problem with 3D renders is that everything produced is beautifully clean and sharp, as if it had been photographed with a flawless lens and sensor. In order to fool the human eye into believing something was captured with a camera, some of the unwanted by-products of cheaper lenses need to be replicated. This includes chromatic aberration, noise and grain, vignetting and silvering. When shooting film or animation, other artefacts such as motion blur should also be added.

It has also been observed that in the 3D environment it is possible to create perfectly square edges which, if magnified an infinite number of times, would remain perfectly square. In the real world this is rarely the case, as edges tend to be worn and/or rounded. To make 3D objects appear real, perfectly hard, square edges should be avoided.

It’s possible, although I haven’t yet been able to prove or disprove this theory, that scale plays an important role in creating the illusion. We know that if we see a 60-foot gorilla on the screen, it is most likely computer-generated rather than a real photograph. Similarly, in the animation of a spider below, the illusion of realism is ruined because we know that spiders are not this big.

Spider Animation

Finally, it appears that it is easier to fool the human brain into believing something is real if the brain has fewer points of reference. Take, for example, a human hand: creating a 3D hand that an audience believes is real is extremely difficult, as it is something we spend a great deal of time looking at, and accordingly we have extensive points of reference. If, on the other hand, I were to create a 3D model and say it was a newly discovered creature found deep in the ocean, it would be easier to fool the mind into believing it was real, as the brain has fewer points of reference.

Taking account of all this, what’s next?

Although I’d very much like to continue developing the antagonist, as can be seen in the project plan, module deadlines dictate that I begin researching the findings of other practitioners and comparing their results to my own.

Before this is done, I will first address my objectives for satisfying the learning outcomes to ensure that they are all being achieved.

Using an Equirectangular HDRI Panorama to Light a 3D Scene

A few days ago I shared my workflow for creating an equirectangular HDRI panorama; in that post I said that I would share my workflow for using the HDRI as a light source in the 3D environment at a later time. So…

With the panoramic HDR image created, the first thing that needs to be done is to prepare the image for use in 3D. Firstly, the image needs to be flipped horizontally, otherwise it will appear inside-out in the 3D environment.

I then save the HDRI with the name background.hdr. This image will be used as a high resolution background to my scene.

I then resize the image to about 2048 pixels wide and save it under the name reflection.hdr. This image will be seen in the reflections in the scene. As the reflections won’t need to be as sharp as the background, this smaller file size helps to improve render times.

I then resize the image again to about 600 pixels wide and save it under the name diffuse.hdr. This is the image that will be used to light the scene. This much smaller file size will help to significantly improve render times. To improve render quality, I also add a very strong Gaussian blur to the image. This helps to reduce any blotchiness in the shadows of the resulting 3D render.
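For anyone who’d rather script this preparation than do it by hand, here’s a sketch using OpenCV, which can read and write Radiance .hdr files as float32 images. The file names follow this post; the exact blur strength is a matter of taste:

```python
import cv2

pano = cv2.imread("panorama.hdr", cv2.IMREAD_UNCHANGED)   # float32 radiance map
pano = cv2.flip(pano, 1)                                  # flip horizontally first

def save_scaled(img, width, name, blur_sigma=0):
    scale = width / img.shape[1]
    out = cv2.resize(img, None, fx=scale, fy=scale, interpolation=cv2.INTER_AREA)
    if blur_sigma:
        out = cv2.GaussianBlur(out, (0, 0), blur_sigma)   # soften for cleaner GI shadows
    cv2.imwrite(name, out)

cv2.imwrite("background.hdr", pano)                       # full-resolution background
save_scaled(pano, 2048, "reflection.hdr")                 # smaller copy for reflections
save_scaled(pano, 600, "diffuse.hdr", blur_sigma=25)      # small + heavily blurred for lighting
```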

Reflection

reflection.hdr

Diffuse

diffuse.hdr

With the images prepared I then use a free student version of Cinema 4D to create my 3D scene. Firstly, I create two materials named diffuse and reflection and assign my HDR images to the respective colour channels.

I then create two sky objects. The sky object is a sphere of infinite size that encompasses the 3D scene. I rename one sky object to reflection and the other to diffuse. I then add a compositing tag to each of the two sky objects.

In the diffuse sky object’s compositing tag, I disable everything but the GI (global illumination) option. This forces the software to use the diffuse sky for nothing other than global illumination (light).

In the reflection sky object’s compositing tag I enable only the ‘Seen by Reflection’ parameter.
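For anyone who prefers to build this setup via Cinema 4D’s Python scripting, a hypothetical sketch follows. I haven’t checked these parameter IDs against every C4D version, so treat the constant names as assumptions to verify against the SDK documentation:

```python
import c4d

def build_skies(doc):
    for name, gi_only in (("diffuse", True), ("reflection", False)):
        sky = c4d.BaseObject(c4d.Osky)            # infinite sky sphere
        sky.SetName(name)
        tag = sky.MakeTag(c4d.Tcompositing)       # compositing tag
        # Assumed IDs: the diffuse sky contributes only GI; the
        # reflection sky is seen only by reflections.
        tag[c4d.COMPOSITINGTAG_SEENBYCAMERA] = False
        tag[c4d.COMPOSITINGTAG_SEENBYGI] = gi_only
        tag[c4d.COMPOSITINGTAG_SEENBYREFLECTION] = not gi_only
        doc.InsertObject(sky)
    c4d.EventAdd()                                # refresh the viewport

build_skies(c4d.documents.GetActiveDocument())
```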

I then create a material named background and assign my background image to the colour channel.

Background Image

I apply this material to a background object.

I then change the resolution of my output render to match the dimensions of the background image. This is important to make sure everything lines up in the next step.

Next, I create a camera object and adjust the size of the lens and image sensor to match the camera that was used to capture the background image. A quick Google search tells me that the image sensor on my Nikon D60 is 23.6mm wide. To find out what focal length I used, I right-click on my photograph in Windows Explorer and go to the Details tab. Here I can see that I used a 20mm lens when photographing the background.

Focal length

It is important to match up the image sensor size and focal length in order to make sure that the perspective of any 3D objects you place into the scene matches that of the background.
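The relationship is simple trigonometry: the horizontal field of view is 2·atan(sensor width / (2 × focal length)). A quick check of my numbers, with illustrative names:

```python
import math

def horizontal_fov(sensor_width_mm, focal_length_mm):
    """Horizontal angle of view for a rectilinear lens."""
    return math.degrees(2 * math.atan(sensor_width_mm / (2 * focal_length_mm)))

print(horizontal_fov(23.6, 20))   # Nikon D60 sensor + 20 mm lens: ~61 degrees
```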

In this particular scene I measured the filing cabinet and created a cube to match its dimensions. This cube was needed to provide a surface for catching shadows below any 3D objects I placed into the scene, in this case, the three spheres. The three spheres were created next and finally, I look through my virtual camera and take some time to line everything up.

Overview of 3D Scene

Camera View

That concludes the creation of the 3D scene. To get the lighting to work correctly all that’s left to be done is to go into the render settings, turn on the global illumination effect, turn off the default light, hit the render button, and see what happens.

First Render

I found that in this particular image I had to add an extra light source because the HDRI alone wasn’t bright enough to create sufficient shadows on the filing cabinet’s surface.

Whilst writing this article I also found that adding a compositing tag to the filing cabinet with the ‘Compositing Background’ parameter enabled forced the cabinet to be self-illuminating. This corrected the problem I was having in previous renders where the saturation in the filing cabinet was being lost.

I also found that this allowed me more freedom to adjust the exposure of the HDR image, which in turn meant that I could remove the artificial sunlight I’d created and rely solely on the HDRI sky object. The result is a more natural shadow.

Whilst I was at it, following what I’d learnt from my previous observations relating to Fresnel reflection falloff, I also adjusted the reflection on the balls.

Even Better Render

With an improved render, I think it’s time to make a comparison and talk about the qualities and imperfections that are steering the image closer to, and further from, photorealism.

How to make a HDRI Sky-Dome

The first experiment

In my previous post I said that I was going to begin the experiments by taking a photograph and seeing if I could replicate that photograph in 3D. Today I spent some time preparing all of the files that I’d need and thought I’d share my workflow with others who are trying to achieve a similar result. If you are already familiar with equirectangular high dynamic range panoramas, you can skip this post; otherwise, read on…

First of all, here is the photograph that I’m going to try and recreate:

Photo to Replicate

In this first experiment, though, I’m not going to try and recreate the whole image in 3D; instead, I’m going to remove the grey balls, take another photograph, and use that photograph as a background. I will then try to recreate the grey balls in CG to see if I can ‘fake’ the original image.

The first thing I do when starting a project like this is to put my camera in manual mode and take a white balance measurement using a Gretag Macbeth white balance card. This isn’t entirely necessary as I’ll correct the white balance again in post, but I feel it is good practice to start with as accurate colour as possible.

Measuring White Balance

With the white balance measured, I’ll then work on getting the right exposure. Depending on the lighting I set my camera’s aperture at either f8 or f11, as this usually results in the sharpest image whilst also reducing lens flare. As the lighting in this scenario was quite low and I wouldn’t be taking photographs towards the sun, I used an aperture of f8 in order to allow more light into the lens.

Having determined the correct aperture, I place my camera where my subject is going to appear, in this case where the grey balls used to be, and move the camera around, using the exposure meter to find the minimum and maximum shutter speeds that I might need.

In this example, I needed a shutter speed of about 1/80th of a second to get correct exposure through the window, and a shutter speed of about 3 seconds to expose for the shadows in the corner of the room.

My next task is to set the shutter speed halfway between these two values and take a picture of a colour chart. Again, I use the Gretag Macbeth card for this purpose (More on the colour correction later).

Measuring Colour

With my white balance, maximum shutter speed, minimum shutter speed and colour chart all measured, I work out how many pictures I need to get a true HDR Image. If you’ve not come across HDRI before, here’s a quick overview.

Whenever a photographer takes a photograph, he/she must decide how much light to allow through the lens. Too much and everything is overexposed; too little and everything is underexposed; either way, there’s a great loss of detail. The photographer must make a compromise and try to find a middle point, accepting that they are going to lose some detail in the highlights and some in the shadows. With HDR imaging, on the other hand, this isn’t the case. Instead, you might take three photographs: one to expose for the highlights, one for the mid tones, and one for the shadows. You would then use software such as Photoshop (File > Automate > Merge to HDR) to combine all three images into a single HDR image.

The difference between a HDR image and a normal image is that the image contains information not just about what colour an individual pixel is but, more importantly, how bright it is. The benefit of this is that a HDR image can be used in 3D as a source of light (we’ll get to this later).
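Photoshop isn’t the only route, incidentally; OpenCV’s HDR module can do the same merge in a few lines. A sketch, with placeholder file names and shutter speeds standing in for a real bracket:

```python
import cv2
import numpy as np

files = ["exp_2s.jpg", "exp_1s.jpg", "exp_0.5s.jpg"]   # bracketed exposures
times = np.array([2.0, 1.0, 0.5], dtype=np.float32)    # shutter speeds in seconds

images = [cv2.imread(f) for f in files]
hdr = cv2.createMergeDebevec().process(images, times)  # float32 radiance map
cv2.imwrite("merged.hdr", hdr)
```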

So, getting back to making my own HDRI, I worked out earlier that I need a maximum shutter speed of 2 seconds, and a minimum of 1/80th of a second. With these two figures in hand I work out how many exposures I will need to bridge the gap. To do this, I always try to start at the maximum and keep dividing by 2 until I reach the minimum.

So, if my first exposure is 2 seconds, my next is 1 second, then 1/2 a second, then 1/4, then 1/8th of a second. My Nikon D60 won’t do 1/16th of a second so I use 1/15th, then 1/30th and 1/60th of a second. The next increment would be 1/120th of a second, so I decide to stay at 1/60th as it is closer to the minimum shutter speed that I measured earlier. From this calculation I worked out that I will need 9 pictures to get a good exposure across the range.
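The ladder itself is trivial to script: start at the longest exposure and keep halving until you pass the shortest one you measured. A small sketch (a real camera then snaps each value to its nearest available stop, e.g. 1/16 becomes 1/15):

```python
def bracket(longest, shortest):
    """Halving ladder of shutter speeds from longest down to shortest, in seconds."""
    speeds, s = [], float(longest)
    while s >= shortest:
        speeds.append(s)
        s /= 2.0
    return speeds

print(bracket(2.0, 1.0 / 80.0))
# [2.0, 1.0, 0.5, 0.25, 0.125, 0.0625, 0.03125, 0.015625]
```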

However, I’m not just trying to capture a single HDR image; I’m trying to photograph the entire room. You might be familiar with shooting panoramic images, where the camera is sometimes placed on a tripod and a series of images is taken as the tripod head is rotated about the horizontal axis (it’s actually rotated around the vertical axis, but it’s sometimes easier to think of it as being horizontal). To photograph the entire room I need to do something similar, but I need to rotate my camera around both the vertical and horizontal axes. I will then stitch all of the photos together into what is called an equirectangular image. This is where you photograph something that is spherical and project it onto a flat surface, very similar to taking a picture of the earth and making it into a rectangular map. To photograph a spherical panorama I use a Nodal Ninja MkIII.

Nodal Ninja MkIII

When shooting a full spherical panorama like this you need to be quick so that you can capture the entire scene before the natural light changes (in this case the light coming through the window). Obviously, if you’re shooting in artificial light, this doesn’t matter so much.

As a side note, when I first did this I used an 18mm lens on my camera that would require taking over a thousand photographs to get a full spherical HDRI panorama. I now use a 10mm lens that requires I take about 250 images to capture all of the light that’s bouncing around the room.

To ensure there is no movement in the camera, so that the different exposures can be perfectly aligned, I connect my camera to a laptop and operate the camera remotely.

Shooting a HDRI Panorama

With this setup I take a photograph with a 2-second shutter speed, reduce the shutter speed by half, and take another photograph, repeating for 9 intervals until I reach 1/60th of a second. I then rotate my Nodal Ninja MkIII by 25.7 degrees and capture another 9 exposures.

I use 25.7 degrees because this gives the resulting images a 25% overlap using my 10mm lens and having a 25% overlap makes it easier to stitch the photographs together later on. The great thing about the Nodal Ninja is that it comes with plates that let you adjust how many degrees you want it to turn by each time. I set mine to 25.7 degrees and when I turn the Ninja, it clicks into place, ensuring that all my photographs have a perfect 25% overlap.

Once I’ve rotated a full 360 degrees horizontally, I adjust the vertical pivot and repeat the process. With a 10mm lens this needs to be repeated three times to capture the entire scene.

250 images later I’m ready to create the panorama.

9 Exposures with 25% Overlap

Before I pack everything away I take another picture of the colour chart, which I can use to correct the images later on if the light through the window has changed drastically whilst I’ve been taking the pictures. For good measure I also take some more pictures of the filing cabinet, and record some video of the cabinet too, in case I decide to create some animation at some point in the future.

With all of my photographs ready, it’s time to correct them all; for this I use Adobe Lightroom. I import the picture of the Gretag Macbeth colour chart that I took originally and use the X-Rite ColorChecker plugin to measure the colours in the image. This gives me a colour profile that I then apply to the rest of the images, ensuring that all of the images taken have not only matching, but correct, colour. For this to work, all of the images need to be in RAW format.

I also use the lens correction tool in Lightroom to remove the lens distortion and vignetting that I get from the 10mm lens. Finally, I resize all of my images and convert them to .tif file format. It would make my workflow quicker if the D60 could take RAW photographs (.nef on the Nikon) at a lower resolution, as it would save time in post and save time writing to the memory card when the picture was originally taken. Alas, it does not, so I have to resize them here. It’s a good idea to make all of the images smaller and convert them to .tif because it makes it far easier for the stitching software to cope with the huge amount of data contained in the 250 photographs.
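As an aside, the resize-and-convert step can also be scripted. Here’s a sketch using the rawpy (libraw) and Pillow packages as a rough stand-in for the Lightroom pass; note that it applies the camera’s white balance rather than the X-Rite profile, so it’s a simplification of my actual workflow:

```python
import glob

import rawpy
from PIL import Image

for path in glob.glob("shoot/*.nef"):
    with rawpy.imread(path) as raw:
        rgb = raw.postprocess(use_camera_wb=True)   # demosaic to an 8-bit RGB array
    im = Image.fromarray(rgb)
    im.thumbnail((2000, 2000))                      # shrink in place, keeping aspect ratio
    im.save(path.replace(".nef", ".tif"))           # .tif for the stitching software
```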

I use PTGui to stitch the pictures together because it’s clever enough to know that I want to create a HDR panorama and realises that I have taken the same image at different exposures.

In this instance I had to do quite a lot of manual adjustment (several hours) to help the stitching software, because I’d moved the chair, door, curtains and wires on the floor whilst taking the photographs, and the irregular position of overlapping elements in the image was confusing the software (clouds have a nasty habit of doing this too). Below, however, is the final result.

Equirectangular HDRI

I save the resulting HDR panorama in .hdr file format. You’ll notice that there is a large empty space at the bottom of the panorama. This is because the camera can’t magically shoot through the tripod. In this instance, though, I knew the surface below the camera was very similar to the surrounding area, so I just take it into Photoshop and use the clone stamp tool to fill the empty space.

Corrected Panorama

I didn’t need to be too careful about getting a good blend here as I’ll be modelling a surface for the top of the filing cabinet later on.

With an equirectangular HDRI panorama ready, it’s time to use it in 3D to create the sky dome.