Render Comparisons

With some progress made it’s time to compare the renders and try to identify what qualities of reality have been achieved and what is still missing.

The first image shown above (top left) is the original photograph that acts as the point of reference for trying to emulate reality. Rather than concentrating on making an identical copy of the photograph, the intention was instead to emulate the elements of the photograph that make it look real.

The second image (top middle) is the background image that I am using in my 3D renders. A decision was made to use a background in this way in order to give the experiment a narrower focus. As my understanding of what is happening improves, I will later attempt to create an entire scene from scratch.

The initial test render (top right) is the first result of the experiment, seen here without any adjustments.

Interestingly, when shown this image, I was asked why I had taken a photograph of three balls. Haha, already I had fooled somebody! Admittedly, this person had been shown the image on the small screen of my mobile phone, but it did prove that something in this image makes it begin to appear ‘real’. It doesn’t take a genius to work out that it is the reflections in the balls. This coincides with my initial theories relating to achieving photorealism by studying light.

The biggest failing in this image is the lack of shadow below the balls, which makes them look like they are floating. Again, a product of light, or the lack thereof.

Some time was spent producing an improved render (bottom left). The shadows below the balls definitely help to sell the image. Indeed, whilst sitting in my office at work, my colleagues found it difficult to believe that this is a computer generated image. Yet it still lacks something; there is something about it that doesn’t seem quite right. Firstly, the green filing cabinet seems very desaturated compared to the rest of the environment, and the balls seem to be in a darker environment than they should be. In addition, after giving some thought to Fresnel reflectivity, I realised that the reflections are still wrong.

In the further improved render (bottom middle) the shadows seem more natural now that the artificial software lights created in the 3D environment have been turned off, and whilst there is some odd striping at the front of the cabinet, it has at least got its colour back. What’s lacking in this image is that whilst the Fresnel reflectivity around the edge of the spheres is better, the centres of the spheres need to be more reflective. The reflections are also far too blurry for such a polished surface and, in comparison to the original photograph, they still seem a bit too dark. Time for one more render I think…
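
As a quick aside on Fresnel reflectivity: Schlick's approximation gives a feel for why the silhouette edge of a polished ball should look mirror-like while the centre reflects far less. The numbers below are purely illustrative; the 4% base reflectivity is a typical guess for a dielectric surface, not a measured value for these balls.

```python
import math

def schlick_reflectance(cos_theta, f0=0.04):
    """Schlick's approximation of Fresnel reflectance.

    cos_theta -- cosine of the angle between the view ray and the surface normal
    f0        -- reflectance at normal incidence (0.04 is a common guess for a
                 dielectric; the real value for these balls is unknown)
    """
    return f0 + (1.0 - f0) * (1.0 - cos_theta) ** 5

# Centre of the sphere (viewed head-on) versus the silhouette edge (grazing angle)
print(schlick_reflectance(1.0))   # ~0.04 -> only about 4% of light reflected
print(schlick_reflectance(0.1))   # ~0.61 -> the edge is far more mirror-like
```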

Using an Equirectangular HDRI Panorama to Light a 3D Scene

A few days ago I shared my workflow for creating an equirectangular HDRI panorama, and in that post I said that I would share my workflow for using the HDRI as a light source in the 3D environment at a later time. So…

With the panoramic HDR image created, the first thing that needs to be done is to prepare the image for use in 3D. Firstly, the image needs to be flipped horizontally, otherwise it will appear inside-out in the 3D environment.

I then save the HDRI with the name background.hdr. This image will be used as a high resolution background to my scene.

I then resize the image to about 2048 pixels wide and save it under the name reflection.hdr. This image will be seen in the reflections in the scene. As the reflections won’t need to be as sharp as the background, this smaller file size helps to improve render times.

I then resize the image again to about 600 pixels wide and save it under the name diffuse.hdr. This is the image that will be used to light the scene. This much smaller file size will help to significantly improve render times. To improve render quality, I also add a very strong Gaussian blur to the image. This helps to reduce any blotchiness in the shadows of the resulting 3D render.

Reflection

reflection.hdr

Diffuse

diffuse.hdr
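
If you'd rather script these preparation steps than do them by hand, here is a minimal sketch using Python and OpenCV. The input filename and the blur kernel size are assumptions to experiment with; the output names and sizes are the ones described above.

```python
import cv2

# Load the stitched panorama; IMREAD_UNCHANGED keeps the floating point HDR data
pano = cv2.imread("panorama.hdr", cv2.IMREAD_UNCHANGED)

# Flip horizontally so the panorama isn't inside-out when mapped onto a sky sphere
pano = cv2.flip(pano, 1)
cv2.imwrite("background.hdr", pano)

h, w = pano.shape[:2]

# Smaller copy used only in reflections - sharpness matters less, render times improve
reflection = cv2.resize(pano, (2048, int(2048 * h / w)), interpolation=cv2.INTER_AREA)
cv2.imwrite("reflection.hdr", reflection)

# Much smaller, heavily blurred copy used only as the light source;
# the blur smooths out blotchy shadows in the final render
diffuse = cv2.resize(pano, (600, int(600 * h / w)), interpolation=cv2.INTER_AREA)
diffuse = cv2.GaussianBlur(diffuse, (51, 51), 0)
cv2.imwrite("diffuse.hdr", diffuse)
```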

With the images prepared, I then use a free student version of Cinema 4D to create my 3D scene. Firstly, I create two materials named diffuse and reflection and assign my HDR images to their respective colour channels.

I then create two sky objects. A sky object is a sphere of infinite size that encompasses the 3D scene. I rename one sky object to reflection and the other to diffuse. I then add a compositing tag to each of the two sky objects.

In the diffuse sky object’s compositing tag, I disable everything but the GI (global illumination) option. This forces the software to use the diffuse sky for nothing other than global illumination (light).

In the reflection sky object’s compositing tag I enable only the ‘Seen by Reflection’ parameter.
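
For anyone who likes to script repetitive setup, the same rig can be built with Cinema 4D's Python API. The snippet below is only a rough sketch; the compositing tag parameter IDs are my best guess and should be checked against the SDK documentation for your version.

```python
import c4d

doc = c4d.documents.GetActiveDocument()

# Sky used only as a light source (carries the blurred diffuse.hdr material)
diffuse_sky = c4d.BaseObject(c4d.Osky)
diffuse_sky.SetName("diffuse")
diffuse_tag = diffuse_sky.MakeTag(c4d.Tcompositing)
# Parameter IDs below are assumptions - verify them in the SDK's compositing tag description
diffuse_tag[c4d.COMPOSITINGTAG_SEENBYCAMERA] = False
diffuse_tag[c4d.COMPOSITINGTAG_SEENBYREFLECTION] = False
diffuse_tag[c4d.COMPOSITINGTAG_SEENBYGI] = True   # GI is all this sky contributes
doc.InsertObject(diffuse_sky)

# Sky seen only in reflections (carries the sharper reflection.hdr material)
reflection_sky = c4d.BaseObject(c4d.Osky)
reflection_sky.SetName("reflection")
reflection_tag = reflection_sky.MakeTag(c4d.Tcompositing)
reflection_tag[c4d.COMPOSITINGTAG_SEENBYCAMERA] = False
reflection_tag[c4d.COMPOSITINGTAG_SEENBYGI] = False
reflection_tag[c4d.COMPOSITINGTAG_SEENBYREFLECTION] = True
doc.InsertObject(reflection_sky)

c4d.EventAdd()
```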

I then create a material named background and assign my background image to the colour channel.

Background Image

I apply this material to a background object.

I then change the resolution of my output render to match the dimensions of the background image. This is important to make sure everything lines up in the next step.

Next, I create a camera object and adjust the focal length and image sensor size to match the camera that was used to capture the background image. A quick Google search tells me that the image sensor on my Nikon D60 is 23.6mm wide. To find out what focal length I used, I right click on my photograph in Windows Explorer and go to the Details tab. Here I can see that I used a 20mm lens when photographing the background.

Focal length

It is important to match up the image sensor size and focal length in order to make sure that the perspective of any 3D objects you place into the scene matches that of the background.
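
As a sanity check, the horizontal angle of view can be worked out directly from the sensor width and focal length. A quick back-of-envelope calculation in Python, assuming the 23.6mm figure is the sensor's width:

```python
import math

sensor_width_mm = 23.6   # Nikon D60 sensor width
focal_length_mm = 20.0   # focal length read from the photo's EXIF data

# Horizontal angle of view for a rectilinear lens
fov = 2 * math.degrees(math.atan(sensor_width_mm / (2 * focal_length_mm)))
print(f"Horizontal angle of view: {fov:.1f} degrees")  # roughly 61 degrees
```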

In this particular scene I measured the filing cabinet and created a cube to match its dimensions. This cube was needed to provide a surface for catching shadows below any 3D objects I placed into the scene, in this case the three spheres. The three spheres were created next and finally, I looked through my virtual camera and took some time to line everything up.

Overview of 3D Scene

Camera View

That concludes the creation of the 3D scene. To get the lighting to work correctly, all that’s left to do is to go into the render settings, turn on the global illumination effect, turn off the default light, hit the render button, and see what happens.

First Render

I found that in this particular image I had to add an extra light source because the HDRI alone wasn’t bright enough to create sufficient shadows on the filing cabinet’s surface.

Whilst writing this article I also found that adding a compositing tag to the filing cabinet with the ‘Compositing Background’ parameter enabled forced the cabinet to be self-illuminating. This corrected the problem I was having in previous renders where the saturation of the filing cabinet was being lost.

I also found that this allowed me more freedom to adjust the exposure of the HDR image, which in turn meant that I could remove the artificial sunlight I’d created and rely solely on the HDRI sky object. The result is a more natural shadow.

Whilst I was at it, following what I’d learnt from my previous observations relating to Fresnel reflection falloff, I also adjusted the reflection on the balls.

Even Better Render

With an improved render, I think it’s time to make a comparison and talk about the qualities and imperfections that are steering the image closer to, and further from, photorealism.

How to make a HDRI Sky-Dome

The first experiment

In my previous post I said that I was going to begin the experiments by taking a photograph and seeing if I could replicate that photograph in 3D. Today I spent some time preparing all of the files that I’d need and thought I’d share my workflow with others who are trying to achieve a similar result. If you are already familiar with equirectangular high dynamic range panoramas, I’d skip this post; otherwise, read on…

First of all, here is the photograph that I’m going to try and recreate:

Photo to Replicate

In this first experiment though, I’m not going to try and recreate the whole image in 3D; instead, I’m going to remove the grey balls, take another photograph, and use that photograph as a background. I will then try to recreate the grey balls in CG to see if I can ‘fake’ the original image.

The first thing I do when starting a project like this is to put my camera in manual mode and take a white balance measurement using a Gretag Macbeth White Balance Card. This isn’t strictly necessary as I’ll correct the white balance again in post, but I feel it is good practice to start with colour that is as accurate as possible.

Measuring White Balance

With the white balance measured, I’ll then work on getting the right exposure. Depending on the lighting, I set my camera’s aperture to either f8 or f11, as this usually results in the sharpest image whilst also reducing lens flare. As the lighting in this scenario is quite low and I won’t be taking photographs towards the sun, I used an aperture of f8 in order to allow more light into the lens.

Having determined the correct aperture, I place my camera where my subject is going to appear, in this case where the grey balls used to be, move the camera around, and use the exposure meter to find the minimum and maximum shutter speeds that I might need.

In this example, I needed a shutter speed of about 1/80th of a second to get a correct exposure through the window, and a shutter speed of about 3 seconds to expose for the shadows in the corner of the room.

My next task is to set the shutter speed halfway between these two values and take a picture of a colour chart. Again, I use the Gretag Macbeth card for this purpose (more on the colour correction later).

Measuring Colour

With my white balance, maximum shutter speed, minimum shutter speed and colour chart all measured, I work out how many pictures I need to get a true HDR Image. If you’ve not come across HDRI before, here’s a quick overview.

Whenever a photographer takes a photograph, he/she must decide how much light to allow through the lens. Too much and everything is overexposed, too little and everything is underexposed; either way, there’s a great loss of detail. The photographer must make a compromise and try to find a middle point, accepting that they are going to lose some detail in the highlights and some in the shadows. With HDR imaging, on the other hand, this isn’t the case. Instead, you might take three photographs: one to expose for the highlights, one for the mid tones, and one for the shadows. You would then use software such as Photoshop (File > Automate > Merge to HDR) to combine all three images into a single HDR image.

The difference between a HDR image and a normal image is that the image contains information not just about what colour an individual pixel is but, more importantly, how bright it is. The benefit of this is that a HDR image can be used in 3D as a source of light (we’ll get to this later).
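
As an aside, if Photoshop isn’t available, the same merge can be done with a few lines of Python using OpenCV’s HDR module. The filenames and exposure times below are placeholders for a three-shot bracket, not the ones used in this project.

```python
import cv2
import numpy as np

# Three bracketed exposures of the same view (filenames are placeholders)
files = ["shadows.jpg", "midtones.jpg", "highlights.jpg"]
times = np.array([2.0, 0.25, 1 / 80], dtype=np.float32)  # exposure times in seconds

images = [cv2.imread(f) for f in files]

# Recover the camera response curve, then merge into a single radiance map
calibrate = cv2.createCalibrateDebevec()
response = calibrate.process(images, times)
merge = cv2.createMergeDebevec()
hdr = merge.process(images, times, response)

cv2.imwrite("merged.hdr", hdr)
```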

So, getting back to making my own HDRI: I worked out earlier that I need a maximum shutter speed of 2 seconds and a minimum of 1/80th of a second. With these two figures in hand, I work out how many exposures I will need to bridge the gap. To do this, I always start at the maximum and keep dividing by 2 until I reach the minimum.

So, if my first exposure is 2 seconds, my next is 1 second, then 1/2 a second, then 1/4, then 1/8th of a second. My Nikon D60 won’t do 1/16th of a second so I use 1/15th, then 1/30th and 1/60th of a second. The next increment would be 1/120th of a second, so I decide to stay at 1/60th as it is closer to the minimum shutter speed that I measured earlier. From this calculation I worked out that I will need 9 pictures to get a good exposure across the range.
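
If you like, the halving can be written down as a tiny script. In practice each value gets rounded to the nearest shutter speed the camera actually offers (1/15th rather than 1/16th, and so on), so the printed numbers are the idealised ladder rather than the exact settings used.

```python
# Keep halving the shutter speed from the slowest exposure until the
# fastest one metered earlier has been covered
slowest = 2.0        # seconds, exposes for the shadows
fastest = 1.0 / 80   # seconds, exposes for the window

speed = slowest
bracket = []
while True:
    bracket.append(speed)
    if speed <= fastest:
        break
    speed /= 2

print(len(bracket), "exposures:", [round(s, 4) for s in bracket])
# -> 9 exposures, from 2 seconds down past 1/80th of a second
```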

However, I’m not just trying to capture a single HDR image; instead, I’m trying to photograph the entire room. You might be familiar with shooting panoramic images, where the camera is sometimes placed on a tripod and a series of images is taken as the tripod head is rotated about the horizontal axis (it’s actually rotated around the vertical axis, but it’s sometimes easier to think of it as being horizontal). To photograph the entire room I need to do something similar, but I need to rotate my camera around both the vertical and horizontal axes. I will then stitch all of the photos together into what is called an equirectangular image. This is where you photograph something that is spherical and project it onto a flat surface, very similar to taking a picture of the Earth and making it into a rectangular map. To photograph a spherical panorama I use a Nodal Ninja MkIII.
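
If equirectangular projection is a new idea, it may help to see how a direction in space maps onto the flat image: the angle around the vertical axis becomes the horizontal position and the angle above the horizon becomes the vertical position, exactly like longitude and latitude on a world map. The small function below is purely illustrative and not part of any stitching software; the axis conventions are just one possible choice.

```python
import math

def direction_to_equirect_uv(x, y, z):
    """Map a unit direction vector to (u, v) coordinates in an
    equirectangular image, with u and v both in the range 0..1."""
    longitude = math.atan2(x, -z)                  # angle around the vertical axis
    latitude = math.asin(max(-1.0, min(1.0, y)))   # angle above/below the horizon
    u = longitude / (2 * math.pi) + 0.5
    v = 0.5 - latitude / math.pi
    return u, v

# Straight ahead lands in the middle of the image, straight up on the top edge
print(direction_to_equirect_uv(0, 0, -1))  # (0.5, 0.5)
print(direction_to_equirect_uv(0, 1, 0))   # (0.5, 0.0)
```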

Nodal Ninja MkIII

When shooting a full spherical panorama like this you need to be quick so that you can capture the entire scene before the natural light changes (in this case the light coming through the window). Obviously, if you’re shooting in artificial light, this doesn’t matter so much.

As a side note, when I first did this I used an 18mm lens on my camera, which required taking over a thousand photographs to get a full spherical HDRI panorama. I now use a 10mm lens, which means I only need about 250 images to capture all of the light that’s bouncing around the room.

To ensure there is no movement in the camera, so that the different exposures can be perfectly aligned, I connect my camera to a laptop and operate it remotely.

Shooting a HDRI Panorama

With this set up, I take a photograph with a 2 second shutter speed, halve the shutter speed, and take another photograph, repeating across the 9 intervals until I reach 1/60th of a second. I then rotate my Nodal Ninja MkIII by 25.7 degrees and capture another 9 exposures.

I use 25.7 degrees because this gives the resulting images a 25% overlap with my 10mm lens, and having a 25% overlap makes it easier to stitch the photographs together later on. The great thing about the Nodal Ninja is that it comes with plates that let you adjust how many degrees it turns by each time. I set mine to 25.7 degrees and, when I turn the Ninja, it clicks into place, ensuring that all my photographs have a perfect 25% overlap.

Once I’ve rotated a full 360 degrees horizontally, I adjust the vertical pivot and repeat the process. With a 10mm lens this needs to be repeated three times to capture the entire scene.

250 images later I’m ready to create the panorama.

9 Exposures with 25% Overlap

Before I pack everything away I take another picture of the colour chart which I can use to correct the images later on if the light through the window has changed drastically whilst I’ve been taking the pictures. For good measure I also take some more pictures of the filing cabinet and record some video of the cabinet too in case I decide to create some animation at some point in the future.

With all of my photographs ready, it’s time to correct them all; for this I use Adobe Lightroom. I import the picture of the Gretag Macbeth Colour Chart that I took originally and use the X-Rite ColorChecker plugin to measure the colours in the image. This gives me a colour profile that I then apply to the rest of the images, ensuring that all of the images have not only matching, but correct, colour. For this to work, all of the images need to be in RAW format.

I also use the lens correction tool in Lightroom to remove the lens distortion and vignetting that I get from the 10mm lens. Finally, I resize all of my images and convert them to the .tif file format. It would make my workflow quicker if the D60 could take RAW photographs (.nef on a Nikon) at a lower resolution, as it would save time in post and save time writing to the memory card when the picture was originally taken. Alas, it does not, so I have to resize them here. It’s a good idea to make all of the images smaller and convert them to .tif because it makes it far easier for the stitching software to cope with the huge amount of data contained in the 250 photographs.

I use PTGui to stitch the pictures together because it’s clever enough to know that I want to create a HDR panorama and realises that I have taken the same image at different exposures.

In this instance I had to do quite a lot of manual adjustment (several hours) to help the stitching software, because I’d moved the chair, door, curtains and wires on the floor whilst taking the photographs, and the irregular position of overlapping elements in the images was confusing the software (clouds have a nasty habit of doing this too). Below, however, is the final result.

Equirectangular HDRI

I save the resulting HDR panorama in the .hdr file format. You’ll notice that there is a large empty space at the bottom of the panorama. This is because the camera can’t magically shoot through the tripod. In this instance, though, I knew the surface below the camera was very similar to the surrounding area, so I just take it into Photoshop and use the clone stamp tool to fill the empty space.

Corrected Panorama

I didn’t need to be too careful about getting a good blend here as I’ll be modelling a surface for the top of the filing cabinet later on.

With an equirectangular HDRI panorama ready, it’s time to use it in 3D to create the sky dome.