About Glyn Davidson

http://mountainwheelchair.com www.glyndavidson.co.uk

Improved the render over breakfast this morning

Bags are all packed and I’m ready to leave, but I couldn’t resist playing with the render over breakfast this morning. The result is vastly superior, but there are still problems that need fixing: the shadows are too hard, the reflections are wrong, and I’m losing a lot of saturation.

At least, thanks to the shadows, it doesn’t look like the balls are floating any more. Perhaps then it’s not light, but shadow that holds the key?

Quick Test Render

I’m away for the rest of the week taking students to Portsmouth so thought I’d post a quick update.

After about half an hour in the 3D environment, I’ve made this:

There are many flaws in the image but it’s getting late and I need to give the family some attention.

Hopefully, I’ll have time at the weekend to improve the render and add a post about how the HDRI panorama was used in the 3D environment to light the scene.

How to make a HDRI Sky-Dome

The first experiment

In my previous post I said that I was going to begin the experiments by taking a photograph and seeing if I could replicate it in 3D. Today I spent some time preparing all of the files that I’d need and thought I’d share my workflow with others who are trying to achieve a similar result. If you’re already familiar with equirectangular high dynamic range panoramas, feel free to skip this post; otherwise, read on…

First of all, here is the photograph that I’m going to try and recreate:

In this first experiment, though, I’m not going to try and recreate the whole image in 3D. Instead, I’m going to remove the grey balls, take another photograph, and use that photograph as a background. I will then try to recreate the grey balls in CG to see if I can ‘fake’ the original image.

The first thing I do when starting a project like this is to put my camera in manual mode and take a white balance measurement using a Gretag Macbeth White Balance Card. This isn’t strictly necessary, as I’ll correct the white balance again in post, but I feel it is good practice to start with as accurate a colour as possible.

With the white balance measured, I’ll then work on getting the right exposure. Depending on the lighting, I set my camera’s aperture to either f/8 or f/11, as this usually results in the sharpest image whilst also reducing lens flare. As the lighting in this scenario is quite low and I won’t be shooting towards the sun, I used an aperture of f/8 to allow more light into the lens.

Having determined the correct aperture, I place my camera where my subject is going to appear, in this case where the grey balls used to be, and move it around, using the exposure meter to find the minimum and maximum shutter speeds that I might need.

In the example, I needed a shutter speed of about 1/80th of a second to get correct exposure through the window, and a shutter speed of about 3 seconds to expose for the shadows in the corner of the room.

My next task is to set the shutter speed halfway between these two values and take a picture of a colour chart. Again, I use the Gretag Macbeth card for this purpose (more on the colour correction later).
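
A small aside on that ‘halfway’: in exposure terms, halfway between two shutter speeds usually means halfway in stops, which is the geometric mean of the two values rather than their linear average. A quick sketch in Python, assuming that interpretation and using the figures above:

```python
import math

fastest = 1 / 80   # seconds: exposes correctly for the bright window
slowest = 3        # seconds: exposes correctly for the shadowed corner

# Halfway in stops is the geometric mean of the two shutter speeds,
# not their linear average.
midpoint = math.sqrt(fastest * slowest)
print(f"mid shutter speed = {midpoint:.2f}s (about 1/5th of a second)")
```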

With my white balance, maximum shutter speed, minimum shutter speed and colour chart all measured, I work out how many pictures I need to capture a true HDR image. If you’ve not come across HDRI before, here’s a quick overview.

Whenever a photographer takes a photograph, he or she must decide how much light to allow through the lens. Too much and everything is overexposed; too little and everything is underexposed; either way, there’s a great loss of detail. The photographer must compromise and find a middle point, accepting that some detail will be lost in the highlights and some in the shadows. With HDR imaging, on the other hand, this isn’t the case. Instead, you might take three photographs: one exposed for the highlights, one for the mid tones, and one for the shadows. You would then use software such as Photoshop (File > Automate > Merge to HDR) to combine all three into a single HDR image.

The difference between an HDR image and a normal image is that it contains information not just about what colour an individual pixel is but, more importantly, how bright it is. The benefit of this is that an HDR image can be used in 3D as a source of light (we’ll get to this later).
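
Incidentally, the same merge can be done programmatically. Here’s a minimal sketch using OpenCV’s Debevec calibration and merge, purely as an illustration (the filenames and exposure times below are placeholders, not files from this project):

```python
import cv2
import numpy as np

# Placeholder bracketed exposures of the same view, plus their shutter
# speeds in seconds.
files = ["exp_2s.tif", "exp_1s.tif", "exp_0.5s.tif"]
times = np.array([2.0, 1.0, 0.5], dtype=np.float32)
images = [cv2.imread(f) for f in files]

# Recover the camera's response curve, then merge the exposures into a
# floating-point radiance map: each pixel now records how bright the
# light was, not just what colour it appeared.
response = cv2.createCalibrateDebevec().process(images, times)
hdr = cv2.createMergeDebevec().process(images, times, response)

cv2.imwrite("merged.hdr", hdr)  # Radiance .hdr keeps the full dynamic range
```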

So, getting back to making my own HDRI: I worked out earlier that I need a maximum shutter speed of 2 seconds (rounding the 3 seconds I metered for the shadows down to a full stop) and a minimum of 1/80th of a second. With these two figures in hand, I work out how many exposures I will need to bridge the gap. To do this, I always start at the maximum and keep dividing by 2 until I reach the minimum.

So, if my first exposure is 2 seconds, my next is 1 second, then 1/2 a second, then 1/4, then 1/8th of a second. My Nikon D60 won’t do 1/16th of a second, so I use 1/15th, then 1/30th and 1/60th of a second. The next increment would be 1/120th of a second, so I decide to stop at 1/60th as it is closer to the minimum shutter speed that I measured earlier. From this calculation I worked out that I will need 9 pictures to get a good exposure across the range.
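
Here’s a minimal sketch of that calculation in Python, including the snap to the D60’s nearest real shutter speeds (1/16th becomes 1/15th, and so on). Note that the strict halving series from 2 seconds to 1/60th gives eight speeds, so the count of nine above presumably includes an extra safety frame at one end:

```python
# The D60's real shutter speeds near the ideal halved values: the camera
# offers 1/15th rather than 1/16th, then 1/30th, 1/60th, and so on.
AVAILABLE = [2, 1, 1/2, 1/4, 1/8, 1/15, 1/30, 1/60]

def bracket(longest=2.0, shortest=1/80):
    """Halve the shutter speed from the longest exposure, snapping each
    ideal value to the nearest speed the camera offers, stopping before
    the ladder overshoots the metered minimum."""
    speeds, current = [], longest
    while current >= shortest:  # 1/120th would pass 1/80th, so end at 1/60th
        speeds.append(min(AVAILABLE, key=lambda s: abs(s - current)))
        current /= 2
    return speeds

print(bracket())  # 2, 1, 1/2, 1/4, 1/8, 1/15, 1/30, 1/60
```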

However, I’m not just trying to capture a single HDR image; I’m trying to photograph the entire room. You might be familiar with shooting panoramic images, where the camera is sometimes placed on a tripod and a series of images is taken as the tripod head is rotated about the horizontal axis (it’s actually rotated around the vertical axis, but it’s sometimes easier to think of it as being horizontal). To photograph the entire room I need to do something similar, but I need to rotate my camera around both the vertical and horizontal axes. I will then stitch all of the photos together into what is called an equirectangular image. This is where you photograph something spherical and project it onto a flat surface, very similar to taking a picture of the Earth and making it into a rectangular map. To photograph a spherical panorama I use a Nodal Ninja MkIII.
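
To make the map analogy concrete, here’s roughly the projection a stitcher performs, sketched in Python: a 3D viewing direction becomes a longitude and latitude, which become pixel coordinates in the flat equirectangular image. The function name and image size are illustrative:

```python
import math

def direction_to_equirect(x, y, z, width, height):
    """Map a 3D viewing direction (a unit vector, y pointing up) to pixel
    coordinates in an equirectangular panorama. Longitude spans the full
    width and latitude the full height, just like a world map."""
    lon = math.atan2(x, z)   # -pi..pi, rotation about the vertical axis
    lat = math.asin(y)       # -pi/2..pi/2, up and down
    u = (lon / (2 * math.pi) + 0.5) * width
    v = (0.5 - lat / math.pi) * height
    return u, v

# Looking straight ahead lands in the centre of the panorama:
print(direction_to_equirect(0, 0, 1, 2048, 1024))  # (1024.0, 512.0)
```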

When shooting a full spherical panorama like this you need to be quick so that you can capture the entire scene before the natural light changes (in this case the light coming through the window). Obviously, if you’re shooting in artificial light, this doesn’t matter so much.

As a side note, when I first did this I used an 18mm lens on my camera, which would have required taking over a thousand photographs to get a full spherical HDRI panorama. I now use a 10mm lens, which requires about 250 images to capture all of the light that’s bouncing around the room.

To ensure there is no movement in the camera, so that the different exposures can be perfectly aligned, I connect my camera to a laptop and operate it remotely.

With this set-up I take a photograph with a 2-second shutter speed, halve the shutter speed, and take another photograph, continuing through the 9 exposures until I reach 1/60th of a second. I then rotate my Nodal Ninja MkIII by 25.7 degrees and capture another 9 exposures.

I use 25.7 degrees because this gives the resulting images a 25% overlap with my 10mm lens, and a 25% overlap makes it easier to stitch the photographs together later on. The great thing about the Nodal Ninja is that it comes with plates that let you adjust how many degrees it turns by each time. I set mine to 25.7 degrees and, when I turn the Ninja, it clicks into place, ensuring that all my photographs have a perfect 25% overlap.
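
The arithmetic behind that click-stop is simple enough to sketch; the field-of-view figure below is inferred from the post’s own numbers rather than measured:

```python
def yaw_step(fov_deg, overlap=0.25):
    """Rotation between shots so each frame overlaps its neighbour by
    the given fraction of the lens's horizontal field of view."""
    return fov_deg * (1 - overlap)

# Working backwards from the figures above: a 25.7-degree click-stop
# with a 25% overlap implies roughly 34 degrees of view per frame.
print(25.7 / (1 - 0.25))   # ~34.3 degrees per frame (inferred, not measured)
print(round(360 / 25.7))   # 14 click-stops to cover a full horizontal row
```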

Once I’ve rotated a full 360 degrees horizontally, I adjust the vertical pivot and repeat the process. With a 10mm lens this needs to be repeated three times to capture the entire scene.

250 images later I’m ready to create the panorama.

Before I pack everything away I take another picture of the colour chart which I can use to correct the images later on if the light through the window has changed drastically whilst I’ve been taking the pictures. For good measure I also take some more pictures of the filing cabinet and record some video of the cabinet too in case I decide to create some animation at some point in the future.

With all of my photographs ready, it’s time to correct them all; for this I use Adobe Lightroom. I import the picture of the Gretag Macbeth colour chart that I took at the start and use the X-Rite ColorChecker plugin to measure the colours in the image. This gives me a colour profile that I then apply to the rest of the images, ensuring that they all have not only matching but correct colour. For this to work, all of the images need to be in RAW format.

I also use the lens correction tool in Lightroom to remove the distortion and vignetting that I get from the 10mm lens. Finally, I resize all of my images and convert them to .tif format. It would make my workflow quicker if the D60 could take RAW photographs (.nef in Nikon’s case) at a lower resolution, as it would save time in post and save time writing to the memory card when the picture was originally taken. Alas, it does not, so I have to resize them here. Making all of the images smaller and converting them to .tif is a good idea because it makes it far easier for the stitching software to cope with the huge amount of data contained in the 250 photographs.
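
The resize-and-convert step could equally be scripted. Here’s a small sketch using Python’s Pillow library, with placeholder folder names and an arbitrary cap on the long edge (in practice I do this in Lightroom):

```python
from pathlib import Path
from PIL import Image

# Shrink every frame and save it as .tif so the stitcher has less data
# to chew through. Folder names and the 2000px cap are placeholders.
src, dst = Path("lightroom_exports"), Path("for_stitching")
dst.mkdir(exist_ok=True)

for path in sorted(src.glob("*.tif")):
    with Image.open(path) as im:
        im.thumbnail((2000, 2000))   # resize in place, keeping aspect ratio
        im.save(dst / path.name, format="TIFF")
```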

I use PTGui to stitch the pictures together because it’s clever enough to know that I want to create an HDR panorama and realises that I have taken the same views at different exposures.

In this instance I had to make quite a lot of manual adjustments (several hours’ worth) to help the stitching software, because I’d moved the chair, door, curtains and wires on the floor whilst taking the photographs, and the irregular position of overlapping elements was confusing the software (clouds have a nasty habit of doing this too). Below, however, is the final result.

I save the resulting HDR panorama in .hdr format. You’ll notice that there is a large empty space at the bottom of the panorama; this is because the camera can’t magically shoot through the tripod. In this instance, though, I knew the surface below the camera was very similar to the surrounding area, so I just take it into Photoshop and use the clone stamp tool to fill the empty space.

I didn’t need to be too careful about getting a good blend here as I’ll be modelling a surface for the top of the filing cabinet later on.

With an equirectangular HDRI panorama ready, it’s time to use it in 3D to create the sky dome.
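
That step will get a post of its own, but as a taste of what ‘using it in 3D’ means: the equirectangular .hdr is wrapped around the scene so that its bright pixels act as light sources. Below is a minimal sketch using Blender’s Python API purely as an illustration (my own workflow is in Cinema 4D, and the file path is a placeholder):

```python
import bpy

# Wrap the stitched equirectangular .hdr around the scene so its bright
# pixels (the window, the lights) illuminate everything in it.
world = bpy.context.scene.world
world.use_nodes = True
nodes = world.node_tree.nodes
links = world.node_tree.links

env = nodes.new("ShaderNodeTexEnvironment")
env.image = bpy.data.images.load("//panorama.hdr")  # placeholder path

# Feed the panorama's colour (and, crucially, brightness) into the
# world background shader.
links.new(env.outputs["Color"], nodes["Background"].inputs["Color"])
```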

How do I get where I want to be?

At present I expect the three modes of study that will result in the most success are observation, research and comparison.

Observation

The primary mode of study will consist of taking photographs and trying to reproduce them in 3D in an attempt to emulate the real world. This will provide opportunities to compare and contrast the original photographs with their CGI replicas in order to identify similarities and differences between them, the outcome of which should be the identification of a number of qualities that make an image photo-realistic.

Once a satisfactory result has been obtained, or indeed if I find myself unable to achieve a satisfactory result, I will summarise my findings and repeat the process with a different photograph.

My initial belief is that the mechanisms for achieving photorealism in a CG Image will be specific to the individual image and that a ‘one size fits all’ solution will not be found. For this reason the experiment should be conducted in different lighting situations, with different objects and different materials in order to formulate a range of solutions.

At the time of writing, my expectation is that organic matter, specifically human flesh, will prove the most difficult to replicate. This is due to two primary factors: firstly, the nature of the skin’s surface and the arrangement of translucent matter, such as veins and muscles, below the skin is very complex; and secondly, it is something that we as humans spend a great deal of time looking at. Nevertheless, I expect that research in this area will be readily available.

Research

Once a stage is reached where I am unable to improve my renders through my own observations alone, focus will be given to researching the findings of other practitioners who are interested in photorealism. This need not necessarily be specific to 3D and CGI as artists have been trying to achieve photorealism for many years prior to the advent of current 3D technologies.

This research will be supplemented by three main areas of study:

  • Biology – to identify how the brain interprets signals being received from the eye
  • Photography and digital imaging – to identify how a photograph is captured
  • Light – to identify how light acts in the real world

Comparison

Having researched the work of other practitioners, I will compare it with the findings of my initial experiments. Acting upon the results of this comparison, I will try to improve my initial renders to see if I can emulate reality even further.

Conclusion

To conclude, I will summarise all of my findings and produce a number of 3D CG images that I hope will mimic the real world and fool the audience into believing the images are photographs of real-world objects and not something entirely computer generated.

How did I get here?

Wielding a pen and pencil was something I’d always been able to do with some success, yet I found the medium, and/or my ability to utilise it, limited, and the pleasure I drew from it equally so.

Years passed without me producing anything creative until 2003, when I defied my lecturers and all things 3D Studio Max and taught myself how to use Maxon’s Cinema 4D. Nine years later and I’m still only scratching the surface of C4D!

Inspired by films like King Kong (2005), The Lord of the Rings (2001) and Harry Potter (2001), as well as earlier films such as Labyrinth (1986), The Dark Crystal (1982) and Jason and the Argonauts (1963), my ambition had always been to work in film. To this end, I’ve focussed the MA in Creative Media that I’m currently attending on achieving this goal.

This has taken me down a path where I have studied High Dynamic Range Imaging and its use in 3D, Linear Workflow, Compositing, Colour Grading, and Matchmoving. The example below shows the culmination of these studies.

Camera Tracked, Composited and Animated 3D CGI Frog

The results of my initial studies were successful, yet the brain is still not fooled into believing that the CG elements are real. Even when looking at blockbuster films like The Lord of the Rings, one is still able to distinguish between what is real and what is not. Take this scene of the character Frodo in the same shot as a cave troll, for example: it is clear which of these characters is real and which is computer generated.

Lord of the Rings Cave Troll

What qualities of reality are missing in CG?

If you’d like to know more about my creative journey to date I have written a short bio on my freelance 3D CGI design and animation portfolio.

Where am I going?

Last night I made a post about what I hope to achieve with this page, so it seems to make good sense that my next post should be about how I intend to achieve it.

Although I have already collated a number of research materials, available on the Links page, I’ve not yet read through them. Not because I’m a lazy student, but because I believe I might get better results by working things out for myself, hoping to develop personal insight through my own experiments rather than subscribing to formulas derived by somebody else.

I believe that rather than arriving at a finite end, this mode of study has the potential to continue indefinitely. Despite this, it is expected that a natural transition will occur where consulting the findings of other practitioners makes more sense than continuing with the experiments.

Yesterday, I speculated that the answer to achieving photorealism in CGI was hidden in light, biology, and cameras. This hypothesis wasn’t just plucked randomly out of the air; it’s something I’ve been thinking about for years, albeit without a great deal of focus. Perhaps, before embarking on this exploration, I should evaluate the things I already know?

Once again, watch this space…

Why am I here?

I bet it has something to do with light

This is my very first blog post, so I thought I’d make the most of a good opportunity to organise my thoughts and decide what my intentions are for this page.

The purpose of this page is really two-fold, although I’m not yet sure which is the primary goal and which is secondary. One objective is to use this page as an assessment tool for the MA in Creative Media that I’m currently attending; the other is to become a better artist (whatever that might mean).

I recently completed a freelance project where my client asked me to produce 3D CGI renders that looked photo-realistic. In the pursuit of photorealism I produced these images:

Whilst I’m not overly disappointed with the result, one can still tell that the content of the images is unreal: computer generated. Considering that even in blockbuster films with blockbuster budgets one can still identify which elements are CGI and which are real, my own renders are not a bad result. Despite this, though, I can’t help but feel that they could have been better.

I now find myself asking the question: what is it that makes an image look ‘real’? How does one discern what is photographed and what is CGI? Perhaps if I started a blog page, a person not unlike yourself, browsing the net, might just leave a comment containing the answer? My guess is that if such a comment were left, it would have something to do with light.

In the more likely event that I’ll have to find out for myself, I expect that exploring how the human eye works and how the brain interprets the signals received from the eye should provide some clues. I’d wager that I’ll get even closer to the answer by exploring how cameras work; after all, it’s not real life that I’m trying to emulate, it’s realism as found in a photograph. Whatever the answer is, it must surely have something to do with light… mustn’t it?

If I can convince an audience that something I have produced entirely in 3D is a photograph of a real world object, then this study will have been successful.

Watch this space