Guide to Shooting S3D

This page is constructed from other blogs and articles that I have written, so it may be a bit jumbled. I hope to cover some of the key points of shooting in S3D.

The most important thing to remember about S3D (Stereoscopic 3D) is that, at the time of writing at least, we are not actually shooting or displaying a true 3D image. What we are doing is shooting two flat camera views that mimic the two views our left and right eyes see, and then presenting those two views on a screen as two separate flat images, one delivered to each eye, so that our brains interpret what they are seeing as stereoscopic 3D. So what we are creating is an optical illusion, using two flat 2D images to trick our brains into thinking they are seeing 3D. Because of this, some of the shooting methods and techniques that work best are not always completely intuitive, as we are not always trying to accurately re-create a stereoscopic image, but instead creating images that our brain can interpret as a stereoscopic scene with as little effort as possible.

Consider this: when we look at objects that are a great distance away, say the moon or stars in the night sky (we can for argument's sake consider these to be infinitely far away), our eyes are in effect parallel. When you project a point of light on a screen our eyes will converge on that screen, that is to say they will look slightly inwards towards each other, so to represent something that is at infinity we must separate the left and right views by the separation of our eyes, typically 65mm, so that our eyes are again parallel. That's fine and can be done. But what happens when you change the size of the screen? The on-screen separation of the projected image also changes as the images are scaled to fit the new screen size. As our eyes are not designed to ever look apart (diverge), it is dangerous to have a greater on-screen separation (disparity) than the distance between our eyes, typically 65mm or 2.5 inches. That presents a problem, as there is a huge variation in screen sizes around the world, from big cinema screens to iPhone screens. As a result, when producing a stereoscopic production it's vital that you know from the outset what the largest screen size will be, as this will be your limiting factor when calculating your disparity limits, or how far apart your left and right views can be.
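As a rough illustration of why the largest screen matters, here is a small Python sketch (my own example, not from the original article) that works out the biggest background separation you can allow in an assumed 1920 pixel wide image before distant objects would force a viewer's eyes to diverge on a given screen:

    # Rough sketch: largest allowable background (positive) disparity in pixels
    # so that the on-screen separation never exceeds the ~65mm human eye spacing.
    # The 1920 pixel image width is just an assumed example value.

    EYE_SEPARATION_MM = 65.0

    def max_background_disparity_px(screen_width_m, image_width_px=1920):
        """Largest left/right pixel offset for distant objects on this screen."""
        screen_width_mm = screen_width_m * 1000.0
        return EYE_SEPARATION_MM / screen_width_mm * image_width_px

    print(max_background_disparity_px(10.0))  # 10m cinema screen: ~12.5 px
    print(max_background_disparity_px(1.0))   # 1m wide TV: ~125 px

The same footage that sits comfortably on a television can therefore push distant objects into divergence on a cinema screen, which is why the largest intended screen sets your disparity budget.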

But what about viewers watching on a smaller screen? Will they still see a stereoscopic image? Yes, but for them the scene will appear to have less depth. To combat this we can take advantage of the fact that, as well as disparity, our brains use other cues such as colour (things further away tend to be bluer), scale and parallax to judge distance and depth. By introducing additional depth cues within a scene you can minimise the depth differences you get with different screen sizes. So for S3D it is beneficial to have objects within the scene that have known size and scale, people for example, or vehicles. With a scene that contains scale and other cues, even if our eyes are still slightly converged because the screen being used is smaller than the one you have allowed for, you can still have distant objects that our brain will interpret as being at great distances or even infinity.

GETTING STARTED:

So how do you get started, how do you learn how to do it? Well, in my opinion the best way is to go out and shoot some 3D and experiment. There are lots of things to learn, most of which can be really difficult to understand unless you try them out for yourself. Teaching someone how to focus a camera is easy: when it's sharp it's sharp, but 3D is a different beast. Something that looks right on a small screen may not work on a big screen, and certain shooting methods will result in pictures that are unusable until they are adjusted in post. There are many things that simply don't work in 3D. When I started to shoot 3D some 6 years ago I found it was more about learning what you can't do than what you can do. So I recommend you get a pair of cameras or something like a Nu-View, lens-in-a-cap or a Hurricane Rig and start shooting, editing and playing with 3D.

Camera Pairs:

Clearly, to shoot 3D you need a pair of cameras. One camera shoots the left eye view and the other camera shoots the right eye view. The two views are then projected or displayed using some kind of filtering so that each eye only sees the appropriate view. Ideally this should be a perfectly matched and perfectly synced pair of cameras, but that's not always possible, and if you are simply experimenting it can get expensive. One exception to this is the use of XDCAM EX1s and EX3s. Optically these cameras are the same, so a mixed pair is a cheaper option than using a pair of EX3s. The EX3 is a good camera to use as it has a genlock connection that allows you to sync it to an EX1. If you are using cameras with LANC connectors (most Sony and Canon consumer cameras) then you can get special 3D sync controllers such as this one.

CAMERA SYNC.

Getting the two cameras in sync for 3D is crucial if anything moves in the shot, especially if anything is moving across the frame. Imagine a shot with a car travelling through the frame. If the cameras are not completely in sync, the position of the car when the image is captured will be slightly different for the left and right views. This will in effect move the car forwards or backwards in 3D space, as the car's positional difference between the left and right frames will alter the convergence/divergence for the car. The static parts of the frame will be unaffected. Imagine also a shot of a person walking or running: if one camera takes its shot slightly behind the other, the person's legs will be in a different part of their stride, so the left eye will see legs in one position while the right eye will see legs in a slightly different position and the 3D will break down.

Also consider what happens when you pan. If one camera captures its image slightly ahead of the other, then the 3D depth will appear to either compress or expand as the cameras are panned, because the left and right images will be shifted slightly left or right with respect to each other. Even at 30 frames per second, a half frame sync difference equates to over half a degree of difference between the two cameras during a 5 second 180 degree pan, which is not all that fast.
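To put some numbers on that pan example (a worked illustration of the figures quoted above):

    # How far one camera's view lags the other, in degrees, for a given sync
    # error during a constant-speed pan.

    def pan_offset_deg(pan_deg, pan_seconds, sync_error_s):
        pan_rate_deg_per_s = pan_deg / pan_seconds
        return pan_rate_deg_per_s * sync_error_s

    # 180 degree pan over 5 seconds, half-frame error at 30 fps (1/60th second):
    print(pan_offset_deg(180, 5, 0.5 / 30))  # 0.6 degrees

Given that real-world toe-in angles are themselves often only around a degree, an error of this size is clearly significant.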

It's not just a case of getting the cameras to go into record together; it is the video streams that the cameras are producing that need to be in sync, so you need both cameras to switch on and power up in sync.

At 24P (the slowest typical frame rate) it is traditional to use a 1/48th second shutter. To ensure that both cameras are exposing for at least half of the open shutter period together, they must be within roughly 1/100th of a second of each other. This is the absolute minimum needed and won't be ideal for any fast movement. Ideally you want cameras running within 1/1000th of a second of each other. This can be achieved with a LANC controller or genlock.
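The 1/100th figure comes straight from the shutter arithmetic; a quick sketch of the sums, assuming 24P with a 180 degree (1/48th second) shutter:

    # Maximum timing error that still leaves the two exposures overlapping
    # by at least half of the shutter-open period.

    shutter_open_s = 1.0 / 48.0          # 1/48th second exposure per frame
    max_sync_error_s = shutter_open_s / 2.0
    print(max_sync_error_s)              # ~0.0104 s, roughly 1/100th of a second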

Camera Separation:

This is where it starts to get really confusing. The camera separation is known as the interaxial (IA) and is a measurement taken between the lens centers. A good starting point is to aim for an interaxial of around 55mm to 65mm, or 2 to 2.5 inches, as this is similar to the average human eye separation. However, you do need to understand that for many shots you will want different interaxials, as the interaxial governs the 3D depth in a scene and the disparity, or difference, between the left and right images, and as a result it also determines how close you can get to your subject matter. An old 3D photography rule of thumb says that the closest object in a 3D scene can be no nearer to the cameras than 30x the interaxial. So you can see that even with a 70mm interaxial you can't have anything closer than 2.1m (around 7ft) to the camera, so if you want to do any close-up or macro work you will need much less camera separation. In reality, with larger display screen sizes that limit is closer to 60x the interaxial. But as you bring the cameras closer together you will find that the 3D in distant objects will reduce. Going back to the human 65mm eye separation (interocular), you will find that beyond about 50m (150ft) there is virtually no 3D, so shooting a big panoramic scene with only a 65mm separation is not necessarily going to work. We humans don't actually see things that are 50m or more away in 3D; instead we use other depth cues such as scale, colour and shading to provide depth information.
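The 30x and 60x rules of thumb are easy to turn into quick calculations. A small sketch (an approximation only, not a full stereoscopic depth budget):

    # Rule-of-thumb helpers for the 30x (smaller screens) / 60x (large screens) rule.

    def closest_subject_m(interaxial_mm, rule_factor=30):
        """Nearest allowable subject distance for a given camera separation."""
        return interaxial_mm * rule_factor / 1000.0

    def max_interaxial_mm(nearest_subject_m, rule_factor=30):
        """Largest camera separation for a given nearest subject distance."""
        return nearest_subject_m * 1000.0 / rule_factor

    print(closest_subject_m(70))       # 70mm interaxial: nothing closer than 2.1m
    print(max_interaxial_mm(0.5))      # a 0.5m close-up needs roughly 17mm or less

This is why macro and close-up work needs the cameras much closer together than the eye-spacing starting point.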

Camera Mountings:

So how do you mount the cameras so you can get them the right distance apart? Well, it's reasonably easy to mount a pair of cameras side by side, but the size of the cameras often restricts how close together you can get them. If you're doing scenics or landscapes this is probably not a problem, and for experimentation it's a good way to learn. But for narrative work you really do need to get the cameras close together, and this will require either very narrow cameras or the use of a beam splitter rig. Beam splitter rigs work by using a half silvered mirror (like an autocue or prompter mirror) to direct half the light to one camera while the other half of the light goes straight through to the second camera. This allows you to mount one camera above or below the other. It also means that in effect the camera interaxial can be reduced all the way down to zero. The problem is that the rigs are cumbersome, one of the cameras sees a mirror image, and the mirror also changes the polarisation of the light reaching the cameras so that reflections etc. look different.

Shoot Parallel or Converged:

OK, so you have your cameras mounted together, but how do you shoot? When we humans look at an object our eyes converge on that object. You can see how this works by holding a finger about 30cm (1ft) in front of your face and looking at it. While focussing on your finger, notice what happens to things in the distance: you will see double images of distant objects.

Converged Cameras showing excessive background disparity or separation

If you do the same with a pair of cameras, converging them to point at an object close to the camera, distant objects in your scene may separate or diverge so much that when your finished 3D video is viewed the image separation is too great for our brains to fuse the two views back together. This leads to headaches or 3D that simply doesn't work. If you are shooting in a controlled environment with only limited depth, such as a small room, and you have a small camera separation, then you may be able to converge the cameras. On some big budget productions sophisticated camera rigs with motorised convergence are used so that the cameras can follow and track objects as they move closer to or further from the camera.

Shooting Parallel

For many shoots one of the easiest ways to shoot is with both cameras completely parallel to each other. This eliminates the problem of excessive divergence on distant objects, but it does mean that everything that you shoot will appear to be in front of the screen when viewed. This isn't a problem though, as you can easily adjust this in post production; indeed almost all 3D will require adjustment in post anyway. Trust me, no matter how carefully you shoot, post production tweaks will be required. This means that you will need to zoom in to your footage to be able to make adjustments such as moving images left and right or small rotational corrections, and this zooming in will lead to some associated loss of image quality. With parallel shooting, as well as alignment tweaks, you also need to enlarge the image enough to be able to shift your images left and right to adjust the on-screen convergence, a process often referred to as HIT (Horizontal Image Translation). So you need to avoid anything getting too close to the edge of frame, and you also need to allow for the increase in disparity between the left and right views that occurs when you enlarge the image.
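As a very simplified illustration of what HIT does to the images (my own sketch, using NumPy arrays standing in for the left and right frames):

    # Horizontal Image Translation (HIT), simplified: take offset crops of the
    # left and right frames so that every feature ends up further apart in the
    # right eye than in the left, pushing the scene back behind the screen.
    # Both crops then need re-enlarging to full width, which is the quality cost.

    import numpy as np

    def horizontal_image_translation(left, right, shift_px):
        """left/right are HxWx3 arrays; returns crops that are shift_px narrower."""
        h, w, _ = left.shape
        left_crop = left[:, shift_px:]        # left-eye content shifts left
        right_crop = right[:, :w - shift_px]  # right-eye content shifts right
        return left_crop, right_crop

In a real grade the shift is done sub-pixel and combined with scaling and rotation fixes, but the principle is the same: the further you have to shift, the more you have to crop and re-enlarge.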

One advantage of parallel shooting is that, as the cameras are not pointing in different directions, there are no keystone differences between the left and right images, which reduces the work required in post production to correct them. It is also preferable for any production that will use extensive chroma key, rotoscoping or similar post production techniques.

Parallel shooting gives you the ability to set your convergence point in the edit suite, and I would recommend that this is the way you shoot if you are using a rig that is difficult to adjust, or if you are doing run and gun style shooting without a 3D monitor. I don't believe it always produces the best S3D results, although this is a subject of much debate, but it is a safer way to shoot as your disparity will be solely a function of camera separation, and further errors due to excessive toe in can be avoided.

Convergence and shooting Converged.

Two Cameras converged on block

Shooting converged means that the cameras will be pointing very slightly inwards, towards the subject you are shooting. The angle (angulation) at which the cameras point inwards will normally be very small, often only a degree or so. On a stereoscopic screen, objects that line up exactly in the left and right views appear to be on the plane of the screen. Objects that are separated so that you have to go slightly cross-eyed to view them appear to come out of the screen (negative parallax), and objects separated the opposite way appear behind the screen (positive parallax). There are two ways to change the convergence point within a scene.

Using Angulation (toe in) or Interaxial to change convergence

Adjusting the angulation or toe in of the cameras is one way to change the convergence; the other is to adjust the interaxial or camera separation. If you change the convergence by increasing the toe in (the amount the cameras point inwards), then as well as changing the convergence point, bringing it closer to the cameras, you also increase the separation or disparity of objects behind the convergence point. What this means in practice is that the foreground moves forwards while the background moves backwards. This is a very unnatural effect, and if you cut between shots with different angulation it takes the brain a few seconds to work out what's going on. In a rapid cut sequence this can be so distracting that the 3D breaks down altogether. At the very minimum it is tiring, as the brain has more work to do. The other way you can change the convergence is by changing the interaxial. Bringing the cameras closer together brings the convergence point forwards. However, unlike increasing the angulation, the amount of disparity in the background (and foreground to some extent) hardly changes. This means that while the convergence point changes there is only a small change in far disparity, so overall the depth of the scene doesn't change by a significant amount. As a result, cutting between shots is less intrusive and the viewing experience is much easier.
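For reference, the toe-in geometry itself is simple trigonometry. Here is a sketch for the symmetric case where both cameras are angled inwards equally (my own illustration):

    # Convergence distance vs toe-in angle for a symmetric converged pair.

    import math

    def convergence_distance_m(interaxial_mm, toe_in_deg):
        """Distance at which the two optical axes cross."""
        half_ia_m = interaxial_mm / 2000.0
        return half_ia_m / math.tan(math.radians(toe_in_deg))

    def toe_in_deg(interaxial_mm, distance_m):
        """Toe-in angle per camera needed to converge at a given distance."""
        half_ia_m = interaxial_mm / 2000.0
        return math.degrees(math.atan(half_ia_m / distance_m))

    print(toe_in_deg(65, 2.0))   # 65mm interaxial converged at 2m: ~0.9 degrees

This matches the earlier point that real-world toe-in angles are usually only around a degree or less.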

As mentioned earlier, shooting converged can introduce keystone errors, especially with larger toe in angles, so you need to keep an eye on this. Sometimes you may want to shoot using a combination of parallel and converged, and this is where experience really counts, as it is only through experience that you will learn when to switch from one mode to the other.

On Set Monitoring.

While it is possible to shoot S3D without a 3D monitor, it is far better to have one on set. Not only can you use the monitor to check the rig alignment, but you can also check for differences in white balance and exposure between the left and right images. Most beam splitter rigs introduce tiny differences between the reflected and direct images, and these need to be minimised through white balance adjustments and exposure tweaks. If you have a 3D monitor, simply blinking your eyes so you alternately see the left and right views can reveal differences in the images. Better still is a monitor such as the excellent Transvideo range that includes dual waveform and vectorscopes, allowing you to measure the output signals of both cameras at the same time. If you are on a budget you can use a pair of low cost USB capture devices with a PC and a stereoscopic multiplexer. Better still, if the cameras are genlocked you could use the Blackmagic HDLink 3D to feed a domestic 3D TV or 3D computer monitor.
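If you go down the low-budget PC route, one rough DIY alternative to a dedicated multiplexer is to build a quick red/cyan anaglyph preview yourself; alignment, sync and colour differences become obvious very quickly. A sketch using OpenCV (the device indices and camera setup are assumptions that will vary from system to system):

    # Quick-and-dirty anaglyph preview from two USB capture devices.
    # Assumes both devices deliver frames of the same size.

    import cv2

    left_cap = cv2.VideoCapture(0)    # left-eye device index (assumed)
    right_cap = cv2.VideoCapture(1)   # right-eye device index (assumed)

    while True:
        ok_l, left = left_cap.read()
        ok_r, right = right_cap.read()
        if not (ok_l and ok_r):
            break
        # Red channel from the left eye, green and blue from the right eye.
        anaglyph = right.copy()
        anaglyph[:, :, 2] = left[:, :, 2]   # OpenCV frames are BGR
        cv2.imshow("anaglyph preview", anaglyph)
        if cv2.waitKey(1) & 0xFF == ord('q'):
            break

    left_cap.release()
    right_cap.release()
    cv2.destroyAllWindows()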
