“Ok. This is how I picture the final shot:
The camera will be low to the ground, moving up towards the gift. As it gets closer, it will move up and over the gift, and tilt down to look at it. As it’s tilting down, the ribbon will untie itself, and the gift will open, revealing the inside. How can we make this work?”
This conversation with Rob Thomas was the beginning of a long, fun, and challenging process: creating the final visual effects shot of this year’s “The Easter Gift”.
To make the gift come alive on its own, we considered three options: stop motion, a practical gift opened with fishing line, and a digital recreation in 3D software. We concluded that stop motion wouldn’t provide the feel we needed, and that fishing line attached to different parts of the gift would make a good take far too difficult to nail, given that the camera move itself was quite elaborate. So we set off down the digital road: shoot the scene without a practical gift in it at all, and add one in post.
The first thing we did was get help from a few people who were more up to speed on the subject. We ended up hiring Rhett Blankenship to track the scene, while handing the modeling and animation over to a digital artist, Kent Trammell. Rhett spent some time explaining that he would need points to track, and sent me tracking markers to cut out and stick in our scene. He also recommended I build a cardboard placeholder to represent the size of the gift, which would make it easy to figure out angles in post-production. As you can see, the placeholder was pretty high-tech: a plastic CD spindle and lots of gaff tape.
In prepping for the shoot, I got out our 8-foot indie dolly track and rented some extra gear, including a Losmandy Porta-Jib and a Varizoom VZ-MC100 remote motorized pan/tilt head. I knew the final shot would need forward movement, elevating movement, and camera tilt... and that the tilt would be impossible to do without messing up our jib move unless we could control it remotely. Not to mention that it kept our feet from ending up in the frame.
On set, the shot was quite a challenge. With the RED Scarlet plus Canon lens, plus jib, plus counterweights, plus heavy-duty tripod, we were severely weighing down our dolly. Every time we reset the shot for another take, we had to lift the whole rig back onto the track, because the weight would cause the dolly to slide off to one side.
The move itself was very difficult, too. It took four people just to get the camera to travel the way we wanted: Rob Thomas on the remote controlling the tilt, David Chapman pushing the dolly, Jeff Parker on the jib controlling the elevation, and me at the front of the jib arm applying pressure to prevent shake and wobble. It took us about 15 attempts to get it right. We also shot a couple of takes with the practical gift in the scene, for Kent to reference when lighting in his 3D software.
The next step was to create and insert a digital gift into that scene. First, the shot was stabilized with Mocha in After Effects, then tracked in SynthEyes to extract 3D camera data that could be used in Kent’s 3D software. It was pivotal that the movement of the digital gift match our real camera movement in the shot. Once tracking was finished, Kent began creating and animating a digital gift in Blender, the open-source 3D animation program, based on photographs and practical takes we had gathered from the shoot. In the meantime, Trent Armstrong got to work creating a clean plate in After Effects, removing the tracking markers we had placed.
While Kent and Trent worked on the visual effects, I was cutting the rest of the piece. Our workflow was to preview the shot from time to time: Kent would send occasional low-res videos that we could approve before moving forward with the high-res render.
After running some tests, Kent discovered that at our 4K frame size (4096 x 2160), with the complexity and lighting of a photo-realistic 3D image on top, each frame was going to take well over an hour to render. With roughly 1,000 frames and a fast-approaching deadline (Easter), we had to make some decisions. We decided to render only the frames we actually needed, and to cut the frame size down to 2.5K, since our masters are saved at 1080p anyway. I settled on 2.5K because I still wanted a few pixels to play with: I planned to do a slight digital push-in in After Effects after I got the render. I also asked our IT wiz, Clint Miller, for help; he gathered a bunch of computers around the office, installed Blender on all of them, and created a small render farm.
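For anyone curious about the math behind that call, here’s a rough sketch. A path tracer’s render time scales roughly with pixel count, so dropping from 4K to 2.5K cuts the per-frame cost to well under half. (The exact 2.5K dimensions aren’t stated above; 2560 x 1350, which keeps the same aspect ratio, is assumed here, and the flat one-hour-per-frame figure is used as a floor for "well over an hour".)

```python
# Back-of-the-envelope render math, using the numbers from the post.
frames = 1000
px_4k = 4096 * 2160           # original frame size
px_25k = 2560 * 1350          # assumed 2.5K frame, same aspect ratio

hours_per_frame_4k = 1.0      # "well over an hour" -> 1.0 as a floor
# Path-tracing time scales roughly with the number of pixels:
hours_per_frame_25k = hours_per_frame_4k * px_25k / px_4k

total_4k = frames * hours_per_frame_4k
total_25k = frames * hours_per_frame_25k
print(f"4K total:   {total_4k:.0f}+ machine-hours")
print(f"2.5K total: {total_25k:.0f}+ machine-hours")
```

Even as a lower bound, a thousand-plus machine-hours on one box was never going to make the Easter deadline, while the 2.5K figure becomes manageable once it’s spread across several machines.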
Clint set up a VNC client in my edit suite that allowed me to monitor each machine remotely. Between Kent’s Linux farm and our grouping of Mac Pros, we had eight different (and free!) copies of Blender churning out hundreds of 2.5K .exr frames! Each computer was given a different range of the sequence, and we grouped the results into one folder.
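The frame-range split itself is simple to sketch. Blender can render headlessly from the command line (`-b` for background, `-s`/`-e` for start and end frames, `-a` to render the animation), so each machine just gets its own contiguous chunk. The `.blend` filename below is illustrative, not the project’s actual file:

```python
# Sketch of dividing a frame sequence across a small render farm.
def split_ranges(first, last, machines):
    """Divide frames [first, last] into contiguous chunks, one per machine."""
    total = last - first + 1
    base, extra = divmod(total, machines)
    ranges, start = [], first
    for i in range(machines):
        # The first `extra` machines take one extra frame each.
        end = start + base - 1 + (1 if i < extra else 0)
        ranges.append((start, end))
        start = end + 1
    return ranges

for i, (s, e) in enumerate(split_ranges(1, 1000, 8), start=1):
    print(f"machine {i}: blender -b gift_shot.blend -s {s} -e {e} -a")
```

Because Blender writes numbered frames, the chunks from different machines can simply be pooled into one folder afterward, exactly as described above.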
Once all the computers finished rendering, Kent took the .exr image sequence, which contained the image, shadow, and lighting information, and composited it over the clean plate Trent had finished. Kent then sent me the final render; I dropped it into my timeline and added foley sound effects to match his animation. A few more tweaks, and we were out the door!
Below is a visual breakdown of all the steps it took to create this shot: