DSLR Media Management with Mike McCarthy (Part 2)
In the second part of DSLR Media Management with Mike McCarthy, Mike delves into the file formats used in the post-production workflow, then discusses frame rates, editorial options, and finishing. Visit HD4PC for even more in-depth technical information on the post-production process.
If you haven’t done so already, be sure to read Part 1 of DSLR Media Management with Mike McCarthy. In Part 1, Mike provides crucial details about the Canon DSLR workflow, how to back up your footage for best results, and best practices for sorting and logging your footage.
DSLR Post-Production Workflows with Mike McCarthy
Here at Bandito Brothers, we have handled the post aspect of Shane’s DSLR-based projects since the first Terminator Webisodes. The tools available have developed over the past year from a relative hack job to a reasonably well-supported workflow.
File Format
The first thing we need to understand about a workflow is what we are starting with. In the case of Canon DSLR footage, we have full-raster HD footage in YUV 4:2:0, with a full range (0-255) of 8-bit color values, at a variety of frame rates. This is saved into QuickTime files, encoded with H.264 compression at about 40 Mb/s, with 44.1 kHz audio. While high-bit-rate H.264 files preserve a tremendous amount of detail in a relatively small file size, that level of compression makes it difficult to play back the native files in any editing program.
In almost all cases it will be easier and more efficient to convert the footage into an intermediate editing format before editorial. The choice of format will probably be dictated by your NLE: DNxHD is the format of choice for Avid, ProRes for FCP, and there are a couple of other options like Motion-JPEG, MPEG2-IF, or Cineform for Premiere Pro.
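To put those bit rates in perspective, here is a rough back-of-the-envelope storage comparison (an illustrative Python sketch; the intermediate codec rates shown are approximations and vary with frame rate and resolution):

```python
# Rough storage math for 1080p footage at a few relevant bit rates.
# These figures are approximations, not exact codec specifications.

RATES_MBPS = {
    "Canon H.264 (camera original)": 40,
    "DNxHD 36 (offline)": 36,
    "ProRes 422 HQ (online, approx.)": 220,
}

def gb_per_hour(mbps):
    """Convert a video bit rate in Mb/s to gigabytes per hour of footage."""
    return mbps / 8 * 3600 / 1000   # Mb/s -> MB/s -> MB/hour -> GB/hour

for name, rate in RATES_MBPS.items():
    print(f"{name:32s} ~{gb_per_hour(rate):6.1f} GB/hour")
```

The point is simply that the camera originals are small for what they contain; most intermediate codecs trade that storage efficiency away in exchange for footage that decodes easily on the timeline.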
Frame Rates
From a post perspective, the most obvious workflow challenge unique to the original Canon 5D Mk II was “30p!?” Since a transcode to an intermediate format was already required by most workflows, we slowed the footage and the audio by 0.1% to 29.97 for our first few projects. So, 29.97-based workflows can be relatively simple, and they are even easier now with true 29.97 support in the 7D and 1D – and, recently, the 5D as well.
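For anyone curious about what that 0.1% slowdown actually does to the numbers, the arithmetic is simple (an illustrative Python sketch, not tied to any particular tool):

```python
# The NTSC-style 0.1% slowdown: 30.000 fps becomes 29.970 fps (30000/1001).
SLOWDOWN = 1000 / 1001            # speed factor applied to both picture and audio

print(30 * SLOWDOWN)              # 29.97002997... effective frame rate
print(600 / SLOWDOWN)             # a 600 s (10 min) clip stretches to ~600.6 s
print(44100 * SLOWDOWN)           # audio plays back as if sampled at ~44056 Hz;
                                  # it is typically resampled to a standard rate
                                  # after the speed change so sync is preserved
```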
Intercutting with film, on the other hand, usually requires editing and finishing in 24p – by which I always mean 23.976p – a much more complicated challenge with 5D footage. As Shane previously mentioned, the simplest solution requires that you edit in Avid and online with Twixtor in AE and Premiere Pro CS4. We use Re:Vision Effects’ Twixtor plug-in to convert our 30p clips to 24p with true motion-compensated frame blending. It works quite well for most footage, but it is extremely render intensive and takes a long time to process.
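Twixtor’s motion-compensated interpolation is far more sophisticated than anything shown here, but a naive sketch of the underlying retiming problem (simple two-frame blending in NumPy, purely illustrative) shows why 30p-to-24p is awkward: most output frames land between two source frames.

```python
import numpy as np

def retime_blend(frames, src_fps=30.0, dst_fps=24.0):
    """Naive 30p-to-24p retime that linearly blends the two nearest source
    frames. Twixtor instead estimates motion vectors and warps pixels, which
    avoids the ghosting this simple blend produces on movement."""
    n_out = int(len(frames) * dst_fps / src_fps)
    out = []
    for i in range(n_out):
        t = i * src_fps / dst_fps            # position in source time: 0, 1.25, 2.5, ...
        lo = int(np.floor(t))
        hi = min(lo + 1, len(frames) - 1)
        w = t - lo                           # blend weight toward the later frame
        out.append((1 - w) * frames[lo] + w * frames[hi])
    return out

# Toy example: one second of fake 30p video (small frames) becomes 24 frames.
clip = [np.random.rand(270, 480, 3).astype(np.float32) for _ in range(30)]
print(len(retime_blend(clip)))   # 24
```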
The details of the relinking process for Twixtored footage with Avid edits are fairly complicated, but they can be found on the Avid page of my site for anyone who is interested in going down that path. For footage shot at 24p on a DSLR, the onlining process should be relatively straightforward by comparison, and presents no unique challenges beyond those of 29.97p DSLR workflows.
Editorial Options
While Premiere and FCP are both useful tools that work well on smaller DSLR-based projects, Avid is the most stable and responsive editing program for large projects that encompass hundreds of hours of footage spread across thousands of individual clips. Most Avid edits of DSLR footage will use DNxHD as their editing codec. Since Canon MOVs have a full 0-255 color range, you have to select the RGB (0-255) color space when importing the files into Avid in order to maintain that full range.
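To see why that import setting matters, here is the basic levels math (an illustrative Python sketch; the constants are just the standard 8-bit full-range to video-range mapping, nothing Avid-specific):

```python
import numpy as np

def full_to_video_range(y):
    """Map 8-bit full-range luma (0-255) into 8-bit video range (16-235),
    the standard Rec. 709 'legal' levels."""
    return np.round(16 + np.asarray(y, dtype=np.float64) * 219.0 / 255.0)

print(full_to_video_range([0, 128, 255]))        # [ 16. 126. 235.]

# If the importer instead assumes the source is already video range,
# everything below 16 and above 235 is treated as illegal and clipped,
# crushing shadow and highlight detail the camera actually recorded.
print(np.clip([0, 8, 128, 245, 255], 16, 235))   # [ 16  16 128 235 235]
```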
If you are going to use your Avid output as your master, without a separate online conform, using a 10-bit editing codec like DNxHD 175x will prevent you from losing bit depth during the Rec. 709 conversion on the initial import transcode. We use 8-bit DNxHD 36 files in our Avid edits, since they are only offlines; because we aren’t editing at the 5D’s native frame rate, we use simple EDLs to online in CS4 via file-name relinking after the frame rate conversion. There are other, more expensive options for onlining Avid edits, but I am not as familiar with any of them, since Adobe’s Creative Suite satisfies most of our current needs.
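The relink itself happens inside Premiere, but before conforming it helps to verify that every clip named in the EDL actually has a matching converted file on disk. Here is a minimal sketch of that idea, assuming a CMX3600-style EDL with `* FROM CLIP NAME:` comments and converted media named after the original clips (the paths and extension shown are hypothetical):

```python
import os
import re

def clip_names_from_edl(edl_path):
    """Pull clip names out of the '* FROM CLIP NAME:' comment lines that
    most NLEs write into CMX3600-style EDLs."""
    names = set()
    pattern = re.compile(r"\*\s*FROM CLIP NAME:\s*(.+)", re.IGNORECASE)
    with open(edl_path, "r", errors="ignore") as f:
        for line in f:
            m = pattern.search(line)
            if m:
                names.add(m.group(1).strip())
    return names

def find_missing(edl_path, media_dir, ext=".avi"):
    """Report clips referenced in the EDL that have no converted file."""
    on_disk = {os.path.splitext(f)[0].lower() for f in os.listdir(media_dir)
               if f.lower().endswith(ext)}
    missing = []
    for name in clip_names_from_edl(edl_path):
        base = os.path.splitext(name)[0].lower()   # 'MVI_1234.MOV' -> 'mvi_1234'
        if base not in on_disk:
            missing.append(name)
    return sorted(missing)

# Example usage (hypothetical paths):
# print(find_missing("reel1.edl", "/media/online/cineform"))
```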
The Advantages of ProRes
Now, as a PC guy, I will still be the first to admit that Macs do have their uses (specifically, generating ProRes files and accessing HFS+ volumes). For Final Cut Pro workflows, life is a little simpler in that ProRes is capable of 10-bit color by default, as long as the host application supports it.
Batching your DSLR files to ProRes in Compressor should allow you to maintain the full resolution and color space. Compressor can also solve the 30p-to-24p issue through Apple’s Optical Flow technology. In our tests this process is slower than Twixtor and the results aren’t quite as good, but if you can’t afford a dedicated conversion plug-in, it is probably the next best thing.
Now, let’s consider Premiere-based edits. While DSLR files can be played directly on the timeline, using an intermediate format will provide a more responsive and stable editing experience. Adobe Media Encoder will give you the proper processing bit depth to convert your files into a variety of third-party formats for editing or onlining in CS4.
At Bandito Brothers, we batch process our Canon 5D footage in After Effects, which allows us to use Twixtor to convert our 30p clips to 24p. If the footage is already at the right frame rate, AME is totally sufficient and processes the conversions much faster. We usually online with Cineform AVI files to utilize the headroom that 10-bit color offers, especially since SpeedGrade XR can access the files natively, which is usually our next step after the conform.
Finishing
After exporting an online conform, preferably in 10-bit color, there is one more step that should be added to DSLR workflows: a series of cleanup processes to deal with common imaging issues on DSLRs, similar to a dust-busting pass in a film workflow.
Dead pixels and spots caused by dust on the sensor are more frequent on DSLRs because of their large sensors and interchangeable lenses, although they can happen on any camera. These artifacts are usually static and can be fixed by overlaying unaffected nearby pixels, usually from directly above or below. You may also see rolling shutter issues, caused by the top of the frame capturing a slightly different moment in time than the bottom.
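As a rough illustration of that “overlay a neighboring pixel” fix (a NumPy sketch of the general idea, not any particular plug-in; in practice you would do this in your compositing application):

```python
import numpy as np

def patch_dead_pixel(frame, x, y):
    """Replace a stuck/dead pixel at column x, row y with the average of the
    pixels directly above and below it. Because the defect is static, the same
    coordinates can be patched on every frame of the clip."""
    above = frame[max(y - 1, 0), x].astype(np.float32)
    below = frame[min(y + 1, frame.shape[0] - 1), x].astype(np.float32)
    frame[y, x] = ((above + below) / 2).astype(frame.dtype)
    return frame

# Toy frame: mark a 'dead' (black) pixel and repair it.
frame = np.full((1080, 1920, 3), 120, dtype=np.uint8)
frame[500, 900] = 0
patch_dead_pixel(frame, x=900, y=500)
print(frame[500, 900])   # back to [120 120 120]
```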
Certain types of rolling shutter artifacts, especially ones related to camera motion, can be fixed with plug-ins. Others, like the horizontal bands caused by flashes of light, are much harder to fix unless you manually replace the image data with information from a nearby frame. If you ran a frame rate conversion process like Twixtor on your footage, this is also the point at which to replace any frames that interpolated poorly with frames from the original source files. These processes are all very labor intensive and require quite a bit of fine-tuning to perfect your image. As with any step in the process, consider your available resources and carefully prioritize the issues you want to fix.
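When swapping a badly interpolated frame for an original, the only arithmetic involved is mapping the frame’s position in the 24p render back to the nearest 30p source frame (a hedged sketch; the interpolated frame may actually blend two sources, so the nearest frame is a starting point rather than an exact match):

```python
def nearest_source_frame(out_frame, src_fps=30.0, dst_fps=24.0):
    """Map a frame index in the converted 24p clip back to the closest
    frame index in the original 30p source."""
    return round(out_frame * src_fps / dst_fps)

# e.g. frame 100 of the 24p render sits closest to source frame 125
print(nearest_source_frame(100))   # 125
```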
The Bottom Line
Once you have finished fixing any defects in the footage, the resulting files should be no different from those of any other workflow. From there, proceed to visual effects, color correction, tape layback, web encoding, or disc authoring, just as you would with a project from any other acquisition source.
Most of the things that are key to an efficient DSLR-based workflow take place at the beginning of the process. Once you are off to a proper start, the subsequent steps should come together the same way as any other tapeless project.
Hopefully the tips above provide a solid overview of the potential pitfalls, along with solutions to stay one step ahead. You can find more detailed information at HD4PC, which I continue to update as new developments are released.
Schedule 1-on-1 Video Call with Shane Hurlbut, ASC
Looking for mentorship in the film industry? Schedule a 1-on-1 meeting with Shane Hurlbut, ASC today! This is where you can get expert advice from an industry professional on your career or a particular project.
About Filmmakers Academy Cinematographer Mentor Shane Hurlbut, ASC
Director of photography Shane Hurlbut, ASC works at the forefront of cinema. He’s a storyteller, innovator, and discerning collaborator, who brings more than three decades of experience to his art. He is a member of the American Society of Cinematographers, the International Cinematographers Guild/Local 600, and The Academy of Motion Picture Arts and Sciences.
Hurlbut frequently joins forces with great directors: McG’s Netflix Rim of the World and The Babysitter, plus Warner Bros. We Are Marshall and Terminator: Salvation; Scott Waugh’s Need for Speed and Act of Valor; and Gabriele Muccino’s There Is No Place Like Home and Fathers and Daughters. His additional film credits include Semi-Pro; The Greatest Game Ever Played; Into the Blue; Mr. 3000; Drumline; 11:14, which earned Hurlbut a DVDX nomination; and The Skulls. Notably, his television credits include the first season of AMC’s Into the Badlands.
With the advent of 23.98 and 29.97 on the 5D, and the release of the (Glue Tools) plugin for FCP acquisition of 5D files (by assigning a source number / timecode / XML info to clips), it seems that there is a viable alternative workflow for professional FCP editors.
Tom Daigon, can you send Mike McCarthy an email about this? I would love to pass that on. Thanks.
Certainly. Consider it done. By the way… kudos on your 3-minute piece. Inspiring.
Tom Daigon, thank you so much for your kind words.
No doubt the new FCP workflow plugin brings a lot of new options to the table for FCP users. I haven’t investigated them much since we are a PC-based Avid and Premiere shop. Currently I haven’t found any functionality I need that I can only do in FCP, so that application has not made it into our standard workflow. While the workflow I present is an efficient one that works well for our movie, there are of course many other options. It is just a matter of weighing the pros and cons, along with budget and existing workflow resources.
Thanks, Mike, for the great article. I’ve got to say that as a still photographer just entering this arena, the editing and post process seems daunting, to say the least.
Right now, I’m looking to buy a computer system to handle this kind of workflow. I’m considering the iMac 2.8GHz Quad-Core Intel Core i7. Would this be appropriate? It would be informative to know what the specs of your computer system are as well.
Thanks again for the great article.
There is no “right” setup for this. It totally depends on the needs of the project and the resources available. My “computer system” is a network of four or five systems, with two Avid editors, an AE Twixtor renderer, and a Premiere conform system – and then there are the VFX systems. On a smaller project it could all be done on one or two systems if needed. A Core i7 is a powerful system, but a Mac is rarely a budget-conscious move. Also, the new CUDA-accelerated features in CS5 will benefit from a better GPU than the iMac offers. That’s not required, but it makes a definite, noticeable improvement.
My recommended starting point would be a Core i7 desktop PC with a GTX 285 or Quadro 3800 GPU. For full time professional work, an HP Z800 is definitely worth looking into.
Mike, is there any quality difference converting the H.264 into Cineform .mov or ProRes for an FCP workflow?
Shane, do you still shoot 30fps and then convert it to 24 using Twixtor, even though the new 24p firmware is out?
jorn, for post reasons and delivery dates on my commercials I am shooting 24p. Still untested waters, but seeing my way through it.
In general, Cineform has always had a lead over ProRes in terms of image quality, but I have yet to do an exhaustive multi-generation side-by-side comparison.
I’d be interested to see an honest comparison of Cineform vs. ProRes vs. ProRes LT on an average-ish desktop editing setup at a variety of workflow stages. I’ve always strongly felt that Cineform’s 4:2:2 space is an advantage for many types of work, so I’m surprised to see so many people going for 4:2:0 ProRes LT. Yep, I know that the Canon H.264 is 4:2:0, but IMO the intelligent algorithm Cineform uses to upscale the chroma works a small miracle: the difference when keying, for example, will knock your socks off. Why throw that away? It’s not like you have to archive the intermediate; just transcode, edit, and toss it. Costs you nothing.
FWIW, I edit Cineform 4:2:2 (NeoScene) on a quad-core 2.66GHz / 8GB / nothing-special Nvidia GeForce PC that I picked up at Best Buy for $600. I get flawless real-time editing in Premiere CS4 for simple edits and colour correction, no dropped frames. I’d describe my system as the bare minimum though; just running a browser that’s stealing a bit of RAM is the last straw for my setup.
Hello again, Shane:
This was my first opportunity to use the Canon 5D with the new camera software… the new log and capture software… and my new Cineslider from Kessler Crane. It started as a test of the equipment, and being an editor for 30 years, I just got carried away with the fun!
Shane,
How have you guys gotten around the motion strobing of 24P footage in the new firmware and the 7D?
Gordon Segrove, I have not had any problems with it. It comes down to operating and knowing how fast you can pan. You have the same limitations with film. We don’t use any Foundry plug-ins. We just shoot the camera the way I would use a film camera and it responds beautifully. I truly don’t get all jacked up with the tech stuff. If the story is there and you are engaged, then I could shoot it on MiniDV. It doesn’t matter. So if the 5D is the right tool to get the most out of the story, which it was in “The Last 3 Minutes,” then that is what I use.
Thanks, Shane, for your reply.
I only ask as we tested a 7D at 24P and 25P at a rental house the other day with some Cooke S4s for a job, and when played back on a 1080 professional monitor the footage strobes terribly with even the slightest movement in frame (manual mode, 25P, 1/50, 320 ISO). We looked at it on a monitor via the camera’s HDMI output through an HDMI-SDI converter, and also on two computers from the hard drive. The footage looks like no 24/25P footage I’ve ever seen, as if the motion blur is missing, or as if we had shot at a reduced shutter angle (which we didn’t). We can’t work out if this is just a viewing problem or if, when filmed out, it will look fine. 30P of course looks more fluid.
I’m wondering if shooting 30P and transcoding to 24P is in fact a better way to do it. That’s why I ask if you’ve noticed a difference in your footage since the native 24P setting became available.
Gordon Segrove, that sounds fishy. Was the sharpening tool on? Was the auto lighting optimizer on? What picture style were you shooting in? Did you have your camera set in movie mode? Sorry, but there are so many things that you have to turn off to get this baby to perform.
The camera was set in movie mode at 25P, 1/50th, Neutral picture style with sharpening and saturation all the way down, auto lighting optimizer off, highlight tone priority off, etc., etc.
We used a SanDisk Extreme Pro UDMA card to ensure no frames were dropped. We tested two cameras, and I’ve looked at a third since. (We also had an Arri D21 in the room.)
We looked at the footage at a post-production house in London as well, where we even turned it into a 25P image sequence and tried different transcodings. We all agreed the footage looks just like “The Pacific” or “Saving Private Ryan” (reduced shutter angle, strobing motion).
We panned at all speeds across a scene on a head, super slow, slow, and fast. And then shot some handheld work.
In a nutshell, panning across a room with these cameras was unwatchable on a large screen at any panning speed due to the lack of adequate motion blur (even adhering to panning speed rules that we use on film).
A few more phone calls and everyone seems to confirm that’s what their footage looks like as well. A bit “lively” or “stroby.”
At 30P it’s not as obvious, but still there.
I own an entire kit myself with ZE primes and RedRock gear and have sadly yet to find a solution to the problem. It’s particularly sad, as a static frame from the camera on a 1080P monitor next to that of an Arri D21 was surprisingly good!
Gordon Segrove, something still seems fishy to me. I have 8 of these cameras, and what you are describing sounds like it is in auto mode. Do you see the aperture adjusting when you pan into different lighting scenarios? I think that camera has a problem. Have you tried another 7D, maybe a friend’s or one from a rental house in town? It just does not seem right. I feel like there is something wrong with that camera. Is all the firmware up to date? They just issued a new version about a month ago for the 7D to clean up some of the rolling shutter artifacts and the overheating. I would try another camera.
Shane – love your site, it’s been a fountain of knowledge for me. One question – I have Adobe Media Encoder, and I want to use it to convert files ready for Premiere. However, the only formats it can convert to are FLV and F4V. Any ideas on why I can only use these two formats? How do you get it to convert to AVI?
Thanks again,
Rich
Rich Savage, thank you so much for those kind words. I would have to direct you to jacob@banditobrothers.com for any Adobe software information. He is the Adobe guru. Give him a shout and I know he can help you.
Oh, I think I might know the answer, actually. I’m using the Media Encoder that was bundled with Adobe Flash, so it seems the formats are specific to Flash only.
Hi Shane,
I understand you’re on a PC workflow, and though you mentioned how the Mac/FCP/ProRes combo is nice too since it’s 10-bit, I found that the Mac workflow is pretty painful in terms of color accuracy. Going from H.264 to ProRes into FCP to output is a very hairy process, and there don’t seem to be any good solutions for keeping colors accurate.
This post is a good illustration of the QuickTime gamma nightmare:
http://motionlifemediablog.wordpress.com/2010/09/02/5dtorgb-color-tests
Have you had any experience with these gamma/color issues, even on the PC workflow? If you have any tips on how Mac-based post production handles conversion while keeping colors accurate, I’d love to hear about it.
Thanks for all your hard work!
Conrad
Conrad Chu, Hi Conrad, I wanted to get the best information for you. I emailed Mike Kanfer at Adobe, who is an Academy Award-winning VFX supervisor and digital production specialist. This is what he had to say:
Yes, the color and gamma inconsistency issues with QuickTime are definitely frustrating, both on Mac and PC. To their credit, Rarevision has made great strides to help users who are pulling their hair out trying to unravel the mystery of it all. However, please know that the way Adobe reads the H.264s from Canon completely bypasses QuickTime, so it’s a non-issue in our CS5 workflow whether you do it on a Mac or a PC with Windows. My post from the 28th of Sept. documents the imaging quality advantages in detail.
Hi Mr. Mike McCarthy, I’m a young filmmaker from Nigeria. I’ve been following Mr. Shane’s work for a while now, and it has really inspired and improved my cinematography skills. Mr. McCarthy, I’d like for you to do a comprehensive talk on Cineform RAW: how to get the codec, how to use it, and how to use it to expand color space. Thanks – Shelikia
The Cineform codec can be acquired directly from Cineform at cineform.com, in the form of the product NeoHD for about $500. You can contact me directly at mikemccarthy@hd4pc.com if you want further details.
Mike McCarthy, thanks so much Mike.
Shane,
Great blog. Thanks for taking the time to share your battle-hardened methods. (You’re not an engineer who shoots video on the weekends.) My comment is about the importance of story versus technical issues that may not be visible in the final show, especially if the work isn’t headed for the big theatrical screen. I appreciate Mike’s guidance and attention to detail. He probably has made many D.P.s look great. I follow the techno discussions, but, as you’ve said, the story is what rivets the audience.

I’ve approached most of my work from more of a documentary camera style, i.e. a style that captures (or attempts to capture) action that is unstaged or feels uncontrived (not “Hollywood-ized”), and thus has raw power because it feels REAL. So, I’m speaking up for the grainy, battlefield camera shots of an Arri 16mm SR with a 5.9mm lens on the end of a boom pole screwed into the top of the camera, versus the gorgeous Super Panavision 70mm color camera sliding along a perfectly smooth dolly track that the great director David Lean and his D.P., Freddie Young, might set up. (Although, if they had done a battle scene, they would probably have shot with something that resembled the 16mm battlefield Arri. That’s the sign of greatness, matching the look to the story.)

In conclusion, I’m just voting for the following priorities: first, have a great story; second, have a great storyboard that will engage the audience; and third, shoot with the most professional camera setup and crew the budget permits. Of course, we should know our equipment options, but foremost get the shot that’ll make the audience say “wow.” That’s why I like the Hurlblog… you tell us how to get the shot with battle-hardened methods and tips that matter, not minutiae. You’re about getting the shot. You’re not on the bandwagon that tries to convince young D.P.s that technological accessories and software will compensate for a bad story and a bad storyboard.
Fletcher Murray, in one word AMEN!!!! Thank you for your comments and support. You nailed it on the head. Story, story, story.