Dealing with 4K data, part 3: Handling 4K connectivity

Gary Adcock

After frightening you about 4K in your post and production future, I left off last time with this prophetic statement: the computer you edit your project on should never be older than the camera you created that project on.

Having been lucky enough to be part of the flotsam at the crest of post and production technology, I have lived by that statement, as painful as that can be at times, both mentally and fiscally. Technology is constantly advancing, and users need to realize that the process is either a slow, evolutionary advance or a revolutionary one, forcing a catastrophic conversion.

Working in 4K is not really new; film has resolved at 4K for 100 years. IMAX, around since the '70s and one of the oldest processes for displaying high-resolution images, is how most people were introduced to the stomach-churning side of higher-quality visual stimulation.

Much like the data deluge we are addressing today, IMAX had to deal with handling weighty 15-perf 70mm film, where the reel for a 30-minute amusement park ride physically weighs over 50 lbs., all to carry IMAX's "credit card"-sized frames.

The heavy lifting now is all data. When pro camera systems can generate nearly 1TB of 4K data for every 5-10 minutes they run, compression is your friend, as long as you understand that the lower the bit rate during capture, the less information you will have and the more work you will face in post.
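To put that 1TB-per-5-to-10-minutes figure in perspective, a little arithmetic shows what data rates it implies and how far a heavily compressed stream stretches the same terabyte. The bit rates below are illustrative round numbers, not any vendor's published specs:

```python
# Back-of-the-envelope: minutes of footage that fit in 1 TB at a
# given camera data rate. Illustrative figures only.

def minutes_per_terabyte(data_rate_mbits_per_sec: float) -> float:
    """Minutes of recording that fit in 1 TB at the given bit rate."""
    terabyte_bits = 1e12 * 8  # 1 TB expressed in bits
    seconds = terabyte_bits / (data_rate_mbits_per_sec * 1e6)
    return seconds / 60

# A raw-leaning stream around 13,333 Mb/s fills 1 TB in roughly
# 10 minutes, matching the figure above:
print(f"{minutes_per_terabyte(13333):.1f} min/TB")   # about 10 minutes

# A heavily compressed 100 Mb/s 4K stream stretches the same
# terabyte to well over 22 hours:
print(f"{minutes_per_terabyte(100):.0f} min/TB")     # about 1,333 minutes
```

Working the numbers backward like this is a quick sanity check when sizing storage for a shoot.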

Understanding the camera systems

Therein lies the dilemma. Affordable 4K cameras compress the hell out of your data, jamming 4K into a 100Mb/s-or-less data stream, so your 4K/UHD signal lands in the same data range DVCProHD did: 8-bit and highly compressed. Yet anyone who has ever worked with compressed REDCODE or ARRIRAW files quickly understands that those camera systems often generate more data than post production can handle efficiently or in a timely manner.

Mezzanine, or intermediate, codecs such as ProRes, DNxHD and CineForm have allowed productions to transform their post process without having to fully return to the "offline/online" workflows of the earliest days of non-linear editing.

Throughput is still the limiting factor. Arrays full of spinning disks offer reliability and affordability, yet they still need the speed of a solid-state interface to maintain a data stream efficient enough for multiple users to work simultaneously.
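The multi-user math is simple enough to sketch. Assuming a hypothetical 700 Mb/s mezzanine-codec stream and a real-world efficiency factor to account for protocol overhead and seek latency (both numbers are assumptions for illustration, not measured figures), you can estimate how many editors a shared link can actually feed:

```python
# Rough estimate of simultaneous playback streams a shared link can
# sustain. The efficiency factor is an assumed real-world derating
# for protocol overhead, seeks, and contention.

def max_simultaneous_streams(link_gbits: float, stream_mbits: float,
                             efficiency: float = 0.8) -> int:
    """Number of whole streams the usable link bandwidth can carry."""
    usable_mbits = link_gbits * 1000 * efficiency
    return int(usable_mbits // stream_mbits)

# Hypothetical 700 Mb/s stream over a 10 Gb/s link:
print(max_simultaneous_streams(10, 700))   # 11 streams at 80% efficiency
```

The point is that the derating matters: the same link on paper carries 14 such streams, but shared storage rarely delivers its rated line speed.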

Still, our limitations are quickly falling away, as long as you consider that every one of the currently available 10Gb/s connectivity solutions, whether Ethernet, Fibre Channel or USB 3.1, will be working at the very limit of its respective technology when handling a 10-bit 3840 x 2160 UHD video signal at 30p.
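The arithmetic behind that claim is worth seeing. Ignoring blanking and protocol overhead, the raw payload of a 10-bit UHD 30p signal comes uncomfortably close to a 10Gb/s pipe:

```python
# Raw video payload in Gb/s, ignoring blanking intervals and
# protocol overhead.

def uncompressed_gbits(width: int, height: int, fps: float,
                       bit_depth: int, samples_per_pixel: int) -> float:
    """Gigabits per second for an uncompressed video stream."""
    bits_per_sec = width * height * fps * bit_depth * samples_per_pixel
    return bits_per_sec / 1e9

# 10-bit 3840 x 2160 at 30p:
rgb   = uncompressed_gbits(3840, 2160, 30, 10, 3)  # full RGB / 4:4:4
ycbcr = uncompressed_gbits(3840, 2160, 30, 10, 2)  # 4:2:2 subsampled
print(f"4:4:4: {rgb:.2f} Gb/s, 4:2:2: {ycbcr:.2f} Gb/s")
```

The 4:4:4 figure lands around 7.5 Gb/s and 4:2:2 around 5 Gb/s before any overhead, so once you add protocol framing and leave headroom for anything else on the link, a 10Gb/s interface really is running near its practical ceiling.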

Hopefully you now better understand my opening statement. In reality, the only difference between computers and cameras nowadays is whether or not you have optics attached.

Next week: Connecting the dots