When Special Effects Meet Machine Vision...All Missions are Possible
There’s a common excuse in the world of moviemaking: “The camera can only do so much.” In a world of compromises, a man named Joe Cicio aims to change that. A camera operator and problem solver by trade, he’s been involved in some 40 movies. If you’re the kind to stick around for the credits, you’ve seen him on the crew list of the first two Iron Man films, Black Swan and a number of other CGI-filled blockbusters. He’s worked in film since 1985, witnessing “all hell break loose” with the arrival of digital imaging.
Hell has continued to break loose in the cinema camera industry ever since, with innovations challenging excuses, and comfortable incumbents often struggling against new entrants. A perfect example is the introduction of the RED camera by Jim Jannard, the founder of Oakley, a sunglasses and sports equipment company. The RED camera helped make 4K part of the Hollywood vernacular and drove the industry leaders, who built their businesses on celluloid, to respond in kind.
Disrupting the Disrupters
Joe Cicio’s recent work could disrupt the industry again. Even the new generation of digital cinema cameras is not perfect. Though they shoot in 4K, many are large and heavy, and they are all expensive, often costing $50,000 each. They also tend to have rolling shutters. These qualities make modern digital cinema cameras awful for the kind of work that Cicio wants to do, such as strapping one to a high-performance motorcycle and riding at 100 miles an hour.
So the excuses flow. “The camera’s too heavy.” “It’s too expensive to risk on this shot.” “It’s not made for that.” “We can’t make it do what we want.” Or, in the case of the GoPro, “We don’t really have a choice, even if the picture quality isn’t there.”
Overcoming these barriers and excuses requires a bigger imagination and some serious problem solving. “Digital made so much sense. But we suddenly had to get comfortable working with new technologies to do things that just weren’t possible with film. That became part of our process.” So Cicio and his team applied their engineering know-how and cinematic imagination to reevaluate what equipment they could use. High performance, low weight, (relatively) low cost, and practical to work with? That sounds like machine vision.
Lights, Camera, Opportunity
That’s exactly where Cicio turned. With the help of Al Meyer Jr. (engineer/camera designer) and Matt Whalen (video engineer), he created a system that uses Teledyne DALSA area scan cameras, which can capture even fast-moving objects without distortion, thanks to their global shutter. He chose the Falcon2 for its high pixel count. At twelve megapixels, it actually offers higher resolution than the movie footage needs. This means that when paired with an anamorphic lens, Cicio gets a wide-screen aspect ratio, and even if there’s action happening outside of the frame lines, the camera’s sensor still picks it up. Back at the lab, the production team can tilt the frame up or down to recover that out-of-frame footage. This performance, along with the low weight and cost, allows Cicio to start putting cameras in new places: zip lines, cranes, and those very fast motorcycles.
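The reframing headroom described above can be sketched with some simple arithmetic. The sensor dimensions below match the Falcon2 12M’s published 4096 × 3072 resolution, and the extracted frame height (2160 lines, as for a UHD delivery frame) is an illustrative assumption, not a detail from the article:

```python
# Rough illustration of the vertical reframing headroom.
# SENSOR_W x SENSOR_H: assumed Falcon2 12M resolution (4096 x 3072).
# FRAME_H: assumed height of the extracted delivery frame (UHD = 2160 lines).
SENSOR_W, SENSOR_H = 4096, 3072
FRAME_H = 2160

headroom = SENSOR_H - FRAME_H   # spare lines above plus below the frame
tilt_range = headroom / 2       # lines of "tilt" available in each direction
print(headroom, tilt_range)     # 912 spare lines, 456 each way
```

Under these assumptions, the team has over 900 lines of spare image to play with when reframing in post.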
New Solutions Create New Challenges
Using machine vision cameras for these kinds of applications also presents unique challenges. Although the camera can deliver image data over either Camera Link or GigE, neither will help in the situations Cicio foresees. It’s obviously not practical to attach a computer tower to a motorcycle traveling at high speed just to capture the data.
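A back-of-the-envelope data rate shows the scale of the problem. The resolution and bit depth below are assumptions for illustration (a 4096 × 3072 sensor at 8 bits per pixel), not figures quoted in the article:

```python
# Approximate raw data rate of an assumed 12-megapixel sensor at cinema
# frame rates, to show why a plain gigabit link and a tethered capture
# PC are impractical here.
WIDTH, HEIGHT = 4096, 3072   # assumed sensor resolution
FPS = 24                     # cinema frame rate
BITS_PER_PIXEL = 8           # assumed raw bit depth

bits_per_second = WIDTH * HEIGHT * FPS * BITS_PER_PIXEL
gbps = bits_per_second / 1e9
print(f"{gbps:.2f} Gb/s")    # ~2.42 Gb/s, already beyond GigE's 1 Gb/s
```

Even at this conservative bit depth, the raw stream exceeds what a single gigabit Ethernet link carries, which is why a dedicated on-camera recording stack makes sense.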
“We’re having to develop back-end solutions,” says Cicio. With two Camera Link ports coming from the camera, the raw data is transferred to three boards stacked on top of each other and attached to the back of the camera. Greg Johnson, the HD and UHD wireless systems designer, created a wireless system to transfer the data so it can be previewed as it’s recorded. The system allows the crew to preview 1080p footage at up to 10 miles downrange with a one frame delay!
“We send the files to the studio, they look at what we record and they say ‘good here, not so good here.’ We’re making discoveries as we go.” Joe doesn’t expect to know everything about the camera at first. The process starts with the basics and progresses from there. Sticking to what you know may get the job done, but reaching the kind of quality Joe strives for takes baby steps.
The above test was performed at a track north of Los Angeles. The camera was running at 24 fps with a 1/50-second shutter speed. The bikes were going at “a fairly decent clip” (that is Cicio-speak for straightaway speeds of around 240 kilometers an hour). The deep f-stop, combined with debris from the track, made artifacts on the IR filter visible in the image. To minimize this, Cicio and his team are already working on a new IR filter mounted further away from the image plane.
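The settings quoted above imply a surprising amount of subject motion per frame. This is straightforward arithmetic from the article’s figures (1/50 s shutter, roughly 240 km/h), not a measurement from the shoot:

```python
# How far a bike travels during a single exposure at the quoted settings.
speed_kmh = 240      # straightaway speed from the article
shutter_s = 1 / 50   # shutter speed from the article

speed_ms = speed_kmh * 1000 / 3600   # ~66.7 m/s
travel_m = speed_ms * shutter_s      # distance covered while the shutter is open
print(f"{travel_m:.2f} m")           # ~1.33 m of travel per exposure
```

In other words, the subject moves well over a meter while the shutter is open, which is exactly the regime where a global shutter matters: a rolling shutter would expose different sensor rows at different moments of that motion.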
In the factory, machine vision cameras are perfectly suited to passing on data while sitting still, aimed at one light-controlled area. When the camera itself is moving, and fast, new problems arise. For example, while the camera was able to capture incredible detail, it made even the slightest flaw glaringly obvious. The team had to tune a 4K lens to cover the CMOS sensor, letting the camera operator get that detailed shot without seeing the aberrations. The system also required a sophisticated roll axis system (designed by Bob Vogt, a mechanical engineer) to keep the camera image steady even as its mount rotated and turned. In the end, it’s about pooling expertise to solve these novel challenges: “It’s really just figuring out how to train the camera to do the type of photography we’re trying to do,” says Cicio.