Machine Vision Enters Augmented and Virtual Reality Environments

POSTED 10/05/2018  | By: Winn Hardin, Contributing Editor

In standard lenses, the entrance pupil of the imaging system (the camera's aperture) is positioned deep within the lens. In this design, the aperture cannot be located close enough to the viewing position within an AR/VR device to replicate the human eye position, and the imaging system is unable to capture the full field of view of the display (it cannot “see through” the AR/VR device entrance aperture at this distance). Image courtesy Radiant Vision Systems.

Augmented reality and virtual reality (AR/VR) applications are far-reaching. Workers assembling wind turbines at a GE Renewable Energy factory wear smart glasses that allow them to pull up digitized directions overlaid on the assets in front of them. Car designers at Ford are using Microsoft’s HoloLens mixed-reality headsets to preview designs projected onto an actual car or clay model. Apple’s new iOS 12 offers an AR app called Measure, which allows users to measure objects with the phone’s camera.

Although the AR/VR market is ripe for the picking, machine vision is starting with a small harvest. But like many disruptive technologies, such as deep learning, AR/VR represents a growing bounty for the vision and imaging industry.

SLAM Dunk
One important technology at the intersection of vision and AR is simultaneous localization and mapping (SLAM). SLAM uses camera and sensor data to build a map of an unknown environment from tracked feature points while simultaneously estimating the device’s position within it, allowing AR applications to recognize and anchor content to 3D objects. Augmented Pixels, a computer vision research and development company, has created SLAM Inside-Out Tracking. The technology performs real-time tracking, mapping, and surface reconstruction of the environment, including obstacle avoidance, with submillimeter accuracy and low latency.
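
Augmented Pixels’ library is proprietary, but the feature-point core of monocular SLAM can be sketched in a few lines. The snippet below is a minimal, illustrative two-frame motion estimate using OpenCV; the camera intrinsic matrix K and the grayscale frames are assumed inputs, and a real pipeline would add map building and loop closure on top.

```python
# Minimal sketch of the feature-point tracking at the core of monocular
# SLAM (illustrative only -- not Augmented Pixels' implementation).
import cv2
import numpy as np

def estimate_motion(prev_frame, curr_frame, K):
    """Estimate the relative camera pose (R, t) between two grayscale frames."""
    orb = cv2.ORB_create(nfeatures=1000)
    kp1, des1 = orb.detectAndCompute(prev_frame, None)
    kp2, des2 = orb.detectAndCompute(curr_frame, None)

    # Match binary ORB descriptors by Hamming distance.
    matcher = cv2.BFMatcher(cv2.NORM_HAMMING, crossCheck=True)
    matches = matcher.match(des1, des2)

    pts1 = np.float32([kp1[m.queryIdx].pt for m in matches])
    pts2 = np.float32([kp2[m.trainIdx].pt for m in matches])

    # The essential matrix encodes the relative rotation and unit-scale
    # translation between views; RANSAC rejects mismatched feature pairs.
    E, mask = cv2.findEssentialMat(pts1, pts2, K, method=cv2.RANSAC)
    _, R, t, _ = cv2.recoverPose(E, pts1, pts2, K, mask=mask)
    return R, t
```

Chaining these two-frame estimates and triangulating the matched features yields the map; loop closure then corrects the drift that accumulates along the way.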

SLAM Inside-Out Tracking comes in two sensor configurations, one built around a mono camera and the other around a stereo camera. Both pair the camera with a gyroscope and an accelerometer and run on a low-power CPU. Augmented Pixels’ SLAM library lets clients generate maps locally with minimal hardware required.
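
The company does not publish how it fuses these sensors, but a complementary filter is the classic low-cost way to combine a drifting-but-smooth gyroscope with a noisy-but-absolute accelerometer. A single-axis sketch, with assumed sensor inputs:

```python
import math

def fuse_tilt(angle, gyro_rate, accel_x, accel_z, dt, alpha=0.98):
    """One complementary-filter step for a single tilt axis (radians)."""
    gyro_angle = angle + gyro_rate * dt          # integrate angular rate (drifts)
    accel_angle = math.atan2(accel_x, accel_z)   # tilt from gravity (noisy)
    # Trust the gyro short-term, the accelerometer long-term.
    return alpha * gyro_angle + (1.0 - alpha) * accel_angle
```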

“Our clients have a mapping structure where they can reuse the maps and integrate some content into them,” says Vit Goncharuk, CEO of Augmented Pixels. “So you can have three or four people wearing AR glasses, all playing together in the same map, with everyone’s position estimated in real time.”

In 2017 Augmented Pixels partnered with LG Electronics to create a smart, compact 3D camera module with SLAM for the manufacturer’s autonomous vacuum cleaner. The module performs real-time calculation of 6DoF (six degrees of freedom) pose and builds an onboard point cloud, a collection of data points representing a 3D shape or environment. “We also see a market for smart toys and for warehouse robots programmed to drive an item along a path from point A to point B,” Goncharuk says.
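
Concretely, a 6DoF pose is just six numbers and a point cloud just an N×3 array, as the hypothetical structures below illustrate:

```python
from dataclasses import dataclass
import numpy as np

@dataclass
class Pose6DoF:
    """Six degrees of freedom: three for position, three for orientation."""
    x: float
    y: float
    z: float      # translation, in meters
    roll: float
    pitch: float
    yaw: float    # rotation, in radians

# A point cloud is an (N, 3) array of XYZ samples of the scene; random
# points stand in here for real depth measurements.
cloud = np.random.rand(10_000, 3)
```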

Because the SLAM system requires only a mono camera and a mobile app to run, costs are lower, and flexibility and efficiency higher, than for competing systems from the likes of Google, Apple, and Facebook, according to Goncharuk.

Radiant’s AR/VR lens positions the imaging system’s entrance pupil (camera aperture) at the front of the lens, enabling it to be positioned at the same location as the human eye within the AR/VR device. This allows the imaging system to capture the full field of view of the AR/VR display through the headset. This is especially critical when AR/VR displays create a wide field of view that is meant to be visible to an extreme angle in every direction. Image courtesy Radiant Vision Systems.

Mimicking the Human Eye
Radiant Vision Systems, which develops lighting and display measurement technology, has created a lens that measures AR/VR display quality from the headset to ensure the intended user experience. “A standard, off-the-shelf display measurement system can be used to measure the monitor of a laptop the way a human would see it, for example, by imaging the entire display from a half meter to a meter away,” says Doug Kreysar, Chief Solutions Officer for Radiant Vision Systems. “But with AR/VR, the ‘pupil’ of the measurement system has to replicate the human iris, which is positioned only millimeters from the display.”

Typical lenses used with display measurement systems prevent positioning the system at the location of the human eye, because the “pupil” (aperture) of the system sits deep inside the lens, occluding the display when capturing images through the headset. To solve this, Radiant Vision Systems placed the aperture at the front of its new AR/VR lens. This allows the imaging system’s entrance pupil to be positioned within AR/VR headsets at the same location as the human eye, viewing near-eye displays in full, as a user would see them.
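
The geometry behind this is simple to check: if the camera’s entrance pupil sits a distance d behind a headset opening of radius r, everything beyond a half-angle of arctan(r/d) is cut off. The numbers below are assumed for illustration only:

```python
import math

def visible_half_angle_deg(aperture_radius_mm, pupil_recess_mm):
    """Largest half-angle visible through a circular opening when the
    entrance pupil sits the given distance behind it."""
    return math.degrees(math.atan2(aperture_radius_mm, pupil_recess_mm))

# A 10 mm opening viewed from a pupil recessed 50 mm inside a standard
# lens sees only ~11 degrees to either side of center...
print(visible_half_angle_deg(10, 50))  # ~11.3
# ...while a pupil brought within a few millimeters sees ~63 degrees.
print(visible_half_angle_deg(10, 5))   # ~63.4
```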

Radiant Vision Systems had to overcome another optical challenge in its lens design: capturing a wide field of view, up to 120° horizontal, that can be calibrated to yield spatially accurate data for precise adjustment of the display design. “We can’t just give a pretty picture,” Kreysar says. “We have to give absolute quantitative values.”
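
Getting “absolute quantitative values” means every sensor pixel must map to a known viewing angle. A minimal sketch of that mapping, assuming an equidistant (f-theta) projection and an illustrative 4,000-pixel-wide sensor:

```python
import numpy as np

def pixel_to_angle_deg(px, width_px=4000, hfov_deg=120.0):
    """Map a horizontal pixel coordinate to a viewing angle, assuming an
    equidistant (f-theta) model: angle varies linearly across the image."""
    center = (width_px - 1) / 2.0
    return (px - center) * (hfov_deg / width_px)

# At this resolution each pixel spans 120/4000 = 0.03 degrees -- the
# angular sampling behind per-angle luminance and color measurements.
print(pixel_to_angle_deg(np.array([0, 2000, 3999])))  # ~[-60.0, 0.015, 60.0]
```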

In addition to consumer electronics, Kreysar sees growing use of AR/VR in military, medical, and industrial applications. “We’re going to see an explosive growth in AR, VR, and mixed reality because it’s so useful to have information displayed within your field of view as you’re doing work,” he says. “And with more and more immersive experiences in the consumer electronics market, it’s hard not to be excited about the technology.”

Intel’s Project Alloy is a VR headset that detects and avoids physical objects, enabling users to navigate a virtual environment safely. Image courtesy Intel.

A Mixed Bag
Deep-pocketed tech companies are deploying AR/VR initiatives straight from the pages of sci-fi. Intel has used artificial intelligence and Intel RealSense depth-sensing camera technology to create a mixed-reality environment that merges AR and VR. Intel RealSense, which gives computing devices the ability to understand hand gestures and facial expressions, is now being integrated into VR headsets for a solution known as Project Alloy. 

Equipped with spatial and contextual awareness, Project Alloy provides the user free range of motion. This eliminates the need for consoles and controllers to operate the VR headset. Instead of using a controller to grab a virtual object, for example, wearers simply use their hands. Additionally, the headset detects and avoids physical objects, enabling users to navigate a virtual environment safely.

Despite this progress, vision-enabled AR/VR solutions still face obstacles. AR/VR requires not only a camera but also AI, deep learning, and the ability to process enormous data sets. The technology also depends on bulky headsets that often deliver low-resolution imagery and can leave users nauseated. And while VR products such as the Oculus Rift and HTC Vive have gone mainstream, AR headsets are still targeted mostly at developers.

But these challenges will eventually be overcome, according to Matthew Busel, author of “Game Changer: How Augmented Reality Will Transform the World of Sports.” As he puts it, “It is not the AR of today that will be changing the world, but the AR of 10 years from now.”

If AR/VR allows people to see the world in a new way, then machine vision can help them understand it. With solutions such as tracking and mapping software, specialized lenses, and next-generation VR headsets, machine vision is planting its flag in the AR/VR world.