Webinars

Event-Based Vision: Bringing More Performance and Efficiency to Improve Machine Vision Applications

Originally Recorded February 01, 2024 | 12 PM - 1 PM ET

Watch this Webinar

ABOUT THIS WEBINAR

The growing complexity and challenging operating conditions of industrial machine vision require innovative approaches to capturing and processing the visual information a system needs to meet its objectives. Such systems include high-speed inspection cameras that must deliver fast motion capture, robotic grinding or welding systems that need high-dynamic-range scene capture, and smart edge IoT cameras that require speed and accuracy as well as robust object tracking capabilities.

But for the most part, such systems have relied on an increasingly inadequate and incomplete vision method: frame-based vision. This method struggles to address many important challenges, such as capturing data in a high-speed, continuous fashion in dynamic scenes; working effectively in challenging lighting conditions; and functioning in operating environments where compute and power resources are limited.

Most important is a lack of completeness: frame-based vision was never meant to address the challenges introduced by today’s industrial use cases. In frame-based image capture, an entire image (i.e., the light intensity at each pixel) is recorded at a pre-assigned interval, known as the frame rate. While this works well for representing the ‘real world’ when displayed on a screen, recording the entire image at every time increment oversamples all the parts of the image that have not changed.

Event-Based Vision introduces a new approach that, like our eyes and brains, uses independent receptors collecting all the essential information, and nothing else. With 10-1000x less data generated, >120dB dynamic range and microsecond time resolution (over 10k images per second equivalent), event-based vision opens vast new potential in areas such as industrial automation, robotics, security and surveillance, mobile, IoT and AR/VR.
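To make the contrast with frame-based capture concrete, here is a minimal, illustrative sketch of the event-camera principle (not Prophesee's actual pipeline; the threshold value and function names are assumptions): each pixel fires independently, emitting an (x, y, polarity, timestamp) event only when its log-intensity change exceeds a contrast threshold, so unchanged pixels generate no data at all.

```python
import numpy as np

# Illustrative event-generation model (assumed names and threshold, not a
# vendor API): a pixel emits an event when its log-intensity change since
# the previous snapshot exceeds a contrast threshold.
CONTRAST_THRESHOLD = 0.2

def generate_events(prev_log, curr_log, t_us):
    """Compare two log-intensity snapshots and emit (x, y, polarity, t_us) events."""
    diff = curr_log - prev_log
    ys, xs = np.nonzero(np.abs(diff) >= CONTRAST_THRESHOLD)
    polarities = np.sign(diff[ys, xs]).astype(int)  # +1 brighter, -1 darker
    return [(int(x), int(y), int(p), t_us) for x, y, p in zip(xs, ys, polarities)]

# A static scene produces no events; only the pixel that changed fires.
prev = np.zeros((4, 4))
curr = prev.copy()
curr[1, 2] = 0.5  # one pixel brightens
events = generate_events(prev, curr, t_us=1000)
```

Because only changing pixels report, a mostly static scene yields a tiny, sparse data stream with per-event microsecond timestamps, which is the source of the data-rate and latency advantages described above.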

Over the past several years, event cameras that leverage neuromorphic techniques have gained a strong foothold in machine vision applications in industrial automation, robotics, automotive and other areas. These are all applications where better performance in dynamic scenes, capturing fast-moving subjects and operating in low-light conditions are critical. In addition to benefits in terms of power consumption, data efficiency and dynamic range, event-based vision addresses a fundamental limitation of traditional camera techniques: how light is captured.

Among the advantages of event cameras are:

  • Blur-free images as there is no exposure time
  • High-speed data capture: 10K fps resolution equivalent
  • High resolution of up to 1280x720px
  • High dynamic range (>120dB)
  • Shutter-free operation: no global or rolling shutter needed

Use cases discussed include:

  • Object tracking that leverages the low data rate and sparse information provided by event-based sensors to track objects with low compute power. Well suited for pick and place, robot guidance and trajectory monitoring;
  • Fluid monitoring that uses event-based optical flows to perform fluid monitoring in real time and analyze unwanted dynamics due to residue build-up, spot contaminants or unwanted air or gas bubbles. Well-suited for continuous liquid flow monitoring in food and beverage, oil and gas, and biological processes;
  • Vibration monitoring that enables the monitoring of vibration frequencies continuously, remotely, with pixel precision by tracking the temporal evolution of every pixel in a scene. For each event, the pixel coordinates, the polarity of the change and the exact timestamp are recorded, thus providing a global, continuous understanding of vibration patterns. Used for predictive maintenance tasks such as motion monitoring, vibration monitoring and frequency analysis;
  • Particle and object size monitoring: Event cameras can better control, count and measure the size of objects moving at very high speed in a channel or on a conveyor. Implemented in systems for high-speed counting, batch homogeneity and gauging.
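The vibration-monitoring idea above can be sketched in a few lines. This is a hedged illustration of the principle, not Prophesee's Metavision implementation: a surface vibrating at f Hz makes each affected pixel brighten and darken once per cycle, so the positive-polarity events at that pixel recur roughly every 1/f seconds, and the median inter-event interval gives a robust frequency estimate. The function name and the synthetic event stream are assumptions.

```python
import numpy as np

def pixel_frequency_hz(timestamps_us, polarities):
    """Estimate the vibration frequency at one pixel from its event stream
    (illustrative: one positive event per oscillation cycle is assumed)."""
    t_pos = np.asarray(timestamps_us)[np.asarray(polarities) > 0]
    if len(t_pos) < 2:
        return 0.0
    period_us = np.median(np.diff(t_pos))  # robust inter-cycle interval
    return 1e6 / period_us

# Synthetic 50 Hz vibration: one +1 and one -1 event per 20 ms cycle.
ts, pols = [], []
for cycle in range(10):
    ts += [cycle * 20_000, cycle * 20_000 + 10_000]
    pols += [1, -1]
freq = pixel_frequency_hz(ts, pols)  # ~50.0 Hz
```

Running this per pixel over the whole array yields the continuous, pixel-precise frequency map described above, without any mechanical contact with the monitored equipment.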

Viewers will learn about development tools, algorithms and open-source resources that can accelerate the understanding, experimentation and implementation of embedded event-based vision capabilities in machine vision systems.

Prophesee is the inventor of the world’s most advanced neuromorphic vision systems. Composed of patented Metavision® sensors and algorithms, its event-based vision technology enables machines to process visual information more efficiently, more thoroughly and in challenging conditions.

Key Takeaways:

  • Introduction to the concept of event-based vision and how it compares to traditional frame-based methods
  • Understanding of the key benefits of event-based vision in terms of power, performance, dynamic range and integration with other data acquisition technologies
  • Overview of development tools and methods that can be used to integrate event-based vision into MV systems
  • Examples of common use cases that leverage the advantages of event-based vision in industrial applications



Exclusive Sponsor

Prophesee is the inventor of the world’s most advanced neuromorphic vision systems.

The company developed a breakthrough Event-Based Vision approach to machine vision. This new vision category allows for significant reductions of power, latency and data processing requirements to reveal what was invisible to traditional frame-based sensors until now. Prophesee’s patented Metavision® sensors and algorithms mimic how the human eye and brain work to dramatically improve efficiency in areas such as autonomous vehicles, industrial automation, IoT, mobile and AR/VR.

Prophesee is based in Paris, with local offices in Grenoble, Shanghai, Tokyo and Silicon Valley. The company is driven by a team of more than 120 visionary engineers, holds more than 50 international patents and is backed by leading international equity and corporate investors, including 360 Capital Partners, European Investment Bank, iBionext, Inno-Chip, Intel Capital, Renault Group, Robert Bosch Venture Capital, Sinovation, Supernova Invest and Xiaomi.

Learn more: www.prophesee.ai

PRESENTER

Gareth Powell, Product Marketing Director, Prophesee

Gareth is an industry veteran with more than 35 years of experience in semiconductors. He specialized in CMOS imaging after the acquisition of VLSI Vision (an early pioneer in CMOS sensing technology) by ST Micro, working first as an applications engineer and later as a technical marketing/business manager. Gareth joined (Teledyne-)e2v in 2006 as strategic marketing manager, tasked with building a CMOS imaging business from the ground up targeting professional and industrial markets. In early 2022, he joined Prophesee as Product Marketing Director to help bring its pioneering event-based sensing technology and products to market. After graduating with a degree in Electrical and Electronic Engineering from Swansea University in the mid-80s, Gareth relocated from the UK to France to join ST Micro in Grenoble, where he has remained ever since with his wife and three children. Gareth is also passionate about music and is a performing musician and recording studio owner in his spare time.

This webinar is filed under:

Robotics
Vision
Motion Control & Motors
AI