Neuromorphic Embedded Vision and the Miracle of iCub
iCub is a humanoid robot developed by the Italian Institute of Technology that leverages neuromorphic embedded vision to help it see. It has been adopted by more than 20 labs worldwide. iCub has the appearance of a small child, with 53 motors that move its head, arms, hands, waist, and legs. In addition to seeing, it can hear, and accelerometers and gyroscopes help it sense and control its movement. A capacitive skin system is also being added to measure contact and enable safe grasping strategies.
The embedded vision system designed for iCub is inspired by the biology of the mammalian visual system. It consists of stimulus-driven sensors, a dedicated embedded processor, and an event-based software infrastructure for processing visual stimuli. These components are being integrated with conventional machine vision modules, so that high-resolution, color, frame-based vision and low-redundancy neuromorphic sensing complement one another.
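One concrete way such complementarity can work, shown in the minimal Python sketch below, is event-driven region-of-interest processing: the sparse event stream flags where the scene is changing, and the expensive frame-based analysis runs only on those regions. The function names, block size, and event tuple format here are illustrative assumptions, not the iCub implementation.

```python
# A minimal sketch (assumed names and parameters): events gate which
# blocks of the high-resolution frame receive heavy processing.
import numpy as np

def active_region_mask(events, height, width, block=16):
    """Mark the coarse blocks of the frame that contain recent events."""
    mask = np.zeros((height // block, width // block), dtype=bool)
    for x, y, t, pol in events:  # event = (x, y, timestamp, polarity)
        mask[y // block, x // block] = True
    return mask

def process_frame_where_active(frame, events, block=16):
    """Run (placeholder) analysis only on event-active blocks."""
    h, w = frame.shape[:2]
    mask = active_region_mask(events, h, w, block)
    results = []
    for by, bx in zip(*np.nonzero(mask)):
        patch = frame[by * block:(by + 1) * block, bx * block:(bx + 1) * block]
        results.append(((by, bx), patch.mean()))  # stand-in for real processing
    return results

frame = np.random.rand(240, 304)                    # assumed frame size
events = [(10, 20, 0.005, 1), (150, 100, 0.006, -1)]
print(process_frame_where_active(frame, events))    # only 2 blocks touched
```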
Embedded Vision vs. Biology
Mainstream embedded vision uses digitized frames, obtained by sampling at a regular, fixed rate and processing sequences of snapshots. Transferring, storing, and processing this data demands considerable time and computing power. Tasks that are simple for animals, such as characterizing objects, estimating motion, and directing attention, require vast computing resources in this model. Another problem is that what is sensed is noisy and ambiguous.
The biological nervous system’s computation, by contrast, is stimulus-driven. Neural computation enhances variations and discontinuities while discarding redundancies. Typical machine vision systems lack these capabilities, so researchers looked to biological systems for ideas on how to take advantage of brain-like sensing and computation. Robots like iCub will need these functions to interact with the world in real time and act autonomously.
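To make the contrast concrete, here is a minimal Python sketch of stimulus-driven sensing, simulating how a neuromorphic sensor emits an event only where log intensity changes by more than a threshold, rather than retransmitting every pixel of every frame. The threshold value and function names are illustrative, not drawn from the iCub system.

```python
# Simulated event generation: static regions produce no output at all,
# so redundancy is discarded at the sensor. Parameters are assumed.
import numpy as np

THRESHOLD = 0.2  # log-intensity change needed to trigger an event (assumed)

def frames_to_events(prev_frame, curr_frame, t):
    """Return (x, y, timestamp, polarity) for pixels whose log intensity
    changed by more than THRESHOLD between two frames."""
    eps = 1e-6  # avoid log(0)
    diff = np.log(curr_frame + eps) - np.log(prev_frame + eps)
    ys, xs = np.nonzero(np.abs(diff) > THRESHOLD)
    polarity = np.sign(diff[ys, xs]).astype(int)  # +1 brighter, -1 darker
    return [(int(x), int(y), t, int(p)) for x, y, p in zip(xs, ys, polarity)]

# A static scene with one changed pixel yields one event, not 16 pixels.
prev = np.full((4, 4), 0.5)
curr = prev.copy()
curr[1, 2] = 0.9
print(frames_to_events(prev, curr, t=0.001))  # [(2, 1, 0.001, 1)]
```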
Researchers Develop Solutions
Asynchronous vision offers robust, adaptive signal encoding and efficient operation across varying conditions (e.g., changing illumination, fast-moving objects). Sparser temporal and spatial information leads to faster, lighter computation, which also lowers power consumption.
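The following sketch illustrates why that sparsity pays off: with an event stream, computation scales with scene activity rather than with resolution multiplied by frame rate. The Event type, sensor resolution, and time-surface update are assumptions for illustration, not the iCub software.

```python
# Minimal time-surface update: only pixels that actually fire cost work.
from dataclasses import dataclass
import numpy as np

@dataclass
class Event:
    x: int    # pixel column
    y: int    # pixel row
    t: float  # timestamp in seconds
    pol: int  # polarity: +1 brighter, -1 darker

def update_time_surface(surface, events):
    """Record the latest event timestamp at each touched pixel.
    Only active pixels are updated, unlike a full per-frame pass."""
    for ev in events:
        surface[ev.y, ev.x] = ev.t

surface = np.zeros((240, 304))  # assumed sensor resolution
update_time_surface(surface, [Event(10, 20, 0.005, 1), Event(11, 20, 0.006, 1)])
# Two writes performed here, versus 240 * 304 pixel operations per dense frame.
```

Time surfaces of this kind are one common way event-based pipelines summarize recent activity for downstream modules such as motion estimation.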
The form of a sophisticated humanoid challenges vision systems with complex tasks and movements, such as turning the head or torso. At the hardware level, researchers integrated cameras, a custom processor, and a field-programmable gate array (FPGA). For software, they developed the basic processing modules required for developing, evaluating, and debugging asynchronous artificial vision algorithms, such as the filter sketched below.
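As a hedged example of one such basic module, this sketch implements a background-activity filter, a common denoising step in event-based vision that passes an event only if a nearby pixel fired recently; isolated events are treated as sensor noise. The class name, neighborhood size, and timing parameter are assumed for illustration.

```python
# Background-activity filter sketch (assumed parameters, not iCub code).
import numpy as np

class BackgroundActivityFilter:
    """Pass an event only if a pixel in its 3x3 neighborhood fired
    within the last `dt` seconds; isolated events are treated as noise."""

    def __init__(self, width, height, dt=0.01):
        self.last = np.full((height, width), -np.inf)  # last event time per pixel
        self.dt = dt  # support window in seconds (assumed value)

    def process(self, x, y, t):
        """Return True if the event at (x, y, t) has recent neighborhood support."""
        y0, y1 = max(0, y - 1), min(self.last.shape[0], y + 2)
        x0, x1 = max(0, x - 1), min(self.last.shape[1], x + 2)
        supported = bool((t - self.last[y0:y1, x0:x1] < self.dt).any())
        self.last[y, x] = t  # record this event regardless of the verdict
        return supported

# An isolated first event is rejected; a close follow-up next door passes.
f = BackgroundActivityFilter(304, 240)
print(f.process(10, 20, 0.005))  # False: no prior activity nearby
print(f.process(11, 20, 0.006))  # True: supported by the event at (10, 20)
```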
The iCub’s asynchronous visual system, built on its open-source platform, is the key to validating the neuromorphic approach and making its advantages available to a wide community.
Read more about how FPGAs are used in embedded vision system applications at VisionOnline.org.