Is Embedded Vision the future of image processing?
POSTED 02/13/2018
Embedded vision refers to the trend towards miniaturization in image processing with consistent or increasing performance at reasonable prices.
At first sight this sounds astonishing, but it is quite realistic. Thanks to technical progress in IC integration and board-level packaging, product performance can be improved even as size is reduced, and it is not yet clear where the technical limits lie.
These possibilities enable new application areas in different price segments.
Embedded vision systems make devices of all kinds intelligent and thus have the potential to add value to their markets and increase the competitiveness of companies.
For the Internet of Things, it is a prerequisite that things become intelligent so that they can interact with their physical environment.
Images are well suited as data sources because they provide comprehensive information.
And as the IoT becomes more of a reality day by day, we need even more and better embedded vision systems.
Topics and trends
As current trends show, embedded vision systems will be part of the future. The first priority is production automation: production facilities are expected to work more and more independently, and this is only possible with the help of image processing, which collects and analyzes data and makes decisions based on the results.
Another topic is sensor fusion, in which various sensors work together to collect more information and achieve better perception. There is also a clear tendency toward decentralized data processing, meaning that vision processors record and analyze data locally.
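To make the idea of decentralized processing concrete, a minimal Python sketch could look like the following, assuming OpenCV is available on the vision device; the camera index, host address and the simple bright-pixel check are placeholder assumptions, not part of any particular product. Only a small JSON result leaves the device, not the image itself.

# Minimal sketch of decentralized processing: analyze images locally,
# transmit only a compact result over the network.
# Assumes OpenCV (cv2); camera index, host and port are placeholders.
import cv2, json, socket, time

CAMERA_INDEX = 0                                   # placeholder: local camera
RESULT_HOST, RESULT_PORT = "192.168.0.10", 5000    # placeholder: line controller

cap = cv2.VideoCapture(CAMERA_INDEX)
sock = socket.create_connection((RESULT_HOST, RESULT_PORT))

while True:
    ok, frame = cap.read()
    if not ok:
        break
    gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
    # Local analysis: the fraction of bright pixels stands in for a real
    # inspection algorithm.
    bright_ratio = float((gray > 200).mean())
    result = {"timestamp": time.time(),
              "bright_ratio": bright_ratio,
              "part_present": bright_ratio > 0.05}
    # Only a few bytes leave the device, not the full image.
    sock.sendall((json.dumps(result) + "\n").encode("utf-8"))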
Security remains one of the difficult topics here. Many companies shy away from networking their machines completely and storing the data in the cloud, which suggests that, in their view, no adequate solution is available yet.
A perennial issue is miniaturization in image processing with consistent or increasing performance at reasonable prices. With more efficient processors and lower power consumption, further optimization is certainly conceivable.
The demands on embedded vision systems grow with the technical possibilities. It is not just about size or computing power, but also about interfaces, communication options within a network, and reliability. Another approach is to develop vision systems that can easily be used by non-experts, but for now this idea remains a vision.
Areas of application
The areas of application are very diverse. The first priority is production automation, which is an important topic when it comes to Industry 4.0.
Image processing plays a very important role because it enables machines to act autonomously and guarantees the quality of products. Image processing systems are faster than humans, meaning they can react more quickly than the human eye.
That is why inline quality controls, such as 100% inspection, can be implemented very quickly.
Damaged parts can be corrected or replaced, which makes production facilities more efficient.
If the recorded data shows a specific trend, for example if a robot gradually starts placing sheet metal into a punch at an angle, it is easy to react immediately.
With the help of image processing, even whole production facilities can be managed.
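The sheet-metal example above is essentially trend monitoring on a measured placement offset. A minimal sketch, assuming the vision system already delivers one offset value in millimetres per part, could look like this; the window size and warning threshold are arbitrary example values:

# Minimal sketch of trend monitoring on measured placement offsets (mm).
# Assumes the vision system already delivers one offset value per part;
# window size and warning threshold are arbitrary example values.
from collections import deque

WINDOW = 50          # number of recent parts to consider
DRIFT_LIMIT = 0.5    # warn if the average offset drifts beyond 0.5 mm

recent = deque(maxlen=WINDOW)

def check_offset(offset_mm: float) -> None:
    recent.append(offset_mm)
    if len(recent) == WINDOW:
        mean_offset = sum(recent) / WINDOW
        if abs(mean_offset) > DRIFT_LIMIT:
            # In a real line this would trigger a correction or an operator alert.
            print(f"Drift detected: mean offset {mean_offset:+.2f} mm over last {WINDOW} parts")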
Another example is robotics. Without 3D data, a robot would not be able to remove a part from a bin and feed it properly to the machine.
Selection of an embedded vision system
Embedded vision systems have the advantage that they are specifically assembled and adapted for one application. This brings further advantages, but requires a thorough analysis before an optimal system can be designed. One should therefore think carefully about which requirements the system must fulfill and which are unnecessary. Especially as a developer, you should focus on what you really need.
First, it is important to clarify where the system should be used and what it should be able to do.
In the economic interest, the desired cost reduction should also be considered. The available options are the following:
- A classic image processing system consisting of camera, lens, lighting and processing unit, where the software has to be programmed. This offers the greatest possible flexibility, but also involves a high programming effort (a short sketch of such a pipeline follows after this list).
- A classic system combined with artificial intelligence, where the algorithms learn independently and the development effort is therefore reduced.
- Smart cameras, complete packages in which everything is coordinated. The associated software package is easy to use, and the implementation is fast and low-risk.
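To give a rough impression of the programming effort behind the first option, a classic hand-written inspection pipeline might be sketched as follows; this is a generic OpenCV example, not EVT code, and the camera index, threshold and minimum blob area are made-up values:

# Sketch of a classic, hand-programmed inspection pipeline with OpenCV.
# Camera index, threshold and minimum blob area are example assumptions.
import cv2

cap = cv2.VideoCapture(0)            # placeholder camera index

def inspect(frame, thresh=120, min_area=500):
    gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
    _, binary = cv2.threshold(gray, thresh, 255, cv2.THRESH_BINARY)
    contours, _ = cv2.findContours(binary, cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_SIMPLE)
    # A part counts as defective here if any unexpected blob exceeds min_area.
    defects = [c for c in contours if cv2.contourArea(c) > min_area]
    return len(defects) == 0

while True:
    ok, frame = cap.read()
    if not ok:
        break
    print("PASS" if inspect(frame) else "FAIL")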
Smart Cameras by EVT
The basis of an embedded vision system is usually a microcontroller, an FPGA or an ASIC, all of which are intelligent semiconductor devices.
The developer has to decide which of these best fits the application.
EVT cameras have been working with FPGAs for many years now, and embedded developers can program these freely. Superfluous processes of the kind that normally run on the CPU of a PC have been omitted, because they tie up too much computing power, have low data throughput or simply consume too much energy. Using the FPGA for image processing tasks can increase the speed of embedded vision systems significantly; however, FPGA programming requires a high degree of experience.
Compact and affordable
With a combination of, for example, an Odroid board and a suitable camera, small and compact image processing systems can be built that meet the requirements for space-saving solutions in quality control or automation. There will certainly always be applications where the computing power of an Odroid is not enough, but especially for installation in a control cabinet and for applications suited to an ARM processor, this solution is close to optimal.
Several years ago EVT brought its image processing software to these small, energy-saving processors and now offers solutions with various camera manufacturers such as Allied Vision Technologies, IDS or Basler. A range of cameras can now be operated on the Odroid platform. Cost-effective, energy-efficient ARM-based hardware platforms reduce system costs, and if necessary, existing applications can be ported to an ARM-based platform without redevelopment, which shortens development time.
Even more cost-effective solutions are possible...
A truly inexpensive solution is a Raspberry Pi, Orange Pi or Banana Pi, which are suitable for specific applications with lower demands, in short: less data.
A company in northeastern North Rhine-Westphalia asked EVT for a low-cost embedded vision system that had to fit into an extremely small space. The amount of data and the software requirements were manageable.
EVT then developed a compact and very lightweight embedded vision system with a Raspberry Pi and a matching Raspberry Pi camera. Costs were reduced to a minimum, which at first seemed impossible. The hardware platform had actually only been included for testing and was not intended as the final system in the image processing project, but since the tests were very positive for the targeted customer, the platform was adopted. This required a special software adaptation; otherwise the necessary commands would not have fit on an 8 GB µSD card.
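What image capture on such a Raspberry Pi platform can look like is shown in the following minimal sketch, which uses the open-source picamera library rather than EVT's own software; resolution, warm-up time and output path are example assumptions:

# Minimal capture sketch for a Raspberry Pi with the official camera module,
# using the open-source picamera library (not EVT's EyeVision software).
# Resolution, warm-up time and output path are example assumptions.
from picamera import PiCamera
from time import sleep

camera = PiCamera()
camera.resolution = (1280, 720)

camera.start_preview()
sleep(2)                          # let exposure and white balance settle
camera.capture('/home/pi/part_0001.jpg')
camera.stop_preview()
camera.close()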
This software application is now available for other projects, because the required know-how already exists.
However, one should be careful with the Raspberry Pi. These are simple single-board computers that are not necessarily suitable for industrial use and do not have the processing power required for demanding application software.
Michael Beising from EVT on the current topic: “Computing power grows while the computers are getting smaller. This is a result of the mobile sector, which can now suddenly produce the smallest units. For smartphones, 'octa-core' is state of the art, while for PCs this has yet to find its way. With an Odroid XU4 and the EyeVision software, image processing with eight cores is already possible, even for image acquisition and data analysis of 3D point clouds.”