
Component Supplier

Member Since 2018


Designer and manufacturer of 3D vision sensors

Content Filed Under:

Aerospace, Automotive, and Miscellaneous Manufacturing

Component Verification, In Process Alignment, Measurement (Non Contact), and Vision Guidance for Robotics


Process: Robot Guidance for finishing operations using 3D vision

POSTED 10/08/2019  | By: Alexie FERNANDES, Product Manager

Robotization of industrial processes is a major challenge for companies worldwide. The first objective of this industry-of-the-future revolution is to reduce production times and the associated production costs, in order to remain competitive in the global market. In addition, awareness of operators' difficult working conditions is accelerating the drive to automate production lines and limit the musculoskeletal disorders (MSDs) affecting them. Finally, in an ever more competitive world, ensuring production quality is a key element.

In this context, it is imperative to develop vision systems for robot guidance.

Many applications require the recognition of complex shapes, which is only possible with 3D vision systems. For several types of manufacturing processes, the parts are subjected to various constraints (temperatures, cooling times, etc.) that affect their geometry. There is also uncertainty about the positioning of each part for precision operations such as deburring, drilling, polishing, and finishing.

Examples of applications needing this type of robot guidance technology include casting finishing, composite part machining, etc.

The robotization of such applications involves the installation of a 3D vision sensor to guide the robot quickly and accurately.

Principle of robot guidance using 3D vision.

The following is an example of an application done by the company VISIO NERF using their cirrus3D sensor, a reference in the 3D vision field.

The vision sensor can be mounted on the robot arm, or fixed above the working volume. The 3D sensor must be calibrated in the robot coordinate frame so that the two systems share coherent spatial information in a common working frame.
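Once the sensor is calibrated in the robot coordinate frame, every measurement can be converted between frames with a homogeneous transform. The following is a minimal sketch of that conversion; the calibration matrix and all numeric values are purely illustrative, not VISIO NERF's actual calibration procedure.

```python
import numpy as np

def make_transform(R, t):
    """Build a 4x4 homogeneous transform from a rotation R and translation t."""
    T = np.eye(4)
    T[:3, :3] = R
    T[:3, 3] = t
    return T

# Hypothetical calibration result: the sensor frame expressed in the robot
# base frame (a 90-degree rotation about Z plus an offset; values are made up).
Rz = np.array([[0.0, -1.0, 0.0],
               [1.0,  0.0, 0.0],
               [0.0,  0.0, 1.0]])
T_base_sensor = make_transform(Rz, np.array([0.5, 0.0, 1.2]))

# A point measured by the 3D sensor, in the sensor's own coordinate frame.
p_sensor = np.array([0.1, 0.2, 0.8, 1.0])  # homogeneous coordinates, meters

# The same point expressed in the robot base frame.
p_base = T_base_sensor @ p_sensor
print(np.round(p_base[:3], 3))  # [0.3 0.1 2. ]
```

Calibrating means estimating `T_base_sensor` once; afterwards both systems share coherent spatial information, as described above.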

Our case study: milling of a dashboard, accomplished using robot guidance by a cirrus3D, the 3D vision system developed by VISIO NERF. The application was carried out in partnership between the companies AXIOME and VISIO NERF.

In our example, the cirrus3D sensor is mounted on the robot arm. The robot moves the sensor in front of the fixed part, following a trajectory which allows scanning of different zones of the part. Once the scanning operation is finished, the generated 3D point clouds are compared with the CAD model to determine the precise location of the part.
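Comparing a scanned point cloud with the CAD model amounts to a rigid registration problem. The core step, fitting the rotation and translation that best map corresponding points onto each other, can be sketched with the classic Kabsch/SVD method; the point sets below are synthetic and this is only an illustration of the principle, not the cirrus3D's proprietary algorithm.

```python
import numpy as np

def rigid_align(src, dst):
    """Least-squares rotation R and translation t mapping src points onto dst
    (Kabsch/SVD method), assuming known point correspondences."""
    c_src, c_dst = src.mean(axis=0), dst.mean(axis=0)
    H = (src - c_src).T @ (dst - c_dst)      # cross-covariance matrix
    U, _, Vt = np.linalg.svd(H)
    d = np.sign(np.linalg.det(Vt.T @ U.T))   # guard against reflections
    R = Vt.T @ np.diag([1.0, 1.0, d]) @ U.T
    t = c_dst - R @ c_src
    return R, t

# Hypothetical CAD feature points, and a "scan" of the same features displaced
# by the part's actual pose on the fixture (10-degree rotation plus an offset).
cad = np.array([[0.0, 0.0, 0.0],
                [1.0, 0.0, 0.0],
                [0.0, 1.0, 0.0],
                [0.0, 0.0, 1.0]])
angle = np.deg2rad(10)
R_true = np.array([[np.cos(angle), -np.sin(angle), 0.0],
                   [np.sin(angle),  np.cos(angle), 0.0],
                   [0.0,            0.0,           1.0]])
t_true = np.array([0.05, -0.02, 0.01])
scan = cad @ R_true.T + t_true

R, t = rigid_align(cad, scan)
print(np.allclose(R, R_true), np.allclose(t, t_true))  # True True
```

In practice, correspondences between scan and CAD are unknown, so production systems iterate this fit inside an ICP-style loop over the full point clouds.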

Whatever the part's position, and whatever deformation it suffered during the production process, the 3D vision provides more than just a rough global repositioning: it can precisely locate (to within a few tenths of a millimeter) each previously scanned part sub-assembly, in order to determine a local alignment specific to each area of interest. For each working area, the robot applies the local alignment to the associated milling trajectory.


Figure 1: Adapting to part size variations


Figure 2: Alignment

Figure 3: Local alignment depending on the part geometry

Finally, the robot must also verify that it can reach all the trajectory positions. The trajectories must not pass through singular configurations.
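A common way to detect singular configurations is to monitor the determinant of the manipulator Jacobian, which drops to zero at a singularity. Here is a minimal sketch for a planar 2-link arm (link lengths and the tolerance are illustrative assumptions; a real 6-axis robot uses its full 6x6 Jacobian, typically via the robot controller or a kinematics library).

```python
import numpy as np

def planar_2link_jacobian(theta1, theta2, l1=0.4, l2=0.3):
    """Jacobian of a planar 2-link arm's end-effector position w.r.t. joint angles."""
    s1, c1 = np.sin(theta1), np.cos(theta1)
    s12, c12 = np.sin(theta1 + theta2), np.cos(theta1 + theta2)
    return np.array([[-l1 * s1 - l2 * s12, -l2 * s12],
                     [ l1 * c1 + l2 * c12,  l2 * c12]])

def near_singularity(theta1, theta2, tol=1e-3):
    """True when |det J| is close to zero (for this arm, det J = l1*l2*sin(theta2)),
    i.e. when the trajectory point should be rejected or the approach replanned."""
    return abs(np.linalg.det(planar_2link_jacobian(theta1, theta2))) < tol

print(near_singularity(0.5, 0.0))  # True: arm fully stretched, singular
print(near_singularity(0.5, 1.2))  # False: well-conditioned pose
```

Checking every waypoint of the corrected trajectory this way, before machining starts, is one practical form of the reachability test mentioned above.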

In summary, the three robot guidance steps are:

- Measurement: the robot moves the part in front of the fixed sensor, or moves the sensor in front of the fixed part, to measure the different areas.

- Alignment: the robot assumes that the part is locally compliant with its CAD model and asks the processing software to compute a local part frame for each of the areas to be machined.

- Machining: the robot machines each area, applying a correction to its tool trajectory based on the local frame determined for each area by the 3D vision system.
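The machining step above reduces to transforming each nominal (CAD-frame) toolpath point by the local frame returned for its area. A minimal sketch, with a purely illustrative local frame (a small translation standing in for the vision system's alignment result):

```python
import numpy as np

def apply_local_frame(T_local, nominal_path):
    """Transform a nominal toolpath (CAD frame) into the actual part pose
    using the 4x4 local frame computed by the vision alignment."""
    path_h = np.hstack([nominal_path, np.ones((len(nominal_path), 1))])
    return (path_h @ T_local.T)[:, :3]

# Hypothetical local frame: this area sits 2 mm off in X and 1 mm off in Z
# from its nominal CAD position (pure translation, for simplicity).
T_local = np.eye(4)
T_local[:3, 3] = [0.002, 0.0, 0.001]

# Nominal milling trajectory defined on the CAD model (meters).
nominal = np.array([[0.10, 0.00, 0.05],
                    [0.12, 0.00, 0.05],
                    [0.14, 0.00, 0.05]])

corrected = apply_local_frame(T_local, nominal)
print(np.round(corrected, 3))  # each waypoint shifted by (0.002, 0, 0.001)
```

Each machined area gets its own `T_local`, which is what lets the process absorb local part deformation rather than relying on one global correction.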

Some details regarding the 3D sensor technology:

The 3D scan process consists of acquiring a real working scene in the form of 3D points. In our stereovision example with the cirrus3D sensor, several steps are needed to get the 3D information.

- Projection of a pattern, generated by an LED light source, onto the working volume

- Information acquisition by the two cameras in the system

- Correlation and data processing

- Generation of a cloud of 3D points of the real scene
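The correlation step matches the projected pattern between the two cameras; each match yields a disparity, from which depth follows by triangulation. For a rectified stereo pair this is the textbook relation Z = f·B/d. The focal length and baseline below are assumed values for illustration, not cirrus3D specifications.

```python
import numpy as np

def disparity_to_depth(disparity_px, focal_px, baseline_m):
    """Depth (m) from disparity (px) for a rectified stereo pair: Z = f*B/d."""
    disparity_px = np.asarray(disparity_px, dtype=float)
    return focal_px * baseline_m / disparity_px

# Hypothetical camera parameters: 2000 px focal length, 100 mm baseline.
focal_px = 2000.0
baseline_m = 0.1

# The same projected-pattern feature matched in both cameras gives a disparity;
# closer surfaces produce larger disparities.
depths = disparity_to_depth([400.0, 200.0, 100.0], focal_px, baseline_m)
print(depths)  # depths of 0.5, 1.0, and 2.0 m
```

Repeating this over every matched pattern point is what produces the 3D point cloud of the real scene.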

Figure 4: Localization of a dashboard sub-assembly by combining 3 point clouds

The technological advantages brought by 3D vision solutions are:

- Adaptability to production line constraints, with fast processing times

- Economical: the solution avoids a dedicated mechanical fixture for each part model

- Fast implementation and part changeover, thanks to the use of the part's CAD model

- Independence from the machine tool used

- Optimization of the local alignment to take part shape variations into account

In conclusion, robot guidance using a 3D vision sensor to localize parts with potential shape defects is a crucial point for production process optimization. These 3D vision systems have been developed to be integrated into industrial environments (resistant to dust, oil splashes, etc.). They also allow significant flexibility in the production means, with intuitive use. This technology, combining a robot with 3D vision, can be integrated into production processes such as foundry and forging, and can be used for deburring, cutting, and contouring.