Camera-guided Feeding of Optics in Robot-based Laser Assembly
Daniel Zontar1, Sebastian Haag1* and Christian Brecher2
- Fraunhofer Institute for Production Technology IPT, Steinbachstr. 17, 52074 Aachen, Germany
- Head of the department of Production Machines at Fraunhofer Institute for Production Technology IPT and Chair at the Laboratory for Machine Tools and Production Engineering (WZL) at RWTH Aachen University, Steinbachstr. 19, 52074 Aachen, Germany
KEYWORDS: optics assembly, sensor guidance, part feeding, computer vision
Today, the assembly of laser systems requires a large share of manual operations due to the complexity of optimally aligning optics. Although the feasibility of automated alignment of laser optics has been shown in research labs, the development effort for alignment algorithms does not meet economic requirements, especially for low-volume laser production. This paper presents a robot-based assembly station consisting of a desktop robot covering a large workspace and a compact micromanipulator attached to the robot providing high precision locally. Additionally, the robot carries a camera system observing the gripping area, while the part feeding magazine lies in the view of a stationary camera. The system assists the operator in repetitious and time-consuming operations such as the pick-up of optical parts from a part carrier without mechanical references, and it offers human-machine interfaces for carrying out optical alignment processes collaboratively. In order to make use of the vision systems, different calibration routines are required. The focus of the paper is on in-process image-based metrology and information extraction. Results are presented for the automated calibration of the robot camera as well as of the local coordinate systems of the part feeding area and the robot base. Statistical evaluations of the achieved precision are depicted and discussed. Finally, it is shown exemplarily how the system can assist the operator during the pick-up of optical components after successful calibration.
- TCP = tool center point
- ROI = region of interest
- GRIN = gradient index (optics)
- HR = high reflection (mirror)
- MGA = Miniature Modular Arm 
- API = application programming interface
- DOF = degree of freedom
- SME = small and medium enterprises
1. Introduction
Optics and laser technology are high-technology sectors with significant perspectives for the near future. The photonics industry in particular is regarded as highly innovative, and new products are developed and brought to market frequently. Once a product is established, however, production often migrates to countries with low labor costs. This is caused by the fact that production processes in this sector are dominated by manual assembly, which accounts for the lion's share of overall production costs. Two main reasons can be identified to explain this situation. First, the industrial structure in Europe is characterized by numerous small and medium enterprises (SMEs) which, under the pressure of innovation, are forced to focus their technical competencies on product technologies instead of production technologies. SMEs develop their prototypes in lab environments and subsequently implement production processes on highly customized hardware solutions. Second, the innovativeness of the industry also leads to short life cycles and high numbers of product variants, which increase the necessary flexibility and adaptability of automated solutions. In this situation, lightweight robots promise considerable improvements, as they are far more cost-efficient and flexible than today's automation solutions. A key enabler for a wider acceptance and usability in the targeted industries could be their combination with modular micromanipulation tools in a hybrid assembly scenario.
2. Hybrid Precision Assembly
A hybrid assembly concept allows user input to be incorporated in order to complete processes that are not yet automated. This enables the deployment of final assembly tools in early stages of a product's development process. At the same time, an operator can be supported by already existing and proven procedures.
In the experimental setup, the macro-workspace is realized through a lightweight SCARA kinematic. A micromanipulator is attached at the robot's tool center point. This concept provides spatial flexibility while retaining high precision at the local assembly point.
2.1 High-precision Micromanipulator Family
Two micromanipulators have been developed to enable common robotic systems to undertake micro-assembly tasks. The first achieves six degrees of freedom with a step resolution smaller than 50 nm and is therefore suitable for a wide variety of assembly tasks. Additionally, a version with three degrees of freedom has been derived to be more cost-efficient in use cases where less flexibility is sufficient.
2.2 Modular Lightweight Robot and HMI
The SCARA kinematic by Schunk is composed of modular components: "Usual preconditions for automated build ups, like safety systems, weight- and space restrictions or special supplies do not need to be considered". The Miniature Modular Arm (MGA) provides a central aperture as a distinctive feature, which has been equipped with a camera system for visual process control. Finally, matching tool exchange systems and storage racks provide the flexibility necessary for a variety of specialized grippers.
The part identification has been implemented as a stand-alone application. Through standard networking APIs, process control scripts can retrieve detailed information about every detected part on the magazine (cf. Fig. 1).
Fig. 1: A hybrid assembly setup consisting of a SCARA kinematic and an attached micromanipulator is shown. The system is mounted on a breadboard which can be equipped to satisfy the needs of laser assembly tasks.
3. Part Feeding Task for Optics
Gel-Paks® provide a convenient way to handle optical components during transport. Due to a proprietary elastomeric material, parts can be placed freely on the carrier and are kept in position to ensure safe transportation and storage. Hence, this kind of magazine is a standard way of presenting optical components (cf. Fig. 2). Pick-up positions can no longer be statically defined and therefore have to be identified through sensor evaluation. In the setup presented in this paper, a camera fixed above the Gel-Pak® covers the complete lens presentation area as well as a set of reference marks in its field of view. Applying image processing, optics can be located in the local 2D coordinate system. In order to carry out the robot-based pick-up, the local coordinate system needs to be calibrated with respect to the robot base coordinate system. The procedure, incorporating a mobile camera attached either to the robot TCP or integrated in the micromanipulator, is depicted in more detail in the following section.
Fig. 2: Gel-Pak® vacuum release tray with randomly positioned optical components.
4. Calibration of Stationary and Mobile Camera
Two camera systems are deployed in the experimental setup. The first is fixed perpendicularly above the Gel-Pak; the second is mounted to the SCARA TCP. In order to use image data as input for further calculations, both cameras have to be calibrated first. The calibration process allows compensating optical and perspective distortion and determining a scaling factor between image pixels and real-world metrics.
Fig. 3: Model of the position and orientation of the stationary and mobile camera systems.
4.1 Calibration of Stationary Camera
The stationary camera system is equipped with a common entocentric lens and monitors parts on the Gel-Pak magazine. The predominant kinds of distortion are a radial barrel distortion, which is generally associated with the deployed kind of lens, and a trapezoidal distortion resulting from a misalignment of the camera with respect to its optimal perpendicular orientation. The scaling factor is calculated for the surface plane of the Gel-Pak because it depends on the object's distance to the camera.
Determining and compensating camera distortion is a common task in computer vision. Hence, algorithms are widely available as frameworks in many programming languages.
The scaling factor can be obtained during the determination of the distortion or the local coordinate system simply by comparing known physical features like the calibration pattern or the distance between two reference marks with their representation in the image.
The calibration process has to be carried out only when the position or orientation of the stationary camera changes.
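As an illustration of the plane-related part of this calibration, the trapezoidal (perspective) distortion can be compensated by estimating a homography that maps image points to metric coordinates on the Gel-Pak plane. The following is a minimal NumPy sketch using the standard direct linear transform (DLT) with four reference points; it is not the paper's implementation (the radial component would be handled by one of the frameworks mentioned above), and all names and values are illustrative.

```python
import numpy as np

def homography_from_points(src_px, dst_mm):
    """Estimate the 3x3 homography mapping image points (px) to metric
    plane coordinates (mm) from four point correspondences (DLT)."""
    A = []
    for (x, y), (u, v) in zip(src_px, dst_mm):
        A.append([x, y, 1, 0, 0, 0, -u * x, -u * y, -u])
        A.append([0, 0, 0, x, y, 1, -v * x, -v * y, -v])
    _, _, Vt = np.linalg.svd(np.asarray(A, float))
    return Vt[-1].reshape(3, 3)  # null-space vector of A, reshaped to H

def to_metric(H, p_px):
    """Apply the homography to one pixel coordinate."""
    x, y, w = H @ np.array([p_px[0], p_px[1], 1.0])
    return np.array([x / w, y / w])
```

With the reference marks of the magazine area as correspondences, `to_metric` also yields metric distances directly, so the scaling factor is obtained implicitly.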
4.2 Calibration of Mobile Camera
The local camera system is equipped with a telecentric lens to provide local image data of components during assembly tasks. Due to the specific properties of telecentric lenses, there is no need to compensate any distortion. Therefore, the calibration process only includes the identification of the relationship between the camera's local coordinate system and the robot's tool center point (cf. Fig. 4).
Fig. 4: Relationship between the image and TCP coordinate system.
The telecentric lens is mounted approximately at the center of the robot's TCP, while the fixed focus plane is tuned to be aligned with an attached gripper. For the mathematical coordinate transformation, three parameters have to be identified: the x- and y-offset of the image center from the z-axis of the TCP, the camera orientation described by the angle between both x-axes, and the scaling factor to transform pixels into millimeters.
To obtain the scaling factor the camera is positioned above a calibration pattern (dot target) with known physical features. The distance between two points in the image is then compared to its physical equivalent.
Fig. 5: The scaling factor is obtained by comparing the detected pattern with known physical features (center point distances).
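The scaling factor reduces to a ratio of a known physical distance to its pixel distance. A minimal sketch (the 2.0 mm dot spacing and the pixel coordinates are invented example values, not properties of the actual target):

```python
import numpy as np

def scaling_factor(p1_px, p2_px, distance_mm):
    """mm-per-pixel scale from two detected dot centers and their known spacing."""
    pixel_distance = np.linalg.norm(np.asarray(p2_px, float) - np.asarray(p1_px, float))
    return distance_mm / pixel_distance

# Two dot centers detected 400 px apart; the target specifies 2.0 mm spacing.
s = scaling_factor((100.0, 250.0), (500.0, 250.0), 2.0)
print(s)  # 0.005 mm per pixel
```

In practice, averaging over many dot pairs of the pattern reduces the influence of individual detection errors.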
The angular offset is determined in a two-step approach. First, the robot camera is positioned above a reference mark. The camera image is analyzed to determine the mark's center point, which is stored along with the current robot coordinates. In the second step, the robot is moved within the plane such that the reference mark stays in the image region, and the image is analyzed again. The orientation can then be calculated by comparing the vector described by the movement of the robot with the movement of the reference mark in the image.
Fig. 6: Images before and after a movement have been overlaid. The camera orientation is calculated by comparing the robot's movement with the vector described by the reference mark.
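The angle comparison described above can be sketched as follows. Note that the sign conventions (the mark appearing to move opposite to the camera, and the direction of the image y-axis) depend on the concrete setup and are assumptions here:

```python
import numpy as np

def camera_orientation(robot_move_xy, image_move_px):
    """Signed angle (rad) between the image x-axis and the robot x-axis.

    robot_move_xy : TCP displacement in the base plane between the two images
    image_move_px : displacement of the reference mark center in the image
    """
    r = np.asarray(robot_move_xy, float)
    i = -np.asarray(image_move_px, float)  # assumption: mark motion is inverse to camera motion
    angle_r = np.arctan2(r[1], r[0])
    angle_i = np.arctan2(i[1], i[0])
    return (angle_r - angle_i + np.pi) % (2 * np.pi) - np.pi  # wrap to [-pi, pi)
```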
Because the camera is oriented parallel to the z-axis of the TCP, its x- and y-offset can be determined by stepwise rotating the TCP and analyzing the path of a reference mark in the image. The path is expected to describe a circle, which can be fitted to the measured points. Its center point marks the origin of the x- and y-axes of the TCP.
Fig. 7: A reference mark describes a circle while the camera system is rotated. The pivot point identifies the z-axis of the TCP.
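The circle fit in this step can be computed in closed form, for instance with the algebraic least-squares (Kasa) method, which the sketch below uses; the actual fitting method in the setup is not specified in the text. The circle center is the pivot point sought.

```python
import numpy as np

def fit_circle(points):
    """Algebraic least-squares (Kasa) circle fit; returns (cx, cy, r).

    Solves x^2 + y^2 = 2*cx*x + 2*cy*y + c linearly, with c = r^2 - cx^2 - cy^2.
    """
    p = np.asarray(points, float)
    A = np.column_stack([2 * p[:, 0], 2 * p[:, 1], np.ones(len(p))])
    b = p[:, 0] ** 2 + p[:, 1] ** 2
    (cx, cy, c), *_ = np.linalg.lstsq(A, b, rcond=None)
    return cx, cy, np.sqrt(c + cx ** 2 + cy ** 2)

# Reference-mark positions sampled while rotating the TCP in steps (synthetic data):
t = np.linspace(0, 2 * np.pi, 12, endpoint=False)
pts = np.column_stack([3.0 + 1.5 * np.cos(t), -2.0 + 1.5 * np.sin(t)])
print(fit_circle(pts))  # approx (3.0, -2.0, 1.5)
```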
4.3 Calibration of Local Coordinate Systems
In order to calculate the position and orientation of components in robot coordinates, a local coordinate system has to be defined first. For this purpose, two reference marks have been placed alongside the Gel-Pak to identify the origin and the direction of the y-axis. The z-axis is defined to be perpendicular to the surface, pointing upwards. Finally, the x-axis is positioned to complete a right-handed coordinate system.
The reference marks must be positioned in the image area of the stationary camera in order to be identified and used to describe parts on the Gel-Pak in a local coordinate system. However, the distance between both reference marks should be maximized in order to minimize the error on the measured y-axis orientation.
The local coordinate system can be measured automatically in robot coordinates by positioning the robot with its mobile camera above each reference mark. The image data are then analyzed to detect the center point of each reference mark. Based on the calibration of the mobile camera, these coordinates can be transformed into TCP and finally into robot base coordinates.
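For the in-plane (2D) case, the resulting transformation from Gel-Pak coordinates to robot base coordinates can be sketched as below. The two inputs are the reference-mark centers already expressed in base coordinates; the sign choice for the x-axis follows the right-handed convention with z pointing up, and all coordinates are illustrative.

```python
import numpy as np

def local_to_base(origin_mark, y_axis_mark):
    """Build a rigid 2D transform mapping Gel-Pak coordinates to robot base
    coordinates from the two reference-mark centers (base coordinates, mm)."""
    o = np.asarray(origin_mark, float)
    ey = np.asarray(y_axis_mark, float) - o      # local y-axis: mark 1 -> mark 2
    ey /= np.linalg.norm(ey)
    ex = np.array([ey[1], -ey[0]])               # completes a right-handed frame (z up)
    R = np.column_stack([ex, ey])                # columns are the local axes in base coords

    def transform(p_local):
        return R @ np.asarray(p_local, float) + o
    return transform
```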
5. Application: Part Localization and Identification
For an automated pick-up process, optical components on the Gel-Pak have to be localized and identified. The localization step determines the x- and y-position and the orientation of all parts in a local coordinate system. In a subsequent step, parts are distinguished and grouped by their type. This is accomplished by comparing visual features which, in combination, allow a reliable identification of the investigated optical components. These features include, among others, the length and width, the visible area and its perimeter, as well as different ratios of these parameters. The grayscale histogram is also suitable for distinguishing and grouping parts. The resulting groups can finally be mapped to templates, which have to be configured only once for every new component type.
Part descriptions based on salient points are not suitable because of small and mostly homogenous surfaces which do not offer many features.
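Since salient-point descriptors are ruled out, the mapping to templates can operate on scalar features alone. A minimal sketch of such a template match follows; the feature values, tolerances, and template names are invented placeholders, not the configuration used in the setup:

```python
import numpy as np

# Hypothetical feature vectors: (length_mm, width_mm, area_mm2, perimeter_mm).
TEMPLATES = {
    "GRIN": np.array([4.0, 1.0, 4.0, 10.0]),
    "HR":   np.array([5.0, 5.0, 25.0, 20.0]),
}

def classify(features, templates=TEMPLATES, tolerance=0.15):
    """Assign a part to the first template whose features all match within a
    relative tolerance; returns the type name or None for unknown parts."""
    f = np.asarray(features, float)
    for name, t in templates.items():
        if np.all(np.abs(f - t) <= tolerance * t):
            return name
    return None

print(classify([4.1, 0.95, 3.9, 10.2]))  # GRIN
print(classify([9.0, 9.0, 81.0, 36.0]))  # None (unknown part)
```

A real configuration would also include the histogram-based features mentioned above and per-feature tolerances.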
The image segmentation is based on binary thresholding combined with a watershed algorithm to achieve accurate edges. In order to enhance the contrast between the Gel-Pak and the mostly transparent optical components, dark-field lighting has been introduced in the experimental setup.
Occasionally, parts such as cylindrical GRIN optics lead to separated blobs which have to be combined in a postprocessing step. This has been implemented as a heuristic rule which combines closely lying blobs (cf. the two deflections in the plot of Fig. 8).
Fig. 8: The left image shows the enhanced contrast achieved through dark field lighting. The right diagram presents a corresponding normalized intensity profile.
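The post-processing rule for combining split blobs can be sketched as a plain distance threshold on blob centroids with union-find grouping; the actual heuristic in the setup may use additional criteria (orientation, blob shape), so the threshold and coordinates below are illustrative.

```python
import numpy as np

def merge_close_blobs(centroids, max_gap):
    """Group blob centroids whose mutual distance is below max_gap (px).

    Returns a list of index groups; each group represents one physical part
    whose dark-field image split into several blobs (e.g. cylindrical GRIN optics).
    """
    pts = np.asarray(centroids, float)
    n = len(pts)
    parent = list(range(n))

    def find(i):                      # union-find with path compression
        while parent[i] != i:
            parent[i] = parent[parent[i]]
            i = parent[i]
        return i

    for i in range(n):
        for j in range(i + 1, n):
            if np.linalg.norm(pts[i] - pts[j]) < max_gap:
                parent[find(i)] = find(j)

    groups = {}
    for i in range(n):
        groups.setdefault(find(i), []).append(i)
    return list(groups.values())

# Two fragments of one lens plus a distant separate part:
print(merge_close_blobs([(10, 10), (14, 10), (60, 40)], max_gap=8))  # [[0, 1], [2]]
```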
6. Height Measurement through Variation of Focus
Information from the stationary camera can only be used to obtain two-dimensional information about the position and orientation of a part. For a fully automated process, the height information has to be detected as well.
Since the focus plane of the mobile camera is at a fixed and known distance to the lens, focus measurements can be utilized to determine the z-coordinate of an investigated surface, in analogy to the autofocus feature of a camera. To this end, the robot is moved in small steps towards the surface of the Gel-Pak, and at each step the camera image is analyzed. In [8,9] different algorithms are presented to quantify the focus quality. The presented results are based on the Laplace algorithm: the focus of an image correlates with the sharpness of edges in the image, which can be extracted with a Laplace filter. To reduce the effect of noise, the Laplacian-of-Gaussian filter is applied to the investigated image region. The focus is then quantified by the weighted average of the obtained pixel intensities.
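One common way to implement such a LoG-based focus measure is sketched below (pure NumPy, using a standard 5x5 discrete LoG approximation and the mean absolute filter response; the exact kernel and weighting used in the original setup may differ):

```python
import numpy as np

def conv2d(img, kernel):
    """Valid-mode 2D correlation for small kernels, pure NumPy."""
    kh, kw = kernel.shape
    h, w = img.shape
    out = np.zeros((h - kh + 1, w - kw + 1))
    for i in range(kh):
        for j in range(kw):
            out += kernel[i, j] * img[i:i + h - kh + 1, j:j + w - kw + 1]
    return out

# A common 5x5 discrete Laplacian-of-Gaussian approximation (zero-sum kernel).
LOG = np.array([[ 0,  0, -1,  0,  0],
                [ 0, -1, -2, -1,  0],
                [-1, -2, 16, -2, -1],
                [ 0, -1, -2, -1,  0],
                [ 0,  0, -1,  0,  0]], float)

def focus_measure(img):
    """Quantify focus as the mean absolute LoG response: sharp edges produce
    large second-derivative magnitudes, defocused edges small ones."""
    return np.abs(conv2d(np.asarray(img, float), LOG)).mean()

# A hard step edge scores higher than a 6-px-wide blurred ramp:
sharp = np.zeros((20, 20)); sharp[:, 10:] = 1.0
blurred = np.repeat(np.clip((np.arange(20) - 7) / 6.0, 0, 1)[None, :], 20, axis=0)
print(focus_measure(sharp) > focus_measure(blurred))  # True
```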
In Fig. 9 normalized focus measurements are plotted against the z-coordinate of the robot. The focus plane is determined by the absolute maximum which can be numerically calculated.
Fig. 9: The left model illustrates the procedure of a focus measurement. On the right, normalized focus measurements are plotted against the z-coordinate of the robot. Measurements start above the focus plane, so the measurement points were acquired from right to left.
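The numerical calculation of the maximum mentioned above can be performed, for example, by parabolic interpolation around the best sample; the text does not specify the method used, so this is one plausible sketch:

```python
import numpy as np

def refine_peak(z, f):
    """Refine the focus peak by fitting a parabola through the best sample and
    its two neighbours; returns the interpolated z of maximum focus."""
    z, f = np.asarray(z, float), np.asarray(f, float)
    i = int(np.argmax(f))
    i = min(max(i, 1), len(f) - 2)   # ensure both neighbours exist
    a, b, _ = np.polyfit(z[i - 1:i + 2], f[i - 1:i + 2], 2)
    return -b / (2 * a)              # vertex of a*z^2 + b*z + c
```

This recovers sub-step resolution from the coarse z-steps of the robot.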
7. Evaluation of Results
The implemented camera-guided feeding allows for a reliable pick-up process for the investigated optical components, including cylindrical GRIN lenses and HR mirrors. Currently, height information of components is provided by the operator once for each component type. This ensures collision avoidance since the presented measurement via focus determination needs further investigation and optimization.
Based on focus measurements, a robot can be positioned in a defined pose above a surface. The estimated distance depends on the investigated optics and should stay constant within a small margin of error when the measurement is repeated in different situations. Fig. 10 shows the absolute deviation after 40 repetitions.
Autofocus algorithms must have a reliable and early abort criterion because the focus plane is tuned to be aligned with an attached gripping tool. The assigned micromanipulator allows the gripper to be pulled up by two millimeters, which is generally enough vertical space for an automated positioning of the robot.
Fig. 10: The z-coordinate of a surface has been determined 40 times by an autofocus algorithm. The margin of error shown by the plot is suited for pick-up tasks. (Laplace algorithm, step size 0.5 mm)
The image processing results are presented to the operator through a stand-alone application, which can be accessed automatically by process control scripts via networking APIs. This approach allows for manual error detection. Notably, dust particles can alter the perceived shape of components, as they light up under dark-field lighting conditions. Detected parts are grouped and colored for convenience as shown in Fig. 11.
Fig. 11: Image data of the stationary camera is shown. Three regions (ROI, red) are used to isolate details of interest. Identified parts (a-d) are grouped and colored by their type (GRIN, HR).
After calibrating the local coordinate system, the robot is positioned above each detected component. The image of the mobile camera is then evaluated to analyze the achieved precision: the component center is detected analogously to the algorithm of the stationary camera and compared to the image center. An example is given in Fig. 12.
Fig. 12: The identified parts of Fig. 11 have each been approached by the robot in a way that the image center (yellow) of the mobile camera is overlaid with the center point of each part (orange). The achieved quality with the investigated robot system was sufficient for a reliable automated pick-up process using vacuum grippers.
8. Summary and Outlook
Techniques necessary for computer-vision-based feeding of optical components have been presented and evaluated. Implemented in manual laboratory processes, they allow for a convenient way to support operators. A reliable calibration routine has been presented for identifying camera parameters such as perspective distortion and for determining the scaling factor between image pixels and real-world metrics. A further routine was depicted for calibrating a local coordinate system with respect to the robot base coordinates. This calibration allows for picking up randomly aligned optical components. The approach was enhanced by a strategy for identifying the z-coordinate of a plane through a sequence of images collected by the mobile camera attached to the robot TCP. Results of this work were presented by depicting the achieved robot positions in comparison with the ideal target positions.
The developed methods will be optimized and transferred to industrial setups in order to analyze their robustness and applicability in industrial scenarios.
This research has been supported by the European Clearing House for Open Robotics Development (ECHORD), funded within the Seventh Framework Programme.
[1] Brecher, C., Pyschny, N., Haag, S., and Lule, V., "Micromanipulators for a flexible automated assembly of micro optics", in SPIE Photonics Europe, International Society for Optics and Photonics, 2012
[2] Loosen, P., Schmitt, R., Brecher, C., Müller, R., Funck, M., Gatej, A., Morasch, V., Pavim, A., and Pyschny, N., "Self-optimizing assembly of laser systems", Production Engineering, Vol. 5, No. 4, pp. 443-451, 2011
[3] Schmitt, R., Pavim, A., Brecher, C., Pyschny, N., Loosen, P., Funck, M., Dolkemeyer, J., and Morasch, V., "Flexibel automatisierte Montage von Festkörperlasern. Auf dem Weg zur flexiblen Montage mittels kooperierender Roboter und Sensorfusion" [Flexibly automated assembly of solid-state lasers: towards flexible assembly by means of cooperating robots and sensor fusion], wt Werkstattstechnik online, Vol. 98, No. 11/12, pp. 955-960, 2008
[4] Laganière, R., "OpenCV 2 Computer Vision Application Programming Cookbook", Packt Publishing, 2011
[5] Bradski, G., "The OpenCV Library", Dr. Dobb's Journal of Software Tools, 2000
[6] Tahmasebi, F., "Kinematic Synthesis and Analysis of a Novel Class of Six-DOF Parallel Minimanipulators", Ph.D. Thesis, Institute for Systems Research, The University of Maryland, 1992
[7] Tahmasebi, F., "Kinematics of a New High-Precision Three-Degree-of-Freedom Parallel Manipulator", ASME Journal of Mechanical Design, Vol. 129, pp. 320-325, 2007
[8] Nayar, S. and Nakagawa, Y., "Shape from focus: An effective approach for rough surfaces", in Proceedings of the IEEE International Conference on Robotics and Automation, pp. 218-225, 1990
[9] Firestone, L., Cook, K., Culp, K., Talsania, N., and Preston Jr., K., "Comparison of autofocus methods for automated microscopy", Cytometry, Vol. 12, No. 3, pp. 195-206, 1991
[10] Hoch, A., Haag, M., and Härer, S., "Construction Kit for Miniaturised Handling Systems: Further Developments and First Applications", in Precision Assembly Technologies and Systems, Vol. 371, pp. 51-56, Springer Berlin Heidelberg, ISBN 978-3-642-28162-4, 2012