3D hand-eye calibration for logistics automation

NARI SHIN
2020-10-26
  1. Introduction
  2. What is hand-eye calibration?
  3. Why is hand-eye calibration necessary for logistics automation?
  4. Benefits of using a Zivid 3D robot vision camera for hand-eye calibration
  5. A step-by-step guide on hand-eye calibration
  6. Hand-eye calibration example (UR5e robot + Python)
  7. Case study: DHL’s 3D Vision House
  8. Conclusion

Download the full free eBook.

Introduction

Many logistics companies look for robot-based automation solutions as the demand for e-commerce increases. Logistics automation includes the automated picking, classifying, and assembly of goods. The process usually requires a robot, a camera, robot/vision software, and calibration of the components. A complete automation solution can yield a significant return on investment, with benefits such as:

  • Efficiency
  • Scalability
  • Reduction of errors
  • Reduction of employee wear or injuries
  • Customer satisfaction thanks to increased accuracy and faster delivery

According to the McKinsey Global Institute, transportation and warehousing are among the areas with the highest automation potential (source). Despite the keen interest and potential ROI, many developers and business owners are still uncertain about where to start. To help developers understand and get started with designing a robust vision-guided robot system for the logistics industry, we created this e-book.

Hand-eye calibration is the binding process between the vision component (camera) and a robot. As you understand more about how hand-eye calibration works, you will learn how to make your automation system run more precisely and efficiently.

The first chapter covers the definition of hand-eye calibration and why hand-eye calibration is a must. In the second chapter, we walk through the details of hand-eye calibration, including practical tips and suggestions. You will find an example of hand-eye calibration using a UR5e robot as well as a case study of DHL and their smart warehouse in the last chapters. Please note that all the hand-eye calibration examples are based on Zivid One+ 3D vision cameras.

After finishing this e-book, you will understand the key considerations and processes required for automation development.

What is hand-eye calibration?

Even if you do not think about it, you use hand-eye calibration every day. Every task you solve with your hands, from picking up objects of all textures and sizes to delicate dexterity tasks such as sewing, requires that your hands and eyes are correctly calibrated.

Four main contributors enable us to master such tasks:

Your eyes. Our vision can capture high-resolution, wide-dynamic-range images over an extreme range of working distances, with color and depth perception on virtually any object.

Your brain. Our brain is incredibly good at quickly processing large amounts of data and performing stereo matching of the images captured by our eyes. (By the way, this algorithm is orders of magnitude better than any computer algorithm currently available.)

Your arms and hands. Capable of moving effortlessly and gripping objects correctly in our surroundings.

Hand-eye calibration. From the time we were kids, our brain has used trial and error, experience, and knowledge to create a perfect calibration of how our eyes, arms, and body relate to each other.

Our eyes capture images of the object. Our brain processes these images, finds the object, and tells our arms and hands where to go and how to pick up the object. Hand-eye calibration is what makes this possible.

Logistics automation with 3D vision

In robotics, hand-eye calibration is used to relate what the camera (“eye”) sees to where the robot arm (“hand”) moves.

In a nutshell,

Eye-in-hand calibration is a process for determining the position and orientation of a robot-mounted camera relative to the robot's end-effector. It is usually done by capturing a set of images of a static object of known geometry with the robot arm located in a set of different positions and orientations.

Eye-to-hand calibration is a process for determining the position and orientation of a statically mounted camera relative to the robot's base frame. It is usually done by placing an object of known geometry in the robot's gripper and taking a series of images of it in a set of different positions and orientations.

We will learn more about why hand-eye calibration matters for your automation applications in the next chapter.
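
To make the difference between the two setups concrete, the transform chains can be written out as matrix products. Below is a minimal sketch in plain NumPy; the frame names (T_base_ee, T_ee_cam, and so on) and the numeric values are illustrative placeholders, not part of any particular SDK.

```python
# Minimal sketch (plain NumPy, no specific camera or robot SDK) of the two
# transform chains. T_a_b denotes the 4x4 homogeneous transform of frame b
# expressed in frame a; all values below are illustrative placeholders.
import numpy as np

def transform(rotation, translation):
    """Build a 4x4 homogeneous transform from a 3x3 rotation and a 3-vector."""
    T = np.eye(4)
    T[:3, :3] = rotation
    T[:3, 3] = translation
    return T

# Pose of the object as seen by the camera (from the vision system).
T_cam_obj = transform(np.eye(3), [0.0, 0.1, 0.8])

# Eye-in-hand: calibration gives the camera pose relative to the end-effector.
T_base_ee = transform(np.eye(3), [0.4, 0.0, 0.5])    # read from the robot controller
T_ee_cam = transform(np.eye(3), [0.0, 0.05, 0.1])    # result of eye-in-hand calibration
T_base_obj = T_base_ee @ T_ee_cam @ T_cam_obj

# Eye-to-hand: calibration gives the camera pose relative to the robot base.
T_base_cam = transform(np.eye(3), [1.0, 0.0, 1.2])   # result of eye-to-hand calibration
T_base_obj = T_base_cam @ T_cam_obj
```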

Why is hand-eye calibration necessary for logistics automation?

In vision and robot-based applications, hand-eye calibration is a must. Not only does it simplify the integration, but it also increases the accuracy of the application.

Consider an application with target objects on a pallet. A Zivid 3D camera is statically mounted over the pallet, and a robot with a gripper is positioned next to it. If you have performed an eye-to-hand calibration, a 3D automation sequence could look like this (a minimal code sketch follows the list):

  • The Zivid One+ camera grabs a 3D image of the scene. The output is a high precision point cloud with 1:1 corresponding color values.
  • Run a detection algorithm on the point cloud to find the pose of the desired object. A pose typically includes a picking position + a picking orientation.
  • Use the eye-to-hand calibration to transform the picking pose to the robot's coordinate system.
  • The robot program can now move the robot's gripper to the correct pose and pick the object.
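
The sketch below walks through this sequence. The modules and helpers (my_vision, my_robot, capture_point_cloud, detect_picking_pose, move_to_pose) and the calibration file name are hypothetical placeholders standing in for your camera SDK, detection algorithm, and robot driver.

```python
import numpy as np

# Hypothetical helpers standing in for your camera SDK, detection algorithm,
# and robot driver -- these modules and functions are placeholders, not real APIs.
from my_vision import capture_point_cloud, detect_picking_pose
from my_robot import move_to_pose

# Result of the eye-to-hand calibration: camera frame expressed in the robot base frame.
T_base_cam = np.load("eye_to_hand_calibration.npy")   # 4x4 homogeneous transform

# 1. Grab a 3D image (point cloud) of the scene.
point_cloud = capture_point_cloud()

# 2. Detect the target object and estimate its picking pose in the camera frame.
T_cam_pick = detect_picking_pose(point_cloud)         # 4x4 homogeneous transform

# 3. Use the eye-to-hand calibration to move the pose into the robot's coordinate system.
T_base_pick = T_base_cam @ T_cam_pick

# 4. Command the robot to move its gripper to the pose and pick the object.
move_to_pose(T_base_pick)
```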

Hand-eye calibration is the binding between the robot and the camera, which makes it easy to understand why having an accurate hand-eye calibration is essential to solving the automation task. With a high-accuracy 3D camera, you only need a snapshot to know the target object's position in space before successfully picking it.

Without hand-eye calibration, the likelihood of errors is much higher, and these errors have negative financial consequences. They include increased delivery costs, maintaining excessively high availability, improper planning and coordination of processes, future penalties for documentation errors, and more.

A hand-eye calibration is a minimization scheme that uses pairs of robot poses and corresponding camera poses. Robot poses are read directly from the robot, while camera poses are calculated from the camera image.
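
As an illustration of such a minimization scheme (not Zivid's API), OpenCV's calibrateHandEye solves the classic AX = XB problem from a set of pose pairs. The random values below only show the expected data layout; in a real calibration the poses come from measurements and must be well spread out.

```python
import cv2
import numpy as np

# In a real calibration these pose pairs come from measurements: robot poses
# (gripper in the base frame) read from the controller, and target poses
# (calibration object in the camera frame) estimated from the captures.
# Random values are used here only to show the expected data layout.
rng = np.random.default_rng(0)
n = 10
R_gripper2base = [cv2.Rodrigues(rng.uniform(-1, 1, (3, 1)))[0] for _ in range(n)]
t_gripper2base = [rng.uniform(-0.5, 0.5, (3, 1)) for _ in range(n)]
R_target2cam = [cv2.Rodrigues(rng.uniform(-1, 1, (3, 1)))[0] for _ in range(n)]
t_target2cam = [rng.uniform(-0.5, 0.5, (3, 1)) for _ in range(n)]

# OpenCV minimizes over all pose pairs (the AX = XB formulation) and returns
# the camera pose relative to the gripper (the eye-in-hand case).
R_cam2gripper, t_cam2gripper = cv2.calibrateHandEye(
    R_gripper2base, t_gripper2base,
    R_target2cam, t_target2cam,
    method=cv2.CALIB_HAND_EYE_TSAI,
)
print(R_cam2gripper, t_cam2gripper)
```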

Benefits of using a Zivid 3D robot vision camera for hand-eye calibration

A common way to perform this calibration originates from 2D cameras, using 2D pose estimation. Simplified, you capture images of a known calibration object, calibrate the camera, and estimate 2D to 3D poses.

Anyone who has tried camera calibration knows that this is hard to get right (the classic procedure is sketched in code after these two steps):

1. Use a proper calibration object, like a checkerboard. This means very accurate corner-to-corner distances and flatness.

2. Take well-exposed images of your calibration object at different distances and angles (the calibration volume). Spreading the images out across the calibration volume and covering both the center and the edges of the camera frame is key to achieving a good calibration.
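
The sketch below shows these two steps with OpenCV's classic checkerboard workflow. The image file pattern, checkerboard dimensions, and square size are assumptions about your setup.

```python
import glob
import cv2
import numpy as np

pattern = (9, 6)        # inner corners per checkerboard row and column (assumed)
square_size = 0.025     # checkerboard square size in meters (assumed)

# 3D coordinates of the corners in the checkerboard's own frame (z = 0 plane).
obj_template = np.zeros((pattern[0] * pattern[1], 3), np.float32)
obj_template[:, :2] = np.mgrid[0:pattern[0], 0:pattern[1]].T.reshape(-1, 2) * square_size

obj_points, img_points, image_size = [], [], None
for path in glob.glob("checkerboard_*.png"):      # assumed file names
    gray = cv2.imread(path, cv2.IMREAD_GRAYSCALE)
    found, corners = cv2.findChessboardCorners(gray, pattern)
    if found:
        obj_points.append(obj_template)
        img_points.append(corners)
        image_size = gray.shape[::-1]

# Estimate intrinsics and distortion from the (hopefully well-spread) set of views.
rms, camera_matrix, dist_coeffs, rvecs, tvecs = cv2.calibrateCamera(
    obj_points, img_points, image_size, None, None
)
print("RMS reprojection error:", rms)
```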

While neither step is trivial, step two is especially challenging. Even the most experienced camera calibration experts will agree. At Zivid we factory calibrate every single 3D camera and have dedicated thousands of engineering hours to ensure accurate camera calibration.

So, the question is, should you recalibrate the 2D camera and estimate 2D to 3D poses when your 3D camera already provides highly accurate point clouds?

Well, the short answer is no.

Zivid’s hand-eye calibration API uses the factory-calibrated point cloud, which stays accurate across the camera's temperature and aperture ranges, to calculate the hand-eye transform. Not only does this yield a better result, it does so with fewer positions. And, more importantly, the result is repeatable and easy to obtain.
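
For completeness, here is a sketch of what the point-cloud-based workflow can look like with the Zivid Python API. The calibration-module names (detect_feature_points, HandEyeInput, Pose, calibrate_eye_to_hand) and the capture settings follow Zivid's published Python samples but may differ between SDK versions, so treat them as assumptions and check the current documentation; the robot-pose helper is a placeholder.

```python
import numpy as np
import zivid

def collect_robot_poses():
    """Placeholder: in a real setup, read 4x4 end-effector poses from the robot
    (for example, the UR5e controller) at each calibration position."""
    return [np.eye(4) for _ in range(10)]

app = zivid.Application()
camera = app.connect_camera()

# One capture and one feature detection per robot pose.
hand_eye_inputs = []
for robot_pose in collect_robot_poses():
    frame = camera.capture(zivid.Settings(acquisitions=[zivid.Settings.Acquisition()]))
    detection = zivid.calibration.detect_feature_points(frame.point_cloud())
    if detection.valid():
        hand_eye_inputs.append(
            zivid.calibration.HandEyeInput(zivid.calibration.Pose(robot_pose), detection)
        )

# Solve for the camera pose in the robot base frame (eye-to-hand).
result = zivid.calibration.calibrate_eye_to_hand(hand_eye_inputs)
print(result.transform())   # 4x4 camera-to-robot-base transform
```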

The following graphs show typical rotational and translational errors as a function of the number of images used per calibration.

Zivid 3D vs. OpenCV calibration for pose estimation

Number of images vs. translation error

As you can see, a typical improvement is a 3x reduction in translation error and a 5x reduction in rotational error.

Download the eBook to read more.
