Robotic machine tending is one of the fastest-growing areas for robotic applications. It has been a part of manufacturing processes for a few decades now, initially performed by humans, and now increasingly served by robots. While machine tending can be achieved without 3D sensors, there are circumstances where highly accurate imaging using sensors and cameras, such as the Zivid Two industrial 3D color camera, can play an integral role in building more capable and more reliable machine tending robot cells.
Machine tending is essentially looking after, and serving, another machine that is performing a function. These machines are most typically involved in machining, forming, and shaping parts for use in a production assembly process. Some typical applications are shown below:
You can think of the machine tending robot or collaborative robot as a sort of butler to the machine it is assigned to. It efficiently and reliably takes the part from a tray, then places it in exactly the right place and orientation in a lathe holding chuck or similar, depending on the process being undertaken.
Across multiple industries
We often associate machine tending with the machining of metal parts in CNC machines, and that is probably the chief use case. However, it is seen across multiple industries, including some listed below.
We won’t dwell too long on this topic as we have previously discussed it. But it is a common theme across manufacturing and industry that many of the jobs robots are taking on are unrewarding, dirty, and dangerous.
There is a good argument to be made that this is why a generation has not entered industry in the numbers it once did, and that doesn’t look to be changing in the short term. Machine tending can be among the most mundane jobs while simultaneously being highly dangerous, with very powerful rotating machinery present.
While robotic machine tending brings great gains in improved productivity and quality assurance, it is limited in its traditional guise. It relies on a rigid set of tasks and on how parts are presented to the robot. Without adequate 3D vision the robot is, of course, blind. If something is not placed exactly where the robot expects to find it, there will almost certainly be a missed pick and likely a stoppage that an operator must resolve.
This leaves the automated cell at the mercy of a parts tray being incorrectly placed, or a knock causing parts to be out of place compared to where the robot expects them to be.
From a flexibility and productivity perspective it also limits the cell to single-task operation, and prevents the possibility of presenting bins and trays containing randomly placed parts.
The machine tending operation has a few distinct phases where system trueness plays a critical role. This is illustrated in the figure below.
Industrial robots and collaborative robots, or cobots, are both used in machine tending. Industrial robots are used where much larger items and parts are being fed into another machine; typical examples include large forge press machines. In these situations, the working area must be caged off due to the danger it poses to humans.
The most commonly seen deployment is the use of cobots, with a human operator monitoring the operation of one or a number of machines. Cobots offer a great deal of flexibility as they can be programmed very simply, even by just moving the cobot arm to grasp and place points manually.
These robots can take care of multiple extra functions such as closing CNC doors and pressing operation buttons on the CNC console. All in all, they are the perfect companion to a machine tending cell offering a very high degree of flexibility.
While cobots are used extensively for their inherent flexibility, there are limitations without high-quality 3D machine vision. They can only handle specific parts in a specific scenario. The parts they load and unload must be presented in an ordered way in a precise position. Bins with randomly positioned parts pose a great challenge without a suitable 3D sensor.
A 3D sensor mounted on the robot arm immediately gives the robot awareness of its workspace. This means randomly positioned parts can be presented to the robot, and pick and grasp operations can be performed successfully using the 3D sensor. It also means complex arrangements of parts, and multiple types of parts, can be used in the process without reprogramming or rearrangement of the workspace. This can bring great cost reductions and improvements in quality, as the 3D imaging can see to a much higher degree of accuracy than a human operator can.
High-mix, low-volume production runs
One distinct advantage of using 3D vision in machine tending is that it makes high-mix, low-volume production runs much easier to achieve and control. As the name describes, this type of production is in great demand today as manufacturers want to produce parts in line with incoming orders. This approach to production has been a modern holy grail in manufacturing, enabling you to make only what you need and are going to sell, and drastically reducing wastage and unnecessary losses.
Zivid Two, perfect partner for your machine tending operation
The Zivid Two 3D camera is the perfect choice for flexible machine tending operations. It delivers dense point clouds with superb performance characteristics: a point precision of < 55 μm and an average dimensional trueness error of < 0.2%. If you are wondering what trueness error is, or maybe you have not encountered the term before, you can read about it in this blog.
Zivid Two delivers superb, dense point clouds even with shiny metal and colors:
The Zivid Two industrial 3D camera has also been designed from the ground up to operate with excellence on the robot arm. This means it is small, lightweight, yet very tough and reliable. It also comes with a wide range of accessories for mounting, cabling, and calibration.
Zivid Two is tested along its entire working distance and area, while being subjected to full operating temperature sweeps, along with vibration and shock testing. This means it has been tested to deliver excellence in any deployment.
Dimensional trueness error is the amount of deviation you can, on average, expect to see between a point in the point cloud and where that point exists in reality. Overall accuracy is a function of both precision and trueness. With a point precision of < 55 μm and a dimensional trueness error of < 0.2%, Zivid Two offers astonishingly accurate, very dense point clouds with over a million points.
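To make the percentage figure concrete, here is a small back-of-the-envelope sketch of how a trueness error quoted as a percentage translates into an absolute deviation at a given working distance. The function name and the 700 mm example distance are illustrative assumptions, not part of any Zivid specification or SDK:

```python
def trueness_error_mm(distance_mm: float, trueness_error_pct: float) -> float:
    """Worst-case dimensional deviation in mm for a trueness error
    specified as a percentage of the measurement distance."""
    return distance_mm * trueness_error_pct / 100.0

# At an illustrative 700 mm working distance, a < 0.2% trueness error
# bounds the average dimensional deviation at about 1.4 mm:
error = trueness_error_mm(700.0, 0.2)
print(f"Max expected trueness error at 700 mm: {error:.2f} mm")  # 1.40 mm
```

The same arithmetic shows why trueness matters more at longer working distances: at 1000 mm, the same 0.2% bound corresponds to 2 mm of deviation.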
Dimensional trueness quality is a very important parameter for any precise pick and place operation, which is what machine tending actually is. 3D sensors and cameras do offer great possibilities, but these possibilities are dependent on the trueness quality of the 3D sensors or cameras being used.
Machine tending is already prevalent in the manufacturing industry, and in most cases today, it is fair to say, 3D cameras are not in use. This really stems from a legacy of ‘doing things the way they’re done’. Machine tending operations are geared up to perform a certain task with specific parts, and do not deviate without a major change in the operational setup.
The past few years have seen the advent of affordable, high-quality 3D sensors that are fast enough to offer immense flexibility and deliver attractive productivity gains that far outweigh the initial cost of the 3D camera. A camera that can perform to such high standards must also be of a true industrial grade so that it can tolerate industrial situations. The Zivid Two is that 3D camera.
Want to learn more about 3D vision-guided robotics for machine tending? Download our free ebook: