Zivid has released a new camera lineup – the Zivid 2+ R-series – which takes everything you have come to know and love in the current Zivid 2+ model and makes it better, faster, and stronger.
Don’t feel like reading the blog? How about you check out our launch video for the Zivid 2+ R-series instead? Watch the video now!
This camera series is all about setting a new standard in picking efficiency, consistency, and reliability. Every innovation it contains is designed to one end: robotic picking and manipulation cells that run as fast as possible and confidently handle any item or object.
With these new cameras, Zivid has again raised the bar for high-quality 2D and 3D data in the same device, making it the top choice for e-commerce and logistics applications. The Zivid 2+ R-series cameras feature a completely reimagined 5-megapixel 2D camera, giving you image quality that was previously only available from stand-alone, higher-resolution industrial 2D cameras installed alongside your 3D sensor.
All this data comes in one easy-to-use, coherent package that requires no 2D-to-3D calibration, extra installations, illumination, or hardware. And it captures at speeds three times faster than ever before.
These cameras can be used in e-commerce to capture transparent goods and other consumer items in only 150 ms, and you will get full data coverage for logistics and parcel handling scenarios in as little as 50 ms.
Improvements have also been made for manufacturing applications, bringing speed boosts to current capture cycles and making robot mounting a viable option. To top it off, Zivid is introducing a new structured-light 3D technology that targets vertical reflections and captures the most difficult shiny and reflective items.
Curious to find out how you can simplify your automation cell and get better performance than ever before? Let’s dive in and find out.
Speed is always one of the first questions that comes up when you purchase a vision system for an automation cell.
“How fast is it?”
“What is the frame rate?”
“How will this impact my cycle time?”
The first step in calculating the cycle time of a cell is knowing how long data acquisition takes. Every decision about pick pose estimation, robot movement, gripper position, and speed is based on the information the system gets back from the vision sensors. The 2D and 3D data act as the ground truth for a picking cell and give the robot all the information about its environment. To a great extent, the vision system will make or break the success of your robot cell.
This is especially relevant when it comes to bringing automation to logistics and e-commerce. Picking products and packages for order fulfillment has the toughest cycle time requirements in the robotic manipulation industry. In parcel handling, you are often looking at picking 1200 to 1700 packages every hour.
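To put those numbers in perspective, here is a rough back-of-the-envelope budget, assuming the upper end of that throughput range and the 50 ms capture time quoted elsewhere in this post (an illustrative sketch, not a benchmark):

```python
# Rough per-pick time budget for parcel handling (illustrative figures from the text)
picks_per_hour = 1700                     # upper end of the 1200-1700 parcels/hour range
cycle_budget_s = 3600 / picks_per_hour    # ~2.1 s available per pick
acquisition_s = 0.050                     # 50 ms capture time quoted for the Zivid 2+ R-series

print(f"Cycle budget per pick: {cycle_budget_s:.2f} s")
print(f"Acquisition share of the budget: {acquisition_s / cycle_budget_s:.1%}")  # ~2.4%
```

The less of that budget acquisition consumes, the more time is left for detection, pose estimation, and robot motion.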
The key ambition with the Z2+ R-series was to make piece-picking robots in e-commerce faster and so boost the applicability of such robots in warehouses. To do so, we set out to speed up the Omni 3D engine, the world's first structured-light technology made to see transparent objects without any prior knowledge.
Common items found in piece picking: bubble wrap, transparent bottles, and poly bags.
In only 150 ms, the Zivid 2+ R-series can capture a combined 2D image and point cloud with unmatched quality. That is three times faster than our current Zivid 2+ models, with no loss in data quality. It enables the industry target of a 5-second effective cycle time for piece-picking robots, where the margins for image capture, detection, and pose estimation are very small, and any overrun leaves the robot idling or leads to mistakes such as mispicks, double picks, or damaged items.
Parcels come in a wide variety of textures and colors: check out the black plastic bags, cardboard boxes, and shiny yellow plastic.
In only 50 ms, the Zivid 2+ R-series captures high-quality 2D images and point clouds of every parcel. These applications require a high dynamic range to handle various packaging materials, from basic cardboard to shiny white and black surfaces.
True-to-reality data is critical in logistics environments, where the robot's end effector often grips surfaces that can easily be deformed or damaged. Inaccurate surface information risks damaging the package and its contents, affecting both the recipient and the sender's reputation. Very precise 2D data is used to differentiate and segment parcels and boxes; this requires very good edge detection, because seeing two items as a single entity leads to double picks and dropped items. Often a 3D camera can reach extremely fast acquisition times, but only at the sacrifice of data quality. At Zivid we do not accept this compromise.
These advancements bring considerable benefits to manufacturing tasks as well as logistics and warehousing applications. With the Z2+ R-series there is a notable speed-up when using the Stripe Engine, Zivid's 3D reconstruction technology geared towards shiny metallic objects and difficult parts. This opens up more options for automation in manufacturing.
It also increases the opportunities to mount the camera on the robot. With 4x faster acquisitions, the time the robot needs to remain stationary for acquisition becomes negligible compared to its total cycle time for manufacturing tasks. This opens the door to a single robot-and-camera unit servicing multiple bins or objects.
Another area Zivid has invested in with the Z2+ R-series is delivering point clouds free from false points created by vertical reflections. This capability is introduced with Zivid's newest 3D reconstruction technology, the Sage Engine. This vision engine is perfect for highly reflective and difficult scenes, where a large number of interreflections are detrimental to the robot's detection and path-planning algorithms.
A before-and-after comparison of the Sage Engine and the Stripe Engine when dealing with parts stuck against the bin wall.
This engine delivers point clouds with greater confidence in each point, making complicated parts with small holes easier to detect and match, as well as yielding a higher number of picks in complicated scenes.
With this camera release, Zivid has also kept an eye on the market's 2D requirements for robotic automation. The quality of the 2D data is just as important as the 3D data for detection methods that rely on AI or template matching. Most of these systems depend on a wide range of equipment to get good data for object detection. You will see just about everything: 2D sensors, 3D sensors, lighting, panels, basically anything to control the overall environment and get just good enough data to pick their inventory. How nice would it be if you could simplify that?
The Zivid 2+ R-series reinvents the 2D camera, bringing the best 2D data on the market for industrial applications. No external lighting or panels are needed: the camera is self-illuminating and will provide you with high-quality data regardless of the environment.
The image on the left is from the Zivid 2+, while the image on the right is from the Zivid 2+ R-series. Note the difference in the line quality between the two images.
Even though this camera still uses a 5 MP sensor, we have dramatically increased the effective resolution. The level of detail in the image is now limited only by the absolute pixel pitch of the sensor. A camera with a mid-range sensor resolution such as 5 MP is often subject to a wide range of color aberrations and other sensor effects, usually manifesting as zippering along the edges of lines and color fringing. The Z2+ R-series does not suffer from these effects.
The image on the left is from the Zivid 2+, while the image on the right is from the Zivid 2+ R-series. Note the difference in the leaves and the fine-detail resolution between the two images.
One of the biggest struggles with using a vision system is getting reliable data from the sensors. Often your output will be dependent on the amount of light in the scene and its color temperature.
How can you know that the automation cell at your R&D facility will function the same when you deploy it at the warehouse? The goal when designing a robot cell is to be able to deploy and let it work anywhere, regardless of lighting or temperature conditions.
Check out a live capture of a scene from the Zivid camera alongside what a handheld camera sees of the same scene.
Most cameras see degrading data quality in both their 2D and 3D results as ambient light increases. This leads to incomplete point cloud coverage in the 3D data, and varying colors and overexposure in the 2D images. Check out the example below to see how multiple cameras perform in changing light conditions compared to the Z2+ R-series.
Compare the abilities of different 3D cameras across increasingly brighter lighting conditions.
The Zivid 2+ R-series puts all those worries to rest. This camera is designed to deliver consistently high-quality results regardless of the lighting conditions, delivering robust 2D and 3D data in one easy-to-use package. You can use this camera with the realistic expectation that it will provide consistent and complete data no matter where you want to deploy your automation cell.
Making an automation cell that can be deployed anywhere is one of the hardest tasks for an engineer. Robustness and ease of deployment are key to making this happen. At Zivid we work closely with our customers to understand where these pains come from, so we can help make a vision system that is easy to use and dependable when deployed at scale.
To take it a step further, Zivid Studio has been redesigned in the newest release, SDK 2.14, to make those first steps of R&D and deployment as smooth as possible. Finding the best settings has never been easier, with Zivid Studio now providing presets for both 2D and 3D captures.
Let's not forget that all the functionality you find in Zivid Studio is fully available in the Zivid API as well. If you do not want to fuss with configuring the camera settings programmatically, you can simply export the YAML file and let your robot see exactly what you see in Zivid Studio. Check out the GitHub repository for all the code samples to get started with the Zivid cameras.
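As a rough illustration of that workflow, here is a minimal sketch using the zivid-python package; the YAML file name below is a placeholder for a settings file you have exported from Zivid Studio:

```python
import zivid

# Connect to the first available Zivid camera.
app = zivid.Application()
camera = app.connect_camera()

# Load the settings you tuned and exported from Zivid Studio (placeholder path).
settings = zivid.Settings.load("my_studio_settings.yml")

# Capture with exactly the same settings, then keep the frame and the point cloud.
frame = camera.capture(settings)
frame.save("capture.zdf")                    # full frame for later inspection
xyz = frame.point_cloud().copy_data("xyz")   # point cloud as a NumPy array
```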
One of the most frustrating, and expensive, things that can happen to an automation cell is inconsistent operation. When a picking cell fails, it requires some type of intervention. Often this means the system stops until the problem is fixed and the whole system is restarted, and the restart also takes time. This downtime and intervention consume a lot of resources and increase the effective cycle time of the cell.
Time is not the only investment you have to make in a robot cell that keeps stopping; there are also the man-hours it takes to fix it. Each time the robot cell stops operation, an operator or engineer has to go in, troubleshoot what is going wrong, and find a way to fix it. The man-hours add up quickly and stack on top of the cost of the downtime of the stopped robot.
The vision system is the ground truth for any robot cell that is doing unstructured pick-and-place tasks. Investing in a vision system that can provide you with complete, correct, and consistent data is the first step to ensuring a high-productivity design. This is what the Zivid 2+ R-series provides.
It's also important to understand the impact of investing in an industrial-grade device: you want your vision system to be hardy and robust. The camera should be able to work anywhere, in dirty and complex environments. What if one warehouse is in a warm climate like Spain or Arizona and another is in Poland or Minnesota?
Let us revisit what this camera line-up is all about:
The Zivid 2+ R-series comes in three variants that address a wide range of applications in logistics, e-commerce, and manufacturing: the MR60, the MR130, and the LR110. The MR60 offers robot-mounted flexibility with the ultimate point cloud quality for fine-detailed picking and manipulation in applications such as assembly and robot guiding. The MR130 is designed to perfectly address piece-picking, parcel induction, and bin-picking from standard-sized bins and totes. The LR110 offers very high-quality results coupled with a generous FOV, suited to robot-mounted bin-picking from large bins and multiple bins.
The Zivid 2+ R-series is available now. Get a quote today: