The logistics industry is a vast network of interconnected operations spanning countries and continents, and the statistics that describe it are equally large. In 2023 the global logistics industry was valued at $8.96 trillion, and it is expected to reach $15.79 trillion by 2028.
Automation is already playing a considerable role within logistics, with an annual spend of $65.25 billion in 2023, projected to increase to $217.26 billion by 2033.
The volume of parcels handled daily is hard to gauge accurately, but in the US alone, over 23 million parcels are delivered every day. Speed of handling and throughput are very much at the top of this business's agenda.
Before the parcels can be sorted and distributed, they need to be fed, or inducted, into the sorter. This process is called parcel induction, and it needs to be very fast to keep up with the sorting system. Separating packages from these piles into single items at the necessary speed is a task humans are well equipped to do, which is why logistics sorting has relied heavily on human workers to get the job done.
However, there is a snag: human workers may be well adapted to this task, but it isn’t something they want to do for any length of time. Over extended periods, the monotony leads to boredom, distraction, and mistakes. Hiring for this type of work, and retaining those employees, is consequently a significant problem. In 2019, the turnover rate in logistics and postal sorting was 38.5%; by 2022, it had skyrocketed to 59% and currently shows little sign of reversing. As a result, the industry is increasingly turning to robotics to keep its hubs operating.
Logistics hubs must process parcels at incredible speeds to meet growing demand. Robot cycle times of less than 5 seconds are mandatory, and some systems target as little as 1 second per pick. Within this constraint, 2D and 3D image capture and processing times are critical bottlenecks for the robot’s vision system. In an industry that demands reliability simply because of the sheer quantity of operations, traditional high-quality 3D cameras struggle to deliver results quickly enough, with capture and processing often exceeding the required 200 ms window for picking cycles. This delay is a significant barrier to maintaining throughput and efficiency.
When parcels arrive at a sorting facility, they arrive as an unordered mass of items. They are then unloaded into feeding systems that can empty the parcels and packages onto a conveyor at a manageable rate via a chute. The goal is to singulate each item from this pile so it is ready for sorting systems further downstream.
Typical layout of a robotic parcel induction cell.
2D and 3D imaging play integral parts in this process. Typically, high-quality 2D images are used by ML/AI software to identify and segment items in the scene. The system can then assess which items are good candidates to pick, in what order, and the pick poses the robot must use to pick them successfully. This is a detailed process; our next blog covers why image quality matters in parcel induction.
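To make that hand-off concrete, here is a rough sketch of what a pick candidate passed from the vision software to the robot might look like. The structure and field names below are illustrative assumptions, not part of any particular vendor's API.

```python
# Hypothetical sketch of a pick candidate handed from the vision software
# to the robot controller. Field names are illustrative assumptions.
from dataclasses import dataclass

import numpy as np


@dataclass
class PickCandidate:
    """One pickable item proposed by the segmentation software."""
    mask: np.ndarray               # 2D boolean segmentation mask of the item
    position_mm: np.ndarray        # (x, y, z) pick point in the robot frame
    orientation_quat: np.ndarray   # gripper/suction orientation as a quaternion
    score: float                   # confidence that this pick will succeed


def best_candidate(candidates: list[PickCandidate]) -> PickCandidate:
    """Pick the highest-confidence candidate for the next robot cycle."""
    return max(candidates, key=lambda c: c.score)
```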
As mentioned, while human workers are inherently capable of parcel singulation, they are not suited to performing it over long periods. Sorting facilities would like to adopt robotic parcel induction to overcome the constant turnover and hiring challenge. However, building a performant robotic parcel induction cell is not trivial.
The fundamentals of what the robot needs to achieve can be framed as four questions:
In this blog, we’ll explore pain point number 1, and we’ll delve into pain point number 2 in our next blog. But both are inseparable requirements. You need to be able to handle all items, no matter their shape and material, and you need to do this at the required speed.
To identify and singulate an item, it must be scanned by the machine vision camera to perform the following:
We know that the robot’s cycle time can be as short as 1 or 2 seconds, which isn’t much time. To compound the challenge, the camera must make its 2D and 3D acquisitions while the robot is not obstructing its view, and it must capture early enough for the next pick task to be ready when the robot returns for the next pick operation. This window is typically only 10% to 20% of the overall cycle time, so the actual time available for capturing the scene can be as short as 100 to 200 milliseconds.
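As a quick sanity check, the snippet below is nothing more than arithmetic on the figures quoted in this post:

```python
# Quick arithmetic on the figures above: the camera's occlusion-free window
# is only a fraction of the robot's cycle time.
for cycle_s in (1.0, 2.0):
    for visible_fraction in (0.10, 0.20):
        window_ms = cycle_s * visible_fraction * 1000
        print(f"{cycle_s:.0f} s cycle, {visible_fraction:.0%} clear view "
              f"-> {window_ms:.0f} ms to capture")
# e.g. a 1 s cycle with a 10% window leaves just 100 ms for capture.
```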
Once the 2D and 3D data is passed to the segmentation software, there is still plenty of work to be done. This software must have enough time to carry out segmentation using the 2D data and to generate a list of good candidate pick poses for the robot. Careful selection of this software is a critical part of the system design. Parcel induction specialists such as Fizyr have state-of-the-art solutions and numerous deployments in industry, such as the ROSI parcel induction cell from AWL. Fizyr's software is extremely fast, but it still needs a few hundred milliseconds to offer the next picking choice to the robot.
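One way to reason about whether a camera and software combination fits a cell is a simple latency budget across these stages. The stage timings below are placeholder assumptions chosen for illustration, not benchmarks of Fizyr, AWL, or any specific camera:

```python
# Illustrative latency-budget check for one pick cycle. All timings are
# placeholder assumptions, not measurements of any particular product.
CYCLE_MS = 1000            # target robot cycle time
CAPTURE_WINDOW_MS = 200    # occlusion-free slot available to the camera

capture_ms = 150           # 2D + 3D acquisition
segmentation_ms = 300      # ML-based item segmentation on the 2D image
pose_generation_ms = 100   # ranking candidates and computing pick poses

# The capture must fit inside the occlusion-free window...
assert capture_ms <= CAPTURE_WINDOW_MS, "capture is too slow for the window"

# ...and the whole vision pipeline must finish before the robot returns.
total_ms = capture_ms + segmentation_ms + pose_generation_ms
assert total_ms <= CYCLE_MS, "pipeline cannot keep up with the robot cycle"
print(f"Vision pipeline uses {total_ms} ms of a {CYCLE_MS} ms cycle.")
```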
This means that when selecting a 3D camera for parcel induction, it is essential to consider whether it can capture data of the necessary quality within this very short timeframe.
Only 10% to 20% of robot cycle time is available for image capture.
The Zivid 2+ MR130 is designed to meet the high-speed requirements of picking robots in logistics and warehousing. Here is what you can expect from your Zivid 2+ MR130 camera:
This camera series is known for its lightning-fast acquisition and capture times. It can capture parcels and boxes in 50 milliseconds. For extra reliability when handling the most challenging materials, the camera can capture a high-quality point cloud in 150 milliseconds.
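If you want to verify capture times in your own cell, a minimal timing sketch along these lines, assuming the zivid-python SDK and default acquisition settings, is enough for a first measurement; a production setup would of course tune exposure, aperture, and filtering for its parcels:

```python
# Minimal capture-timing sketch, assuming the zivid-python SDK (pip install zivid).
# Default acquisition settings are used here purely for illustration.
import time

import zivid

app = zivid.Application()
camera = app.connect_camera()

settings = zivid.Settings(acquisitions=[zivid.Settings.Acquisition()])

start = time.perf_counter()
frame = camera.capture(settings)
elapsed_ms = (time.perf_counter() - start) * 1000

point_cloud = frame.point_cloud()
print(f"Captured {point_cloud.width} x {point_cloud.height} points "
      f"in {elapsed_ms:.0f} ms")
```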
2D image detail and quality are comparable to the very best industrial 2D cameras available today. Each pixel exhibits superb data characteristics that are unaffected by ambient light from 0 to 2,000 lux.
Watch how the Zivid 2+ R-series offers stable 2D and 3D under ambient light extremes in this short demo:
Point clouds will be complete, with no missing areas, and free of artifacts that could confuse your robot cell and cause mis-picks and double-picks.
Packaging in logistics is in a state of flux, with plastic poly mailers increasingly prevalent, and it will continue to evolve. You can expect this camera to handle the toughest of today’s packages, with plenty of adaptability to cope with those introduced in the future.
When employing the MR130 camera, be prepared to ditch a long list of expensive extras that you just won’t need. With an integrated 2D and 3D camera and a high-power light source, say goodbye to separate cameras, cables, and lighting fixtures.
Robot cell simplification and cost reduction using Zivid 2+ R.
By implementing Zivid 2+ R-Series cameras, logistics operators have achieved transformative results:
This is the case for market leader Mujin:
Logistics is growing and evolving at a rapid clip. With the Zivid 2+ MR130, there is now a fully integrated 2D and 3D camera that can meet and surpass the speed expected of human operators.
Ultra-fast 2D and 3D processing that makes no compromise on the quality of either 2D or 3D imaging sets the Zivid 2+ MR130 apart from the incumbents in parcel induction. This simple and effective solution offers never-before-seen levels of speed and reliability and is already proving a winner with customers in the logistics and postal service fields. Whether capturing simpler items in 50 ms or processing complex parcels in 150 ms, these cameras ensure that no time is wasted and no parcel is left behind.