A status report on the physical AI revolution

When we look back on November 30, 2022 in 10, 20 or 50 years, we may well remember it as a historic turning point.

The launch of ChatGPT may come to be seen as the start of an era of widespread AI adoption, and since that day artificial intelligence and machine learning have been the hot topic of conversation.

All of this despite the fact that artificial intelligence and machine learning are not new technologies. We have known them for decades; the recent revolution comes down, in simple terms, to advances in compute power that finally let us handle the enormous amounts of data needed for the complex tasks we're starting to use AI for.
The companies behind all this, like NVIDIA, are enjoying extraordinary growth, and rightly so.

Ahead of this year’s COMPUTEX technology conference, NVIDIA founder and CEO Jensen Huang highlighted the transformative power of generative AI, predicting a major shift in computing. “The intersection of AI and accelerated computing is set to redefine the future,” Huang stated, setting the stage for discussions on cutting-edge innovations, including the emerging field of physical AI, which is poised to revolutionize robotic automation.

But here in mid-2024, what progress have we made in the physical AI revolution?

On a scale from 1 to 5…

To be honest, we really haven’t come very far. I like to compare robotics with the development of self-driving cars. The automotive industry has defined five stages for the transition from manual to fully autonomous driving. Currently, the industry isn’t at level 5, as recent experiments in the US have shown, but the upside is that there are plenty of level 2, 3 or 4 technologies along the way that can have a major impact. Like adaptive cruise control in cars, which has turned a very manual process into a semi-automated one, making driving smoother, easier and safer.

The same goes for robotics. AI will certainly one day lead to humanoid robots that can think and figure out how to solve problems by themselves without prior programming - that would be level 5. But, as with self-driving cars, we will see, and are already seeing, plenty of breakthroughs at levels 2, 3 and 4 that provide true value to businesses.
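To make the analogy concrete, the scale might be summarized as a simple lookup. Note that these level descriptions are an illustrative paraphrase of the automotive analogy above, not an official SAE or robotics-industry taxonomy:

```python
# Illustrative mapping of autonomy levels to robotics analogues.
# The descriptions paraphrase the automotive analogy in the text;
# they are not an official taxonomy.
AUTONOMY_LEVELS = {
    1: "Manual operation; every motion explicitly programmed",
    2: "Assisted operation, e.g. guided hand-teaching of waypoints",
    3: "Conditional autonomy; handles routine variation with a human on standby",
    4: "High autonomy within a bounded task, e.g. vision-guided order picking",
    5: "Full autonomy; the robot solves novel tasks without prior programming",
}

def describe(level: int) -> str:
    """Return the illustrative description for a given autonomy level."""
    return AUTONOMY_LEVELS[level]
```

On this sketch, the logistics picking solution discussed below would sit around level 4: highly autonomous, but within a bounded, well-defined task.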

One of these breakthroughs, for example, can be seen in logistics. In partnership with Siemens and Zivid, we have developed a solution where a cobot performs order picking with total autonomy based on Siemens’ SIMATIC Robot Pick AI software and Zivid’s vision technology. Compared to manual processes, this significantly enhances the speed and accuracy of order fulfillment in warehouses and helps logistics centers meet increasing global demand, while also addressing the growing difficulty of attracting labor for this kind of manual work.

Getting to a level 5 humanoid robot will rely heavily, among many other things, on outstanding vision technology and software at a level we have yet to see. But intermediate-stage technological innovations are delivering a lot of value along the way.

Three impacts of generative AI

Getting a handful of robotics experts to align on where we currently are on the above-mentioned scale could start a lengthy discussion. But it’s obvious that, when looking at the disruptive potential of physical AI, we still have much ground to cover - despite the great advancements made in 2023 and 2024.

Looking forward, let me highlight three of the impacts physical AI will have on robotics:

For one, AI will largely eliminate the need for experts. We will of course still need robotics engineers, integrators and other skilled experts in the future, and plenty of them. But the potential of robot automation is so large that there cannot be an expert on every factory floor (as an industry, cobots have reached only about two percent of the potential market). Many tasks in robotics today still require an expert. With AI, we will soon be able to remove some of these hurdles, and this will significantly accelerate the introduction of robots in many areas.

Secondly, generative AI can help us standardize solutions. If you look at the challenges we face in the automation industry, the problems are very similar in many companies. With generative AI, we are increasingly able to standardize both problems and solutions and thus create more reusable robot behaviors. There is no need to reinvent the wheel every time a new robot is installed, and AI can help with that, making integration as well as return on investment much faster.

Thirdly, AI enhances robots’ ability to navigate unpredictable environments. As with the logistics solution mentioned earlier, vision technology with real-time feedback from 3D cameras is a huge enabler of not just autonomous navigation but also obstacle detection. This capability opens up the possibility of introducing robots outside the highly structured environment of a factory floor, for example in construction, where robots must handle project variations while working side by side with workers.

At Universal Robots, we already have numerous partners in our ecosystem making great advancements with AI-based applications - in construction and beyond. And like so many of the other automation trends we will experience in the coming years across applications and industries, AI will very much be at the center of future progress.

Anders Billesø Beck, Vice President Strategy & Innovation, Universal Robots

Anders Billesø Beck leads the development of cobot technologies to keep global businesses agile, productive and innovative. He holds a PhD in robotics from DTU, the Technical University of Denmark, and has also held leading positions at the Danish Technological Institute. Anders combines his scientific background with contributions to the global collaborative automation industry to change the way the world works.
