Physical AI for cobots today

With Universal Robots' launch of the AI Accelerator at ROSCon, we opened a new bridge into the world of physical AI, but how does this benefit our existing customer base and the applications our cobots already perform?

The future of programming robots

AI is set to revolutionize the way we program robots, moving from today's line-by-line programming of motions and I/O handling to commanding higher-level behaviors to complete tasks. Approaches based on imitation and reinforcement learning, vision-language-action models, and robotics foundation models are promising avenues of research for adaptive robot control. These will not only massively simplify how we interact with and control robots, but also unlock solutions to tasks that are currently very hard to achieve, such as connecting flexible cables or handling textiles.

Bringing AI into existing applications

For the average manufacturer who has just got to grips with cobots, it's understandably a little hard to see how these advances will help in the near term. That's part of what we've been focusing on with the AI Accelerator: merging the flexibility that modern AI brings with more mature programming paradigms, resulting in a solution that can handle workspace variance while more readily providing the speed and precision that industrial customers expect.

This approach offers a way forward for those manufacturers who are still on the fence as to whether AI is at a maturity level suitable for integration into their processes.

Deep Learning AI Vision

Machine vision is already quite common in manufacturing environments, with locating and inspecting parts being two of the major use cases. Still, fewer than 20% of cobot applications use vision systems. Why? Most non-AI vision systems today require expert configuration, and they are seen as complex and costly to maintain. Most users choose to spend time and money on producing fixturing for their parts, forgoing the increased flexibility that vision makes possible.

So how is AI vision different?

When a person sees an object they know, they can still recognize that object if it has a different surface finish, is under different lighting conditions or is placed on a different background - or even if it's a slightly different size or shape. This is hard for a traditional vision system, which relies heavily on high contrast between object and background, and on the object's size and shape being repeatable, in order to detect it.

With deep learning vision systems, we can train this kind of variability into a single model, so that it is prepared for all the different environmental variations we throw at it. It's also no longer necessary for a user to obtain tens or hundreds of thousands of images to train a model. A wide range of pre-trained models is available off the shelf with permissive licensing; these can be repurposed for many industrial tasks with only a short retraining process requiring around 50 images (which can be automatically segmented and labelled).
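As a rough illustration of how short that retraining loop can be (a generic sketch using torchvision, not the AI Accelerator's actual training pipeline), fine-tuning a pre-trained detector for a new part class looks something like this; the dataset here is a dummy placeholder:

```python
import torch
import torchvision
from torchvision.models.detection.faster_rcnn import FastRCNNPredictor

NUM_CLASSES = 2  # background + one part class; adjust for your parts

# Start from a COCO-pretrained detector and swap in a new head
# sized for our own classes.
model = torchvision.models.detection.fasterrcnn_resnet50_fpn(weights="DEFAULT")
in_features = model.roi_heads.box_predictor.cls_score.in_features
model.roi_heads.box_predictor = FastRCNNPredictor(in_features, NUM_CLASSES)

# Dummy one-image "dataset" so the sketch runs; replace with your
# ~50 labelled images in torchvision's detection format.
images = [torch.rand(3, 480, 640)]
targets = [{"boxes": torch.tensor([[100.0, 100.0, 200.0, 200.0]]),
            "labels": torch.tensor([1])}]
data_loader = [(images, targets)]

optimizer = torch.optim.SGD(model.parameters(), lr=0.005, momentum=0.9)
model.train()
for epoch in range(10):
    for imgs, tgts in data_loader:
        loss = sum(model(imgs, tgts).values())  # summed detection losses
        optimizer.zero_grad()
        loss.backward()
        optimizer.step()
```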

AI Accelerator functionality

The interfaces included in the AI Accelerator make it straightforward to bring advanced perception and manipulation capabilities from NVIDIA's Isaac ROS and Isaac Manipulator into your existing robot program. Here are some of the use cases we've demonstrated so far:

Object detection - We can use these object detectors to locate and pick up objects in the robot workspace, reducing requirements for rigid mechanical fixtures.
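To make this concrete, here is a minimal sketch of consuming 3D detections in ROS 2 and selecting a pick candidate; the topic name and confidence threshold are assumptions for illustration, not the AI Accelerator's actual interface:

```python
import rclpy
from rclpy.node import Node
from vision_msgs.msg import Detection3DArray

class PickTargetNode(Node):
    """Select a pick candidate from incoming 3D object detections."""

    def __init__(self):
        super().__init__("pick_target_node")
        # "/detections_3d" is a placeholder topic name.
        self.create_subscription(
            Detection3DArray, "/detections_3d", self.on_detections, 10)

    def on_detections(self, msg):
        for det in msg.detections:
            if not det.results:
                continue
            best = max(det.results, key=lambda r: r.hypothesis.score)
            if best.hypothesis.score < 0.8:  # assumed threshold
                continue
            p = best.pose.pose.position  # object pose in the camera frame
            self.get_logger().info(
                f"pick candidate '{best.hypothesis.class_id}' at "
                f"({p.x:.3f}, {p.y:.3f}, {p.z:.3f})")
            # Next: transform into the robot base frame (e.g. via tf2)
            # and hand the pose to the robot as a pick target.

def main():
    rclpy.init()
    rclpy.spin(PickTargetNode())

if __name__ == "__main__":
    main()
```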

Workspace state/sanity check - This could be called inspection, but we're not talking about measuring part tolerances down to the micron level. It's about finding out: “Is this thing in the workspace in the state it should be for the robot to continue and successfully complete its task?” For example, in CNC machine tending: Is my workholding clear? Are all the tools in my machine intact and clean?
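As a hedged illustration of the idea (the checking logic and class names below are invented for this sketch, not taken from the AI Accelerator), such a check can be as simple as comparing detected object classes against an expected workspace state:

```python
# Hypothetical example: EXPECTED maps object class -> required count
# for a CNC machine-tending cell.
EXPECTED = {"vise_open": 1, "tool_intact": 3}

def workspace_ok(detections, min_score=0.8):
    """Return True if every expected object is detected confidently.

    `detections` is a list of (class_id, score) pairs, e.g. flattened
    from the detection subscriber sketched above.
    """
    seen = {}
    for class_id, score in detections:
        if score >= min_score:
            seen[class_id] = seen.get(class_id, 0) + 1
    return all(seen.get(cls, 0) >= n for cls, n in EXPECTED.items())

# The robot program proceeds only when this returns True; otherwise
# it pauses and alerts an operator instead of crashing into the cell.
```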

Workspace realignment - Many of our customers move their robots around their production environments to complete different tasks at different times. Realigning to a workspace can be a burden, requiring precise physical placement of the robot or a process of reteaching coordinate frames. With a camera on the end of the robot, this process can be automated so the robot can check where it is relative to the rest of the workspace and carry on with minimal fuss.
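Conceptually (a minimal sketch under assumed naming, not UR's implementation), realignment reduces to composing homogeneous transforms: if the camera can observe a known reference feature, the workspace frame can be recovered in the robot base frame:

```python
import numpy as np

def realigned_workspace_frame(T_base_cam, T_cam_marker, T_marker_ws):
    """Recover the base->workspace transform after the robot moved.

    T_base_cam   : camera pose in the robot base frame (from forward
                   kinematics plus a hand-eye calibration)
    T_cam_marker : observed pose of a fixed reference feature/marker
    T_marker_ws  : known, fixed offset from that marker to the
                   workspace frame

    All arguments are 4x4 homogeneous transforms.
    """
    return T_base_cam @ T_cam_marker @ T_marker_ws

# Placeholder call with identity transforms, just to show the shape.
T_base_ws = realigned_workspace_frame(np.eye(4), np.eye(4), np.eye(4))
```

With this transform recovered, waypoints taught relative to the workspace frame remain valid wherever the robot has been placed.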

Path planning - Plotting waypoints across the workspace to produce the optimal trajectory for your robot to move around, and in and out of, machines can be tricky, especially for novice users. Automatic path planning makes this much easier, but the difficulty of providing the path planner with a detailed model of the robot environment has traditionally been a blocker to widespread adoption. We've got some cool additional functionality coming to help with this in the AI Accelerator, so watch this space.
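To illustrate why that environment model matters (a toy sketch, not the AI Accelerator's planner), automatic path planning is a search for a collision-free path, and even the naive version below depends on the placeholder `in_collision` query that perception has to supply:

```python
import numpy as np

def in_collision(q):
    # Placeholder: a real check queries an environment model, e.g. an
    # occupancy map built from a depth camera, for configuration q.
    return False

def straight_line_plan(q_start, q_goal, steps=50):
    """Interpolate joint-space waypoints; fail if any would collide."""
    path = [q_start + (q_goal - q_start) * t
            for t in np.linspace(0.0, 1.0, steps)]
    if any(in_collision(q) for q in path):
        return None  # a real planner searches for a detour instead
    return path

waypoints = straight_line_plan(
    np.zeros(6), np.array([0.5, -1.2, 1.0, 0.0, 0.3, 0.0]))
```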

End user value

I spent the majority of my first eight years with UR in customer facing roles, building support teams that provided guidance on applications and helped resolve issues when things didn't work. So many times, issues were caused by the world around the robot not being fixed in place well enough for the robot to do its job. It's not the robot's fault, but it is extremely reliant on things remaining exactly as they were when its program was created. With the perception capabilities opened up by the AI Accelerator, we have a chance to change this, make our robots more flexible than ever before, and prevent many of these issues from ever occurring.

UR's role

At Universal Robots, our mission has always been to take advanced technologies and make them accessible, practical, and impactful. This principle is at the heart of our approach to AI. We understand that many businesses, particularly small and medium-sized manufacturers, face barriers like limited resources and technical expertise. That’s why we focus on delivering products that are intuitive, user-friendly, and adaptable to diverse production environments.

The AI Accelerator is a prime example of this commitment. With the AI Accelerator, we're not just introducing cutting-edge capabilities; we're building a bridge that seamlessly integrates tomorrow's innovations into today's production lines, even those rooted in older processes. By prioritizing simplicity, reliability, and real-world functionality, we empower manufacturers to adopt AI without the steep learning curve or infrastructure overhaul often associated with new technologies, supporting them to confidently explore new ways to innovate and improve their operations.

Andrew Pether

Andrew Pether is Innovation Manager and Perception Team Lead at UR. He has been with UR since 2014, previously leading applications and technical teams in Asia before joining the Technology Innovation team in the US in 2022. In his current role in the perception team, Andrew brings his extensive experience of customer challenges to the AI Accelerator product, enabling cobot solutions that are more flexible than previously possible.
