Robots learn complex tasks with help from AI
When it comes to training robots to perform agile, single-task motor skills such as handstands or backflips, artificial intelligence methods can be very useful. But if you want to train your robot to perform multiple tasks in combination, say, a backward flip into a handstand, things get a little more complicated.
“We often want to train our robots to learn new skills by compounding existing skills with one another,” said Ian Abraham, assistant professor of mechanical engineering. “Unfortunately, AI models trained to allow robots to perform complex skills across many tasks tend to have worse performance than training on an individual task.”

A dog-like robot from Abraham’s lab executes a controlled flip, demonstrating how hybrid control theory enables smooth transitions between learned motor skills.
To address this, Abraham’s lab is using techniques from optimal control, a mathematical approach that helps robots perform movements as efficiently as possible. In particular, they’re employing hybrid control theory, which involves deciding when an autonomous system should switch between control modes to solve a task. One use of hybrid control theory, Abraham said, is to combine the different methods by which robots learn new skills. These include, for instance, learning from experience with reinforcement learning, or model-based learning, in which robots plan their actions based on what they observe. The trick is getting the robot to switch between these modes in the most effective way, so that it can perform high-precision movements without losing performance quality.
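To make the mode-switching idea concrete, here is a minimal, illustrative sketch, not the lab’s actual code, of a supervisor that hands control to a fast learned policy when the robot’s model predicts its own motion well, and falls back to a deliberate model-based planner otherwise. All names (`HybridController`, `learned_policy`, `model_planner`, the error threshold) are hypothetical placeholders.

```python
import numpy as np


class HybridController:
    """Illustrative hybrid controller: switch between a learned policy and a planner."""

    def __init__(self, learned_policy, model_planner, error_threshold=0.1):
        self.learned_policy = learned_policy    # e.g., a policy trained with reinforcement learning
        self.model_planner = model_planner      # e.g., a planner that uses a model of the robot's dynamics
        self.error_threshold = error_threshold  # assumed switching criterion

    def act(self, state, predicted_state):
        # If the robot predicted its own motion accurately, trust the fast
        # "muscle memory" policy; otherwise switch to deliberate planning.
        prediction_error = np.linalg.norm(state - predicted_state)
        if prediction_error < self.error_threshold:
            return self.learned_policy(state), "learned"
        return self.model_planner(state), "planned"
```

The switching rule here (a simple prediction-error threshold) is only one possible criterion; the key design choice in hybrid control is deciding when each mode should be in charge.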
“Think of how we learn new skills or play a sport,” he said. “We first try to understand and predict how our body moves, then eventually movement becomes muscle memory and so we need to think less.”
Abraham and his research team applied methods from hybrid control theory to a dog-like robot, training it to successfully balance and then flip over.
As part of the robot training, AI methods are used to develop the more challenging motor skills that require whole-body precision. Hybrid control theory is then leveraged to schedule and synthesize the various mechanisms by which robots learn diverse motor skills, combining them into more complex, compounding behaviors.
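As a rough sketch of what scheduling learned skills into a compound behavior might look like, in the spirit of the balance-then-flip demonstration above, consider the following. The skill names, the robot interface, and the hand-off conditions are illustrative assumptions, not the team’s implementation.

```python
def run_compound_behavior(robot, skills, schedule):
    """Execute learned skills in sequence to form a compound behavior.

    `skills` maps a skill name to a callable policy; `schedule` is a list of
    (skill_name, done_condition) pairs, where done_condition(state) -> bool
    signals when to hand control to the next skill.
    """
    for skill_name, done in schedule:
        policy = skills[skill_name]
        state = robot.observe()
        while not done(state):
            action = policy(state)   # query the current skill's policy
            state = robot.step(action)


# Hypothetical usage: balance first, then trigger the flip once stable.
# schedule = [("balance", is_stable), ("flip", flip_complete)]
# run_compound_behavior(robot, {"balance": balance_policy, "flip": flip_policy}, schedule)
```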
Ideally, this could lead to robots working in homes and other unstructured environments.
“If a robot needs to learn a new skill when deployed, it can draw from its arsenal of learning modalities, using some level of planning and reasoning to ensure safety and its success in the environment, with on-the-job experience,” Abraham said. “Once a certain level of confidence is achieved, the robot can then use the more specialized learned skills to go above and beyond.”
Published: Nov 20, 2025


