When machines join the team
It takes some time for any team of workers to get used to each other's habits (their quirks, ways of communicating, and work patterns) and become an effective, efficient working unit. But how does that dynamic play out when one of the team members is a robot?
With the rapid advance of both robotics and AI across every sector, this question takes on growing urgency. It's the crux of a recent collaboration between researchers at Yale and Tata Consultancy Services (TCS), part of a broader Tata-Yale alliance that also includes Tata Sons and Titan.
Members of Yale's Interactive Machines Group, led by Professor Marynel Vázquez, collaborate with a robot on the 'pizza-building' experiment, part of a Yale–Tata Consultancy Services research project exploring how robots and humans can learn to work side by side. Pictured (L-R): undergraduate student Alyssa Quarles '28, Professor Marynel Vázquez, and Computer Science Ph.D. students Kate Candon and Qiping Zhang.
A window on human–robot cooperation
In the Interactive Machines Group laboratory on Yale's engineering campus, human subjects have teamed up with a robot to build a toy pizza: the robot is in charge of passing ingredients to the human, who then places them on the pizza. It's the kind of task that calls for the same fundamental cooperative skills required in many settings, from homes to factories and restaurants.
On the surface, the experiment looks like a game, but it’s backed by cutting-edge computer science.
“This is a project about being able to, in a natural and quick way, adapt the behavior of a robot to human preferences during a collaboration,” said Marynel Vázquez, assistant professor of computer science and Yale’s lead researcher on the project. Chayan Sarkar, a senior scientist with TCS and a Tata Visiting Scholar at Yale, is the project’s co-leader.
Teaching machines to adapt

Typically, when a robot-human partnership is established, learning and collaboration are separate phases. First, the robot receives instructions and feedback on how to complete the task. Then, the human and robot try to complete the task together. The Yale and TCS researchers wanted to see if they could combine the two phases, enabling the robot to adapt its behavior as the collaboration progresses.
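To make the contrast concrete, here is a minimal Python sketch of where the learning happens in each setting. The class and function names and the toy "robot" are illustrative assumptions for this article, not code from the Yale–TCS project: in the two-phase version the robot stops learning once collaboration begins, while in the combined version every round of feedback is folded in before the next action.

```python
# Illustrative sketch only: names and structure are assumptions, not project code.
import random

class ToyRobot:
    def __init__(self):
        self.feedback_log = []                  # whatever the robot has learned so far

    def act(self):
        # Pick an ingredient to hand over (a stand-in for a real policy).
        return random.choice(["sauce", "cheese", "pepperoni"])

    def update(self, feedback):
        # In the combined setting, this is called in the middle of the task.
        self.feedback_log.append(feedback)

def two_phase(robot, rounds=3):
    robot.update("offline instructions and feedback")    # Phase 1: learn before collaborating
    return [robot.act() for _ in range(rounds)]           # Phase 2: behavior is now fixed

def combined(robot, human_feedback, rounds=3):
    actions = []
    for _ in range(rounds):
        action = robot.act()
        robot.update(human_feedback(action))               # learning continues mid-collaboration
        actions.append(action)
    return actions

print(combined(ToyRobot(), human_feedback=lambda action: f"approve {action}"))
```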
In this case, the humans offer the robot binary feedback by pushing one of two buttons: one tells the robot it performed a task correctly, the other expresses disapproval. That's the simplest and most explicit form of feedback. But there is also the unspoken, or implicit, communication that is part of every working team. That could include a smile, a shrug, or even the physical actions the human takes during the collaboration. Could robots pick up on that kind of feedback and use it to improve their performance?
“What we’re trying to do in this project is see if we can combine those two types of communication modalities, the explicit and the implicit,” Vázquez said. “That’s what would probably end up happening in real-world deployments, where someone is working with a robot. The robot would say, ‘Let me try to adapt my behavior as we’re working together.’”
Both kinds of communication are critical in a human-robot team. Ideally, the workflow becomes more natural over time; as it does, though, it's common for the human to forget to provide the explicit feedback.
“People often forget to press the button because they get engaged in the task,” she said. “It’s very common for this kind of feedback to kind of reduce over time. But then the robot can use both the explicit signal in the button pressing and the implicit signal from looking at what actions the human is taking to make inferences about human preference based on them.”
For instance, the robot might pass cheese to the human, but the human opts not to use it. “Then the robot thinks, ‘Hmm, maybe my model that says you want to put cheese first is not accurate. I need to change that.’”
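One simple way to picture this kind of inference is a table of preference scores that gets nudged by both feedback channels, with button presses weighted more heavily than observed behavior because they are unambiguous. The Python sketch below is purely illustrative: the ingredient names, rates, and update rule are assumptions for this article, not the model the researchers actually use.

```python
# Illustrative sketch only: the values and update rule are assumptions, not the project's model.

# The robot's current belief about how much the human wants each ingredient next.
preferences = {"sauce": 0.5, "cheese": 0.5, "pepperoni": 0.5}

EXPLICIT_RATE = 0.4   # button presses are unambiguous, so they move the estimate a lot
IMPLICIT_RATE = 0.1   # observed behavior is noisier, so it moves the estimate a little

def update(ingredient, approved, rate):
    """Nudge one ingredient's score toward 1 (approved) or 0 (disapproved)."""
    target = 1.0 if approved else 0.0
    preferences[ingredient] += rate * (target - preferences[ingredient])

# Explicit signal: the human pressed the "disapprove" button after being handed cheese.
update("cheese", approved=False, rate=EXPLICIT_RATE)

# Implicit signal: the human set the cheese aside instead of placing it on the pizza.
update("cheese", approved=False, rate=IMPLICIT_RATE)

# The robot hands over whichever ingredient it now believes is preferred.
print(preferences, "->", max(preferences, key=preferences.get))
```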
The project also looks at how people’s reactions to their robot co-workers change over time as they become more accustomed to the robot’s work habits.
“We believe this happens because the person is updating their mental model of the robot,” Vázquez said.
Small talk, big impact

And then there's the role of small talk among co-workers. Can robots engage in it? And, perhaps more to the point, should they?
“What do co-workers talk about while working? They might discuss something related to the task, but how long can you really talk about the task?” Sarkar said. “Typically, people talk about other things; they engage in social talk. A similar thing can happen with a robot.”
Using Large Language Models (LLMs), the researchers are looking at ways to instill some chattiness in the robots. The challenge, though, is how to make a robot’s conversational skills just good enough to keep things interesting without being distracting.
“If the conversation is only social talk, it might be distracting and slow down the progress of work,” Sarkar said. “If conversation is only about the task, it can become boring. So, how can we mix task-related discussion with social talk, and when should we switch between them?”
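A toy version of that switching decision might look like the Python sketch below: the robot stays on task talk when the work needs attention and occasionally mixes in a social remark after a long stretch without one. The generate() helper is a placeholder standing in for an LLM call, and the thresholds are made-up numbers, not anything from the TCS work.

```python
# Illustrative sketch only: generate() is a placeholder for a real LLM call,
# and the switching thresholds are made-up numbers.
import random

def generate(prompt: str) -> str:
    # Stand-in for a large language model; a real system would call one here.
    return f"[robot says something in response to: {prompt}]"

def next_utterance(turns_since_social_talk: int, task_needs_attention: bool) -> str:
    """Decide whether the robot's next remark should be task talk or small talk."""
    if task_needs_attention:
        # The task always takes priority over chatting.
        return generate("Tell the human what you need from them to continue the pizza task.")
    if turns_since_social_talk > 5 and random.random() < 0.3:
        # After a long stretch of pure task talk, mix in a brief social exchange.
        return generate("Make one short, friendly remark unrelated to the task.")
    return generate("Comment briefly on the current step of the pizza task.")

print(next_utterance(turns_since_social_talk=7, task_needs_attention=False))
```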
As part of Robotics Research at TCS Research and Innovation, Sarkar is exploring human-robot interactions, multi-robot systems, and the future of robotics.
Rajesh Sinha, chief scientist at TCS Research and Innovation, said, “This research illustrates how we can create better human-robot collaborative cells within the Industry 5.0 paradigm. The ability of robots to adapt and personalize their behavior to human collaborators is crucial for seamless integration in both industrial and service domains.”
Preparing the workplace of tomorrow
Sarkar stated that TCS is invested in physical AI and robotics as a focused area of research.
“This research will change the way robots interpret and adapt to both clear and subtle human cues—both verbal and non-verbal, implicit and explicit—enabling them to collaborate more intuitively with people,” he added.
Longer term, the researchers note, the generality of the task means that the project's results can apply in many settings, including home-care robots, which need to personalize their behavior to specific individuals. The work can also inform the design of service robots in hospitals and other workplaces. If experts' predictions are correct, it won't be long before many of our co-workers are robots. Thanks to the kind of research that Vázquez and Sarkar are doing, we'll be a lot more prepared.
Published: Sep 22, 2025