Image: Alexandra Ion, a faculty member in the Human-Computer Interaction Institute at Carnegie Mellon University's School of Computer Science, and Ph.D. student Violet Han are part of a team using AI to turn everyday objects into proactive personal assistants.
Credit: Carnegie Mellon University
A stapler slides across a desk to meet a waiting hand, or a knife edges out of the way just before someone leans against a countertop. It sounds like magic, but at Carnegie Mellon University's Human-Computer Interaction Institute (HCII), researchers are combining AI and robotic mobility to give everyday objects this kind of foresight.
Using large language models (LLMs) and wheeled robotic platforms, HCII researchers have transformed ordinary items such as mugs, plates and utensils into proactive assistants that can observe human behavior, anticipate when to intervene and move across horizontal surfaces to help at just the right time.
"Our goal is to create adaptive systems for physical interaction that are unobtrusive, meaning they blend into our lives while still dynamically adapting to our needs," said Alexandra Ion, an HCII assistant professor who leads the Interactive Structures Lab. "We classify this work as unobtrusive because the user does not ask the objects to perform any tasks. Instead, the objects sense what the user needs and perform the tasks themselves."
The Interactive Structures Lab's unobtrusive system uses computer vision and LLMs to reason about a person's goals, predicting what they may do or need next. A ceiling-mounted camera senses the environment and tracks the position of objects. The system then translates what the camera sees into a text-based description of the scene. Next, an LLM uses this description to infer what the person's goals may be and which actions would help them most. Finally, the system sends the predicted actions to the object's wheeled platform, which carries them out. This pipeline allows for seamless help with everyday tasks like cooking, organizing, office work and more.
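To make that sense-describe-reason-act loop concrete, here is a minimal Python sketch of the pipeline described above. Every name in it, including the object list, the scene-description format and the canned stand-in for the LLM call, is a hypothetical illustration; the article does not detail the lab's actual implementation.

```python
# Minimal sketch of the pipeline: track objects -> describe the scene in text ->
# ask a language model which action would help -> dispatch the action to a
# wheeled platform. All names and formats here are illustrative assumptions.

from dataclasses import dataclass


@dataclass
class TrackedObject:
    name: str
    x: float  # position on the work surface, in centimeters
    y: float


def describe_scene(objects: list[TrackedObject], activity: str) -> str:
    """Translate tracked positions into a text description an LLM can reason over."""
    layout = "; ".join(f"{o.name} at ({o.x:.0f}, {o.y:.0f}) cm" for o in objects)
    return f"The person is {activity}. Objects on the table: {layout}."


def infer_helpful_action(scene: str) -> str:
    """Stand-in for the LLM call that predicts which object move would help most.

    A real system would send `scene` to a language model with a prompt such as:
    'Given this scene, which single object should move, and to where, to help?'
    Here a canned answer is returned so the sketch runs without external services.
    """
    return "move mug toward the person's free hand at (20, 10) cm"


def dispatch_to_platform(action: str) -> None:
    """Stand-in for commanding the wheeled robotic platform under the chosen object."""
    print(f"[robot] executing: {action}")


if __name__ == "__main__":
    scene = describe_scene(
        [TrackedObject("mug", 55, 40), TrackedObject("stapler", 10, 30)],
        activity="reading a document and reaching out with one hand",
    )
    dispatch_to_platform(infer_helpful_action(scene))
```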
"We have a lot of assistance from AI in the digital realm, but we want to focus on AI assistance in the physical domain," said Violet Han, an HCII Ph.D. student working with Ion. "We chose to enhance everyday objects because users already trust them. By advancing the objects' capabilities, we hope to increase that trust."
Ion and her team have started studying ways to expand the scope of unobtrusive physical AI to other parts of homes and offices.
"Imagine, for example, you come home with a bag of groceries. A shelf automatically folds out from the wall and you can set the bag down while you're taking off your coat," Ion said during her episode of the School of Computer Science's "Does Compute" podcast. "The idea is that we develop and study technology that seamlessly integrates into our daily lives and is so well assimilated that it becomes almost invisible, yet is consistently bringing us new functionality."
The Interactive Structures Lab aims to create intuitive physical interfaces that bring safe, reliable physical assistance into homes, hospitals, factories and other spaces. The team's work in unobtrusive physical AI was accepted to the 2025 ACM Symposium on User Interface Software and Technology, held recently in Busan, Korea.
To learn more about the research, visit the project website.