Eyes on the prize: Decoding eye contact
Peer-Reviewed Publication
This month, we’re focusing on artificial intelligence (AI), a topic that continues to capture attention worldwide. Here you’ll find the latest research news, insights, and discoveries shaping how AI is being developed and used.
For the first time, a new study has revealed that how and when we make eye contact, not just the act itself, plays a crucial role in how we understand and respond to others, including robots.
The researchers discovered that the most effective way to signal a request was a specific gaze sequence: looking at an object, making eye contact, then looking back at the same object. This timing made people most likely to interpret the gaze as a call for help.
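To make that sequence concrete, here is a minimal sketch of how a robot’s perception pipeline might flag the object-face-object pattern in a stream of gaze events. The event format, names, and example timings are hypothetical illustrations, not details from the study:

```python
from dataclasses import dataclass

@dataclass
class GazeEvent:
    target: str       # what the person is looking at, e.g. "cup" or "partner_face"
    timestamp: float  # seconds since the interaction started

def is_help_request(events: list[GazeEvent], face: str = "partner_face") -> bool:
    """Flag the cue from the study: object -> eye contact -> same object."""
    for a, b, c in zip(events, events[1:], events[2:]):
        if a.target != face and b.target == face and c.target == a.target:
            return True
    return False

if __name__ == "__main__":
    gaze = [
        GazeEvent("cup", 0.0),           # look at the object
        GazeEvent("partner_face", 0.8),  # make eye contact
        GazeEvent("cup", 1.5),           # look back at the same object
    ]
    print(is_help_request(gaze))  # True: interpreted as a request for help
```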
How do you develop an AI system that mimics the way humans speak? Researchers at Nagoya University in Japan have taken a significant step toward that goal: they have created J-Moshi, the first publicly available AI system designed specifically for Japanese conversational patterns.
J-Moshi captures the natural flow of Japanese conversation, which features short verbal responses known as “aizuchi” that speakers interject to show they are actively listening and engaged. Responses such as “Sou desu ne” (that’s right) and “Naruhodo” (I see) are used more often than comparable responses in English.
Traditional AI systems struggle to produce aizuchi because they cannot speak and listen at the same time, a capability that is essential for natural-sounding Japanese dialogue. J-Moshi can, and it has become very popular with Japanese speakers, who recognize and appreciate its natural conversation patterns.
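As a rough illustration of that speak-while-listening idea, the toy sketch below interjects canned aizuchi on one thread while another thread keeps “hearing” incoming speech. It is only a hedged sketch of the general full-duplex concept; the names, timings, and structure are assumptions, not J-Moshi’s actual architecture or API:

```python
import queue
import threading
import time

AIZUCHI = ["Sou desu ne", "Naruhodo", "Hai"]  # canned backchannels for the demo
incoming: queue.Queue = queue.Queue()         # chunks of the partner's speech

def listen() -> None:
    """Consume incoming speech chunks; a None chunk ends the conversation."""
    while True:
        chunk = incoming.get()
        if chunk is None:
            break
        print(f"[hear] {chunk}")

def backchannel(stop: threading.Event) -> None:
    """Interject a short aizuchi roughly once a second, overlapping the listener."""
    i = 0
    while not stop.wait(timeout=1.0):
        print(f"[say]  {AIZUCHI[i % len(AIZUCHI)]}")
        i += 1

if __name__ == "__main__":
    stop = threading.Event()
    threading.Thread(target=listen).start()
    threading.Thread(target=backchannel, args=(stop,)).start()
    # "I watched a movie yesterday; it was really good", fed in pieces
    for chunk in ["kinou eiga wo", "mimashita", "totemo yokatta desu"]:
        incoming.put(chunk)   # the partner keeps talking...
        time.sleep(1.2)       # ...while aizuchi are spoken in parallel
    stop.set()                # stop backchanneling
    incoming.put(None)        # signal end of input so the listener exits
```

The point of the two threads is that output (the aizuchi) never waits for input to finish, which is exactly the behavior turn-based dialogue systems lack.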