
AI Now Reads Social Cues Like Humans Do

2025-09-10 · Neuroscience News · 3 minute read

Tags: Artificial Intelligence, Neuroscience, Social Perception

Humans are constantly making split-second judgments about each other's behaviors and interactions. It's a fundamental part of how we navigate our social world. Now, it appears this deeply human skill is no longer exclusive to us. Advanced AI models, like OpenAI's ChatGPT, can describe what's happening in images and videos, but a recent study has revealed their capabilities go much deeper.

Image: An outline of a digital head and brain alongside people networking. AI’s evaluations were even more consistent than those made by a single person. Credit: Neuroscience News

Until now, it wasn't clear whether AI’s interpretive skills were limited to simple object recognition or whether they could grasp complex social information. Researchers have now found that AI can indeed interpret nuanced social dynamics with near-human accuracy.

The Groundbreaking Study: AI vs. Human Perception

Researchers at the Turku PET Centre in Finland set out to test how accurately ChatGPT could assess social interactions. The AI was tasked with evaluating 138 different social features from a collection of videos and pictures. These features covered a wide spectrum of social cues, from facial expressions and body movements to complex interaction traits like cooperation and hostility.
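To make the setup more concrete, here is a minimal sketch of how a vision-capable model can be asked to rate social features in a single image. The feature names, rating scale, prompt wording, and model name below are illustrative assumptions for this sketch, not the study's actual protocol.

```python
# Illustrative sketch only: the feature list, scale, prompt, and model name
# are assumptions for demonstration, not the study's actual protocol.
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

# A tiny subset of hypothetical social features (the study rated 138).
FEATURES = ["cooperation", "hostility", "dominance"]

def rate_social_features(image_url: str) -> str:
    """Ask a vision-capable model to rate each feature on a 0-100 scale."""
    prompt = (
        "Rate how strongly each of the following social features is present "
        "in the image, on a scale from 0 (not at all) to 100 (very strongly). "
        "Answer as 'feature: score' lines.\n" + "\n".join(FEATURES)
    )
    response = client.chat.completions.create(
        model="gpt-4o",  # assumed stand-in for the GPT-4V model named in the study
        messages=[{
            "role": "user",
            "content": [
                {"type": "text", "text": prompt},
                {"type": "image_url", "image_url": {"url": image_url}},
            ],
        }],
    )
    return response.choices[0].message.content

print(rate_social_features("https://example.com/social_scene.jpg"))
```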

To measure the AI's performance, the researchers compared its evaluations to similar assessments made by more than 2,000 human participants. The results were remarkable: ChatGPT's evaluations were not only very close to those made by humans, but also more consistent than the ratings of any single person.
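To illustrate what "more consistent than a single rater" can mean in practice, the sketch below compares a synthetic AI rating vector and individual synthetic raters against the group consensus using plain correlation. The data and the correlation-based metric are assumptions for illustration, not the study's analysis.

```python
# Synthetic illustration of "the AI agrees with the group more than a single rater does".
# The data and the correlation metric are assumptions, not the study's analysis.
import numpy as np

rng = np.random.default_rng(0)

n_raters, n_items = 20, 138          # e.g. 138 social features
truth = rng.normal(size=n_items)     # latent "consensus" signal
human = truth + rng.normal(scale=1.0, size=(n_raters, n_items))  # noisy individual raters
ai = truth + rng.normal(scale=0.5, size=n_items)                 # less noisy AI ratings

def corr(a, b):
    return np.corrcoef(a, b)[0, 1]

# Each human rater vs the leave-one-out mean of the other raters.
single_rater_consistency = np.mean([
    corr(human[i], np.delete(human, i, axis=0).mean(axis=0))
    for i in range(n_raters)
])

# AI vs the mean rating across all human raters.
ai_consistency = corr(ai, human.mean(axis=0))

print(f"average single-rater consistency: {single_rater_consistency:.2f}")
print(f"AI consistency with group mean:   {ai_consistency:.2f}")
```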

"Since ChatGPT’s assessment of social features were on average more consistent than those of an individual participant, its evaluations could be trusted even more than those made by a single person," notes Postdoctoral Researcher Severi Santavirta from the University of Turku. He adds, "However, the evaluations of several people together are still more accurate than those of artificial intelligence."

Revolutionizing Neuroscience Research

The study's second phase highlighted the profound implications for neuroscience. Researchers used both the AI and human evaluations to model the brain networks involved in social perception using functional brain imaging. Before brain activity can be analyzed, the social situations shown to participants must be meticulously assessed—a task where AI proved to be an invaluable tool.

"The results were strikingly similar when we mapped the brain networks of social perception based on either ChatGPT or people’s social evaluations," Santavirta explains. This suggests AI can automate a laborious part of neuroscience research, saving time and money.

To put the efficiency gains in perspective, Santavirta summarizes, "Collecting human evaluations required the efforts of more than 2,000 participants and a total of more than 10,000 work hours, while ChatGPT produced the same evaluations in just a few hours."

Beyond the Lab: Real-World Applications

While the study focused on benefits for brain imaging research, the findings point to a wide range of practical applications. Automating the evaluation of social situations from video could transform various industries.

In healthcare, it could help medical staff monitor patient well-being. In business, it could be used to evaluate how a target audience might receive audiovisual marketing campaigns. For security, it could predict and flag abnormal situations from camera footage.

"The AI does not get tired like a human, but can monitor situations around the clock," says Santavirta. "In the future, the monitoring of increasingly complex situations can probably be left to artificial intelligence, allowing humans to focus on confirming the most important observations."

The research comes from the Turku PET Centre at the University of Turku; the original news report was written by Tuomas Koivula. The open-access study, "GPT-4V shows human-like social perceptual capabilities at phenomenological and neural levels," was published in Imaging Neuroscience.
