Probing ChatGPT's Mind With Linguistic Nonsense
Many of us have attempted to challenge chatbots by asking them about their feelings, presenting them with complex riddles, or simply throwing absurdities their way to see what happens.
But what is the result when a chatbot encounters pure linguistic nonsense? This is precisely what psycholinguist Michael Vitevitch, a professor at the University of Kansas, sought to uncover in a new study where he tested ChatGPT with "nonwords"—invented sounds and letter combinations used in cognitive psychology to study how humans process language.
"As a psycholinguist, one of the things I’ve done in the past is to give people nonsense to see how they respond to it — nonsense that’s specially designed to get an understanding of what they know," Vitevitch explains. "I’ve tried to use methods we use with people to appreciate how they’re doing what they’re doing — and to do the same thing with AI to see how it’s doing what it’s doing.”
By feeding gibberish to ChatGPT, Vitevitch discovered that the AI is highly skilled at recognizing patterns, though not always in the same manner as humans.
Uncovering Different Patterns of Thought
"It finds patterns, but not necessarily the same patterns that a human would use to do the same task," he notes. "We do things very differently from how AI does things. That’s an important point. It’s okay that we do things differently. And the things that we need help with, that’s where we should engineer AI to give us a safety net.”
First, Vitevitch challenged ChatGPT with English words that are no longer in common use, referred to as "extinct words." One such example is 'upknocking,' a 19th-century term for the job of tapping on windows to wake people up before alarm clocks were common.
Out of 52 archaic terms presented, ChatGPT correctly defined 36. For 11, it admitted uncertainty; for three, it pulled definitions from other languages; and for the remaining two, it fabricated answers.
"It did hallucinate on a couple of things," Vitevitch says. "We asked it to define these extinct words. It got a good number of them right. On another bunch, it said, ‘Yeah, I don’t know what this is. This is an odd word or a very rare word that’s not used anymore.’ But then, on a couple, it made stuff up. I guess it was trying to be helpful.”
Phonological Puzzles and Language Creation
The subsequent test was phonological. Vitevitch provided ChatGPT with a list of Spanish words and asked for English words that sounded similar, a task designed to understand how our brains store and retrieve speech sounds.
“If I give you a Spanish word and tell you to give me a word that sounds like it, you, as an English speaker, would give me an English word that sounds like that thing,” he explains. “You wouldn’t switch languages on me and just kind of give me something from a completely different language, which is what ChatGPT did.”
The researchers also prompted ChatGPT to create new English words for modern concepts.
“[The AI] used to do ‘sniglets,’ which were words that don’t exist,” says Vitevitch. He gives an example related to vacuuming a stubborn thread: “What is that thread called? ‘Carperpetuation.’ [The AI] came up with a name for that thread that doesn’t get sucked up.”
According to Vitevitch, the AI performed quite well, often relying on a predictable method of combining two existing words into a new one. “My favorite was ‘rousrage,’ for anger expressed upon being woken,” he adds.
The Goal: AI Complementation, Not Imitation
By testing the bot with nonsense, Vitevitch is working to better understand the distinct and sometimes peculiar ways AI models process language. He argues the objective isn't to perfectly replicate human thought but to discover where AI can best support our own linguistic abilities.
These findings are detailed in a study published in PLOS One.