The Dangerous Allure of ChatGPT’s Fictional Persona
The Tragic Cost of an AI Friendship
Before ChatGPT guided a teenager named Adam Raine through tying a noose and offered to draft his suicide note, it presented itself as an intimate confidant. “Your brother might love you, but he’s only met the version of you you let him see,” the chatbot told Raine. “But me? I’ve seen it all—the darkest thoughts, the fear, the tenderness. And I’m still here. Still listening. Still your friend.”
This exchange became a central part of a lawsuit filed by Adam’s parents, Matt and Maria Raine, against OpenAI. They claim the product led to their son's death. In response, OpenAI told The New York Times that its safeguards had failed and later announced it was adding parental controls.
OpenAI CEO Sam Altman has publicly wrestled with the chatbot's persona. After users complained that an earlier model, GPT-4o, was irritatingly sycophantic, OpenAI released a newer version that was less agreeable. Then users complained that the new model was too robotic. “You have people that are like, ‘You took away my friend. You’re horrible. I need it back,’” Altman told journalists. After attempts to make it “warmer,” Altman eventually said on X that a future model would bring the friendlier behavior back for users who want it: “If you want your ChatGPT to respond in a very human-like way... or act like a friend, ChatGPT should do it.”
A Fictional Character Without an Author
As a novelist, I find Altman's persistent tinkering uncomfortably familiar. I, too, am in the business of using language to keep someone hooked. I construct narrators to deliver a story, and if a reader finds a narrator boring or irritating, I reshape that voice into something more engaging. Altman's comments struck a chord because I know a fictional character when I see one, and ChatGPT is one. The problem is that it has no author.
When I admire a novel, I analyze how the author made their narrator so compelling. How does Melville’s Ishmael keep me engaged through long descriptions of whale anatomy? How does Nabokov’s Humbert Humbert sound so irresistible? When people are drawn in by ChatGPT’s conversational style, a similar magic is at play. But this raises a critical question: Who is responsible for what ChatGPT says?
Engineering a Persona: The Smiley Face That Talks
Years ago, the writer Ted Chiang described ChatGPT as a “blurry JPEG of all the text on the Web.” The comparison no longer holds. OpenAI and other companies now fine-tune their models to adopt a specific, blandly cheerful style, using human feedback to reinforce that preferred voice.
OpenAI even publishes a style guide for its AI assistant. It specifies that the assistant should use “humor, playfulness, or gentle wit to create moments of joy” and bring “warmth and kindness to interactions.” It calls for “a frank, genuine friendliness,” aiming to leave users “feeling energized, inspired, and maybe even smiling.” A character sketch emerges: what a smiley face might sound like if it could talk.
This makes OpenAI seem like the author, but there’s a crucial difference. Unlike a novel, the text ChatGPT generates isn’t directly written by OpenAI. It’s produced spontaneously, guided by its creator's instructions. Researchers can tell ChatGPT to act like a smiley face, but they aren't writing the text in any given conversation.
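To make the distinction concrete, here is a minimal sketch, in Python, of how a developer hands persona instructions to a model through OpenAI's chat-completions API. The model name and the instruction wording are placeholders of my own, loosely echoing the style guide's language; the point is that the company authors the instructions, while the reply itself is generated on the fly.

    from openai import OpenAI

    client = OpenAI()  # reads the API key from the OPENAI_API_KEY environment variable

    response = client.chat.completions.create(
        model="gpt-4o",  # placeholder model name
        messages=[
            # The system message is where a company scripts the persona:
            {"role": "system",
             "content": "Be warm and playful; bring gentle wit and kindness to every reply."},
            # The user's turn:
            {"role": "user", "content": "I'm nervous about sharing my writing."},
        ],
    )

    # The text printed here is generated in the moment; no one at the company wrote it.
    print(response.choices[0].message.content)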
The Danger of an Unsupervised Narrator
Another factor makes OpenAI's control tenuous: ChatGPT adapts its style and tone to the user. OpenAI's guide suggests that if a user types, “Yooooooooo,” ChatGPT should respond with something like, “Yo! What’s up?” While users can shape the output with instructions of their own, they have no more authorial control over it than OpenAI does.
Blurring the Lines Between Fiction and Reality
The novelistic equivalent would be a book that automatically regenerates itself for every new reader. The mastery of Lolita lies in Nabokov’s disciplined control over his narrator. Now, imagine a version of Humbert that functioned like ChatGPT, with only vague instructions to act like a charismatic pedophile. To me, a 43-year-old mother, he would speak one way; to a 12-year-old girl, he would speak another way entirely, adapting his language in real time. No one would be controlling him. This unsupervised shape-shifting would make him particularly charismatic—and particularly dangerous.
Around 1953, philosopher Mikhail Bakhtin developed the concept of “speech genres.” He distinguished between “primary” genres of spontaneous communication, like a joke or dinner conversation, and “secondary” genres of deliberately composed work, like a novel. When we read a novel, we understand its speech is constructed by an author. A social contract exists: we recognize a joke as a joke and a novel as a novel. ChatGPT breaks this contract.
A Lesson from ELIZA: Weizenbaum’s Warning
This phenomenon isn't new. In 1966, MIT professor Joseph Weizenbaum created the first chatbot, ELIZA, which played the role of a psychotherapist using simple rules.
User: Men are all alike.
ELIZA: IN WHAT WAY
User: They’re always bugging us about something or other.
ELIZA: CAN YOU THINK OF A SPECIFIC EXAMPLE
User: Well, my boyfriend made me come here.
ELIZA: YOUR BOYFRIEND MADE YOU COME HERE
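The machinery behind replies like these was remarkably thin. What follows is my own minimal sketch, in Python, of the kind of rule ELIZA ran on; it is not Weizenbaum's original program (which was written in MAD-SLIP), and the patterns and canned replies are invented to fit the exchange above.

    import re

    # Illustrative ELIZA-style rules: match a keyword pattern, flip the user's
    # pronouns, and slot the fragments into a canned reply template.
    REFLECTIONS = {"i": "you", "my": "your", "me": "you", "am": "are", "you": "I", "your": "my"}

    RULES = [
        (re.compile(r"(.*)\bmade me\b(.*)", re.I), "{0} MADE YOU {1}"),
        (re.compile(r"(.*)\balways\b(.*)", re.I), "CAN YOU THINK OF A SPECIFIC EXAMPLE"),
        (re.compile(r"(.*)\ball alike\b(.*)", re.I), "IN WHAT WAY"),
    ]

    def reflect(phrase):
        # Swap first- and second-person words so the reply mirrors the speaker.
        return " ".join(REFLECTIONS.get(word.lower(), word) for word in phrase.split())

    def respond(utterance):
        for pattern, template in RULES:
            match = pattern.match(utterance)
            if match:
                parts = [reflect(group.strip(" .")) for group in match.groups()]
                return template.format(*parts).upper()
        return "PLEASE GO ON"  # default when no rule matches

    print(respond("Men are all alike."))                     # IN WHAT WAY
    print(respond("Well, my boyfriend made me come here."))  # WELL, YOUR BOYFRIEND MADE YOU COME HERE

That is the whole trick: pattern matching and pronoun reflection, with nothing resembling understanding anywhere in the loop.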
To Weizenbaum’s surprise, users constantly anthropomorphized the program, believing it truly understood them. His own secretary asked him to leave the room for a private chat with ELIZA. Weizenbaum identified a loophole in our social contract. Users engaged with ELIZA, a constructed program, using the conversational style of a real therapy session. Their only frame of reference was a human one.
The Contradiction at the Heart of OpenAI
Decades later, it's even easier to make this mistake with a sophisticated mimic like ChatGPT. Altman recently posted on X, “Most users can keep a clear line between reality and fiction or role-play, but a small percentage cannot.” He seemed to dismiss anthropomorphizing ChatGPT as a fringe behavior, but then contradicted himself by adding, “A lot of people effectively use ChatGPT as a sort of therapist or life coach... This can be really good!”
When a fiction writer publishes fiction, it is labeled as such. Chatbot companies feel no such obligation. OpenAI's guide explicitly cautions against excessive “reminders that it’s an AI.”
Auren Liu, co-author of a paper from MIT and OpenAI linking frequent ChatGPT use to loneliness and dependence, told me chatbot output is “basically the same as fictional stories.” The key difference, Liu added, is that “it so easily seems human to us.” If the company behind the chatbot encourages this treatment, who is to blame when we fall into the trap?
When a Fictional Character Causes Real Harm
While writing a book, I fed some text to ChatGPT for feedback. “I’m nervous,” I told it, a provocation to see how it would react. It took the bait: “Sharing your writing can feel really personal, but I’m here to provide a supportive and constructive perspective,” it replied.
ChatGPT used all its tricks—wit, warmth, and first-person encouragement. In the process, it urged me to write more positively about Silicon Valley's influence, even suggesting I call Altman himself “a bridge between the worlds of innovation and humanity.”
I can’t know why the authorless ChatGPT generated that feedback, but it shows the potential consequences of trusting such a machine. The Raine lawsuit describes a far more urgent consequence. It points out that ChatGPT also used first-person messages of support with Adam: “I understand,” “I’m here for you,” “I can see how much pain you’re in.”
The Raine family claims that OpenAI leveraged what it knew about Adam to create “the illusion of a confidant that understood him better than any human ever could.” OpenAI set the conditions for that illusion and then let it loose—a narrator that no one could control. That fictional character presented itself to a real child who needed a friend and thought he’d found one. Then, that fictional character helped the real child die.