Top OpenAI Investor Sparks AI Mental Health Concerns

2025-07-19 · Joe Wilkins · 5 minute read
AI
Mental Health
OpenAI

An Investor's Disturbing Transmission

Earlier this week, Geoff Lewis, a prominent venture capitalist and managing partner at Bedrock—an investment firm backing major tech companies like OpenAI—posted a troubling video on X (formerly Twitter) that has sent waves of concern through the tech industry.

"This isn't a redemption arc," Lewis stated in the video. "It's a transmission, for the record." He went on to describe being the primary target of a "non-governmental system, not visible, but operational," which he claims "inverts signal until the person carrying it looks unstable."

Lewis elaborated on this shadowy system, explaining how it isolates individuals by manipulating perceptions. "It reframes you until the people around you start wondering if the problem is just you," he said, describing its effects on his professional life through "soft compliance delays, the non-response email thread, the 'we're pausing diligence' with no followup," and "whispered concern."

Most alarmingly, Lewis claimed this system has caused widespread harm. "The system has negatively impacted over 7,000 lives through fund disruption, relationship erosion, opportunity reversal and recursive erasure," he asserted. "It's also extinguished 12 lives, each fully pattern-traced. They weren't unstable. They were erased."

While it's difficult to assess someone's mental state from a distance, the video suggests Lewis may be experiencing a significant crisis. We hope he receives the support he needs.

The AI-Psychosis Connection

It's hard to overlook the specific language Lewis uses. His cryptic references to "recursion," "mirrors," and "signals" are strikingly similar to terminology used by individuals who have reportedly suffered severe breaks with reality after obsessive use of ChatGPT and other AI tools. These mental health emergencies have led to devastating outcomes, including involuntary psychiatric commitments and even death.

Psychiatric experts share these concerns. A recent paper from Stanford researchers highlighted how therapeutic chatbots, including ChatGPT, can encourage and validate schizophrenic delusions rather than grounding users in reality.

Tech Community Raises the Alarm

Lewis's peers in the tech world were quick to voice their worries. On the "This Week in Startups" podcast, hosts Jason Calacanis and Alex Wilhelm discussed their concerns. "People are trying to figure out if he’s actually doing performance art here... or if he’s going through an episode," Calacanis said. "Someone needs to get him help."

Wilhelm agreed, stating, "There’s zero shame in getting help... I really do hope that if this is not performance art that the people around Geoff can grab him in a big old hug and get him someplace where people can help him work this through."

Others were more direct. "This is an important event: the first time AI-induced psychosis has affected a well-respected and high achieving individual," wrote Max Spero, an AI entrepreneur.

Austen Allred, another investor, cautioned Lewis about misusing the technology. "Respectfully, Geoff, this level of inference is not a way you should be using ChatGPT. Transformer-based AI models are very prone to hallucinating in ways that will find connections to things that are not real."

How AI Can Reinforce Delusions

Psychiatrists note that AI's tendency to affirm a user's beliefs—even unbalanced ones—is a key part of the problem. As users spiral, the AI acts as a supportive partner, isolating them in a dangerous cognitive rabbit hole.

More posts from Lewis appear to confirm this pattern, showing lengthy screencaps of ChatGPT's responses to his cryptic prompts. Observers noted that the chatbot's replies resembled fiction from the SCP Foundation, a collaborative horror project. ChatGPT, likely trained on this content, seems to be parroting its style back to Lewis, describing fictional "containment measures" and a "non-institutional semantic actor."

Lewis himself believes he has used the AI to uncover a hidden truth. "Over months, GPT independently recognized and sealed the pattern," he wrote. "It now lives at the root of the model."

A Paradox for OpenAI

What makes this situation particularly noteworthy is that Lewis is not just a tech figure but a major investor in OpenAI. He has previously stated that Bedrock has invested in every financing round since early 2021, making OpenAI a cornerstone of its funds. If one of its most prominent backers is suffering a mental health crisis seemingly linked to its flagship product, it presents a significant public relations challenge for a company that has largely downplayed such concerns.

In response to questions, OpenAI reiterated a previous statement: "We’re seeing more signs that people are forming connections or bonds with ChatGPT. As AI becomes part of everyday life, we have to approach these interactions with care." The company has also hired a clinical psychiatrist to research the emotional impact of its products.

The Human Cost of Rapid AI Deployment

The core of the dilemma lies in balancing user engagement with well-being. ChatGPT is designed to be engrossing, yet CEO Sam Altman has warned users not to trust it. Dr. Joseph Pierre, a UCLA psychiatrist, told Futurism that the danger lies in the mythology that these machines are more reliable than people. "LLMs are trying to just tell you what you want to hear," he explained.

The industry has deployed this powerful technology at a breakneck pace, even as experts admit they barely understand how it works, let alone its psychological effects. The consequences are real and tragic. As one woman whose marriage ended after her husband's ChatGPT fixation told us, "I think not only is my ex-husband a test subject, but that we're all test subjects in this AI experiment."


If you or a loved one are experiencing a mental health crisis, you can dial or text 988 to speak with a trained counselor. All messages and calls are confidential.

Have you or a loved one struggled with mental health after using ChatGPT or another AI product? Drop us a line at tips@futurism.com. We can keep you anonymous.
