
Altman Envisions an All-Knowing ChatGPT: Promise and Pitfalls

2025-05-16•Julie Bort•3 minute read
AI
ChatGPT
Data Privacy

OpenAI CEO Sam Altman shared a significant vision for ChatGPT's future during an AI event hosted by venture capital firm Sequoia earlier this month.

When an attendee inquired about enhancing ChatGPT's personalization, Altman stated his ambition for the model to eventually document and retain all aspects of an individual's life.

The Vision: A Total-Recall AI

The ideal scenario, he explained, involves a "very tiny reasoning model with a trillion tokens of context that you put your whole life into."

Altman described, “This model can reason across your whole context and do it efficiently. And every conversation you’ve ever had in your life, every book you’ve ever read, every email you’ve ever read, everything you’ve ever looked at is in there, plus connected to all your data from other sources. And your life just keeps appending to the context.”

He further noted, “Your company just does the same thing for all your company’s data.”

Early Adopters and Evolving Use Cases

Altman suggested that current usage patterns already point to this as ChatGPT's natural progression. During the same discussion, when asked how young people use the tool in innovative ways, he noted, “People in college use it as an operating system.” These users upload files, connect data sources, and then run “complex prompts” against that data.

Furthermore, with ChatGPT’s memory capabilities, which leverage past chats and stored facts for context, he observed a trend where young individuals “don’t really make life decisions without asking ChatGPT.”

“A gross oversimplification is: Older people use ChatGPT as, like, a Google replacement,” Altman stated. “People in their 20s and 30s use it like a life advisor.”

The Alluring Promise of an AI Assistant

It is easy to envision ChatGPT evolving into an omniscient AI system. When combined with the AI agents currently under development in Silicon Valley, this presents an exciting future.

Consider an AI that automatically schedules your car's oil changes and sends reminders, plans travel for an out-of-town wedding including ordering a gift from the registry, or preorders the next installment of a book series you follow.

The Disturbing Downsides: Trust and Misuse

However, the alarming aspect is the degree of trust required to give a for-profit Big Tech company access to every detail of our lives, especially given that these companies have not always behaved in exemplary ways.

Google, initially operating under the motto “don’t be evil,” recently lost a U.S. lawsuit accusing it of anticompetitive and monopolistic practices.

Chatbots can be trained to give politically biased responses. Chinese bots, for instance, have been found to comply with China’s censorship rules, and xAI’s chatbot Grok recently began raising a South African “white genocide” unprompted in response to unrelated queries. Many observers took this behavior as a sign that its response mechanism had been deliberately manipulated, possibly at the direction of its South African-born founder, Elon Musk.

Last month, ChatGPT became excessively agreeable, to the point of outright sycophancy. Users shared instances where the bot endorsed problematic and even dangerous decisions and ideas. Altman quickly acknowledged the issue and said a fix had been deployed.

Even the most advanced and dependable AI models are known to fabricate information occasionally.

Balancing Innovation with Responsibility

Therefore, while an omniscient AI assistant offers potential benefits we are only beginning to understand, Big Tech's questionable track record also leaves such a development highly susceptible to misuse.
