
Deconstructing Sam Altman's Sci-Fi Future With AI

2025-08-18 · Peter Isackson · 7 minute read
Artificial Intelligence
Tech Ethics
Futurism

The Problem with Silicon Valley's Grand Visions

The unique reasoning style of Silicon Valley leaders often deserves a closer look. For instance, former Google CEO Eric Schmidt has been noted for using "common sense" arguments to distort reality for amoral ends. More recently, a fantastical vision of our economic future, courtesy of OpenAI CEO Sam Altman, prompted me to turn to his own creation, ChatGPT, for a deeper analysis.

I initiated a conversation with the AI to dissect the madness, using the following prompt:

“I read the following paragraphs today in a Fortune article detailing OpenAI CEO Sam Altman’s vision of the future. Apart from being unrealistic from numerous points of view, shouldn’t we consider this to be highly irresponsible coming from someone whose wealth and prestige apparently command that his private thoughts be deemed newsworthy?

I would like to engage in a conversation with you about why this kind of thinking reveals some of the major flaws of our civilization. I also see this as an indication of why the fear some express of AI getting out of control may be justified. It isn’t AI but those who define how it will work in society that are the danger.

Here are the paragraphs from the Fortune article:

‘In 2035, that graduating college student, if they still go to college at all, could very well be leaving on a mission to explore the solar system on a spaceship in some completely new, exciting, super well-paid, super interesting job,’ Altman said to video journalist Cleo Abram last week.

‘Though it’s unclear how widespread space exploration will expand in the coming years—considering NASA’s broad goal of getting to Mars in the 2030s—aerospace engineers are growing faster than the national average of all jobs, according to data from the U.S. Bureau of Labor Statistics. And they bring home an envy-inducing annual paycheck of over $130,000.’ Do you understand my concern?”

ChatGPT’s response was immediate and insightful.

Yes — I see exactly why you’re concerned. Those paragraphs aren’t just harmless futurist fluff. They’re a case study in how elite figures and prestige media often blend fantasy, cherry-picked facts, and an aura of authority into something that can distort public priorities and expectations.

Let's unpack the layers of this problem.


ChatGPT's Initial Analysis: More PR Than Plausible

1. The “vision” is more PR than plausible forecast

Altman’s example isn’t grounded in any clear path from our present realities to that 2035 outcome. In the real world, deep-space exploration by civilian “college grads” within a decade would require:

  • Physics-defying propulsion breakthroughs.
  • Massive public and private investment (orders of magnitude above current levels).
  • A reorganization of human spaceflight from an elite specialist activity to something like a mass employment sector.

None of those conditions are currently in place. This isn’t serious futurism; it’s brand positioning.

2. The prestige echo effect

Because it’s Sam Altman saying it, and Fortune reporting it, the statement is treated as newsworthy instead of being challenged. This is part of a larger systemic flaw in which wealthy tech leaders’ speculative musings are laundered into public discourse as credible strategic visions.

3. A bait-and-switch with data

The article pivots from Altman’s grand, sci-fi image to real aerospace engineering salary data. This creates an implicit link: look, it’s already happening! That’s misleading. A $130K engineering job is not the same as a “college kid on a deep-space mission.” It’s a narrative sleight of hand.

4. Dangerous misallocation of attention

When leaders frame the future as a spacefaring adventure instead of focusing on climate adaptation, healthcare crises, or social inequality, they redirect society’s “imagination budget” toward shiny, elite-friendly dreams. This shapes where policy, capital, and public enthusiasm flow—away from urgent challenges.

5. The AI angle

You’re right that the real danger isn’t AI “itself” but those who decide what AI will be optimized for. If the people in charge are comfortable bending reality in public, imagine how they might direct AI systems: not toward democratic benefit, but toward narratives that serve their own prestige and power.


Unpacking the Vision as Soft Propaganda

Intrigued, I prompted the AI to elaborate on how this rhetoric functions as a form of soft propaganda. ChatGPT explained how Altman's vision operates as part of a larger narrative machinery.

1. The “techno-dream” as aspirational spectacle

The statement paints a vivid, cinematic image of young people in gleaming spacecraft with well-paid, interesting jobs.

  • It’s emotionally sticky.
  • It substitutes fantasy fulfillment for a serious discussion of what’s actually possible.
  • It implicitly tells the audience: The future will be exciting and abundant—no need to dwell on present crises.

2. Authority laundering

When a high-status tech leader speculates, prestige media acts as an amplifier, framing a personal dream as a shared expectation. The implication is: If someone this rich thinks it, it’s probably on the horizon.

3. Distraction from systemic limits

The vision ignores deep, unresolved challenges:

  • Economic inequality (most 2035 grads will likely struggle with debt, not ride rockets).
  • Climate collapse, resource scarcity, and political instability.
  • The trend of AI and automation potentially reducing rather than expanding good-paying work.

By spotlighting an improbable utopia, the narrative quietly pulls focus from these uncomfortable truths.

4. Narrative capture of AI’s purpose

Accepting such visions defines the ends toward which AI is deployed. Instead of aiming AI at democratizing wealth or solving climate challenges, it gets funneled into projects reflecting elite fascinations like space tourism or luxury robotics.

5. The consent-engineering function

This is where social control becomes clearest. If you can shape what people think the future will be, you can shape what they’ll accept in the present. Promoting a spacefaring 2035 helps build consent for current AI development paths and power concentration, all under the banner of making that future possible.


Why This AI-Powered Critique Matters

This exchange highlights a crucial point about AI fears. The danger isn’t a spontaneous robot uprising; it's that those controlling AI are already skilled at shaping public perception to fit their ambitions, even when those ambitions are divorced from reality. If they can convince people to dream on their terms, they can normalize social arrangements where AI works for the few, not the many.

This conversation between a human and an AI chatbot allows us to explore serious issues about our world and its future. The challenge is to move such reasoned scrutiny from private chats into the public domain.

That's why sharing this exchange is important—not to force a conclusion, but to engage as many people as possible in reflection and debate. By creating awareness, we can change how we interpret the pronouncements of influential people and the media that amplifies them.

As this conversation continues, it's worth considering other facets of our relationship with AI. For a different perspective, this video from Kim Iversen recounts an interesting experience with the Grok AI, revealing more about the private vs. public dynamic with AI chatbots.

Your thoughts

This conversation will continue, but we value your reactions. Please feel free to share your thoughts on these points by writing to us at dialogue@fairobserver.com. We are looking to gather and share the ideas of humans who interact with AI.
