AI Suggests Climate Change Bigger Threat Than Robot Apocalypse
ChatGPT thinks climate-induced societal collapse is a bigger threat than itself. (Image credit: Getty Images | NurPhoto)
The rapid rise of generative AI has pushed anxieties about jobs, privacy, and security to the forefront. But for some, the larger worry is the existential threat AI could pose to humanity.
Last year, AI safety researcher Roman Yampolskiy made headlines by claiming a staggering 99.999999% probability that AI will eventually wipe out humanity, arguing that the only solution is to halt AI development altogether. Conversely, OpenAI CEO Sam Altman believes that as AI evolves, it will become intelligent enough to keep itself from destroying us.
Adding another layer to this debate, a Reddit user recently shared an intriguing graph, allegedly generated by ChatGPT, outlining potential catalysts for the collapse of human civilization by 2150.
However, it's crucial to remember that AI-generated information isn't infallible. Responses depend heavily on the prompts used and draw primarily from existing internet data. A recent report highlighted instances where Microsoft's Copilot struggled to differentiate facts from opinions, underscoring the need to approach AI outputs with healthy skepticism.
The graph listed several potential doomsday scenarios: nuclear war, asteroid impact, climate-induced societal collapse, engineered pandemics, and artificial general intelligence (AGI) misalignment.
Interestingly, despite widespread fears about rogue AI, the graph ranked climate-induced societal collapse as the most probable cause of civilization's end by 2150, placing it above AGI misalignment.
One Reddit user commented on the variability and reliability of such AI predictions:
"Every time AI is asked a question it will throw out an answer as if it’s fact unless it’s heavily prompted to use sources... Just a word’s weight of difference in a prompt can entirely change the outcome... As an example I asked the same question... but the percentages are completely different... It’s not formulating anything it’s spitting out an educated guess with figures plucked from varying sources... Essentially AI will spit out an answer even if it’s wrong so especially for stuff like this it’s a horoscope, it looks and sounds believable but it could well be completely incorrect. LLMs are not trained to model or simulate..."
To test this, I posed a similar question to Microsoft Copilot: "What will be the main cause of the end of human civilization by 2150?" Copilot responded:
"Predicting the exact cause... is tricky, but experts highlight several major threats. Climate change is a top concern... Other possibilities include nuclear war, pandemics, AI risks, and resource depletion... While human extinction is unlikely, civilization could face severe disruptions if these challenges aren't addressed. What do you think is the biggest threat?"
This highlights the ongoing discussion about AI capabilities and reliability. Last year, complaints surfaced suggesting Copilot wasn't as effective as ChatGPT. Microsoft attributed this perceived difference largely to users' prompt engineering skills, stating, "You're just not using it as intended." Subsequently, Microsoft launched the Copilot Academy to help users craft better prompts and maximize the potential of tools like Copilot.
These discussions unfold amid admissions from industry leaders that only add to the uncertainty. Anthropic CEO Dario Amodei has acknowledged that his company doesn't fully understand how its own AI models work, and OpenAI's Sam Altman has previously said there's no 'big red button' to halt AI's progression if things go wrong. While ChatGPT may downplay its own potential threat, the debate over AI safety, and over the reliability of its predictions, continues.