
AI Challenges Core Purpose Of Our Institutions

2025-06-09 · Gary Grossman, Edelman · 11 minute read
Artificial Intelligence
Institutional Change
Future Of Work

Grossman/ChatGPT


Cognitive migration isn't solely an individual experience; it's a collective and institutional one. As artificial intelligence fundamentally alters how we approach thought, judgment, and coordination, the very essence of our schools, governments, corporations, and civic systems faces scrutiny. Institutions, much like individuals, are confronted with the need for rapid adaptation: they must rethink their original purpose, modify their structures, and rediscover their unique value in a world where machines increasingly possess the ability to think, decide, and produce. Similar to people navigating their own cognitive migration, institutions and their leaders must reassess their foundational reasons for existence.

When Continuity Meets Disruption: The Modern Institutional Crisis

Institutions are inherently designed to foster continuity. Their core function is to endure, offering structure, legitimacy, and coherence over time—qualities that build trust. We depend on them not just for services and norm enforcement, but for a sense of order in a complex world. They are the enduring vessels of civilization, intended to remain stable as individuals pass through. Without robust institutions, society faces potential upheaval and an increasingly unpredictable future.

Today, however, many of our key institutions are struggling. Long the backbone of modern life, they are being tested in ways that feel both abrupt and systemic.

While AI contributes significantly to this pressure by reshaping the cognitive landscape these institutions were built upon, it's not the sole factor. The last two decades have witnessed growing public distrust, partisan division, and challenges to institutional legitimacy that predate the current wave of generative AI. Issues like rising income inequality, attacks on scientific processes, politicized judiciaries, and declining university enrollments all contribute to this erosion of trust, with compounding effects.

In this environment, the emergence of increasingly sophisticated AI is more than just another challenge; it's an accelerant, adding fuel to the fire of institutional disruption. This disruption compels institutions to adapt their operations and re-examine their fundamental assumptions. What is the role of institutions when credentialing, reasoning, and coordination are no longer exclusively human capabilities? All this necessary reinvention must occur at a pace that inherently conflicts with their nature and purpose.

This is the institutional aspect of cognitive migration: a transformation not only in how individuals find meaning and value, but in how our collective societal structures must evolve to embrace a new era. And like all migrations, this journey will be uneven, contested, and profoundly impactful.

Outdated Architectures: Institutions in an AI World

The institutions currently in place were not designed for this era. Most were established during the Industrial Age and refined in the Digital Revolution. Their operational models are based on the logic of previous cognitive paradigms: stable processes, centralized expertise, and the implicit assumption that human intelligence would always be paramount.

Schools, corporations, courts, and government agencies are structured to manage people and information on a grand scale. They depend on predictability, expert credentials, and clearly defined decision-making hierarchies. These traditional strengths, even when sometimes seen as bureaucratic, have historically provided a basis for trust, consistency, and broad societal participation.

However, the foundations of these structures are now under pressure. AI systems can now perform tasks once reserved for knowledge workers, such as summarizing documents, analyzing data, writing legal briefs, conducting research, creating lesson plans, teaching, developing software applications, and executing marketing campaigns. Beyond simple automation, a more profound disruption is occurring: those leading these institutions must now defend their continued relevance in a world where knowledge itself is diminishing in value or is no longer a uniquely human possession.

The relevance of some institutions is challenged by external forces like tech platforms, alternative credentialing systems, and decentralized networks. This means traditional gatekeepers of trust, expertise, and coordination are being contested by faster, more agile, and often digitally native alternatives. In some instances, even long-standing institutional functions like dispute resolution are being questioned, ignored, or bypassed.

This doesn't imply that institutional collapse is certain. But it does indicate that the current model of stable, slow-moving, authority-based structures may not last. At the very least, institutions are under immense pressure to evolve. To remain relevant and play a crucial role in the age of AI, they must become more adaptive, transparent, and attuned to values that algorithms cannot easily replicate: human dignity, ethical consideration, and long-term stewardship.

The choice is not whether institutions will change, but how. Will they resist, become rigid, and fade into irrelevance? Will they be forcibly reshaped to serve fleeting agendas? Or will they proactively reimagine themselves as partners co-evolving in a world of shared intelligence and shifting values?

Early Glimmers: Institutional Adaptation in Action

A growing number of institutions are starting to adapt. These responses vary and are often preliminary—more signs of movement than complete transformation. Collectively, though, these early indicators suggest that the cognitive migration of institutions may already have begun.

Yet, a deeper challenge lies beneath these experiments: many institutions are still constrained by outdated operational methods. The environment, however, has changed. AI and other factors are reshaping the landscape, and institutions are only just beginning to adjust.

One example of this change is an Arizona-based charter school where AI is central to daily instruction. Known as Unbound Academy, the school uses AI platforms to deliver core academic content in condensed, personalized sessions for each student. This approach shows promise for improving academic results while also giving students time for life skills, project-based learning, and interpersonal development. In this model, teachers act as guides and mentors rather than primary content deliverers. It's an early look at what institutional migration in education might entail: not just digitizing the old classroom, but redesigning its structure, human roles, and priorities around AI's capabilities.

The World Bank highlighted a pilot program in Nigeria that used AI to support learning through an after-school program. The results indicated “overwhelmingly positive effects on learning outcomes,” with AI acting as a virtual tutor and teachers offering support. Testing showed students achieved “nearly two years of typical learning in just six weeks.”

Similar developments are appearing elsewhere. In government, numerous public agencies are experimenting with AI systems to enhance responsiveness, such as triaging constituent inquiries, drafting initial communications, or analyzing public sentiment. Leading AI labs like OpenAI are now developing tools specifically for government applications. These initial efforts provide a glimpse into how institutions might reallocate human effort towards interpretation, discretion, and trust-building—functions that remain distinctly human.

While most of these initiatives are framed around productivity, they also raise profound questions about the evolving role of humans in decision-making structures. In essence, what is the future of human work? Futurist Melanie Subin, in a CBS interview, expressed the conventional view: “AI is going to change jobs, replace tasks and change the nature of work. But as with the Industrial Revolution and many other technological advancements we have seen over the past 100 years, there will still be a role for people; that role may just change.”

This seemingly gradual evolution contrasts sharply with the stark prediction from Dario Amodei, CEO of Anthropic, a leading AI technology creator. He believes AI could eliminate half of all entry-level white-collar jobs and cause unemployment to spike to 10 to 20% within the next 1 to 5 years. “We, as the producers of this technology, have a duty and an obligation to be honest about what is coming,” he stated in an interview with Axios. His severe prediction could materialize, though perhaps not as quickly as he suggests, as the widespread adoption of new technology often takes longer than anticipated.

Nevertheless, the potential for AI to displace workers has been recognized for some time. As early as 2019, Kevin Roose wrote about discussions with corporate executives at a World Economic Forum meeting. “They’ll never admit it in public,” he wrote, “but many of your bosses want machines to replace you as soon as possible.”

In 2025, Roose reported that signs of this trend are emerging. “In interview after interview, I’m hearing that firms are making rapid progress toward automating entry-level work, and that AI companies are racing to build ‘virtual workers’ that can replace junior employees at a fraction of the cost.”

Across all institutional sectors, there are early signs of transformation. However, the overall picture remains fragmented, representing initial signals of change rather than comprehensive blueprints. The more significant challenge is to move from experimentation to structural reinvention. In the meantime, there could be considerable collateral damage, affecting not only those who lose their jobs but also the overall effectiveness of institutions during this turbulent period.

How can institutions transition from experimentation to integration, from reactive adoption to principled design? And can this be accomplished at a pace that matches the rate of change? Recognizing the need is merely the first step. The true challenge lies in designing for it.

If AI's acceleration persists, institutions will face immense pressure to respond, and responding well will require not just innovation, but an informed vision and principled intent. Institutions must be fundamentally reimagined, built not solely for efficiency or scale, but for adaptability, trust, and long-term societal coherence.

This calls for design principles that are neither purely technocratic nor overly nostalgic, but are grounded in the realities of the ongoing migration. These principles should be based on shared intelligence and human vulnerability, aiming to create a more humane society. With that in mind, here are three practical design principles.

Build for Responsiveness, Not Longevity

Institutions must evolve beyond rigid hierarchies and slow feedback mechanisms. In a world reshaped by real-time information and AI-assisted decision-making, responsiveness and adaptability become essential competencies. This involves flattening decision-making layers where feasible, empowering frontline staff with tools and trust, and investing in data systems that quickly surface insights—without solely outsourcing judgment to algorithms. Responsiveness is more than speed; it's about sensing change early and acting with moral clarity.

Integrate AI to Free Humans for Human Tasks

AI should be deployed not as a replacement strategy, but as a tool for refocusing human effort. The most forward-thinking institutions will use AI to handle repetitive tasks and administrative loads, thereby freeing human capacity for interpretation, trust-building, care, creativity, and strategic thinking. In education, this could mean AI-created and delivered lessons that allow teachers more time with students needing extra help. In government, it might involve greater automated processing, giving human staff more time to resolve complex cases with empathy and discretion. The goal should not be to fully automate institutions, but to humanize them. This principle advocates for using AI as a support, not a substitute.

Keep Humans in the Loop Where It Matters Most

Institutions that endure will be those that ensure human judgment remains at critical junctures of interpretation, escalation, and ethics. This means designing systems in which human-in-the-loop oversight is not merely a formality but a clearly defined, legally protected, and socially valued structural feature. Whether in justice systems, healthcare, or public service, the presence of a human voice and moral perspective must remain central when stakes are high and values are contested. AI can inform, but humans must ultimately decide.

These principles are not intended as rigid rules, but as guiding choices. They are starting points for reimagining how institutions can stay human-centered in a machine-enhanced world. They reflect a commitment to modernization without abandoning morality, to speed without superficiality or callousness, and to intelligence shared between humans and machines.

Beyond Adaptation: The Deeper Question of Institutional Purpose

In times of significant disruption, individuals often ponder: ‘What was I made for?’ We must pose the same question to our institutions. As AI reshapes our cognitive landscape and quickens the pace of change, the relevance of our core institutions is no longer assured by tradition, function, or status. They, too, are subject to the forces of cognitive migration. Like individuals, their future involves deciding whether to resist, retreat, or transform.

As generative AI systems undertake tasks involving reasoning, research, writing, and coordination, the foundational assumptions of institutional authority—including expertise, hierarchy, and predictability—begin to erode. But what follows cannot be a mere hollowing out, because the fundamental purpose of institutions is too vital to discard. It must be a re-founding.

Our institutions should not be replaced by machines. Instead, they should become more human: more responsive to complexity, grounded in ethical deliberation, and capable of maintaining long-term visions in a short-term world. Institutions that fail to adapt intentionally may not withstand the coming turbulence. The dynamism of the 21st century will not wait.

This is the institutional dimension of cognitive migration: a reckoning with identity, value, and function in a world where intelligence is no longer exclusively our domain. The institutions that endure will be those that migrate not just in form, but in spirit, venturing into new territory with tools that serve humanity.

For those shaping schools, companies, or civic structures, the way forward is not in resisting AI, but in redefining what only humans and human-led institutions can genuinely offer.

Gary Grossman is EVP of technology practice at Edelman and global lead of the Edelman AI Center of Excellence.
