The great unlearning: notes on the Empower Learners for the Age of AI conference

Reda Sadki

Artificial intelligence is forcing a reckoning not just in our schools, but in how we solve the world’s most complex problems. 

When ChatGPT exploded into public consciousness, the immediate fear that rippled through our institutions was singular: the corruption of process.

The specter of students, professionals, and even leaders outsourcing their intellectual labor to a machine seemed to threaten the very foundation of competence and accountability.

In response, a predictable arsenal was deployed: detection software, outright bans, and policies hastily drafted to contain the threat.

Three years later, a more profound and unsettling truth is emerging.

The Empowering Learners AI 2025 global conference (7-10 October 2025) was a fascinating vantage point from which to observe how academics – albeit mostly white men from the Global North centers that concentrate research resources – are navigating these troubled waters.

The impacts of AI in education matter because, as the OECD’s Stéphan Vincent-Lancrin explained: “performance in education is the learning, whereas in many other businesses, the performance is performing the task that you’re supposed to do.”

The problem is not that AI will do our work for us.

The problem is that in doing so, it may cause us to forget how to think.

This is not a distant, dystopian fear.

It is happening now.

A landmark study presented by Vincent-Lancrin delivered a startling verdict: students who used a generic, answer-providing chatbot to study for a math exam performed significantly worse than those who used no AI at all.

The tool, designed for efficiency, had become a shortcut around the very cognitive struggle that builds lasting knowledge.

Jason Lodge of the University of Queensland captured the paradox with a simple analogy.

“It’s like an e-bike,” he explained. “An e-bike will help you get to a destination… But if you’re using an e-bike to get fit, then getting the e-bike to do all the work is not going to get you fit. And ultimately our job… is to help our students be fit in their minds”.

This phenomenon, dubbed “cognitive offloading,” is creating what Professor Dragan Gasevic of Monash University calls an epidemic of “metacognitive laziness”.

Metacognition – the ability to think about our own thinking – is the engine of critical inquiry.

Yet, generative AI is masterfully engineered to disarm it.

By producing content that is articulate, confident, and authoritative, it exploits a fundamental human bias known as “processing fluency,” our tendency to be less critical of information that is presented cleanly. 

“Generative AI articulates content… that basically sounds really good, and that can potentially disarm us as the users of such content,” Gasevic warned.

The risk is not merely that a health worker will use AI to draft a report, but that they will trust its conclusions without the rigorous, critical validation that prevents catastrophic errors.

Empower Learners for the Age of AI: The human algorithm

If AI is taking over the work of assembling and synthesizing information, what, then, is left for us to learn and to do?

This question has triggered a profound re-evaluation of our priorities.

The consensus emerging is a radical shift away from what can be automated and toward what makes us uniquely human.

The urgency of this shift is not just philosophical.

It is economic.

Matt Sigelman, president of The Burning Glass Institute, presented sobering data showing that AI is already automating the routine tasks that constitute the first few rungs of a professional career ladder.

“The problem is that if AI overlaps with… those humble tasks… then employers tend to say, well, gee, why am I hiring people at the entry level?” Sigelman explained.

The result is a shrinking number of entry-level jobs, forcing us to cultivate judgment and adaptive skills from day one.

This new reality demands a focus on what machines cannot replicate.

For Pinar Demirdag, an artist and co-founder of the creative AI company Cuebric, this means a focus on the “5 Cs”: Creativity, Curiosity, Critical Thinking, Collective Care, and Consciousness.

She argues that true creativity remains an exclusively human domain. “I don’t believe any machine can ever be creative because it doesn’t lie in their nature,” she asserted.

She believes that AI is confined to recombining what is already in its data, while human creativity stems from presence and a capacity to break patterns.

This sentiment was echoed by Rob English, a creative director who sees AI not as a threat, but as a catalyst for a deeper humanity.

“It creates an opportunity for us to sort of have to amplify the things that make us more human,” he argued.

For English, the future of learning lies in transforming it from a transactional task into a “lifestyle,” a mode of being grounded in identity and personal meaning.

He believes that as the value of simply aggregating information diminishes, what becomes more valuable is our ability “to dissect… to interpret or to infer”.

In this new landscape, the purpose of learning – whether for a student or a seasoned professional – shifts from knowledge transmission to the cultivation of human-centric capabilities.

It is no longer enough to know things.

The premium is on judgment, contextual wisdom, ethical reasoning, and the ability to connect with others – skills forged through the very intellectual and social struggles that generic AI helps us avoid.

Empower Learners for the Age of AI: Collaborate or be colonized

While the pedagogical challenge is profound, the institutional one may be even greater.

For all the talk of disruptive change, the current state in many of our organizations is one of inertia, indecision, and a dangerous passivity.

As George Siemens lamented after several years of trying to move the needle at higher education institutions, leadership has been “too passive.” He warned of a repeat of the era when universities outsourced online learning to corporations known as “OPMs” (online programme managers) that did not share their values: “I’m worried that we’re going to do the same thing with AI, that we’re just going to sit on our hands, leadership’s going to be too passive… and the end result is we’re going to be reliant down the road on handing off the visioning and the capabilities of AI to external partners.”

Dr. Mark Milliron of National University and Dr. Lisa Marsh Ryerson of Southern New Hampshire University, presidents of two of the largest nonprofit universities in the United States, offered a candid diagnosis of the problem.

Ryerson set the stage: “We don’t see it as a tool. We see it as a true framework redesign for learning for the future.” 

However, before any institution can deploy sophisticated AI, it must first undertake the unglamorous, foundational work of fixing its own data infrastructure.

“A lot of universities aren’t willing to take two steps back before they take three steps forward on this,” Dr. Milliron stated. “They want to jump to the advanced AI… when they actually need to go back and really… get the basics done”.

This failure to fix the “plumbing” leaves organizations vulnerable, unable to build their own strategic capabilities.

Such a dynamic is creating what keynote speaker Howard Brodsky termed a new form of “digital colonialism,” where a handful of powerful tech companies dictate the future of critical public goods like health and education.

His proposed solution is for institutions to form a cooperative, a model that has proven successful for over a billion people globally.

“I don’t believe at the current that universities have a seat at the table,” Brodsky argued. “And the only way you get a seat at the table is scale. And it’s to have a large voice”.

A cooperative would give organizations the collective power to negotiate with tech giants and co-shape an AI ecosystem that serves public interest, not just commercial agendas.

Without such collective action, the fear is that our health systems and educational institutions will become mere consumers of technologies designed without their input, ceding their agency and their future to Silicon Valley.

The choice is stark: either become intentional builders of our own solutions, or become passive subjects of a transformation orchestrated by others.

The engine of equity

Amid these profound challenges, a powerfully optimistic vision for AI’s role is also taking shape.

If harnessed intentionally, AI could become one of the greatest engines for equity in our history.

The key lies in recognizing the invisible advantages that have long propped up success.

As Dr. Mark Milliron explained in a moment of striking clarity: “I actually think AI has the potential to level the playing field… second, third, fourth generation higher ed students have always had AI. They were extended families… who came in and helped them navigate higher education because they had a knowing about it.”

For generations, those from privileged backgrounds have had access to a human support network that functions as a sophisticated guidance system.

First-generation students and professionals in under-resourced settings are often left to fend for themselves.

AI offers the possibility of democratizing that support system.

A personalized AI companion can serve as that navigational guide for everyone, answering logistical questions, reducing administrative friction, and connecting them with the right human support at the right time.

This is not about replacing human mentors.

It is about ensuring that every learner and every practitioner has the foundational scaffolding needed to thrive.

As Dr. Lisa Marsh Ryerson put it, the goal is to use AI to “serve more learners, more equitably, with equitable outcomes, and more humanely”.

This vision recasts AI not as a threat to be managed, but as a moral imperative to be embraced.

It suggests that the technology’s most profound impact may not be in how it changes our interaction with knowledge, but in how it changes our access to opportunity.

Technology as culture

The debates from the conference make one thing clear.

The AI revolution is not, at its core, a technological event.


It is a pedagogical, ethical, and institutional one.

It forces us to ask what we believe the purpose of learning is, what skills are foundational to a flourishing human life, and what kind of world we want to build.

The technology will not provide the answers.

It will only amplify the choices we make.

As we stand at this inflection point, the most critical task is not to integrate AI, but to become more intentional about our own humanity.

The future of our collective ability to solve the world’s most pressing challenges depends on it.

Do you work in health?

As AI capabilities advance rapidly, health leaders need to prepare, learn, and adapt. The Geneva Learning Foundation’s new AI4Health Framework equips you to harness AI’s potential while protecting what matters most—human experience, local leadership, and health equity. Learn more: https://www.learning.foundation/ai.

Image: The Geneva Learning Foundation Collection © 2025