This is the edited transcript of a talk by Reda Sadki, founder of The Geneva Learning Foundation (TGLF), prepared for Global AI Day in Geneva on 6 May 2026. Global AI Day is a global observance dedicated to AI awareness and human-centred adoption. In Geneva, the day brought together innovators, citizens, students, and community members to ask what responsible, ethical, and trustworthy AI looks like in practice for the people of Geneva and beyond. Reda’s talk introduces TGLF’s AI4Health programme, grounded in a decade of work with more than 80,000 humanitarian and health workers across Africa, Asia, and Latin America. The through-line is simple: in health and humanitarian response, where getting it wrong means people die, peer learning networks matter more than ever. Learn more about the AI4Health programme at learning.foundation/ai.
I am Reda Sadki from The Geneva Learning Foundation, and I am very happy to be here.
I want to share some of our thinking and doing around The Geneva Learning Foundation’s AI4Health programme.
We have been working on the human agency side of artificial intelligence for health.
Not just humans, but human agency: the capacity to do, to think, to learn, with a real focus on action.
More recently, we have been developing what we call a co-worker protocol.
The idea is to think of AI not as a tool, but as a co-worker, and to ask what that means in the specific health context of low- and middle-income countries.
We have been thinking about networked learning and local action for a long time, and this is the frame through which we read everything that is happening in AI.
The acceleration vector: what does the San Francisco Consensus mean for artificial intelligence for health?
Last July, I heard Eric Schmidt, the former CEO of Google, speak about what he calls the San Francisco Consensus.

Silicon Valley's leaders, who are mostly men, have agreed that a total transformation of human activity is coming within a few years.
They disagree on the exact timeline, which ranges from three to six years, but not on the direction.
I think this transformation is really the combination of several advances in AI.
If you are still thinking about chatbots, that is 2022.
We are now looking at something different.
The three revolutions
The first is the interface revolution, where language becomes the universal controller.
You can speak, you can issue complex queries, and you can think through what those queries mean in natural language.
You explain what you want, and your co-worker gets it done.
The old stack of graphical interfaces, syntax, and APIs is being replaced by a new stack where natural language drives everything underneath.
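To make that new stack concrete, here is a minimal sketch in Python. The llm_complete helper and the JSON action schema are hypothetical placeholders, not any particular vendor's API; the point is that the request arrives as natural language and the structured call is generated underneath.

```python
import json

def llm_complete(prompt: str) -> str:
    """Hypothetical placeholder for a call to any large language model."""
    raise NotImplementedError("connect this to a model of your choice")

def natural_language_to_action(request: str) -> dict:
    """Turn a plain-language request into a structured action.

    In the old stack, the user had to learn the syntax and the API.
    In the new stack, the model produces the structure from language.
    """
    prompt = (
        "Rewrite this request as JSON with keys 'action' and 'parameters':\n"
        + request
    )
    return json.loads(llm_complete(prompt))

# The user explains what they want; the call underneath is generated:
# natural_language_to_action("List districts with vaccine stock-outs this week")
# might yield {"action": "list_stockouts", "parameters": {"period": "this_week"}}
```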

The second is the reasoning revolution.
This is the shift from answering to thinking, and from pattern matching to reasoning.
What makes this possible is working memory, logical consistency, error recognition, and abstraction.
The system does not just retrieve an answer.
It generates a hypothesis, tests the evidence, and corrects its assumptions.
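Schematically, that is a loop rather than a lookup. The sketch below is illustrative only; propose, test, and revise are hypothetical stand-ins for what a reasoning system does internally.

```python
def reason(question, evidence, propose, test, revise, max_rounds=5):
    """Illustrative hypothesize-test-correct loop (not a real system).

    propose(question)                  -> an initial hypothesis
    test(hypothesis, evidence)         -> contradictions found (possibly none)
    revise(hypothesis, contradictions) -> a corrected hypothesis
    """
    hypothesis = propose(question)
    for _ in range(max_rounds):
        contradictions = test(hypothesis, evidence)
        if not contradictions:
            return hypothesis  # logically consistent with the evidence
        # Error recognition: the system corrects its own assumptions.
        hypothesis = revise(hypothesis, contradictions)
    return hypothesis  # best effort within the rounds allowed
```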

The third is the agentic workflow.
The formula is simple: language in, memory in, language out.
The agent holds context, history, and preferences across time.
It can break a goal down into steps and execute them in sequence.
In Eric Schmidt’s example of building a house, you have a lot-finder agent, a regulation-checker agent, and a design agent, each passing information to the next in a coordinated action loop.
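As a hedged illustration of that loop, here is a minimal Python sketch. The agents, the shared memory, and their outputs are all invented; what matters is the shape: each agent reads the shared context, does its step, and passes language out to the next.

```python
from dataclasses import dataclass, field

@dataclass
class Memory:
    """Context, history, and preferences held across the whole workflow."""
    facts: dict = field(default_factory=dict)

def lot_finder(goal: str, memory: Memory) -> str:
    memory.facts["lot"] = "parcel 42, south-facing"  # invented result
    return "Found a lot: " + memory.facts["lot"]

def regulation_checker(goal: str, memory: Memory) -> str:
    lot = memory.facts["lot"]  # reads what the previous agent stored
    memory.facts["rules"] = f"zoning and setbacks checked for {lot}"
    return memory.facts["rules"]

def designer(goal: str, memory: Memory) -> str:
    return f"Design drafted for {memory.facts['lot']}, within: {memory.facts['rules']}"

def run_workflow(goal: str, agents) -> str:
    """Language in, memory in, language out, one agent at a time."""
    memory = Memory()
    report = ""
    for agent in agents:
        report = agent(goal, memory)  # each step can read and extend memory
    return report

print(run_workflow("build a house", [lot_finder, regulation_checker, designer]))
```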

At The Geneva Learning Foundation, we made our first agentic AI hire.
We called her Claude Cardot, and she is an OpenClaw agent.
We are learning to work with her to ramp up our capacity to support the network of more than 80,000 humanitarian and health workers that The Geneva Learning Foundation currently reaches and engages.
Agents are basically AI that gets things done, and language in, memory in, language out is critical to that.
Agents are the key to artificial intelligence for health.
The new AI divide: the compute squeeze
The challenge we are interested in lies in this divide.
On one side, you have the high-income knowledge worker.
The co-worker for that knowledge worker is going to be a specialized agent, priced somewhere between 2,000 and 20,000 dollars per month.
On the other side, in the humanitarian sector, we are talking about districts and local communities in Africa, Asia, Latin America, and in pockets of extreme poverty in the global north, in Western Europe and North America.

There, the frontline practitioner gets access to basic chatbots and rudimentary tools, with very limited compute.
This is a new kind of digital divide, and the 2025 State of AI Report confirms it.
The compute squeeze is consolidating power in the hands of a handful of hyperscalers.
There is also something particularly nasty happening, which is that a lot of frontier AI tools and platforms are geo-locked.
You cannot get access to them in Kinshasa the way you can in Geneva.
What does epistemic injustice have to do with artificial intelligence for health?
Underneath that resource constraint, there is a risk of what we call epistemic injustice.
This is injustice in terms of knowledge, and it has far-reaching consequences for health.
Epistemic injustice kills.
There are two forms of it.
The first is testimonial injustice, which is the credibility deficit faced by local practitioners when their knowledge is dismissed because of who they are or where they work.
The second is hermeneutical injustice, which is the lack of a shared language to express local experience in the terms that global systems recognize.
There is a disconnect: frontline tacit knowledge tends to be invisible and heavily context-dependent.

Global norms and guidelines from international agencies, together with the standardized reporting and the norms and standards dictated by funders, are highly visible but low in context.
The risk is that the advances in artificial intelligence for health will flow to the top of that inverted pyramid: to the global norms, the guidelines, and the standardized reporting.
We believe the biggest return on investment is likely to be investment in AI for health at that frontline tacit knowledge level.
The architectural shift: from tool to partner
There is an architectural shift underway, and it is the shift from tool to partner, from tool to agent, from tool to co-worker.
The automation model is simple.
You have a human input, machine processing, and an output.
The partner model gives you something different.
It gives you collaborative intelligence, which then opens a path where human nodes and AI nodes work together, connected to each other through dialogue, validation, pattern recognition, and adaptation.
What we really want, in order to be relevant and useful to the bottom of that inverted pyramid, is to move from algorithmic recommendations to collaborative intelligence.

The seven principles of AI partnership
We have developed the AI4Health framework, which is available on our website at learning.foundation/ai.
It is organized around seven principles.

At the center is human agency.
Around human agency, you have the capacity for local leadership, collective intelligence, rapid adaptation, context sensitivity, knowledge flow, and sustainability.
The protocol is really about maintaining and strengthening human agency while leveraging the machine capabilities we are seeing.
Bridging the contextual gap in artificial intelligence for health
How do we bridge the gap between global data and local reality?
On the global data side, you have universal models, large-scale analytics, and pattern recognition.
On the local reality side, what matters is trust networks, cultural nuances, and resource constraints, because you are there every day.

We believe that peer learning networks, where the peers include both human and AI co-workers, are potentially one of the most powerful and highest-return solutions that can support work for health in the coming years, and potentially the coming decades.
Artificial intelligence may identify a disease pattern from a massive dataset that no human or team of humans could ever sift through on their own.
But a human may also identify that nobody comes to the market on Tuesdays, and that has not been documented in any digital system in a way that lets it become part of the dataset the AI is examining.
That is one example of how we make explicit the potential for connection between humans and their AI co-workers.
The full learning cycle
At The Geneva Learning Foundation, we have developed what we call the full learning cycle.
Knowledge is necessary but insufficient.
We live in a world of knowledge abundance.
Access to information remains a problem on some issues, but even that is decreasing.
What we think is at stake is how you move from knowledge to action.
We built an engine premised on reciprocal trust, which is what you get in a strong, healthy network.

It takes its actors and its leaders through a cycle.
It starts with analysis, where network members give feedback to each other, help each other improve, and support each other through high-quality, high-bandwidth interaction.
It moves into action, where humans take the analysis and figure out what steps to take to move forward toward the goal.
It then moves to mobilization, which is how you grow beyond individual actions and initiatives through social presence, by validating lived experience.
And then it takes you right back to analysis.
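Purely as an illustration of the sequence, not of any system we run, the cycle can be written down as a repeating loop. The stage names come from the talk; the descriptions paraphrase it.

```python
from itertools import cycle

# The full learning cycle as a repeating sequence (illustrative only).
STAGES = [
    ("analysis", "peers give feedback and improve each other's work"),
    ("action", "humans turn the analysis into concrete next steps"),
    ("mobilization", "individual initiatives grow through social presence"),
]

def full_learning_cycle(rounds: int):
    """Yield (stage, description) pairs, looping back to analysis."""
    stages = cycle(STAGES)
    for _ in range(rounds * len(STAGES)):
        yield next(stages)

for stage, description in full_learning_cycle(rounds=2):
    print(f"{stage}: {description}")
```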
The multiplication effect
This has a powerful multiplier effect.

This is what we mean when we say peer learning to action network, and this is what we have been doing for our first decade of existence.
We are now entering the second decade.
What we see, based on data from our COVID-19 Peer Hub in January 2020, is that human-to-human peer learning networks working through this full learning cycle implement approximately seven times faster than isolated efforts, and at roughly 90 percent lower cost.
That is a compelling impact value proposition.
And we are now exploring how artificial intelligence for health can augment this further.
Applied schematic: immunization systems
The question for us has been how this fits in with the San Francisco Consensus.
Let me take three specific examples.
The first is immunization systems, where The Geneva Learning Foundation has done the most work.
Health workers are able to identify what we might call micro-barriers.
In this case, market days and religion, but it could be many different things.
You can imagine an AI model looking for macro patterns around the goal of reaching zero-dose children.
These are children who have never received any vaccines, and who could die what I call stupid deaths: avoidable deaths from vaccine-preventable diseases.

There is an obvious role for pattern recognition at the scale and scope at which we work, which we were simply not able to do prior to the rise of these AI tools and AI agents.
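A hedged sketch of what that combination might look like, with invented districts, scores, and barriers: the model flags likely zero-dose clusters from the macro pattern, and health workers attach the micro-barriers that no dataset holds.

```python
# Illustrative only: every value below is invented.
model_flags = {"district_a": 0.91, "district_b": 0.34, "district_c": 0.78}

field_notes = {  # micro-barriers recorded by frontline health workers
    "district_a": ["market day on Tuesdays", "clinic hours clash with prayers"],
    "district_c": ["river crossing impassable in rainy season"],
}

THRESHOLD = 0.5  # hypothetical cut-off for follow-up

# Rank the AI's macro flags, then pair each with local explanations.
for district, score in sorted(model_flags.items(), key=lambda kv: -kv[1]):
    if score >= THRESHOLD:
        barriers = field_notes.get(district, ["no local explanation yet"])
        print(f"{district} (risk {score:.2f}): " + "; ".join(barriers))
```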
Applied schematic: climate change and health
The second example is climate change and health.
In 2023, we organized for the first time a peer learning event in which 4,700 health and humanitarian workers shared their observations of what was changing in the climate and what the impacts were on the health of the communities they serve.
You can imagine frontline workers as sensors, with AI pattern recognition helping us detect early warning signals.
That is distributed sensing.

It could detect changing disease patterns before the large-scale institutional government surveillance systems, or the global surveillance systems, can actually spot them.
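As a minimal sketch of the idea, with invented numbers: weekly counts from frontline reports, and a simple deviation test standing in for real AI pattern recognition.

```python
from statistics import mean, stdev

def early_warning(weekly_counts, threshold=2.0):
    """Flag the latest week if it deviates sharply from its own history.

    A z-score is a crude stand-in for AI pattern recognition; the point
    is that the signal originates in frontline reports, long before a
    central surveillance system would aggregate it.
    """
    history, latest = weekly_counts[:-1], weekly_counts[-1]
    mu, sigma = mean(history), stdev(history)
    z = (latest - mu) / sigma if sigma else 0.0
    return z >= threshold, z

# Invented example: one district's weekly reports of fever cases.
flagged, z = early_warning([12, 9, 14, 11, 10, 13, 31])
print(f"alert={flagged}, z-score={z:.1f}")
```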
Applied schematic: neglected tropical diseases
The third example is neglected tropical diseases.
Here, you have knowledge in community X and an information gap in community Y.

You can imagine an AI facilitator enabling lateral knowledge transfer.
That facilitator role is precisely the one that The Geneva Learning Foundation’s human facilitators have been playing in the past, and have done relatively well, based on the outcome indicators we use.
The question for us is what happens when that facilitation is done by an artificial intelligence co-worker, or mediated, facilitated, or supported by one.
That is what we are interested in figuring out now.
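A hedged sketch of the matching step, with invented communities and topics. The matching is the easy part; the facilitator, human or AI, is what turns a match into an actual exchange.

```python
# Illustrative only: community names and topics are invented.
knowledge = {  # what each community has learned by doing
    "community_x": {"mass drug administration", "school screening"},
    "community_z": {"vector control", "community sensitization"},
}
gaps = {"community_y": {"school screening", "vector control"}}

def facilitate(gaps, knowledge):
    """Suggest lateral transfers: who already knows what a community needs."""
    for seeker, needed in gaps.items():
        for holder, known in knowledge.items():
            shared = needed & known
            if shared:
                yield seeker, holder, sorted(shared)

for seeker, holder, topics in facilitate(gaps, knowledge):
    print(f"connect {seeker} with {holder} on: {', '.join(topics)}")
```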
The visual ethics filter: countering poverty porn 2.0
Recently, there has been a debate, at least in global health, in the pages of journals like The Lancet, but also in The Guardian, around what some have called poverty porn 2.0.
I do not particularly like the term, but the problem is real.
It has arisen from the use of generative AI by nonprofit organizations for their marketing and communications, in which they create images of fake poverty to try to elicit a positive response that will lead to a donation.
There is an economic driver, with communications teams making generative AI requests that are inherently colonial and racist, and therefore getting what you would expect.

But it is also more than that.
Even when you frame your request carefully, if you ask for a doctor taking care of children, the doctor will be white and the children will be brown or black.
It gets worse.
Even if you explicitly specify a black African doctor with children, after two or three iterations at the most, the doctor reverts to being white.
So there is a problem with the user and the colonial mindset, and there is a real challenge in terms of how we get dignified representation.
These AI systems and their datasets are distorting mirrors of our societies, of their hierarchies, of their structural racism.
As I have argued elsewhere, the problem is not the tool, the problem is the user and the colonial mindset, and AI did not create this impulse.
It just made it cheaper, faster, and easier to execute.
System barrier: the transparency paradox
The next challenge is what we call the transparency paradox.

A recent survey showed that pretty much everyone in the humanitarian sector, which is closely linked to global health, is using generative AI in various ways in their work.
But almost no one is disclosing it, and very few organizations actually have policies, or the means to apply those policies, around the use of generative AI.
We operate in a punitive culture.
Donor-driven requirements mean that if you deviate, if you try doing something differently, if you try something new, you are more likely to be punished than rewarded for it.
A punitive culture leads to no disclosure or forced disclosure.
Forced disclosure leads to devaluation of the work, regardless of its quality.
That leads to hidden use, and what we end up with is generic mush.
The question is how to support staff, teams, organizations, and networks in opening up, in disclosing, and in explaining how they are using these tools, why, what results they are getting, and what difference it makes.
A learning culture, by contrast, enables disclosure, collective learning, and innovation within a safe harbor.
Cognitive risk: the e-bike effect

The third risk I want to get into is a cognitive risk.
We call it the e-bike effect.
If you ride an electric bike, you can get to your destination, but it does not build muscle, at least not the way a conventional bike does.
It is the same with the brain.
The risk is metacognitive laziness, where the fluency of AI output suppresses the metacognitive cues that normally trigger critical evaluation.
A recent randomized controlled trial with medical students, reported in the OECD Digital Education Outlook 2026, showed that those given immediate AI access performed no better than the AI working alone.
Only students who developed their clinical reasoning before AI was introduced achieved genuine human-AI synergy.
To counter this risk, we propose what we call the five Cs.
They are creativity, curiosity, critical thinking, consciousness, and collective care.
We need to think about what these mean in the AI4Health context.
What are the risks at every level, for the local practitioner, for the global technical officer, for scientists, for administrators, for planners, and for decision makers?
Investment roadmap: infrastructure for the new epoch
I have highlighted some of the opportunities, some of how we can transition from human-to-human peer learning networks that gradually weave in AI agents as co-workers, and some of the risks.
If we look at the investment pyramid, the direction of investment priority is clear.
The first investment is still network development.
That means peer learning platforms and human connection, the infrastructure that supports human capacity and the human ability to connect.
This is going to be increasingly difficult to do, and also increasingly important for getting to the other stages.
The second investment is in capacity strengthening, which includes agentic literacy.
That is the ability to know and understand how to work with an agent, what an agent is, what it means to have an agent as a co-worker, and how that is different from using a tool.
The third investment is in AI partnership innovation, and specifically in systems that preserve agency.
These are systems designed not to stifle or diminish human creativity, the capacity to think on your own, to test something new, to do something differently.

The strategic divergence
There is a risk of strategic divergence.
We see two paths.
Path one is plain and simple digital colonialism.
It is characterized by high-income agents, geo-locked tools, dependency, and reinforced inequality.
Path two is the harder path.

It is networked intelligence, where AI is a co-worker, peer learning is the bridge, and the foundation is epistemic justice, leading to distributed capability in which what adds up is greater than the sum of its parts.
We believe we are at that fork in the road.
The danger is that AI perfectly preserves the old poison we have yet to flush out.
Closing
This is, as I said, very much thinking aloud, and very much a work in progress.
It may not speak to, or resonate with, the pharmaceutical industry or medical professionals as such.
It is really about the public health context in low- and middle-income countries.
But I think some of it may be transferable.
I have written about this and continue to do so, because we believe continuous learning is the most important dimension here, together with leadership for learning.
The source architecture for the work I have presented today is available in a series of articles, including A global health framework for Artificial Intelligence as co-worker, The San Francisco Consensus, The agentic AI revolution, The crisis in scientific publishing, and Artificial intelligence, real racism.
Learn more about the AI4Health programme at learning.foundation/ai.
I look forward to the discussion and dialogue.
Thank you.
