
What is The Geneva Learning Foundation’s Impact Accelerator?

Reda Sadki | Global health

Imagine a social worker in Ukraine supporting children affected by the humanitarian crisis. Thousands of kilometers away, a radiation specialist in Japan is trying to find effective ways to communicate with local communities. In Nigeria, a health worker is tackling how to increase immunization coverage in their remote village. These professionals face very different challenges in very different places. Yet when they joined their first “Impact Accelerator”, something remarkable happened. They all found a way forward. They all made real progress. They all discovered they are not alone.

The Impact Accelerator is a simple, practical method developed by The Geneva Learning Foundation that helps professionals turn intent into action, results, and outcomes. It has worked equally well in every country where it has been tried. It has helped people – whatever their knowledge domain or context – strengthen action and accelerate progress to improve health outcomes. Each time, in each place, whatever the challenge, it has produced the same powerful results.

The social worker joins other professionals facing similar challenges. The radiation specialist connects with safety experts dealing with comparable concerns. The health worker collaborates with others working to improve immunization. Each group shares a common purpose.

What makes the Impact Accelerator different?

Most training programs teach you something and then send you away. You return to your workplace full of ideas but face the same obstacles. You have new knowledge but struggle to apply it. (Some people call this “knowledge transfer” but it is not only about knowledge. Others call this the “applicability problem”.) You feel alone with your challenges.

The Impact Accelerator works differently. It stays with you as you implement change. It connects you with others facing similar challenges. It helps you take small, concrete steps each week toward your bigger goal.

Each Impact Accelerator brings together professionals working on the same type of challenge. Social workers who support children join with others who do the same – but the group may also include teachers and psychologists they do not usually work with. Safety specialists connect with safety specialists, but also with people in other job roles. It is their shared purpose that makes this diversity productive: every discussion, every shared experience, every piece of advice directly applies to their work.

Think of it like learning to ride a bicycle. Traditional training is like someone explaining how bicycles work. The Impact Accelerator is like having someone run alongside you, keeping you steady as you pedal, cheering when you succeed, and helping you get back on when you fall. Everyone learns to ride, together. And everyone is going somewhere.

How does the Impact Accelerator work?

The Impact Accelerator follows a simple weekly rhythm that fits into daily work. It is learning-based work and work-based learning.

Monday: Set your goal

Every Monday, you decide on one specific action you will complete by Friday. Not a vague hope or a grand plan. One concrete thing you can actually do.

For example:

  • “I will create a safe space activity for five children showing signs of trauma.”
  • “I will develop a visual guide for the new radiation monitoring procedures.”
  • “I will meet with three community leaders to discuss vaccine concerns.”

You share this goal with others in the Accelerator. This creates accountability. You know that on Friday, your peers will ask how it turned out.

Wednesday: Check in with peers

Midweek, you connect with others in your group who face the same type of challenges. You share what is working, what is difficult, and what you are learning.

This is where magic happens. Someone else tried something that failed. Now you know to try differently. Another person found a creative solution. Now you can adapt it for your situation. You realize you are part of something bigger than yourself.

Friday: Report and reflect

On Friday, you report on your progress. Did you achieve your goal? What happened when you tried? What did you learn?

This is not about judging success or failure. Sometimes the most valuable learning comes from things that did not work as expected. The important thing is that you took action, you reflected on what happened, and you are ready to try again next week.

Monday again: Build on what you learned

The next Monday, you set a new goal. But now you are not starting from zero. You have the experience from last week. You have ideas from your peers. You have momentum.

Week by week, action by action, you make progress toward your larger goal.

The power of structured support in the Impact Accelerator

The Impact Accelerator provides several types of support to help you succeed.

Peer learning networks

You join a community of professionals who understand your challenges because they face similar ones. 

Each Impact Accelerator brings together people working on the same type of challenge. This shared purpose means that every suggestion, every idea, every lesson learned is likely to be relevant to your work. The learning comes not from distant experts but from people doing the same work you do. Their solutions are practical and tested in real conditions like yours.

Guided structure

While you choose your own goals and actions, the Accelerator provides a framework that keeps you moving forward. The weekly rhythm creates momentum. The reporting requirements ensure reflection. The peer connections prevent isolation.

This structure is like the banks of a river. The water (your energy and creativity) flows freely, but the banks keep it moving in a productive direction.

Expert guidance when needed

Sometimes you need specific technical input or help with a particular challenge. The Accelerator provides “guides on the side” – experts who offer targeted support without taking over your process. They help you think through problems and connect you with resources, but you remain in charge of your own change effort.

What participants achieve

Across different countries and different challenges, Impact Accelerator participants report similar outcomes.

Increased confidence

“Before, I knew what should be done but felt overwhelmed about how to start. Now I take one step at a time and see real progress.” This confidence comes from successfully completing weekly actions and seeing their impact.

Tangible progress

Participants do not just learn about change; they create it. A vaccination program reaches new communities. Safety procedures actually get implemented. Children receive support when they need it. The changes may start small, but they are real and they grow.

Expanded networks

“I used to feel like I was the only one facing these problems. Now I have colleagues across my country who understand and support me.” These networks last beyond the Accelerator, providing ongoing support and collaboration.

Enhanced problem-solving

Through weekly practice and peer exchange, participants develop stronger skills for analyzing challenges and developing solutions. They learn to break big problems into manageable actions and to adapt based on results.

Resilience in facing obstacles

Every change effort faces barriers. The Accelerator helps participants expect these obstacles and work through them with peer support rather than giving up when things get difficult.

How can the same methodology work everywhere?

The Impact Accelerator has succeeded across vastly different contexts – from supporting children in Ukrainian cities to enhancing radiation safety in Japanese facilities to improving immunization in Nigerian villages. Each Accelerator focuses on one specific challenge area, bringing together professionals who share that common purpose. Why does the same approach work for such different challenges?

The answer lies in focusing on universal elements of successful change:

  • Breaking big goals into weekly actions;
  • Learning from peers who understand your specific context and challenges;
  • Reflecting on what works and what does not;
  • Building momentum through consistent progress; and
  • Creating accountability through a community united by shared purpose.

Each group focuses on their specific challenge and context, but the process of creating change remains remarkably similar.

A typical participant journey in the Impact Accelerator

Let us follow Yuliia, a social worker in Ukraine helping children affected by the humanitarian crisis.

Week 1: Getting started

Yuliia joins the Impact Accelerator after developing her action plan. Her big goal: establish effective psychological support for 50 displaced children in her community center within three months.

On Monday, she sets her first weekly goal: “During daily activities, I will observe and document how 10 children are affected.”

By Friday, she has detailed observations. She notices that loud noises trigger strong reactions in most children, and that several withdraw completely during group activities. This gives her concrete starting points.

Week 2: Building on learning

Based on her observations, Yuliia sets a new goal: “I will create a quiet corner with calming materials and test it with three children who are withdrawn.”

During the Wednesday check-in, another social worker shares how she uses art therapy for non-verbal expression with traumatized children. A colleague working in a different city describes success with sensory materials. Yuliia incorporates both ideas into her quiet corner.

The quiet corner proves successful – two of the three children spend time there and begin to engage with the materials. One child draws for the first time since arriving at the center.

Week 3: Creative solutions

Yuliia’s new goal: “I will develop a simple ‘feelings chart’ with visual cues and introduce it during morning circle time.”

Her peers from Ukraine and all over Europe – all working with children – help refine the idea. A psychologist from another region shares that abstract emotions are hard for traumatized children to identify. She suggests using colors and weather symbols instead of facial expressions. Another colleague recommends making the chart interactive rather than static.

The feelings chart becomes a breakthrough tool. Children who never spoke about their emotions begin pointing to images. Yuliia’s colleagues can better understand and respond to children’s needs.

Week 4: Scaling what works

Energized by success, Yuliia aims higher: “I will train two other staff members to use the quiet corner and feelings chart, and create a simple guide for these tools.”

By now, Yuliia has concrete evidence that these approaches work. She documents specific examples of children’s progress. Her guide is so practical that the center director wants to share it with other locations.

The ripple effect

Yuliia’s tools spread throughout the network of centers supporting displaced children. Through the Accelerator network, colleagues adapt her approaches for different age groups and settings. Soon, hundreds of children across Ukraine benefit from these simple but effective interventions.

The evidence of impact

The true test of any approach is whether it creates lasting change. Impact Accelerator participants consistently report:

  • Specific improvements in their work that they can measure and document;
  • Sustained changes that continue after the Accelerator ends;
  • Solutions that others adopt and spread;
  • Professional growth that enhances all their future work; and
  • Networks that provide ongoing support and learning.

These outcomes appear whether participants work on mental health support in Ukraine, radiation safety in Japan, or immunization in Nigeria. The challenges differ, but the pattern of success remains consistent.

How we prove the Accelerator makes a difference

In global health, the biggest challenge is proving that your intervention actually caused the improvements you see. This is called “attribution.” How do we know that better health outcomes happened because of the Impact Accelerator and not for other reasons?

The Geneva Learning Foundation solves this challenge through a three-step process that connects the dots between learning, action, and results.

Step 1: Measuring where we start

Before participants begin taking action, they document their baseline – the current situation they want to improve. For example:

  • A social worker records how many children show severe trauma symptoms.
  • A radiation specialist documents current safety incident rates.
  • A health worker notes the vaccination coverage in their area.

These starting numbers give us a clear picture of where improvement begins.

Step 2: Tracking progress and actions

Every week, participants complete “acceleration reports” that capture two things:

  • The specific actions they took; and
  • Any changes they observe in their measurements.

This creates a detailed record connecting what participants do to what happens as a result. Week by week, the picture becomes clearer.
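
To make the structure of this record concrete, here is a minimal illustrative sketch (not the Foundation’s actual tooling; every field name and figure is invented) of how a documented baseline and weekly acceleration reports connect actions to measured change:

```python
from dataclasses import dataclass, field

@dataclass
class WeeklyReport:
    """One acceleration report: the actions taken and the measurement observed."""
    week: int
    actions: list[str]       # specific actions completed this week
    measurement: float       # the indicator being tracked, e.g. coverage in percent

@dataclass
class AccelerationRecord:
    """Links a documented baseline (Step 1) to week-by-week reports (Step 2)."""
    indicator: str
    baseline: float
    reports: list[WeeklyReport] = field(default_factory=list)

    def change_from_baseline(self) -> float:
        """How far the latest measurement has moved from the starting point."""
        if not self.reports:
            return 0.0
        return self.reports[-1].measurement - self.baseline

# Invented example: a health worker tracking vaccination coverage in their area.
record = AccelerationRecord(indicator="vaccination coverage (%)", baseline=54.0)
record.reports.append(WeeklyReport(1, ["met three community leaders"], 54.0))
record.reports.append(WeeklyReport(2, ["held outreach session on market day"], 58.5))
print(record.change_from_baseline())  # 4.5 percentage points above baseline
```

Step 3 then asks how much of that measured change can credibly be attributed to the actions recorded alongside it, a judgment made with peers rather than by the numbers alone.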

Step 3: Proving the connection

Here is where the Impact Accelerator becomes special. When participants see improvements, they must answer a crucial question: “How much of this change happened because of what you learned and did through the Accelerator?”

But they cannot just claim credit. They must prove it to their peers by showing:

  • Exactly which actions led to which results;
  • Why the changes would not have happened without their intervention; and
  • Evidence that their specific approach made the difference.

This peer review process is powerful. Your colleagues understand your context. They know what is realistic. They can spot when claims are too bold or when someone is being too modest. They ask tough questions that help clarify what really caused the improvements.

After the first-ever Accelerator in 2019, we compared six-month implementation progress between participants who joined this final stage and a control group that had also developed action plans but did not join.

Why this method works

This approach solves several problems that make attribution difficult:

  1. Traditional studies often cannot capture the complexity of real-world change. The Impact Accelerator’s method shows not just that change happened, but how and why it happened.
  2. Self-reporting can be unreliable when people work alone. But when you must convince peers who understand your work, the reports become more accurate and honest.
  3. Numbers alone do not tell the whole story. By combining measurements with detailed descriptions of actions and peer validation, we get a complete picture of how change happens.

The invitation to act

Around the world, professionals like you are transforming their work through the Impact Accelerator. They start with the same doubts you might have: “Can I really create change? Will this work in my context? Do I have time for this?”

Week by week, action by action, they discover the answer is yes. Yes, they can create change. Yes, it works in their context. Yes, they can find time because the Accelerator fits into their real work rather than adding to it.

The Impact Accelerator does not promise overnight transformation. It offers something better: a proven process for creating real, sustainable change through your own efforts, supported by peers who understand your journey.

If you work in a field where you seek to make a difference, the Impact Accelerator can help you move from good intentions to meaningful impact. The same process can work for you.

The question is not whether the Impact Accelerator can help you create change. The question is: What change do you want to create?

Your journey can begin Monday.

Image: The Geneva Learning Foundation Collection © 2025


PFA Accelerator: across Europe, practitioners learn from each other to strengthen support to children affected by the humanitarian crisis in Ukraine

Reda Sadki | Global health

In the PFA Accelerator, practitioners supporting children are teaching each other what works.

Every Friday, more than 240 education, social work, and health professionals across Ukraine and Europe file reports on the same question: What happened when you tried to help a child this week?

Their answers – grounded in their daily work – are creating new insights into how Psychological First Aid (“PFA”) works in active conflict zones, displacement centers, and communities hosting Ukrainian families. These practitioners implement practical actions with children each week, then share what they learn with colleagues from all over Europe who face similar challenges.

The tracking reveals stark patterns. More than half work with children showing anxiety, fear, and stress responses triggered by air raids, family separation, or displacement. Another 42% focus on children struggling to connect with others in unfamiliar places—Ukrainian teenagers isolated in Polish schools, families in Croatian refugee centers, children moved from eastern Ukraine to western regions.

“We have a very unique experience that you cannot get through lectures,” said PFA practitioner and Ukrainian-language facilitator Hanna Nyzkodobova during Monday’s session, speaking to over 200 of her peers. “The Ukrainian context is not comparable to any other country.”

Locally-led organizations leading implementation

The programme’s most striking feature is its reach into organizations operating closest to active hostilities—precisely where support needs are most acute and conventional training programs may not operate. For example, the charitable foundation “Everything will be fine Ukraine” implements approaches within 20 kilometers of active fighting, supporting 6,000 children across Donetsk, Dnipropetrovsk, and Kharkiv regions. Weekly reports from their participants document how psychological first aid helps when air raid sirens interrupt sessions or when families face repeated displacement.

Posmishka UA, Ukraine’s largest participating organization with over 400 staff members, demonstrates how peer learning can support local actors directly at scale. During Monday’s learning session, Posmishka participants shared experiences from work in local communities that would be difficult to capture through conventional research or training approaches.

South Ukrainian National Pedagogical University has integrated the program across 339 faculty and 3,783 students, bringing PFA into the work of its Mental Health Center. Youth Platform is now offering PFA to 600 young people aged 14-35 across five Ukrainian regions, while the All-Ukrainian Public Center “Volunteer” scales implementations to over 10,000 children nationwide.

These partnerships reveal something crucial: when crisis response is most urgent, peer learning between local actors may prove more effective and sustainable than waiting for external expertise and costly training to develop solutions.

Learning what works through implementation

The Geneva Learning Foundation (TGLF) and the International Federation of Red Cross and Red Crescent Societies (IFRC), within the project Provision of quality and timely psychological first aid to people affected by Ukraine crisis in impacted countries, supported by the European Union, created what they call the PFA Accelerator—a component of a broader certificate program reaching over 330 organizations supporting more than 1 million children affected by the humanitarian crisis in Ukraine. This “Accelerator” methodology emerged from recognizing that new approaches are necessary in unprecedented crises. When children face trauma from active conflict, family separation, and repeated displacement simultaneously, guidelines can help but cannot tell you how to adapt to your specific situation.

The breakthrough lies in turning scale from an obstacle into an advantage. Rather than trying to train individuals who then work in isolation, the programme creates learning networks where practitioners immediately share what works, what doesn’t, and why.

Analysis of the first 60 action plans shows PFA Accelerator participants setting specific, measurable goals: 88% of those working with anxious children plan concrete emotional regulation activities rather than vague “support” approaches.

Iryna from Kryvyi Rih reported that schools actively sought partnerships after her initial outreach succeeded: “They wanted us to come to them,” she said, describing how her mobile facilitation team exceeded the goal she set for herself in the Accelerator – because she managed to help school administrators recognize the value of Psychological First Aid (PFA) for children.

Practical innovations emerge from necessity

The weekly implementation requirement forces creative problem-solving with limited resources. Mariya from Zaporizhzhia described combining parent and child sessions: “We conducted joint sessions with psychosocial support, where together we learned calming techniques and did exercises oriented toward team building.” This approach addressed both parent stress and child needs while optimizing scarce time and space resources.

In the PFA Accelerator, other participants can then share their feedback – or realize that Mariya’s local solution can help them, too. “The exchange of experience that happens on this platform is very important because someone is more experienced, someone less experienced,” noted participant Liubov during the Ukrainian session.

Such practical adaptations become documented knowledge shared across the network. However, in the first week, although 82% identified colleague support as their primary resource, only 49% initially planned collaborative approaches involving other adults. The peer feedback process helps participants recognize such patterns and adjust their methods accordingly.

Defying distance to solve problems together

What emerges is not only better implementation of existing approaches—it’s new knowledge about how psychological support works under difficult conditions. The weekly reports create rapid feedback loops showing which approaches help children cope with ongoing uncertainty, how to maintain therapeutic relationships during displacement, and which interventions remain effective when basic safety cannot be guaranteed.

The programme operates across Ukraine and 27 European countries, supported by over 80 European focal points and more than 20 organizational partners. This enables pattern recognition impossible without scale. Practitioners can better discern which approaches work across different contexts, how cultural differences affect intervention effectiveness, and which methods prove most adaptable to rapidly changing circumstances.

The larger significance extends beyond Ukraine. By demonstrating how local actors can rapidly develop and refine effective practices when given proper structure for peer learning, the programme offers a model for responding to other crises where traditional expert-led approaches prove too slow or disconnected from local realities. Sometimes the most valuable expertise exists not in training manuals but in the accumulated experience of practitioners working directly with affected populations.

Learn more and enroll in the PFA Accelerator: https://www.learning.foundation/ukraine-accelerator

This project is funded by the European Union. Its contents are the sole responsibility of TGLF and IFRC, and do not necessarily reflect the views of the European Union.

Photo © Sébastien Delarque

Eric Schmidt’s San Francisco Consensus about the impact of artificial intelligence

Reda Sadki | Artificial intelligence

“We are at the beginning of a new epoch,” Eric Schmidt declared at the RAISE Summit in Paris on 9 July 2025. The former Google CEO’s message, grounded in what he calls the San Francisco Consensus, carries unusual weight—not necessarily because of his past role leading one of tech’s giants, but because of his current one: advising heads of state and industry on artificial intelligence.

“When I talk to governments, what I tell them is, one, ChatGPT is great, but that was two years ago. Everything’s changed again. You’re not prepared for it. And two, you better get organized around it—the good and the bad.”

At the Paris summit, he shared what he calls the “San Francisco Consensus”—a convergence of belief among Silicon Valley’s leaders that within three to six years AI will fundamentally transform every aspect of human activity.

Whether one views this timeline as realistic or delusional matters less than the fact that the people building AI systems—and investing hundreds of billions in infrastructure—believe it. Their conviction alone makes the Consensus a force shaping our immediate future.

“There is a group of people that I work with. They are all in San Francisco, and they have all basically convinced themselves that in the next two to six years—the average is three years—the entire world will change,” Schmidt explained. (He initially referred to the Consensus as a kind of inside joke.)

He carefully framed this as a consensus rather than fact: “I call it a consensus because it’s true that we agree… but it’s not necessarily true that the consensus is true.”

Schmidt’s own position became clear as he compared the arrival of artificial general intelligence (“AGI”) to the Enlightenment itself. “During the Enlightenment, we as humans learned from going from direct faith in God to using our reasoning skills. So now we have the arrival of a new non-human intelligence, which is likely to have better reasoning skills than humans can have.”

The three pillars of the San Francisco Consensus

The Consensus rests on three converging technological revolutions:

1. The language revolution

Large language models like ChatGPT captured public attention by demonstrating AI’s ability to understand and generate human language. But Schmidt emphasized these are already outdated. The real transformation lies in language becoming a universal interface for AI systems—enabling them to process instructions, maintain context, and coordinate complex tasks through natural language.

2. The agentic revolution

“The agentic revolution can be understood as language in, memory in, language out,” Schmidt explained. These are AI systems that can pursue goals, maintain state across interactions, and take actions in the world.

His deliberately mundane example illustrated the profound implications: “I have a house in California, I want to build another one. I have an agent that finds the lot, I have another agent that works on what the rules are, another agent that works on designing the building, selects the contractor, and at least in America, you have an agent that then sues the contractor when the house doesn’t work.”

The punchline: “I just gave you a workflow example that’s true of every business, every government, and every group human activity.”

3. The reasoning revolution

Most significant is the emergence of AI systems that can engage in complex reasoning through what experts call “inference”—the process of drawing conclusions from data—enhanced by “reinforcement learning,” where systems improve by learning from outcomes.

“Take a look at o3 from ChatGPT,” Schmidt urged. “Watch it go forward and backward, forward and backward in its reasoning, and it will blow your mind away.” These systems use vastly more computational power than traditional searches—”many, many thousands of times more electricity, queries, and so forth”—to work through problems step by step.

The results are striking. Google’s math model, says Schmidt, now performs “at the 90th percentile of math graduate students.” Similar breakthroughs are occurring across disciplines.

What is the timeline of the San Francisco Consensus?

The Consensus timeline seems breathtaking: three years on average, six in Schmidt’s more conservative estimate. But the direction matters more than the precise date.

“Recursive self-improvement” represents the critical threshold—when AI systems begin improving themselves. “The system begins to learn on itself where it goes forward at a rate that is impossible for us to understand.”

After AGI comes superintelligence, which Schmidt defines with precision: “It can prove something that we know to be true, but we cannot understand the proof. We humans, no human can understand it. Even all of us together cannot understand it, but we know it’s true.”

His timeline? “I think this will occur within a decade.”

The infrastructure gamble

The Consensus drives unprecedented infrastructure investment. Schmidt addressed this directly when asked about whether massive AI capital expenditures represent a bubble:

“If you ask most of the executives in the industry, they will say the following. They’ll say that we’re in a period of overbuilding. They’ll say that there will be overcapacity in two or three years. And when you ask them, they’ll say, but I’ll be fine and the other guys are going to lose all their money. So that’s a classic bubble, right?”

But Schmidt sees a different logic at work: “I’ve never seen a situation where hardware capacity was not taken up by software.” His point: throughout tech history, new computational capacity enables new applications that consume it. Today’s seemingly excessive AI infrastructure will likely be absorbed by tomorrow’s AI applications, especially if reasoning-based AI systems require “many, many thousands of times more” computational power than current models.

The network effect trap

Schmidt’s warnings about international competition reveal why AI development resembles a “network effect business”—where the value increases exponentially with scale and market dominance becomes self-reinforcing. In AI, this manifests through:

  • More data improving models;
  • Better models attracting more users;
  • More users generating more data; and
  • Greater resources enabling faster improvement.

“What happens when you’ve got two countries where one is ahead of the other?” Schmidt asked. “In a network effect business, this is likely to produce slopes of gains at this level,” he said, gesturing sharply upward. “The opponent may realize that once you get there, they’ll never catch up.”
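
As a toy illustration of why a small head start compounds in a network-effect race, consider the sketch below. The starting values and weekly improvement rates are entirely invented; the point is only the shape of the curve, not the numbers.

```python
# Toy model: each player's capability grows in proportion to what it already has,
# because more capability attracts more users and data, which speeds further gains.
leader, follower = 1.10, 1.00            # invented starting capabilities
leader_rate, follower_rate = 0.30, 0.28  # invented weekly improvement rates

for week in range(52):
    leader += leader * leader_rate
    follower += follower * follower_rate

print(f"After a year, the leader's advantage has grown to {leader / follower:.1f}x")
# Because the leader's rate is even slightly higher, the ratio widens every week:
# the follower never closes the gap, which is the "never catch up" dynamic.
```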

This creates what he calls a “race condition of preemption”—a term from computer science describing a situation where the outcome depends critically on the sequence of events. In geopolitics, it means countries might take aggressive action to prevent rivals from achieving irreversible AI advantage.

The scale-free domains

Schmidt believes that some fields will transform faster due to their “scale-free” nature—domains where AI can generate unlimited training data without human input. Exhibit A: mathematics. “Mathematicians with whiteboards or chalkboards just make stuff up all day. And they do it over and over again.”

Software development faces similar disruption. When Schmidt asked a programmer what language they code in, the response—”Why does it matter?”—captured how AI makes specific technical skills increasingly irrelevant.

Critical perspectives on the San Francisco Consensus

The San Francisco Consensus could be wrong. Silicon Valley has predicted imminent breakthroughs in artificial intelligence before—for decades, in fact. Today’s optimism might reflect the echo chamber of Sand Hill Road. Fundamental challenges remain: reliability, alignment, the leap from pattern matching to genuine reasoning.

But here is what matters: the people building AI systems believe their own timeline. This belief, held by those controlling hundreds of billions in capital and the world’s top technical talent, becomes self-fulfilling. Investment flows, talent migrates, governments scramble to respond.

Schmidt speaks to heads of state because he understands this dynamic. The consensus shapes reality through sheer force of capital and conviction. Even if wrong about timing, it is setting the direction. The infrastructure being built, the talent being recruited, the systems being designed—all point toward the same destination.

The imperative of speed

Schmidt’s message to leaders carried the urgency of hard-won experience: “If you’re going to [invest], do it now and move very, very fast. This market has so many players. There’s so much money at stake that you will be bypassed if you spend too much time worrying about anything other than building incredible products.”

His confession about Google drove the point home: “Every mistake I made was fundamentally one of time… We didn’t move fast enough.”

This was not generic startup advice but specific warning about exponential technologies. In AI development, being six months late might mean being forever behind. The network effects Schmidt described—where leaders accumulate insurmountable advantages—are already visible in the concentration of AI capabilities among a handful of companies.

For governments crafting AI policy, businesses planning strategy, or education institutions charting their futures, the timeline debate misses the point. Whether recursive self-improvement arrives in three years or six, the time to act is now. The changes ahead—in labor markets, in global power dynamics, in the very nature of intelligence—demand immediate attention.

Schmidt’s warning to world leaders was not about a specific date but about a mindset: those still debating whether AI represents fundamental change have already lost the race.

Photo credit: Paris RAISE Summit (8-9 July 2025) © Sébastien Delarque


Why peer learning is critical to survive the Age of Artificial Intelligence

Reda Sadki | Artificial intelligence, Global health

María, a pediatrician in Argentina, works with an AI diagnostic system that can identify rare diseases, suggest treatment protocols, and draft reports in perfect medical Spanish. But something crucial is missing. The AI provides brilliant medical insights, yet María struggles to translate them into action in her community. What is needed to realize the promise of the Age of Artificial Intelligence?

Then she discovers the missing piece. Through a peer learning network—where health workers develop projects addressing real challenges, review each other’s work, and engage in facilitated dialogue—she connects with other health professionals across Latin America who are learning to work with AI as a collaborative partner. Together, they discover that AI becomes far more useful when combined with their understanding of local contexts, cultural practices, and community dynamics.

This speculative scenario, based on current AI developments and existing peer learning successes, illuminates a crucial insight as we enter the age of artificial intelligence. Eric Schmidt’s San Francisco Consensus predicts that within three to six years, AI will reason at expert levels, coordinate complex tasks through digital agents, and understand any request in natural language.

Understanding how peer learning can bridge AI capabilities and human thinking and action is critical to prepare for this future.

Collaboration in the Age of Artificial Intelligence

The three AI revolutions—language interfaces, reasoning systems, and agentic coordination—will offer unprecedented capabilities. If access is equitable, this will be available to any health worker, anywhere. Yet having access to these tools is just the beginning. The transformation will require humans to learn together how to collaborate effectively with AI.

Consider what becomes possible when health workers combine AI capabilities with collective human insight:

  • AI analyzes disease patterns; peer networks share which interventions work in specific cultural contexts.
  • AI suggests optimal treatment protocols; practitioners adapt them based on local resource availability.
  • AI identifies at-risk populations; community workers know how to reach them effectively.

The magic happens in the integration of AI and human capabilities through peer learning. Think of it this way: AI can analyze millions of health records to identify disease patterns, but it may not know that in your district, people avoid the Tuesday clinic because that is market day, or that certain communities trust traditional healers more than government health workers.

When epidemiologists share these contextual insights with peers facing similar challenges—through structured discussions and collaborative problem-solving—they learn together how to adapt AI’s analytical power to local realities.

For example, when an AI system identifies a disease cluster, epidemiologists in a peer network can share strategies for investigating it: one colleague might explain how they gained community trust for contact tracing, another might share how they adapted AI-generated survey questions to be culturally appropriate, and a third might demonstrate how they used AI predictions alongside traditional knowledge to improve outbreak response.

This collective learning—where professionals teach each other how to blend AI’s computational abilities with human understanding of communities—creates solutions more effective than either AI or individual expertise could achieve alone.

Understanding peer learning in the Age of Artificial Intelligence

Peer learning is not about professionals sharing anecdotes. It is a structured learning process where:

  • Participants develop concrete projects addressing real challenges in their contexts, such as improving vaccination coverage or adapting AI tools for local use.
  • Peers review each other’s work using expert-designed rubrics that ensure quality while encouraging innovation.
  • Facilitated dialogue sessions help surface patterns across different contexts and generate collective insights.
  • Continuous cycles of action, reflection, and revision transform individual experiences into shared wisdom.
  • Every participant becomes both teacher and learner, contributing their unique insights while learning from others.

This approach differs fundamentally from traditional training because knowledge flows horizontally between peers rather than vertically from experts. When applied to human-AI collaboration, it enables rapid collective learning about what works, what fails, and why.

Why peer networks unlock the potential of the Age of Artificial Intelligence

Contextual intelligence through collective wisdom

AI systems train on global data and identify universal patterns. This is their strength. Human practitioners understand local contexts intimately. This is theirs. Peer learning networks create bridges between these complementary intelligences.

When a health worker discovers how to adapt AI-generated nutrition plans for local food availability, that insight becomes valuable to peers in similar contexts worldwide. Through structured sharing and review processes, the network creates a living library of contextual adaptations that make AI recommendations actionable.

Trust-building in the age of AI

Communities often view new technologies with suspicion. The most sophisticated AI cannot overcome this alone. But when local health workers learn from peers how to introduce AI as a helpful tool rather than a threatening replacement, acceptance grows.

In peer networks, practitioners share not just technical knowledge but communication strategies through structured dialogue: how to explain AI recommendations to skeptical patients, how to involve community leaders in AI-assisted health programs, how to maintain the human touch while using digital tools. This collective learning makes AI acceptable and valuable to communities that might otherwise reject it.

Distributed problem-solving

When AI provides a diagnosis or recommendation that seems inappropriate for local conditions, isolated practitioners might simply ignore it. But in peer networks with structured review processes, they can explore why the discrepancy exists and how to bridge it.

A teacher receives AI-generated lesson plans that assume resources her school lacks. Through her network’s collaborative problem-solving process, she finds teachers in similar situations who have created innovative adaptations. Together, they develop approaches that preserve AI’s pedagogical insights while working within real constraints.

The new architecture of collaborative learning

Working effectively with AI requires new forms of human collaboration built on three essential elements:

Reciprocal knowledge flows

When everyone has access to AI expertise, the most valuable learning happens between peers who share similar contexts and challenges. They teach each other not what AI knows, but how to make AI knowledge useful in their specific situations through:

  • Structured project development and peer review;
  • Regular assemblies where practitioners share experiences;
  • Documentation of successful adaptations and failures;
  • Continuous refinement based on collective feedback.

Structured experimentation

Peer networks provide safe spaces to experiment with AI collaboration. Through structured cycles of action and reflection, practitioners:

  • Try AI recommendations in controlled ways;
  • Document what works and what needs adaptation using shared frameworks;
  • Share failures as valuable learning opportunities through facilitated sessions;
  • Build collective knowledge about human-AI collaboration.

Continuous capability building

As AI capabilities evolve rapidly, no individual can keep pace alone. Peer networks create continuous learning environments where:

  • Early adopters share new AI features through structured presentations;
  • Groups explore emerging capabilities together in hands-on sessions;
  • Collective intelligence about AI use grows through documented experiences;
  • Everyone stays current through shared discovery and regular dialogue.

Evidence-based speculation: imagining peer networks that include both machines and humans

While the following examples are speculative, they build on current evidence from existing peer learning networks and emerging AI capabilities to imagine near-future possibilities.

The Nigerian immunization scenario

Based on Nigeria’s successful peer learning initiatives and current AI development trajectories, we can envision how AI-assisted immunization programs might work. AI could help identify optimal vaccine distribution patterns and predict which communities are at risk. Success would come when health workers form peer networks to share:

  • Techniques for presenting AI predictions to community leaders effectively;
  • Methods for adapting AI-suggested schedules to local market days and religious observances;
  • Strategies for using AI insights while maintaining personal relationships that drive vaccine acceptance.

This scenario extrapolates from current successes in peer learning for immunization in Nigeria to imagine enhanced outcomes with AI partnership.

Climate health innovation networks

Drawing from existing climate health responses and AI’s growing environmental analysis capabilities, we can project how peer networks might function. As climate change creates unprecedented health challenges, AI models will predict impacts and suggest interventions. Community-based health workers could connect these ‘big data’ insights with their own local observations and experience to take action, sharing innovations like:

  • Using AI climate predictions to prepare communities for heat waves;
  • Adapting AI-suggested cooling strategies to local housing conditions;
  • Combining traditional knowledge with AI insights for water management.

These possibilities build on documented peer learning successes in sharing health workers’ observations and insights about the impacts of climate change on the health of local communities.

Addressing AI’s limitations through collective wisdom

While AI offers powerful capabilities, we must acknowledge that technology is not neutral—AI systems carry biases from their training data, reflect the perspectives of their creators, and can perpetuate or amplify existing inequalities. Peer learning networks provide a crucial mechanism for identifying and addressing these limitations collectively.

Through structured dialogue and shared experiences, practitioners can:

  • Document when AI recommendations reflect biases inappropriate for their contexts;
  • Develop collective strategies for identifying and correcting AI biases;
  • Share techniques for adapting AI outputs to ensure equity;
  • Build shared understanding of AI’s limitations and appropriate use cases.

This collective vigilance and adaptation becomes essential for ensuring AI serves all communities fairly.

What this means for different stakeholders

For funders: Investing in collaborative capacity

The highest return on AI investment comes not from technology alone but from building human capacity to use it effectively. Peer learning networks:

  • Multiply the impact of AI tools through shared adaptation strategies;
  • Create sustainable capacity that grows with technological advancement;
  • Generate innovations that improve AI applications for specific contexts;
  • Build resilience through distributed expertise.

For practitioners: New collaborative competencies

Working effectively with AI requires skills best developed through structured peer learning:

  • Partnership mindset: Seeing AI as a collaborative tool requiring human judgment.
  • Adaptive expertise: Learning to blend AI capabilities with contextual knowledge.
  • Reflective practice: Regularly examining what works in human-AI collaboration through structured reflection.
  • Knowledge sharing: Contributing insights through peer review and dialogue that help others work better with AI.

For policymakers: Enabling collaborative ecosystems

Policies should support human-AI collaboration by:

  • Funding peer learning infrastructure alongside AI deployment;
  • Creating time and space for structured peer learning activities;
  • Recognizing peer learning as essential professional development;
  • Supporting documentation and spread of effective practices.

AI-human transformation through collaboration: A comparative view

Working with AI individually → Working with AI through structured peer networks:

  • Powerful tools but limited adaptation → Continuous adaptation through structured sharing
  • Insights remain isolated → Insights multiply across the network through peer review
  • Success depends on individual skill → Collective wisdom enhances individual capability
  • AI recommendations may miss local context → Context-aware applications emerge through dialogue
  • Trial and error in isolation → Structured experimentation with collective learning
  • Slow spread of effective practices → Rapid diffusion through documented innovations
  • Overwhelmed by rapid AI changes → Collective sense-making through facilitated sessions
  • Struggling to keep pace alone → Shared discovery in peer projects
  • Uncertainty about appropriate use → Growing confidence through structured support

The collaborative future

As AI capabilities expand, two paths emerge:

Path 1: Individuals struggle alone to make sense of AI tools, leading to uneven adoption, missed opportunities, and growing inequality between those who figure it out and those who do not.

Path 2: Structured peer networks enable collective learning about human-AI collaboration, leading to widespread effective use, continuous innovation, and shared benefit from AI advances.

What determines outcomes is how humans organize to learn and work together with AI through structured peer learning processes.

María’s projected transformation

Six months after her initial struggles, we can envision how María’s experience might transform. Through structured peer learning—project development, peer review, and facilitated dialogue—she could learn to see AI not as a foreign expert imposing solutions, but as a knowledgeable colleague whose insights she can adapt and apply.

Based on current peer learning practices, she might discover techniques from colleagues across Latin America and the rest of the world:

  • Methods for using AI diagnosis as a conversation starter with traditional healers;
  • Strategies for validating AI recommendations through community health committees;
  • Approaches for using AI analytics to support (not replace) community knowledge.

Following the pattern of peer learning networks, María would begin contributing her own innovations through structured sharing, particularly around integrating AI insights with indigenous healing practices. Her documented approaches would spread through peer review and dialogue, helping thousands of health workers make AI truly useful in their communities.

Conclusion: The multiplication effect

AI transformation promises to augment human capabilities dramatically. Language interfaces will democratize access to advanced tools. Reasoning systems will provide expert-level analysis. Agentic AI will coordinate complex operations. These capabilities are beginning to transform what individuals can accomplish.

But the true multiplication effect will come through structured peer learning networks. When thousands of practitioners share how to work effectively with AI through systematic project work, peer review, and facilitated dialogue, they create collective intelligence about human-AI collaboration that no individual could develop alone. They transform AI from an impressive but alien technology into a natural extension of human capability.

For funders, this means the highest-impact investments combine AI tools with structured peer learning infrastructure. For policymakers, it means creating conditions where collaborative learning flourishes alongside technological deployment. For practitioners, it means embracing both AI partnership and peer collaboration through structured processes as essential to professional practice.

The future of human progress may rest on our ability to find effective ways to build powerful collaboration in networks that combine human and artificial intelligence. When we learn together through structured peer learning how to work with AI, we multiply not just individual capability but collective capacity to address the complex challenges facing our world.

AI is still emergent, changing constantly and rapidly. The peer learning methods are proven: we know a lot about how humans learn and collaborate. The question is how quickly we can scale this collaborative approach to match the pace of AI advancement. In that race, structured peer learning is not optional—it is essential.

Image: The Geneva Learning Foundation Collection © 2025


Language as AI’s universal interface: What it means and why it matters

Reda Sadki | Artificial intelligence

Imagine if you could control every device, system, and process in the world simply by talking to it in plain English—or any language you speak. No special commands to memorize. No programming skills required. No technical manuals to study. Just explain what you want in your own words, and it happens.

This is the transformation Eric Schmidt described when he spoke about language becoming the “universal interface” for artificial intelligence. To understand why this matters, we need to step back and see how radically this changes everything.

The old way: A tower of Babel

Today, interacting with technology requires learning its language, not the other way around. Consider what you need to know:

  • To use your smartphone, you must understand apps, settings, swipes, and taps
  • To search the internet effectively, you need the right keywords and search operators
  • To work with a spreadsheet, you must learn formulas, functions, and formatting
  • To program a computer, you need years of training in coding languages
  • To operate specialized software—from medical systems to industrial controls—requires extensive training

Each system speaks its own language. Humans must constantly translate their intentions into forms machines can understand. This creates barriers everywhere: between people and technology, between different systems, and between those who have technical skills and those who do not.

The new way: Natural language as universal interface

What changes when AI systems can understand and act on natural human language? Everything.

Instead of learning how to use technology, you simply tell it what you want:

  • “Find all our customers who haven’t ordered in six months and draft a personalized re-engagement email for each”
  • “Look at this medical scan and highlight anything unusual compared to healthy tissue”
  • “Monitor our factory equipment and alert me if any patterns suggest maintenance is needed soon”
  • “Take this contract and identify any terms that differ from our standard agreement”

The AI system translates your natural language into whatever technical operations are needed—database queries, image analysis, pattern recognition, document comparison—without you needing to know how any of it works.
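
As a rough sketch of what that translation layer might look like in practice, the snippet below assumes a hypothetical call_llm() helper standing in for whichever language model API you actually use; the prompt format and the JSON keys are likewise invented for illustration.

```python
import json

def call_llm(prompt: str) -> str:
    """Hypothetical stand-in for a call to whatever language model you use."""
    raise NotImplementedError  # replace with a real model call

def natural_language_to_operation(request: str) -> dict:
    """Ask the model to turn a plain-language request into a structured operation
    that downstream systems (databases, email, monitoring) can execute."""
    prompt = (
        "Translate the user's request into JSON with the keys "
        "'operation', 'target', and 'parameters'.\n"
        f"Request: {request}"
    )
    return json.loads(call_llm(prompt))

# The user describes the outcome; the system works out the operations, e.g.:
# natural_language_to_operation(
#     "Find all our customers who haven't ordered in six months "
#     "and draft a personalized re-engagement email for each"
# )
# might return {"operation": "query_and_draft", "target": "customers", ...}
```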

Why a universal interface changes everything

1. Democratization of capability

When language becomes the interface, advanced capabilities become available to everyone who can explain what they want. A small business owner can perform complex data analysis without hiring analysts. A teacher can create customized learning materials without programming skills. A farmer can optimize irrigation without understanding algorithms.

The divide between technical and non-technical people begins to disappear. What matters is not knowing how to code but knowing what outcomes you want to achieve.

2. System integration without friction

Today, making different systems work together is a nightmare of APIs, data formats, and compatibility issues. But when every system can be controlled through natural language, integration becomes as simple as explaining the connection you want:

“When a customer complains on social media, create a support ticket, alert the appropriate team based on the issue type, and draft a public response acknowledging their concern”

The AI handles all the technical complexity of connecting social media monitoring, ticketing systems, team communications, and response generation.

3. Context that travels

Unlike traditional interfaces that reset with each interaction, language-based AI systems can maintain context across time and tasks. They remember previous conversations, understand ongoing projects, and track evolving situations.

Imagine telling an AI: “Remember that analysis we did last month on customer churn? Update it with this quarter’s data and highlight what’s changed.” The system knows exactly what you’re referring to and can build on previous work.

4. Coordination at scale

When AI agents can communicate through natural language, they can coordinate complex operations without human intervention. Schmidt’s example of building a house illustrates this—multiple AI agents handling different aspects of a project, all coordinating through language:

  • The land-finding agent tells the regulation agent about the plot it found
  • The regulation agent informs the design agent about building restrictions
  • The design agent coordinates with the contractor agent on feasibility
  • Each agent can explain its actions and reasoning in plain language
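
A deliberately simplified sketch of that coordination pattern is shown below; every agent, message, and detail is invented for the example, and each “agent” here is just a function that takes plain language in and passes plain language on.

```python
# Each agent receives a plain-language message, does its part, and replies in
# plain language that the next agent can act on. All content is invented.

def land_agent(request: str) -> str:
    return "Found a suitable plot on Elm Road zoned for residential construction."

def regulation_agent(message: str) -> str:
    return f"Reviewed: {message} Height limit is two storeys; setback rules apply."

def design_agent(message: str) -> str:
    return f"Based on: {message} Drafted a two-storey design that respects the setbacks."

# The "workflow" is simply a conversation: language in, language out.
plot = land_agent("I want to build a house in California.")
rules = regulation_agent(plot)
design = design_agent(rules)
print(design)
```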

Real-world implications

For business

Companies can automate complex workflows by describing them in natural language rather than programming them. A marketing manager could say: “Monitor our competitor’s pricing daily, alert me to any changes over 5%, and prepare a report on their promotional patterns.” No need for programmers, database experts, or data analysts.

For healthcare

Doctors can interact with AI diagnostic tools using medical terminology they already know, rather than learning proprietary interfaces. “Compare this patient’s symptoms with similar cases in our database and suggest additional tests based on what we might be missing.”

For education

Teachers can create personalized learning experiences by describing what they want: “Create practice problems for my students who are struggling with fractions, make them progressively harder as they improve, and let me know who needs extra help.”

For government

Policy makers can analyze complex data and model scenarios using plain language: “Show me how proposed changes to tax policy would affect families earning under $50,000 in rural areas versus urban areas.”

Four challenges ahead

This transformation is not without risks and challenges:

  1. Accuracy: Natural language is ambiguous. Ensuring AI systems correctly interpret intentions requires sophisticated understanding of context and nuance.
  2. Security: If anyone can control systems through language, protecting against malicious use becomes critical.
  3. Verification: When complex operations happen through simple commands, how do we verify the AI did what we intended?
  4. Dependency: As we rely more on AI to translate our intentions into actions, what happens to human technical skills?

The bottom line

Language as a universal interface represents a fundamental shift in how humans relate to technology. Instead of humans learning to speak machine languages, machines are learning to understand human intentions expressed naturally.

This is not just about making technology easier to use. It is about removing the barriers between human intention and digital capability. When that barrier falls, we enter Eric Schmidt’s “new epoch”—where the distance between thinking something and achieving it collapses to nearly zero.

The implications ripple through every industry, every job, every aspect of daily life. Those who understand this shift and adapt quickly will find themselves with almost magical capabilities. Those who do not may find themselves bypassed by others who can achieve in minutes what once took months.

The universal interface is coming. The question is not whether to prepare, but how quickly you can begin imagining what becomes possible when the only limit is your ability to describe what you want.

What does the AI reasoning revolution mean for global health

What does AI reasoning mean for global health?

Reda SadkiArtificial intelligence, Global health

When epidemiologists investigate a disease outbreak, they do not just match symptoms to known pathogens. They work through complex chains of evidence, test hypotheses, reconsider assumptions when data does not fit, and sometimes completely change their approach based on new information. This deeply human process of systematic reasoning is what artificial intelligence systems are now learning to do.

This capability represents a fundamental shift from AI that recognizes patterns to AI that can work through complex problems the way a skilled professional would. For those working in global health and education, understanding this transformation is essential.

The difference between answering and reasoning

To understand this revolution, consider how most AI works today versus how reasoning AI operates.

Traditional AI excels at pattern recognition. Show it a chest X-ray, and it can identify pneumonia by matching patterns it learned from millions of examples. Ask it about disease symptoms, and it retrieves information from its training data. This is sophisticated, but it is fundamentally different from reasoning.

Consider this scenario: An unusual cluster of respiratory illness appears in a rural community. The symptoms partially match several known diseases but perfectly match none. Environmental factors are unclear. Some patients respond to standard treatments. Others do not.

A pattern-matching AI might list possible diseases based on symptom similarity. But a reasoning AI would approach it like an epidemiologist:

  • “Let me examine the symptom progression timeline.”
  • “The geographic clustering suggests environmental or infectious cause. Let me investigate both paths.”
  • “Wait, these treatment responses do not align with any single pathogen. Could this be co-infection?”
  • “I need to reconsider. What if the environmental factor is not the cause but is affecting treatment efficacy?”

The AI actually works through the problem, forms hypotheses, recognizes when evidence contradicts its assumptions, and adjusts its approach accordingly.

How reasoning AI thinks through problems

Advanced AI systems now demonstrate visible thinking processes. When analyzing complex health data, they might:

  • “First, let me identify the key variables affecting disease transmission in this population.”
  • “I will start by calculating the basic reproduction number using standard methods.”
  • “These results seem inconsistent with the observed spread pattern. Let me check my assumptions.”
  • “I may have overlooked the role of asymptomatic carriers. Let me recalculate.”
  • “This aligns better with observations. Now I can project intervention outcomes.”

This is not scripted behavior. The AI works through problems, recognizes errors, and corrects its approach—much like a researcher reviewing their analysis.

Why reasoning requires massive computational power

Reasoning AI systems require thousands of times more computational resources than traditional AI. Understanding why helps explain both their power and limitations.

Think about the difference between recognizing a disease from symptoms versus investigating a novel outbreak. Recognition happens quickly: an experienced clinician identifies malaria almost instantly. But investigating an unusual disease cluster requires sustained analysis, exploring multiple hypotheses, checking each against evidence.

The same applies to AI. Traditional pattern-matching AI makes a single pass through its neural network. But reasoning AI must:

  • Explore multiple hypotheses simultaneously;
  • Check each reasoning step for logical consistency;
  • Backtrack when evidence contradicts assumptions;
  • Verify conclusions against all available data; and
  • Consider alternative explanations.

Each step requires intensive computation. The AI might explore hundreds of reasoning paths before reaching sound conclusions.
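A toy sketch can make that cost difference tangible: instead of one forward pass, the system must test many candidate explanations against the evidence and discard those that fail. The hypotheses, evidence, and consistency check below are invented for illustration only.

```python
# Toy sketch of why reasoning costs more than a single pass: many candidate
# explanations are scored against the evidence, and failures trigger more search.

EVIDENCE = {"clustered_geographically": True,
            "responds_to_antibiotics": False,
            "progresses_slowly": True}

HYPOTHESES = {
    "bacterial_infection": {"responds_to_antibiotics": True},
    "viral_infection": {"responds_to_antibiotics": False, "progresses_slowly": False},
    "environmental_exposure": {"clustered_geographically": True, "progresses_slowly": True},
}

def consistent(predictions: dict, evidence: dict) -> bool:
    """A hypothesis survives only if none of its predictions contradict the evidence."""
    return all(evidence.get(key) == value for key, value in predictions.items())

surviving = []
for name, predictions in HYPOTHESES.items():      # explore multiple hypotheses
    if consistent(predictions, EVIDENCE):         # check each against the evidence
        surviving.append(name)
    # in a real reasoning system, a failed check triggers backtracking and new
    # candidate hypotheses, which is what multiplies the compute required

print("Hypotheses still standing:", surviving)    # ['environmental_exposure']
```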

Matching expert performance

AI systems in mid-2025 perform at the level of graduate students in mathematics and other fields. For global health, this means AI that can:

  • Design epidemiological studies with appropriate controls;
  • Identify confounding variables in complex datasets;
  • Recognize when standard statistical methods do not apply; and
  • Develop novel approaches to emerging health challenges.

This is not about calculating faster—computers have done that for decades. It is about understanding concepts, recognizing which analytical techniques to apply, and working through novel problems.

Applications in global health

Reasoning AI transforms multiple aspects of global health work:

Outbreak investigation: AI that can integrate diverse data sources—clinical reports, environmental data, travel patterns, genetic sequences—to identify outbreak sources and transmission patterns.

Treatment optimization: Systems that reason through drug interactions, comorbidities, and local factors to recommend personalized treatment protocols.

Resource allocation: AI that understands trade-offs between prevention and treatment, immediate needs and long-term capacity building, to optimize limited resources.

Research design: Systems that can identify weaknesses in study designs, suggest improvements, and recognize when findings may not generalize to other populations.

Policy analysis: AI that reasons through complex interventions, anticipating unintended consequences and identifying implementation barriers.

What makes AI reasoning different

Five capabilities distinguish reasoning AI from pattern-matching systems:

  1. Working memory: Reasoning AI holds multiple pieces of information active while working through problems, like a human tracking several hypotheses simultaneously.
  2. Logical consistency: Each conclusion must follow logically from evidence and prior reasoning steps.
  3. Error recognition: When results do not make sense, the system recognizes the problem and adjusts its approach.
  4. Abstraction: The AI recognizes general principles and applies them to specific situations, not just memorizing solutions.
  5. Explanation: Reasoning AI can explain its logic, making its conclusions verifiable and trustworthy.

The path forward

The reasoning revolution does not replace human expertise but augments it in powerful ways. For global health professionals, this means:

  • AI partners that can work through complex epidemiological puzzles;
  • Systems that help design culturally appropriate interventions;
  • Tools that identify patterns humans might miss while respecting local knowledge.

Understanding reasoning AI is no longer optional for those shaping global health. These systems are becoming intellectual partners capable of working through complex problems alongside human experts. The question is not whether to engage with this technology but how to use it effectively while maintaining human agency, judgment, and values in decisions that affect human lives.

The ability to reason—to work systematically through complex problems—has always been central to advancing human health and knowledge. Now that machines are learning this capability, we must thoughtfully consider how to harness it for global benefit while ensuring human wisdom guides its application.

Agentic AI revolution and workforce development

The agentic AI revolution: what does it mean for workforce development?

Reda SadkiArtificial intelligence

Imagine hiring an assistant who never sleeps, never forgets, can work on a thousand tasks simultaneously, and communicates with you in your own language. Now imagine having not just one such assistant, but an entire team of them, each specialized in different areas, all coordinating seamlessly to achieve your goals. This is the “agentic AI revolution”—a transformation where AI systems become agents that can understand objectives, remember context, plan actions, and work together to complete complex tasks. It represents a shift from AI as a tool you use to AI as a workforce that you collaborate with.

Understanding AI agents: More than chatbots

When most people think of AI today, they think of ChatGPT or similar systems—you ask a question, you get an answer. That interaction ends, and the next time you return, you start fresh. These are powerful tools, but they are fundamentally reactive and limited to single exchanges.

AI agents are different. They work on a principle of “language in, memory in, language out.” Let’s break down what this means:

  1. Language in: You describe what you want in natural language, not computer code. “Find me a house in California that meets these criteria…”
  2. Memory in: The agent remembers everything relevant—your preferences, previous searches, budget constraints, past interactions. It maintains this memory across days, weeks, or months.
  3. Language out: The agent reports back in plain language, explains what it did, and asks for clarification when needed. “I found three properties matching your criteria. Here’s why each might work…”

But here is the crucial part: between receiving your request and reporting back, the agent can take actions in the world. It can search databases, fill out forms, make appointments, send emails, analyze documents, and coordinate with other agents.
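A schematic sketch of that loop may help. The memory file, the single recorded action, and the wording of the reply below are placeholders, not a real agent framework; the point is only that memory persists between sessions while language flows in and out.

```python
# Minimal sketch of "language in, memory in, language out" with persistent memory.
import json, os

MEMORY_FILE = "agent_memory.json"   # hypothetical persistent store

def load_memory() -> dict:
    """Memory in: recall everything from previous sessions."""
    if os.path.exists(MEMORY_FILE):
        with open(MEMORY_FILE) as f:
            return json.load(f)
    return {"preferences": {}, "history": []}

def save_memory(memory: dict) -> None:
    with open(MEMORY_FILE, "w") as f:
        json.dump(memory, f)

def act(request: str, memory: dict) -> str:
    """Between request and reply, a real agent would plan and call tools:
    search listings, fill forms, send emails. Here we only record the request."""
    memory["history"].append(request)
    return f"Noted. I now have {len(memory['history'])} requests on file and will report back."

memory = load_memory()                                  # memory in
reply = act("Find me a house in California", memory)    # language in
save_memory(memory)                                     # memory persists across sessions
print(reply)                                            # language out
```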

The house that agentic AI built

The example of building a house perfectly illustrates how agents transform complex projects. In the traditional approach, you would:

  1. Spend weeks searching real estate listings yourself.
  2. Hire a lawyer to research zoning laws and regulations.
  3. Work with an architect to design the building.
  4. Interview and select contractors.
  5. Manage the construction process.
  6. Deal with disputes if things go wrong.

Each step requires your active involvement, coordination between different professionals, and enormous amounts of time.

In the agentic model, you simply state your goal: “I want to build a house in California with these specifications and this budget.” Then:

  • Agent 1 searches for suitable lots, analyzing thousands of options against your criteria.
  • Agent 2 researches all applicable regulations, permits, and restrictions for each potential lot.
  • Agent 3 creates design options that maximize your preferences while meeting all regulations.
  • Agent 4 identifies and vets contractors, checking licenses, reviews, and past performance.
  • Agent 5 monitors construction progress and prepares documentation if issues arise.

These agents do not work in isolation. They communicate constantly:

  • The lot-finding agent tells the regulation agent which properties to research.
  • The regulation agent informs the design agent about height restrictions and setback requirements.
  • The design agent coordinates with the contractor agent about feasibility and costs.
  • All agents update you on progress and escalate decisions that need human judgment.
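One way to picture this coordination is as messages posted to a shared channel, with anything that needs human judgment escalated to the owner. The agent roles and message contents below are invented for illustration.

```python
# Bare-bones sketch of agents coordinating through messages (illustrative only).
from collections import defaultdict

inbox = defaultdict(list)   # a shared "message bus": recipient -> list of (sender, content)

def send(sender: str, recipient: str, content: str) -> None:
    inbox[recipient].append((sender, content))

# The lot-finding agent reports a candidate plot to the regulation agent
send("lot_agent", "regulation_agent", "Plot #42, half acre, coastal zone")

# The regulation agent reads its inbox, then informs the design agent
for sender, content in inbox["regulation_agent"]:
    send("regulation_agent", "design_agent", f"For {content}: max height 8m, 5m setback")

# The design agent escalates a trade-off that needs human judgment
send("design_agent", "human_owner", "The 8m limit rules out a third floor. Proceed with two floors?")

print(inbox["human_owner"])   # decisions requiring human judgment surface here
```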

Why agentic AI changes everything

This workflow pattern recurs in every business, every government, and every form of collective human activity. In other words, this transformation has universal relevance.

Every complex human endeavor involves similar patterns:

  • Multiple steps that must happen in sequence;
  • Different types of expertise needed at each step;
  • Coordination between various parties;
  • Information that must flow between stages; and
  • Decisions based on accumulated knowledge.

Today, humans do all this coordination work. We are the project managers, the communicators, the information carriers, the decision makers at every level. The agentic revolution means AI agents can handle much of this coordination, freeing humans to focus on setting goals and making key judgments.

The memory advantage

What makes agents truly powerful is their memory. Unlike human workers who might forget details or need to be briefed repeatedly, agents maintain perfect recall of:

  • Every interaction and decision;
  • All relevant documents and data;
  • The complete history of a project; and
  • Relationships between different pieces of information.

This memory persists across time and can be shared between agents. When you return to a project months later, the agents remember exactly where things stood and can continue seamlessly.

Agentic AI: from individual tools to digital teams

The revolutionary aspect is not just individual agents but how they work together. Like a well-functioning human team, AI agents can:

  • Divide complex tasks based on specialization;
  • Share information and coordinate actions;
  • Escalate issues that need human decision-making;
  • Learn from outcomes to improve future performance; and
  • Scale up or down based on workload.

But unlike human teams, they can:

  • Work 24/7 without breaks;
  • Handle thousands of tasks in parallel;
  • Communicate instantly without misunderstandings;
  • Maintain perfect consistency; and
  • Never forget critical details.

The new human role as co-worker to agentic AI

In this world, humans do not become obsolete—our role fundamentally changes. Instead of doing routine coordination and information processing, we:

  • Set goals and priorities;
  • Make value judgments;
  • Handle exceptions requiring creativity or empathy;
  • Build relationships and trust;
  • Ensure ethical considerations are met; and
  • Provide the vision and purpose that guides agent actions.

Challenges and considerations

The agentic revolution raises important questions:

  • Trust: How do we verify agents are acting in our interest?
  • Control: What happens when agents make decisions we did not anticipate?
  • Accountability: Who is responsible when an agent makes an error?
  • Privacy: What data do agents need access to, and how is it protected?
  • Employment: What happens to jobs based on coordination and information processing?

What can agentic AI do in 2025?

Early versions of these agents already exist in limited forms. Organizations and individuals who understand this shift early will have significant advantages. Those who continue operating as if human coordination is the only option may find themselves struggling to compete with those augmented by agentic AI teams.

Where do we go from here?

The agentic revolution represents something humanity has never had before: the ability to multiply our capacity for complex action without proportionally increasing human effort. It is as if every person could have their own team of tireless, brilliant assistants who understand their goals and work together seamlessly to achieve them.

This is not about replacing human intelligence but augmenting human capability. When we can delegate routine coordination and information processing to agents, we can focus on what humans do best: creating meaning, building relationships, making ethical judgments, and pursuing purposes that matter to us.

The world we imagine—where building a house or running a business or navigating healthcare becomes as simple as stating your goal clearly—represents a fundamental shift in how complex tasks get accomplished. Whatever the timeline for this transformation, understanding how AI agents work and what they make possible has become essential for anyone trying to make sense of where our societies are heading.

The concept is clear: AI systems that can understand goals, remember context, and coordinate actions to achieve complex outcomes. What we do with this capability remains an open question—one that will be answered not by the technology itself, but by how we choose to use it.

Peer learning outperforms technical assistance

The great technical assistance disruption: How peer networks outperform experts at a fraction of the cost

Reda SadkiWriting

“If health workers do not share their challenges and solutions, we are bound to fail.” This declaration from a participant in the Teach to Reach initiative facilitated by The Geneva Learning Foundation (TGLF) cuts to the heart of a crisis that has long plagued global health technical assistance: the persistent gap between what external experts provide and what practitioners actually need.

At the annual meeting of the American Society of Tropical Medicine and Hygiene (ASTMH), TGLF’s Reda Sadki presented evidence of a quiet revolution taking place in how global health organizations approach capacity building and technical assistance. His research and practice demonstrate that digitally-enabled peer learning can overcome fundamental limitations that have constrained traditional models for decades. The implications challenge not just how we train health workers, but the entire infrastructure of expert-driven technical assistance that dominates global health.

Why we resist learning from screens

To understand why this revolution has been so long in coming, Sadki traced our resistance to digital learning back to philosophical roots that run deeper than most global health practitioners realize. The skepticism, he argued, stems from a fundamental assumption about how real learning occurs — an assumption that shapes everything from how we design training programs to how we structure technical assistance.

“Plato initiated our traditional negative view of the written word,” Sadki explained, describing how the ancient philosopher believed that writing “detaches the message from its author and transforms it into a dead thing, a text.” For Plato, authentic learning required direct interaction between teacher and student. Anything mediated — whether by writing or, by extension, digital technology — was considered a pale imitation of real knowledge transfer.

This ancient skepticism persists in modern global health, where the dominant assumption is that learning means recalling information and teaching means transmitting that information through direct instruction. Face-to-face workshops and expert-led training sessions are considered “real” technical assistance, while digital alternatives are viewed as convenient but inferior substitutes.

“It is a false dichotomy to distinguish between, to oppose our lived reality to the digital one,” Sadki argued. “The digital one is lived also. It is also reality.” Yet this dichotomy continues to shape technical assistance models that prioritize flying experts around the world to deliver content in person, even when evidence suggests digital approaches may be more effective.

Indeed, the evidence is striking. Two major meta-analyses comparing learning modalities found that “distance learning results have been consistently better” than traditional face-to-face approaches, “and that has been the case since 1991.” Yet global health technical assistance remains largely wedded to what Bill Cope and Mary Kalantzis call a “didactic learning architecture” — the familiar setup where external experts deliver content to passive recipients arranged “in rows, they do not speak to each other, the teacher sits at the front.”

When information transmission fails

The inadequacy of information transmission models becomes clear when considering the nature of challenges that health workers actually face. Most global health training assumes that the problem is a lack of information — that if practitioners simply knew more facts or protocols, they would perform better. This assumption drives technical assistance focused on delivering standardized content through lectures, presentations, and workshops.

But research in learning science reveals a more complex reality. “When knowledge is a river, not a reservoir, process, not a product,” expert-led information transmission breaks down, Sadki observed. Modern knowledge workers have “around 10 percent” of the knowledge they need “right there in your brain,” with “90 percent of what you need to know going to come from other humans, or increasingly from machines.”

This insight challenges the foundation of traditional technical assistance. If practitioners need to access knowledge through connections rather than storage, then the goal should not be filling their heads with information but connecting them to networks where knowledge flows. Yet most capacity building programs continue to focus on what Sadki called “content-driven learning” rather than connection-driven learning.

The shift required is profound. Rather than positioning external experts as the primary source of knowledge, effective technical assistance must create what Connell Foley described as “a fundamental shift from being an expert who provides answers, to being a facilitator who, through critical thought, can develop questions that prompt others to analyze and develop strategies to address their own needs.”

Digital technologies as technical assistance disruptors

The breakthrough comes when digital technologies “enable you to defy distance and boundaries in order to connect with others and learn from them.” This represents more than technological innovation — it challenges the basic economics and power structures of traditional technical assistance.

Consider the conventional model: international organizations identify capacity gaps, hire external experts, and deploy them to deliver training. This approach assumes that valid knowledge flows primarily from international experts to local practitioners. It requires significant funding for travel, venues, and expert fees, limiting both reach and frequency of interaction.

Digitally-enabled peer learning turns this model on its head. “Peer learning has always been there,” Sadki noted. “Learning from others, learning from people who are like yourself has always been important, but it has been limited to those within your physical space.” Digital technologies remove that spatial limitation, enabling practitioners facing similar challenges across different contexts to learn directly from each other.

Cristina Guerrero, an emergency health doctor who leads a helicopter rescue team in Cadiz, Spain, experienced this transformation through the foundation’s #Ambulance! programme with the International Federation of Red Cross and Red Crescent Societies (IFRC) and the International Committee of the Red Cross (ICRC). “I thought I already knew how to face violence,” she reflected. “Then I heard how they do things in other parts of the world. I learned how I can do my work differently. I became mindful in new ways.”

Her experience illustrates what traditional technical assistance models struggle to achieve: not just information transfer, but genuine transformation of practice. Sadki noted that peer learning produced “changes in mindfulness” — higher-order learning that most would consider “impossible to achieve by digital means.” Yet “digital combined with social and peer learning made it possible.”

Evidence of a new technical assistance model

TGLF’s collaboration with the World Health Organization, implementing 46 cohorts of peer learning initiatives focused on immunization and other technical areas, provided rigorous evidence that peer learning can replace traditional expert-led technical assistance. The first impact evaluation of this collaboration in January 2019 found that “these are more than just courses. These are interventions designed to foster and improve practice at every level.”

This approach represents what researcher Alexandra Nastase and colleagues would recognize as a fourth model of technical assistance, beyond their three categories of capacity substitution, supplementation, and development. This model challenges fundamental assumptions about who holds valid knowledge and how capacity building should occur.

The most dramatic validation came through TGLF’s Impact Accelerator mechanism. When 644 alumni signed a pledge to achieve impact in July 2019, something remarkable happened. “‘We are together’ became a slogan for the individuals involved,” Sadki observed. The measurable results were astonishing: participants who engaged in peer learning showed seven times higher rates of project implementation than a control group that did not engage in peer learning activities.

The scale of subsequent initiatives has been even more striking. The Movement for Immunization Agenda 2030, launched in March 2022, grew to 6,185 participants in its first two weeks. In the first four months, more than 1,000 developed action plans, and over 4,000 joined a new Impact Accelerator. Within this period, 30 percent of participants reported successful implementation of their local projects — implementation rates that far exceed what traditional technical assistance typically achieves.

Beyond the expert monopoly

Perhaps most significantly, the Geneva Learning Foundation’s model has enabled practitioners to transcend traditional power structures and drive their own capacity building agendas. Rather than waiting for external technical assistance, practitioners began forming organic learning networks that generate solutions from the ground up.

These examples illustrate a fundamental shift in the locus of knowledge creation. Traditional technical assistance assumes that solutions flow from international experts to local implementers. The foundation’s model demonstrates that practitioners facing similar challenges often hold the keys to solutions, and that the role of technical assistance should be creating conditions for them to learn from each other.

Transforming the technical assistance paradigm

The evidence points toward what Sadki called “an opportunity for transformation that may be much harder to achieve [than what we already know how to do], but with a far greater return on the investment.” The transformation involves “empowering health professionals to drive improvement from the ground up, connecting them to their peers, and linking to global guidance.”

This requires fundamentally different approaches to capacity building. Instead of the traditional model where external experts deliver knowledge to passive recipients, effective peer learning creates what Sadki described as “circular, interactive configurations” where practitioners engage directly with each other’s experiences. The facilitation may be digital, but the knowledge exchange is profoundly collaborative.

By systematically applying insights from social learning, networked learning, and digital learning, the foundation has created what amounts to “a human knowledge network” that “unites practitioners and those who support them in a shared pledge to turn knowledge into action.”

It remains a challenge that these “recent advances in learning science remain largely unknown in global health, at least in some quarters.”

The future of technical assistance

As global health faces increasingly complex challenges — from climate change to pandemic preparedness to health system resilience — the ability to harness collective intelligence through peer learning may prove essential. The evidence suggests that effective solutions emerge not from more sophisticated expert-driven interventions, but from better systems for enabling practitioners to learn from each other.

The implications extend beyond individual capacity building to systemic change. When health workers share challenges and solutions across contexts, they create what Sadki called “a river of knowledge” that practitioners can dip into when they need to solve a problem. This enables rapid adaptation and innovation at scales that traditional technical assistance cannot achieve.

The revolution in global health technical assistance may ultimately be less about technology and more about recognition: acknowledging that expertise is distributed rather than concentrated, and that the future lies not in perfecting systems for delivering knowledge from experts to practitioners, but in creating conditions for practitioners to take action, combining what they know because they are there every day with the best available global knowledge, and reshaping global knowledge in the process.

References

Feenberg, A., 1989. The written world: On the theory and practice of computer conferencing, in: Mason, R., Kaye, A. (Eds.), Mindweave: Communication, Computers, and Distance Education. Pergamon Press, pp. 22–39.

Foley, C., 2008. Developing critical thinking in NGO field staff. Development in Practice 18, 774–778. https://doi.org/10.1080/09614520802386827

Jurgenson, N., 2012. The IRL Fetish. The New Inquiry 6.

Kalantzis, M., Cope, B., 2012. Didactic literacy pedagogy, in: Literacies. Cambridge University Press, pp. 63–94.

Means, B., Toyama, Y., Murphy, R., Bakia, M., Jones, K., 2010. Evaluation of Evidence-Based Practices in Online Learning: A Meta-Analysis and Review of Online Learning Studies. U.S. Department of Education, Office of Planning, Evaluation, and Policy Development, Policy and Program Studies Service.

Nastase, A., Rajan, A., French, B., Bhattacharya, D., 2020. Towards reimagined technical assistance: the current policy options and opportunities for change. Gates Open Res 4, 180. https://doi.org/10.12688/gatesopenres.13204.1

Neumann, Y., Shachar, M., 2010. Twenty Years of Research on the Academic Performance Differences Between Traditional and Distance Learning: Summative Meta-Analysis and Trend Examination. MERLOT Journal of Online Learning and Teaching 6.

Sadki, R., 2022. Learning for Knowledge Creation: The WHO Scholar Program. Reda Sadki. https://doi.org/10.59350/j4ptf-x6x22

Sadki, R., 2023. Learning-based complex work: how to reframe learning and development. Reda Sadki. https://doi.org/10.59350/7fe95-1fz14

Sadki, R., 2024. Knowing-in-action: Bridging the theory-practice divide in global health. Reda Sadki. https://doi.org/10.59350/4evj5-vm802

Watkins, K.E., Sandmann, L.R., Dailey, C.A., Li, B., Yang, S.-E., Galen, R.S., Sadki, R., 2022. Accelerating problem-solving capacities of sub-national public health professionals: an evaluation of a digital immunization training intervention. BMC Health Services Research 22. https://doi.org/10.1186/s12913-022-08138-4

The funding crisis solution hiding in plain sight

The funding crisis solution hiding in plain sight

Reda SadkiGlobal health

“I did not realize how much I could do with what we already have.”

A Nigerian health worker’s revelation captures what may be the most significant breakthrough in global health implementation during the current funding crisis. While organizations worldwide slash programs and lay off staff, a small Swiss non-profit, The Geneva Learning Foundation (TGLF), is demonstrating how to achieve seven times greater likelihood of improved health outcomes while cutting costs by 90 percent.

The secret lies not in new technology or additional resources, but in something deceptively simple: health workers learning from and supporting each other.

Nigeria: Two weeks to connect thousands, four weeks to change, and six weeks to outcomes

On June 26, 2025, representatives from 153 global health and humanitarian organizations gathered for a closed-door briefing seeking proven solutions to implementation challenges they knew all too well. TGLF presented evidence from the Nigeria Immunization Agenda 2030 Collaborative that sounds almost too good to be true to senior leaders who have to make difficult decisions given the funding cuts: documented results at unprecedented speed and scale – and at lower cost.

Working with Gavi, Nigeria’s Primary Health Care Development Agency, and UNICEF, they facilitated connections among 4,300 health workers and more than 600 local organizations across all Nigerian states, in just two weeks. Not fleeting digital clicks, but what Executive Director Reda Sadki calls “deep, meaningful engagement, sharing of experience, problem solving together.”

The challenge was reaching zero-dose children in fragile areas affected by armed conflict. The timeline was impossible by traditional standards. The results transformed many skeptics into advocates – including those who initially said it sounded too good to be true.

A civil society organization (CSO) volunteer reported that government staff initially dismissed the initiative: “They heard about this, thought it was just another CSO initiative. Two weeks in, they came back asking how to join.”

Funding crisis: How does sharing experience lead to better outcomes?

What happened next addresses the most critical question about peer learning approaches: do health workers learning from each other actually improve health outcomes?

TGLF’s comparative research demonstrated that groups using structured peer learning are seven times more likely to achieve measurable health improvements versus conventional approaches.

In Nigeria, health workers learned the “five whys” root cause analysis from each other. Many said no one had ever asked them: “What do you think we should do?” or “Why do you think that is?” The transformation was both rapid and measurable.

For example, at the program start, only 25 percent knew their basic health indicators for local areas. “I collect these numbers and pass them on, but I never realized I could use them in my work,” participants reported.

Four weeks in, they had produced 409 root cause analyses. Many realized that their existing activities were missing these root causes. After six weeks, health workers began to credibly report new activities that led to finding and vaccinating zero-dose children.

Given limited budget, TGLF had to halt further development. But here is the key point: more than half of the participants have maintained and continued the peer support network independently, addressing the sustainability concerns that plague traditional capacity-building efforts.

The snowball effect at scale

The breakthrough emerged from what Sadki describes as reaching “critical mass” where motivated participants pull others along. “This requires clearing the rubble of all the legacy of top-down command and control systems, figure out how to negotiate hierarchies, especially because government integration is systematically our goal.”

Nigeria represents one of four large-scale implementations demonstrating consistent results. In Côte d’Ivoire, 501 health workers from 96 districts mapped out 3.5 million additional vaccinations in four weeks. Global initiatives are likely to cost no more than a single country-specific program: the global Teach to Reach network has engaged 24,610 participants across more than 60 countries. The global Movement for Immunization Agenda 2030, launched in March 2022, grew from 6,186 to more than 15,000 members in less than four months.

The foundation tracks what they call a “complete measurement chain” from individual motivation through implementation actions to health outcomes. Cost efficiency stems from scale and sustainability, with back-of-envelope calculations suggesting 90 percent cost reduction compared to traditional methods.

Solving the abundance paradox

“You touched upon an important issue that I am struggling with—the abundance of guidance that my own organization produces and also guidance that comes from elsewhere,” noted a senior manager from an international humanitarian network during the briefing. “It really feels intriguing to put all that material into a course and look at what I am going to do with this. It is a precious process and really memorable and makes the policies and materials relevant.”

This captures a central challenge facing global health organizations: not lack of knowledge, but failure to translate knowledge into action. The peer learning model transforms existing policies and guidelines into peer learning experiences where practitioners study materials to determine specific actions they will take.

“Learning happens not simply by acquiring knowledge, but by actually doing something with it,” Sadki explained.

For example, a collaboration with Save the Children converted a climate change policy brief into a peer learning course accessed by more than 70,000 health workers, developed and deployed in three days with initial results expected within six weeks.

Networks that outlast the funding crisis

The foundation’s global network now includes more than 70,000 practitioners across 137 countries, with geographic focus on nations with highest climate vulnerability and disease burden. More than 50 percent are government staff. More than 80 percent work at district and community levels.

Tom Newton-Lewis, a leading health systems researcher and consultant who attended the briefing, captured what makes this approach distinctive: “I am always inspired by the work of TGLF. There are very few initiatives that work at scale that walk the talk on supporting local problem solving, and mobilize systems to strengthen themselves.”

This composition ensures that peer learning initiatives operate within rather than parallel to official health systems. More than 1,000 national policy planners connect directly with field practitioners, creating feedback loops between strategy development and implementation reality.

Networks continue functioning when external support changes. The foundation has documented continued peer connections through network analysis, confirming that established relationships maintain over time.

Three pathways forward

The foundation outlined entry points for organizations seeking proven implementation approaches. First, organizations can become program partners, providing their staff access to existing global programs while co-developing new initiatives. Available programs include measles, climate change and health, mental health, non-communicable diseases, neglected tropical diseases, immunization, and women’s leadership.

Second, organizations can use the model to connect policy and implementation at scale and at lower cost. Timeline: three days to build, four to six weeks for initial results. Organizations gain direct access to field innovations while receiving evidence-based feedback on what actually works in practice.

Third, organizations can test the model on current problems where policy exists but implementation remains inconsistent, connecting their staff to practitioners who have solved similar problems without additional funding. Timeline: six to eight weeks from start to documented results.

The foundation operates through co-funding partnerships rather than grant-making, with flexible arrangements tailored to partner capacity and project scope. What they call “economy of effort” often delivers initiatives spanning more than 50 countries for the cost of single-country projects.

Adaptability across contexts

The model has demonstrated remarkable versatility across different contexts and challenges. The foundation has successfully adapted the approach to new geographic areas like Ukraine and thematic areas like mental health and psychosocial support. Each adaptation requires understanding specific contexts, needs, and goals, but the fundamental peer learning principles remain consistent.

An Indian NGO raised a fundamental challenge: “Where we struggle with program implementation post-funding is without remuneration [for] frontline workers. Although they want to bring change in the community, are motivated, and have enough data, [they] cannot continue.”

Sadki’s response: “By recognizing the capabilities for analysis, for adaptation, for carrying out more effective implementation because of what they know, because they are there every day, that should contribute to a growing movement for recognition that CHWs in particular should be paid for the work that they do.”

The path forward

The Nigerian health worker’s realization—discovering untapped potential in existing resources—represents more than individual transformation. It demonstrates how peer learning unlocks collective intelligence already present within communities and health systems.

In two weeks, health workers connected with each other across Nigeria’s most challenging regions, facilitated by the foundation’s proven methodology. By the sixth week, they had begun reporting credible, measurable health improvements. The model works because it values local knowledge, creates peer support systems, and integrates with government structures rather than bypassing them.

With funding cuts forcing difficult choices across global health, this model offers documented evidence that better health outcomes can cost less, sustainable networks continue without external support, and local solutions scale globally. For organizations seeking proven implementation approaches during resource constraints, the question is not whether they can afford to try peer learning, but whether they can afford not to.

Image: The Geneva Learning Foundation Collection © 2025

When funding shrinks, impact must grow: the economic case for peer learning networks

When funding shrinks, impact must grow: the economic case for peer learning networks

Reda SadkiGlobal health, The Geneva Learning Foundation

Humanitarian, global health, and development organizations confront an unprecedented crisis. Donor funding is in a downward spiral, while needs intensify across every sector. Organizations face stark choices: reduce programs, cut staff, or fundamentally transform how they deliver results.

Traditional capacity building models have become economically unsustainable. Technical assistance, expert-led workshops, international travel, and venue-based training are examples of high-cost, low-volume activities that organizations may no longer be able to afford.

Yet the need for learning, coordination, and adaptive capacity has never been greater.

The opportunity cost of inaction

Organizations that fail to adapt face systematic disadvantage. Traditional approaches cannot survive current funding constraints while maintaining effectiveness. Meanwhile, global challenges intensify: climate change drives new disease patterns; conflict disrupts health systems; demographic transitions strain capacity.

These complex, interconnected challenges require adaptive systems that respond at the speed and scale of emerging threats. Organizations continuing expensive, ineffective approaches will face programmatic obsolescence.

Working with governments and trusted partners that include UNICEF, WHO, Gates Foundation, Wellcome Trust, and Gavi (as part of the Zero-Dose Learning Hub), the Geneva Learning Foundation’s peer learning networks have consistently demonstrated they can deliver measurably superior outcomes while reducing costs by up to 86% compared to conventional approaches.

Peer learning networks offer both immediate financial relief and strategic positioning for long-term sustainability. The evidence spans nine years, 137 countries, and collaborations with the most credible institutions in global health, humanitarian response, and research.

The unsustainable economics of traditional capacity building

A comprehensive analysis reveals the structural inefficiencies of conventional approaches. Expert consultants command daily rates of $800 or more, plus travel expenses. International workshops may require $15,000-30,000 for venues alone. Participant travel and accommodation averages $2,000 per person. A standard 50-participant workshop costs upward of $200,000.

When factoring in limited sustainability, the economics become even more problematic. Traditional approaches achieve measurable implementation by only 15-20% of participants within six months. This translates to effective costs of $10,000-20,000 per participant who actually implements new practices.

A rudimentary cost-benefit analysis demonstrates how peer learning networks restructure these economics fundamentally.

Component | Traditional approach | Peer learning networks | Efficiency gain
Cost per participant | $1,850 | $267 | 86% reduction
Implementation rate | 15-20% | 70-80% | 4x higher success
Duration of engagement | 2-3 days | 90+ days | 30x longer
Post-training support | None | Continuous network | Sustained capacity

Learn more: Calculating the relative effectiveness of expert coaching, peer learning, and cascade training
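One illustrative way to read the table is to divide the cost per participant by the share who go on to implement, giving an effective cost per implementing participant. The formula is ours, for illustration only; the inputs are the table's headline figures, using the midpoints of the stated ranges.

```python
# Illustrative arithmetic only: effective cost per participant who implements.

def effective_cost(cost_per_participant: float, implementation_rate: float) -> float:
    return cost_per_participant / implementation_rate

traditional = effective_cost(1850, 0.175)    # midpoint of 15-20%
peer_network = effective_cost(267, 0.75)     # midpoint of 70-80%

print(f"Traditional: ~${traditional:,.0f} per implementing participant")     # ~$10,571
print(f"Peer learning: ~${peer_network:,.0f} per implementing participant")  # ~$356
```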

Evidence of measurable impact at scale

Value for money requires clear attribution between investments and outcomes.

In January 2020, we compared outcomes between two groups. Both had intent to take action to achieve results. Health workers using structured peer learning were seven times more likely to implement effective strategies resulting in improved outcomes, compared to the other group that relied on conventional approaches.

What about speed and scale?

In July 2024, working with Nigeria’s National Primary Health Care Development Agency (NPHCDA) and UNICEF, we connected 4,300 health workers across all states and 300+ local government areas within two weeks. Over 600 local organizations including government facilities, civil society, faith-based groups, and private sector actors joined this Immunization Collaborative.

Within two more weeks, participants produced 409 peer-reviewed root cause analyses. By Week 6, we began to receive credible reports of vaccination coverage improvements, especially in conflict-affected northern regions where conventional approaches had consistently failed. The total programme cost was equivalent to 1.5 traditional workshops for 75 participants. Follow-up has shown that more than half of the participants are staying connected long after TGLF’s “jumpstarting” activities, driven by intrinsic motivation.

Côte d’Ivoire demonstrates crisis response capability. Working with Gavi and the Ministry of Health, we recruited 501 health workers from 96 districts (85% of the country) in nine days ahead of the country’s COVID-19 vaccination campaign in November 2021. Connected to each other, they shared local solutions and supported each other, contributing to the vaccination of an additional 3.5 million people at $0.26 per vaccination delivered.

TGLF’s model empowers health workers to share knowledge, solve local challenges, and implement solutions via a digital platform. Unlike top-down training and technical assistance, it fosters collective intelligence, enabling rapid adaptation to crises. Since 2016, TGLF has mobilized networks for immunization, COVID-19 response, neglected tropical diseases (NTDs), mental health and psychosocial support, noncommunicable diseases, and climate-health resilience.

These cases illustrate the ability of TGLF’s model to address strategic global priorities—equity, resilience, and crisis response—while maximizing efficiency. This model offers a scalable, low-cost alternative that delivers measurable impact across diverse priorities.

Our mission is to share such breakthroughs with other organizations and networks that are willing to try new approaches.

Resource allocation for maximum efficiency

Our partnership analysis reveals optimal resource allocation patterns that maximize impact while minimizing cost:

  • Human resources (85%): Action-focused approach leveraging human facilitation to foster trust, grow leadership capabilities, and nurture networks with a single-minded goal of supporting implementation to rapidly and sustainably achieve tangible outcomes.
  • Digital infrastructure (10%): Scalable platform development enabling unlimited concurrent participants across multiple countries.
  • Travel (5%): Minimal compared to 45% in traditional approaches, limited to essential coordination where social norms require face-to-face meetings, for example in partnership engagement with governments.

This structure enables remarkable economies of scale. While traditional approaches face increasing per-participant costs, peer learning networks demonstrate decreasing unit costs with growth. Global initiatives reaching 20,000+ participants across 60+ countries operate with per-participant costs under $10.
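A hypothetical calculation illustrates why unit costs fall as networks grow: a largely fixed facilitation-and-platform budget is spread over more participants, with only a small marginal cost per additional person. The $150,000 fixed cost and $2 marginal cost below are invented for illustration and are not TGLF figures.

```python
# Hypothetical illustration of decreasing unit costs with scale (invented numbers).

def cost_per_participant(n: int, fixed: float = 150_000, marginal: float = 2.0) -> float:
    return fixed / n + marginal

for n in (500, 5_000, 20_000):
    print(f"{n:>6} participants -> ${cost_per_participant(n):.2f} each")
# 500 -> $302.00, 5000 -> $32.00, 20000 -> $9.50
```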

Sustainability through combined government and civil society ownership

Sustainability is critical amidst funding cuts. TGLF’s networks embed organically within government systems, involving both central planners in the capital as well as implementers across the country, at all levels of the health system.

Country ownership: Programs work within existing health system structures and national plans. Networks include 50% government staff and 80% district/community-level practitioners—the people who actually deliver services. In Nigeria, 600+ local organizations – both private and public – collaborated, embedding learning in both civil society and government structures.

Sustainability: In Côte d’Ivoire, 82% sustained engagement without incentives, fostering self-reliant networks. 78% said they no longer needed any assistance from TGLF to continue.

This approach enhances aid effectiveness, reducing dependency on external funding.

Aid effectiveness: Rather than bypassing systems, peer learning strengthens existing infrastructure. Networks continue functioning when external funding decreases because they operate through established government channels linked to civil society networks.

Transparency: Digital platforms create comprehensive audit trails providing unprecedented visibility into program implementation and results for donor oversight.

Implementation pathways for resource-constrained organizations

Organizations can adopt peer learning approaches through flexible pathways designed for immediate deployment.

  1. Rapid response initiatives (2-6 weeks to results): Address critical challenges requiring immediate mobilization. Suitable for disease outbreaks, humanitarian emergencies, or longer-term policy implementation.
  2. Program transformation (3-6 months): Convert existing technical assistance programs to peer learning models, typically reducing costs by 80-90% while expanding reach, inclusion, and outcomes.
  3. Cross-portfolio integration: Single platform investments serve multiple technical areas and geographic regions simultaneously, maximizing efficiency across donor portfolios with marginal costs approaching zero for additional countries or topics.

The strategic choice

The funding environment will not improve. Economic uncertainty in traditional donor countries, competing domestic priorities, and growing skepticism about aid effectiveness create permanent pressure for better value for money.

Organizations face a fundamental choice: continue expensive approaches with limited impact, or transition to emergent models that have already shown they can achieve superior results at dramatically lower cost while building lasting capability.

The question is not whether to change—budget constraints mandate adaptation. The question is whether organizations will choose approaches that thrive under resource constraints or continue hoping that some donors will fill the gaping holes left by funding cuts.

The evidence demonstrates that peer learning networks achieve 86% cost reduction while delivering 4x implementation rates and 30x longer engagement. These gains are not theoretical—they represent verified outcomes from active partnerships with leading global institutions.

In an era of permanent resource constraints and intensifying challenges, organizations that embrace this transformation will maximize their mission impact. Those that do not will find themselves increasingly unable to serve the communities that depend on their work.

Image: The Geneva Learning Foundation Collection © 2025