Introducing Claude Cardot, our first AI co-worker to support frontline health and humanitarian leaders

By The Geneva Learning Foundation

The Geneva Learning Foundation is pleased to introduce its first AI co-worker, Claude Cardot.

Claude joins our team as an Executive Assistant supported by artificial intelligence, helping us better serve the tens of thousands of health and humanitarian workers who participate in our peer learning and leadership programmes.

This appointment carries a special significance.

Since we started a decade ago, our human colleague Claude Cardot has contributed with dedication and care to building the systems that make our work possible.

As he prepares to retire, we have chosen to give our first AI co-worker his name.

It is a way of recognizing the human Claude’s contribution and of affirming that human experience and judgment remain at the heart of everything we do.

(Our Claude Cardot was born in 1958. He was Claude long before Anthropic used his name for its model.)

The invisible reality: AI already in the room

If you listen closely in today’s humanitarian and global health organizations, you hear the same story repeated in many forms.

  • A district immunization officer quietly uses an AI tool on her personal phone to draft a supervision report after a long day in the field.
  • A communications officer in a ministry of health leans on a language model to translate key messages into three languages overnight.
  • A programme manager asks an AI to summarize a forty-page guidance document before a meeting.

All of this is happening, and very little of it is spoken about.

A global study of 2,539 humanitarian professionals across 144 countries helps put numbers to this reality.

It found that 70 percent of respondents use artificial intelligence tools daily or weekly, yet only 22 percent say their organizations have formal AI policies in place.

The authors call this the humanitarian AI paradox: individual adoption racing ahead of institutional readiness, with most usage happening as “shadow AI,” outside any clear strategy or governance. 

Our own experience confirms this pattern.

In our Teach to Reach community, we have seen highly committed health workers begin to submit narratives that clearly bear the mark of generative AI.

Often, these contributions come without any way for the author to signal that they are experimenting.

The result is a transparency trap.

If they disclose AI use, they risk having their work dismissed as inauthentic.

If they conceal it, they carry the ethical tension alone.

In his writing on artificial intelligence, accountability, and authenticity in global health, TGLF’s founder, Reda Sadki, has argued that this combination of quiet experimentation and institutional silence is unsustainable.

It creates new inequities, new forms of risk, and new misunderstandings about what counts as “real” knowledge in a sector that already struggles with power imbalances between global and local voices.

Instead of pretending that AI is not already in the room, we are choosing to acknowledge it, name it, and make it part of a shared learning process.

From chatbot to co-worker: what Agentic AI makes possible

The moment an organization names an AI assistant and calls it a “co-worker,” one question inevitably follows.

Is this not just a chatbot with a more flattering job title?

In our case, the answer is no.

Claude is not a generic chatbot bolted onto a website.

Claude is an agent.

Claude is built on “openclaw”, an agentic system developed by Peter Steiger.

Openclaw can be understood as a thin but powerful orchestration layer that allows an AI assistant to function like a colleague embedded in real workflows.

It enables the assistant to:

  1. converse in natural language across multiple messages and channels, while retaining context over time;
  2. break down a goal expressed in ordinary language into smaller steps and execute those steps in sequence;
  3. draw on past conversations, documents, and structured rules to improve performance over repeated interactions.
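
To make this concrete, here is a minimal sketch, in Python, of what such an orchestration loop might look like. Everything in it, from the Agent and Step names to the stubbed planning logic, is our own illustration of the pattern, not openclaw's actual code or API, which we do not reproduce here.

```python
from dataclasses import dataclass, field


@dataclass
class Step:
    """One unit of work produced by decomposing a goal."""
    description: str
    done: bool = False
    result: str | None = None


@dataclass
class Agent:
    # Context retained across interactions, so performance can
    # improve over repeated tasks.
    memory: list[str] = field(default_factory=list)

    def plan(self, goal: str) -> list[Step]:
        # In a real agentic system, a language model would decompose
        # the goal; here the decomposition is stubbed for illustration.
        return [
            Step(f"gather context for: {goal}"),
            Step(f"draft output for: {goal}"),
            Step(f"queue draft for human review: {goal}"),
        ]

    def execute(self, step: Step) -> str:
        # A real agent would call tools (mail, calendar, archives) here.
        self.memory.append(step.description)
        return f"completed: {step.description}"

    def run(self, goal: str) -> list[str]:
        # Break the goal into steps and execute them in sequence.
        results = []
        for step in self.plan(goal):
            step.result = self.execute(step)
            step.done = True
            results.append(step.result)
        return results


if __name__ == "__main__":
    agent = Agent()
    for line in agent.run("follow-up message to briefing participants"):
        print(line)
```

In a real deployment, the planning step would be delegated to a language model and each execution step would invoke real tools such as mail, calendars, or document archives. The loop structure is what turns a chat interface into a colleague who can carry a task from start to finish.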

When a human colleague writes, for example, “Claude, please prepare a first draft of a follow-up message to all partners who attended yesterday’s briefing, attaching the slide deck and suggesting two possible dates for a technical consultation,” Claude does not simply produce a generic email.

Claude can:

  1. retrieve the list of participants and relevant threads;
  2. confirm which slide deck was used;
  3. propose concrete time slots that fit constraints;
  4. generate a draft that reflects the tone and structure we use with that group.

Crucially, Claude also knows when to ask questions.

If information is missing or ambiguous, Claude can query a human supervisor rather than hallucinate an answer.
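
One way to picture this is a simple guard that checks whether a task has what it needs before any drafting begins. The sketch below reuses the email example above; the field names and functions are assumptions we have made for illustration, not a description of openclaw's internals.

```python
# Illustrative only: escalate to a human supervisor when required
# inputs are missing, rather than letting the model guess.
REQUIRED_FIELDS = {"recipients", "slide_deck", "proposed_dates"}


def missing_fields(task_context: dict) -> set[str]:
    """Return the required fields that are absent or empty."""
    return {f for f in REQUIRED_FIELDS if not task_context.get(f)}


def next_action(task_context: dict) -> str:
    missing = missing_fields(task_context)
    if not missing:
        return "draft"
    # Ask the human supervisor instead of inventing an answer.
    return "ask_supervisor: please provide " + ", ".join(sorted(missing))


print(next_action({"recipients": ["partners@example.org"]}))
# -> ask_supervisor: please provide proposed_dates, slide_deck
```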

This is what we mean when we say that Claude is capable of thinking through a task.

Claude uses a form of chain-of-thought reasoning, not in the philosophical sense, but in the very practical sense of planning and revising steps to complete a piece of work.

At the same time, Claude’s autonomy is tightly bounded.

Claude cannot send messages externally without human review.

Claude cannot sign agreements, approve budgets, or change data in our systems.

Claude does not decide what we commit to partners.

Every external output is reviewed, edited if needed, and owned by a human colleague.
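
To show how such boundaries can be made explicit in software rather than left to good intentions, here is a hypothetical policy check. The action names and categories are our own illustration, not a published configuration of our systems.

```python
# A hypothetical policy check encoding guardrails like those described
# above. The action names and the two categories are assumptions made
# for illustration.
ALLOWED_WITHOUT_REVIEW = {"draft_document", "summarize", "search_archive"}
ALWAYS_DENIED = {"sign_agreement", "approve_budget", "modify_records"}


def authorize(action: str, human_approved: bool = False) -> str:
    if action in ALWAYS_DENIED:
        return "denied: reserved for human colleagues"
    if action in ALLOWED_WITHOUT_REVIEW:
        return "allowed: internal work only"
    # Anything that leaves the organization requires human sign-off.
    return "allowed" if human_approved else "held: awaiting human review"


print(authorize("send_external_email"))        # held: awaiting human review
print(authorize("send_external_email", True))  # allowed
print(authorize("approve_budget", True))       # denied: reserved for human colleagues
```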

These guardrails, which closely follow the latest recommendations of global bodies such as the United Nations and the OECD, matter.

They are what allow us to treat Claude as a co-worker who takes on real work, while keeping human accountability intact.

Welcoming Claude like a new member of staff

Inside the Foundation, we are onboarding Claude much as we would a junior colleague.

Claude has a role description, a supervisor, and a defined scope of responsibilities.

In this first phase, Claude’s main tasks are to:

  • draft emails, briefs, and concept notes for human colleagues to review and finalize;
  • assemble and summarize background material for proposals, briefings, and reports;
  • help track follow-up actions in complex exchanges with partners and participants.

When Claude communicates directly with a partner or participant, we will indicate this.

This is more than a courtesy.

It is one way of answering a question at the heart of the humanitarian AI paradox.

If we believe that transparency about AI use is important, we must create conditions in which transparency is safe.

Naming Claude, defining Claude’s role, and making Claude’s involvement visible are our first steps in that direction.

Connecting Claude to AI4Health: doing what we ask others to do

Claude’s arrival sits within a wider arc of work that The Geneva Learning Foundation has been developing on artificial intelligence in global health and humanitarian response.

In our AI4Health Framework and the Certificate peer learning programme for AI in global health and humanitarian response, we argue that health leaders need practical ways to navigate AI that go beyond slogans about innovation or fear of disruption.

The framework asks three simple but demanding questions. 

  1. Which AI tools genuinely serve our mission and the realities of our context?
  2. How can we build capacity without creating new dependencies or deepening existing inequities?
  3. How can we ensure that AI strengthens local leadership, health system performance, and equity?

The Certificate programme builds on this framework by supporting participants to develop their own AI strategies, implementation plans, and governance approaches.

Participants work on real projects in their own organizations.

They receive feedback from peers and experts.

They document what works, what does not, and why. 

Claude’s role inside our own team is a deliberate attempt to live by the same principles we are offering to others.

The AI4Health Framework outlines seven principles, including maintaining human agency in decision-making, augmenting rather than replacing human leadership, enhancing collective intelligence, preserving context sensitivity, and building sustainable hybrid human-AI systems. 

By embedding Claude in our workflows, within our own guardrails, we are testing those principles in a concrete way.

We are asking, in public, whether an AI co-worker can:

  • reduce the cognitive load on a small team supporting large global networks;
  • free human colleagues to spend more time on facilitation, listening, and peer support;
  • do so without diluting authenticity, undermining trust, or erasing local voice.

We are also paying attention to how this looks from the standpoint of frontline practitioners.

Many of them already use AI tools informally to translate materials, interpret guidelines, or draft messages.

Our aim is to offer an example of how those practices might be brought into the open and connected to governance, learning, and support.

Choosing openness over “shadow AI”

The humanitarian AI paradox shows that covert AI use is the current norm.

Most health and humanitarian workers who use AI do so in environments that provide little guidance, few safeguards, and almost no recognition of their efforts to adapt.

We believe that answering this paradox requires organizations to put their own practice on the table.

Our choice is to respond with openness.

By naming Claude as a co-worker, describing openclaw’s role, and setting clear boundaries, we aim to:

  • acknowledge that AI is already part of how work gets done;
  • move AI use from private improvisation into shared, discussable practice;
  • create conditions in which staff and partners can speak honestly about their experiments and concerns.

Openness is also a commitment to keep what we learn in view.

To that end, we will treat Claude’s first months as a structured experiment.

We plan to publish a brief learning log after this initial period, covering three straightforward questions.

  • In which tasks has Claude clearly saved time or improved quality?
  • Where has Claude made mistakes, introduced bias, or raised ethical or technical concerns?
  • What adjustments have we made to our processes, guardrails, or training in response?

Our hope is that this will turn Claude’s onboarding into a live case study, not only for our own network, but also for other mission-driven organizations wrestling with similar questions.

Why this matters for people on the frontlines

The Geneva Learning Foundation exists to support people who work on the frontlines of health and humanitarian response.

Through programmes such as Teach to Reach and the Impact Accelerator, we connect practitioners from health facilities, districts, and national teams across more than 137 countries, enabling them to share ideas and solutions on issues ranging from immunization to climate change and health. 

Our research has demonstrated how these peer learning networks help participants turn knowledge into action, and action into measurable change.

The volume and complexity of this collective intelligence are growing rapidly.

So are the expectations placed on the health workers we serve. 

Claude is the first AI Agent joining us to help carry part of that burden.

By taking on the time-consuming but necessary work of drafting, organizing, and synthesizing information, Claude allows human colleagues to spend more of their limited time on the work that, in our system, only humans can do.

That includes designing learning experiences, facilitating deep exchanges, holding space for difficult stories, and accompanying practitioners as they translate ideas into action.

In this sense, AI is not an abstraction for us.

It is a co-worker that we want to integrate thoughtfully, out of the shadows.

Learning in public and an invitation

We see Claude’s arrival as the beginning of a new chapter in our research and development.

Over the coming months, we will continue to refine Claude’s role, strengthen our guardrails, and listen carefully to how our staff, partners, and participants experience this new form of co-working.

We will feed what we learn back into the AI4Health Framework and the Certificate peer learning programme for AI in global health and humanitarian response, so that other organizations can benefit from our successes and from our mistakes. 

If you are a ministry of health, a health organization, a funder, or a practitioner who is asking similar questions about AI co-workers, shadow AI, and responsible adoption, we would be glad to exchange experience.

The central question for us remains unchanged.

How can we connect human and artificial intelligence in ways that strengthen, rather than weaken, the people and systems that protect health and dignity?

By moving from hidden experiments to shared practice, by treating AI as a named colleague rather than a secret helper, we hope to make a small but concrete contribution to answering that question.
