State of AI report

What the 2025 State of AI Report means for global health and humanitarian action

Reda Sadki | Artificial intelligence, Global health

The 2025 State of AI Report has arrived, painting a picture of an industry being fundamentally reshaped by “The Squeeze.”

This is a critical, intensifying constraint on three key resources: the massive-scale compute (processing power) required for training, the availability of high-quality data, and the specialized human talent needed to build frontier models.

This squeeze, the report details, is accelerating a consolidation of power.

It favors the “hyperscalers”—the handful of large technology corporations that can afford to build their own power plants to run their data centers.

For leaders in global health and humanitarian action, the report is essential reading.

However, it must be read with a critical eye.

The report’s narrative is, in many ways, the narrative of the hyperscalers.

It focuses on the benchmarks they dominate, the closed models they are building, and the resource problems they face.

This “view from the top” is valuable, but it is not the only reality.

What does this consolidation of power mean for our sector, and where should we be focusing our attention?

The new AI divide: A focus on closed-model dominance

The report documents a clear trend: closed, proprietary models are pulling ahead of open-source alternatives in raw performance benchmarks.

This is a direct result of the compute squeeze.

When training costs become astronomical, only the wealthiest organizations can compete at the frontier.

This focus on state-of-the-art performance, while informative, can be a distraction.

For humanitarian action, the “best” model is not necessarily the one that tops a leaderboard, but the one that is affordable, adaptable, and deployable in low-resource settings.

The true implication for our sector is the emergence of a new “AI divide”.

This divide is not just about access but about capability.

We may face a future in which Global North institutions license “PhD-level” specialized AI agents at a cost lower than their human counterparts, while practitioners in the Global South are left with rudimentary or geolocked tools.

This dynamic threatens to reinforce, rather than disrupt, existing imbalances of knowledge and power, and it risks a new era of “digital colonialism”, in which the sector becomes entirely dependent on a few private companies for its most critical technology.

Opportunities in the State of AI: Breakthroughs in science and health

The most unambiguous good news in the 2025 report is the dramatic acceleration of AI in science and medicine.

AI is no longer just a research assistant; it is demonstrating expert-level accuracy in diagnostics and is actively designing novel therapeutics.

This is a profound opportunity for global health.

Where the report’s perspective is incomplete, however, is on the gap between this capability and its real-world application.

An AI can provide a brilliant medical insight, but it lacks the “contextual intelligence” of a local practitioner.

An AI model may not know that people in a specific district avoid the clinic on Tuesdays because it is market day – unless humans are working side by side with the model to share such qualitative and experiential data.

Read more: Why peer learning is critical to survive the Age of Artificial Intelligence

Therefore, the report’s findings on medical AI should not prompt us to simply buy new tools.

They should prompt us to invest in the human infrastructure, such as structured peer learning networks, where health workers can collectively learn how to blend AI’s power with their deep understanding of local realities.

The State of AI report’s risks and our own

The 2025 report rightly identifies a shift in risk, moving from passive issues like model bias to active, malicious threats like accelerated cyber capabilities and new “bio-risks.”

These are critical concerns for the health and humanitarian sectors.

But the report misses the most immediate barrier to AI adoption in our field: our own organizational culture.

Many of our institutions operate within “highly punitive accountability systems”.

These systems, which tie performance evaluation directly to funding, create an environment where experimentation carries significant personal and institutional risk.

This leads to a “transparency paradox”.

Health workers and field staff are already experimenting with AI, but they are forced to hide their use.

If they disclose that a report was AI-assisted, they risk having their work subjected to “automatic devaluation,” regardless of its quality.

This punitive culture prevents open discussion and makes collective learning difficult.

State of AI: A strategic response to the squeeze

The 2025 State of AI Report confirms that we cannot compete on compute.

Our strategy must therefore be one of smart adaptation and collective action.

For global health and humanitarian leaders, key takeaways include:

  1. Do not be distracted by the state-of-the-art (“SOTA”) race. Our goal is not to have the highest-performing model, but the most applicable and equitable one.
  2. Invest in human networks, not just technology. The greatest gains will come from building the collaborative capacity of our workforce to use AI tools effectively in context.
  3. Fix our internal culture. We must create environments where staff can experiment with AI openly and safely, without fear of reprisal. We cannot adapt to this technology if we are punishing our innovators.
  4. Unite for collective power. The report’s theme of consolidation is a warning. As individual non-governmental organizations, we have no power to negotiate with hyperscalers. We must explore forming a “cooperative” to gain a “seat at the table” and co-shape an AI ecosystem that serves the public interest, not just corporate agendas.

These risks and opportunities are central to why The Geneva Learning Foundation is offering the AI4Health certificate programme. Learn more here: https://www.learning.foundation/ai.
