Artificial intelligence, accountability, and authenticity: knowledge production and power in a global health crisis

I have known Joseph, a Kenyan health leader from Murang’a County, for years, and have come to appreciate his diligent leadership and his contributions as a Scholar of The Geneva Learning Foundation (TGLF). Recently, he began submitting AI-generated responses to Teach to Reach Questions that were meant to elicit narratives grounded in his personal experience.

Seemingly unrelated to this, OpenAI just announced plans for specialized AI agents—autonomous systems designed to perform complex cognitive tasks—with pricing ranging from $2,000 monthly for a “high-income knowledge worker” equivalent to $20,000 monthly for “PhD-level” research capabilities.

This is happening at a time when traditional funding structures in global health, development, and humanitarian response face unprecedented volatility.

These developments intersect around fundamental questions of knowledge economics, authenticity, and power in global health contexts.

I want to explore three questions:

  • What happens when health professionals in resource-constrained settings experiment with AI technologies within accountability systems that often penalize innovation?
  • How might systems claiming to replicate human knowledge work transform the economics and ethics of knowledge production?
  • And how should we navigate the tensions between technological adoption and authentic knowledge creation?

Artificial intelligence within punitive accountability structures of global health

For years, Joseph had shared thoughtful, context-rich contributions based on his direct experiences. All of a sudden, he was submitting generic mush with all the trappings of bad generative AI content.

Should we interpret this as disengagement from peer learning?

Given his history of diligence and commitment, I could not dismiss his exploration of AI tools as diminished engagement. Instead, I understood it as an attempt to incorporate new capabilities into his professional repertoire. This was confirmed when I spoke with him on a WhatsApp call.

Our current Teach to Reach Questions system has not yet incorporated the use of AI, and it gave Joseph no way to communicate what he was exploring.

The quality limitations of AI-generated narratives thus point not to an ethical failing but to a developmental process that calls for support rather than judgment.

But what does this look like when situated within global health accountability structures?

Health workers frequently operate within highly punitive systems where performance evaluation directly impacts funding decisions. International donors maintain extensive surveillance of program implementation, creating environments where experimentation carries significant risk. When knowledge sharing becomes entangled with performance evaluation, the incentives for transparency about AI “co-working” (humans and AI collaborating on work tasks) diminish dramatically.

Seen through this lens, the question becomes not whether to prohibit AI-generated contributions but how to create environments where practitioners can explore technological capabilities without fear that disclosure will lead to automatic devaluation of their knowledge, regardless of its substantive quality. Much here depends on learning culture, which remains largely ignored or dismissed in global health.

The transparency paradox: disclosure and devaluation of artificial intelligence in global health

This case illustrates what might be called the “transparency paradox”—when disclosure or recognition of AI contribution triggers automatic devaluation regardless of substantive quality. Current attitudes create a problematic binary: acknowledge AI assistance and have contributions dismissed regardless of quality, or withhold disclosure and risk accusations of misrepresentation or worse.

This paradox creates perverse incentives against transparency, particularly in contexts where knowledge production undergoes intensive evaluation linked to resource allocation. The global health sector’s evaluation systems often emphasize compliance over innovation, creating additional barriers to technological experimentation. When every submission potentially affects funding decisions, incentives for technological experimentation become entangled with accountability pressures.

This dynamic particularly affects practitioners in Global South contexts, who face more intense scrutiny while having less institutional protection for experimentation. The punitive nature of global health accountability systems deserves particular emphasis. Health workers operate within hierarchical structures where performance is consistently monitored by both national governments and international donors. Surveillance extends from quantitative indicators to qualitative assessments of knowledge and practice.

In environments where funding depends on demonstrating certain types of knowledge or outcomes, the incentive to leverage artificial intelligence in global health may conflict with values of authenticity and transparency. This surveillance culture creates uniquely challenging conditions for technological experimentation. When performance evaluation drives resource allocation decisions, health workers face considerable risk in acknowledging technological assistance—even as they face pressure to incorporate emerging technologies into their practice.

The economics of knowledge in global health contexts

OpenAI’s announced “agents” represent a substantial evolution beyond simple chatbots or language models. If OpenAI can deliver what it has announced, these specialized systems would autonomously perform complex tasks, simulating the cognitive work of highly skilled professionals. The most expensive tier, priced at $20,000 monthly, purportedly offers “PhD-level” research capabilities, working continuously without the limitations of human scheduling or attention.

These claims, while unproven, suggest a potential future where the economics of knowledge work fundamentally change. For global health organizations operating in Geneva, where even a basic intern position for a recent master’s graduate costs more than 200 times as much as a ChatGPT subscription, the economic proposition of systems working 24/7 for potentially comparable costs merits careful examination.
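To put rough numbers on that comparison, taking the standard US$20 monthly ChatGPT subscription as the baseline: 200 × $20 works out to roughly $4,000 per month for the intern. At face value, the cheapest announced agent tier ($2,000 monthly) would cost half as much as that intern, while the $20,000 “PhD-level” tier would cost five times as much, for a system that never takes a day off.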

However, the global health sector has historically operated with significant labor stratification, where personnel in Global North institutions command substantially higher compensation than those working in Global South contexts. Local health workers often provide critical knowledge at compensation rates far below those of international consultants or staff at Northern institutions. This creates a different economic equation than suggested by Geneva-based comparisons. Many organizations have long relied on substantially lower local labor costs, often justified through capacity-building narratives that mask underlying power asymmetries.

Given this history, the risk that artificial intelligence in global health would replace local knowledge workers might initially appear questionable. Furthermore, the sector has demonstrated considerable resistance to technological adoption, particularly when it might disrupt established operational patterns. However, this analysis overlooks how economic pressures interact with technological change during periods of significant disruption.

The recent decisions by many government donors to suddenly and drastically cut funding and shut down programs illustrate how rapidly even established funding structures can collapse. In such environments, organizations face existential questions about maintaining operational capacity, potentially creating conditions where technological substitution becomes more attractive despite institutional resistance.

A new AI divide

ChatGPT and other generative AI tools were initially “geo-locked”, making them more difficult to access from outside Europe and North America.

Now, the stratified pricing structure of OpenAI’s announced agents raises profound equity concerns. With the most sophisticated capabilities reserved for those able to pay the highest prices, we face the potential emergence of an “AI divide” that threatens to reinforce existing imbalances in knowledge and power.

This divide presents particular challenges for global health organizations working across diverse contexts. If advanced AI capabilities remain the exclusive province of Northern institutions while Southern partners operate with limited or no AI augmentation, how might this affect knowledge dynamics already characterized by significant inequities?

The AI divide extends beyond simple access to include quality differentials in available systems. Even as simple AI tools become widely available, sophisticated capabilities that genuinely enhance knowledge work may remain concentrated within well-resourced institutions. This could lead to a scenario where practitioners in resource-constrained settings use rudimentary AI tools that produce low-quality outputs, further reinforcing perceptions of capability gaps between North and South.

Confronting power dynamics in AI integration

Traditional knowledge systems in global health position expertise in academic and institutional centers, with information flowing outward to practitioners who implement standardized solutions. This existing structure reflects and reinforces global power imbalances. 

The integration of AI within these systems could either exacerbate these inequities—by further concentrating knowledge production capabilities within well-resourced institutions—or potentially disrupt them by enabling more distributed knowledge creation processes.

Joseph’s journey demonstrates this tension. His adoption of AI tools might be viewed as an attempt to access capabilities otherwise reserved for those with greater institutional resources. The question becomes not whether to allow such adoption, but how to ensure it serves genuine knowledge democratization rather than simply producing more sophisticated simulations of participation.

These emerging dynamics require us to fundamentally rethink how knowledge is valued, created, and shared within global health networks. The transparency paradox, economic pressures, and emerging AI divide suggest that technological integration will not occur within neutral space but rather within contexts already characterized by significant power asymmetries.

Developing effective responses requires moving beyond simple prescriptions about AI adoption toward deeper analysis of how these technologies interact with existing power structures—and how they might be intentionally directed toward either reinforcing or transforming these structures.

My framework for Artificial Intelligence as co-worker to support networked learning and local action (Sadki, 2025) is intended to contribute to such efforts.

References

Frehywot, S., Vovides, Y., 2024. Contextualizing algorithmic literacy framework for global health workforce education. AIH 0, 4903. https://doi.org/10.36922/aih.4903

Hazarika, I., 2020. Artificial intelligence: opportunities and implications for the health workforce. International Health 12, 241–245. https://doi.org/10.1093/inthealth/ihaa007

John, A., Newton-Lewis, T., Srinivasan, S., 2019. Means, Motives and Opportunity: determinants of community health worker performance. BMJ Glob Health 4, e001790. https://doi.org/10.1136/bmjgh-2019-001790

Newton-Lewis, T., Munar, W., Chanturidze, T., 2021. Performance management in complex adaptive systems: a conceptual framework for health systems. BMJ Glob Health 6, e005582. https://doi.org/10.1136/bmjgh-2021-005582

Newton-Lewis, T., Nanda, P., 2021. Problematic problem diagnostics: why digital health interventions for community health workers do not always achieve their desired impact. BMJ Glob Health 6, e005942. https://doi.org/10.1136/bmjgh-2021-005942

OECD, 2024. Artificial Intelligence and the health workforce: perspectives from medical associations on AI in health. OECD Artificial Intelligence Papers No. 28. https://doi.org/10.1787/9a31d8af-en

Sadki, R., 2025. A global health framework for Artificial Intelligence as co-worker to support networked learning and local action. Reda Sadki. https://doi.org/10.59350/gr56c-cdd51