There is an important and necessary conversation happening right now about the use of generative artificial intelligence in global health and humanitarian communications.
Researchers like Arsenii Alenichev are correctly identifying a new wave of “poverty porn 2.0,” where artificial intelligence is used to generate stereotypical, racialized images of suffering—the very tropes many of us have worked for decades to banish.
The alarms are valid.
The images are harmful.
But I am deeply concerned that in our rush to condemn the new technology, we are misdiagnosing the cause.
The problem is not the tool.
The problem is the user.
Generative artificial intelligence is not the cause of poverty porn.
The root cause is the deep-seated racism and colonial mindset that have defined the humanitarian aid and global health sectors since their inception.
This is not a new phenomenon.
It is a long-standing pattern.
In my private conversations with colleagues and researchers like Alenichev, I find we often agree on this point.
Yet, the public-facing writing and research seem to stop short, focusing on the technological symptom rather than the systemic illness.
It is vital we correct this focus before we implement the wrong solutions.
The old poison in a new bottle
Long before Midjourney, large organizations and their communications teams were propagating the worst kinds of caricatures.
I know this.
Many of us know this.
We remember the history of award-winning photographers being sent from the Global North to “find… miserable kids” and stage images to meet the needs of funders. Organizations have always been willing to manufacture narratives that “show… people on the receiving end of aid as victims”.
These working cultures — which demand images of suffering, which view Black and Brown bodies as instruments for fundraising, and which prioritize the “western gaze” — existed decades before artificial intelligence.
Artificial intelligence did not create this impulse.
It just made it cheaper, faster, and easier to execute.
It is an enabler, not an originator.
If an organization’s communications philosophy is rooted in colonial stereotypes, it will produce colonial stereotypes, whether it is using a $1,000-a-day photographer or a $30-a-month software subscription.
The danger of a misdiagnosis
If we incorrectly identify artificial intelligence as the cause of this problem, our “solution” will be to ban the technology.
This would be a catastrophic mistake.
First, it is a superficial fix.
It allows the very organizations producing this content to performatively cleanse themselves by banning a tool, all while evading the fundamental, painful work of challenging their own underlying racism and colonial impulses.
The problem will not be solved; it will simply revert to being expressed through traditional (and often staged) photography.
Second, it punishes the wrong people.
For local actors and other small organizations, generative artificial intelligence is not necessarily a tool for creating poverty porn.
It is a tactical advantage in a fight for survival.
Such organizations may lack the resources for a full communications team.
They are then “punished by algorithms” that demand a constant stream of visuals, burying stories of organizations that cannot provide them.
Furthermore, some organizations committed to dignity in representation are also using artificial intelligence to solve other deep ethical problems.
They use it to create dignified portraits for stories without navigating the complex, often extractive processes around child protection and consent.
They use it to avoid exploiting real people.
A blanket ban on artificial intelligence in our sector would disarm small, local organizations.
It would silence those of us trying to use the tool ethically, while allowing the large, wealthy organizations to continue their old, harmful practices unchanged.
The real work ahead
This is why I must insist we reframe the debate.
The question is not if we should use artificial intelligence.
The question is, and has always been, how we challenge the racist systems that demand these images in the first place.
My Algerian ancestors fought colonialism.
I cannot separate my work at The Geneva Learning Foundation from the struggle against racism and fighting for the right to tell our own stories.
That philosophy guides how I use any tool, whether it is a word processor or an image generator.
The tool is not the ethic.
We need to demand accountability from organizations like the World Health Organization, Plan International, and even the United Nations.
We must challenge the working cultures that green-light these campaigns.
We should also, as Arsenii rightly points out, support local photographers and artists.
But we must not let organizations off the hook by allowing them to blame a piece of software for their own lack of imagination and their deep, unaddressed colonial legacies.
Artificial intelligence is not the problem.
Our sector’s colonial mindset is.
References
- Alenichev, A., Kingori, P., Grietens, K.P., 2023. Reflections before the storm: the AI reproduction of biased imagery in global health visuals. The Lancet Global Health 11, e1496–e1498. https://doi.org/10.1016/S2214-109X(23)00329-7
- Down, A., 2025. AI-generated ‘poverty porn’ fake images being used by aid agencies. The Guardian.
- Gill, D., Levidow, L., 1987. Anti-racist science teaching.
Image: The Geneva Learning Foundation Collection © 2025