The arrival of artificial intelligence in the global development and non-profit sectors has sparked a period of intense ethical reflection around what some are calling ‘poverty porn 2.0’.
The challenge is not merely technical but fundamental to non-profit mission.
One important question is how to maintain credibility in an era where seeing is no longer believing.
The visual ethics organization Fairpicture is tackling the nascent public dialogue head-on.
Opening a “FairTalk” panel discussion on 11 February 2026, Fairpicture’s CEO Noah Arnold framed the moment not as a cause for panic, but as a profound crisis of trust.
He described a media landscape where “manufactured images” are becoming so sophisticated that even forensic teams at major news outlets struggle to identify them.
Basil Stücheli, a Swiss photographer and AI artist, illustrated the high stakes with his exhibition What The Fake. He demonstrated how AI can effortlessly collapse a Swiss mountain to mimic a real landslide, showing how easily reality can be bent to serve a narrative.

Yet for the practitioners in the room – an eclectic group of communicators, fundraisers, and researchers – the goal was not necessarily to reject the tool out of fear. It was to wrestle with how to “do right” by the communities they serve while navigating a technology that fundamentally alters the nature of evidence.
However, what began as a conversation about navigating “slop” – the cheap, stereotypical imagery churned out by generative AI – quickly morphed into something far less comfortable.
Over the course of the hour, some panelists questioned the comforting illusion that “real” photography is inherently ethical. In doing so, they suggested that the sector’s problem is not only the new technology. The problem may be the old colonial history that the technology has learned too well…
The drift toward the savior
The technical evidence for AI’s bias is undeniable.
Arsenii Alenichev, a researcher at the Institute of Tropical Medicine in Antwerp, presented a series of experiments he conducted with Midjourney.
His findings are stark.
When he prompted the system to show “Black African doctors help poor and sick white children,” the AI refused to comply. When a complex prompt finally generated the scene, successive iterations pushed the system to “drift towards whiteness,” eventually forcing the doctor to become a white savior and the children to become Black.
“The systems really gravitate towards them,” Alenichev said of these entrenched racial hierarchies.
The machine has been fed so much colonial imagery that it is now incapable of imagining a world where the power dynamics are reversed.

The transactional relationship
While Alenichev diagnosed the machine, Kate Kardol, a communications and fundraising consultant for NGOs and UN agencies, diagnosed the industry.
She addressed the uncomfortable economic incentive that drives organizations toward these images: the belief that pity pays.
Kardol challenged the prevailing assumption that “crying child” imagery is necessary to hit fundraising targets. “I practically would ask them how they arrived at that conclusion,” Kardol said.
She demanded to know if organizations had the data to prove that stripping dignity raises more money, or if they were simply relying on “long-legacy style fundraising” based on lazy assumptions.
For Kardol, the risk of AI is not just about misinformation.
It is about the “transactional relationship” it creates with donors, where guilt is traded for dollars.
She reminded the audience that the sector has a “duty of care” not just to the donor, but to the subject.
An enabler, not an originator
Building on Kardol’s critique of industry habits, Reda Sadki, who leads The Geneva Learning Foundation’s AI4Health programme, sharpened the argument by introducing the brutal economic reality underpinning these decisions.
Sadki revealed that the global health sector faced a “brutal shock” in early 2025, when 70% of its funding was withdrawn by some of the world’s wealthiest countries.
In this climate of austerity, the rush toward AI is not just about efficiency.
It is about existence in a visual and digital world.
This economic pressure drives a dangerous displacement.
Sadki observed that when an organization’s philosophy is rooted in colonial stereotypes, the result is the same whether it uses a “$2,000-a-day world award-winning photographer or a $50-a-month software subscription”. The difference is that the $50 subscription is still affordable when the budget collapses.
“Artificial intelligence did not create this impulse,” Sadki said, referring to the demand for stereotypes. “It just made it cheaper, faster, and easier to execute”.
The true casualties of this crisis of representation may be the people on the receiving end of the aid.
They are stripped of a human observer who might offer a duty of care, and handed over to an algorithm that knows only how to automate the dehumanization of the past.
By removing the human witness, are organizations removing a line of defense for the subject’s dignity?
This could be the case, were it not for the checkered history (to put it politely) of how human photographers, mostly from high-income countries, have represented ‘their’ subjects.
The prompt is the new brief
The most incisive critique of the “human vs. machine” binary came when Alenichev returned to the discussion to play “devil’s advocate”. He challenged the popular call for organizations to disclose their AI prompts as a transparency measure.
“There is an elephant in the room which is called the Photography Brief,” Alenichev said.
He argued that photographers working in global health have effectively functioned as a “generative engine” for decades, trained on past images and deployed to reproduce iconic tropes of suffering.
If we demand to see the AI prompts, Alenichev asked, should we not also demand to see the briefs sent to human photographers?
He noted that these documents often contain “quite charged language” and that organizations would be “very uncomfortable” disclosing them.
The implication was devastating: the human process of image-making is often just as “programmed” by colonial assumptions as the algorithmic one.
The witness and the archive
Despite these critiques, Basil Stücheli offered perhaps the only technical defense against the total erosion of truth. “An AI cannot be on a certain place in a certain time,” Stücheli insisted.
For Stücheli, this physical limitation is the saving grace of the medium.
While illustrative photography might be swallowed by the algorithm, he argued that “documentary photography” remains distinct because it requires a witness.
Reda Sadki offered a live experiment in this tension.
His organization runs an annual call for photos, inviting immunization staff to document their daily work, a “real” archive of the frontline.
Yet, alongside this, his team aggressively uses AI to illustrate social media narratives.
Sadki admitted he is waiting for the collision.
He fears a future where the prevalence of synthetic images causes the public to doubt the “real” photos taken by health workers, dismantling the documentary value of the very images the sector is trying to protect.
But some panelists suggested this distinction might be a luxury that many can no longer afford.
Alenichev pointed out that small organizations are turning to AI not out of malice, but out of poverty.
This creates a new form of inequality: if “ethical representation” means hiring a human witness, it is becoming a luxury good.
He highlighted the hypocrisy of large legacy organizations that now refuse to use AI on ethical grounds, despite having created the “horrible images” that fill the datasets in the first place. “They polluted the datasets that other small organizations are forced to use,” Alenichev said.
The panel ended not with a call for better software, but for a deeper historical excavation.
Sadki’s final advice to the audience was simple. “Do not start with a rule about the software,” he said. “Start with a rule or an exploration about your culture”.
He urged organizations to dig into their own libraries and archives to make sense of the “colonial roots” of their visual language.
The danger of AI is not that it creates something new and terrible.
The danger is that it perfectly preserves the old poison we have yet to flush out of the system.
Banning the “fake” images will not save us if the “real” ones are just as compromised.
As Alenichev noted, the white savior does not need a computer to exist.
He has been with us all along.
References
- Down A. AI-generated ‘poverty porn’ fake images being used by aid agencies. The Guardian [Internet]. 2025 Oct 20. Available from: https://www.theguardian.com/global-development/2025/oct/20/ai-generated-poverty-porn-fake-images-being-used-by-aid-agencies
- Alenichev A, Kingori P, Peeters Grietens K. Reflections before the storm: the AI reproduction of biased imagery in global health visuals. The Lancet Global Health [Internet]. 2023 Oct [cited 2025 Oct 22];11(10):e1496–8. Available from: https://linkinghub.elsevier.com/retrieve/pii/S2214109X23003297
- Kardol K. ‘Poverty porn’ in the era of generative AI [Internet]. Bern, Switzerland: Fairpicture; 2025. Available from: https://fairpicture.org/poverty-porn-in-the-era-of-generative-ai-whitepaper-checklist/
- Sadki R. How do we stop AI-generated ‘poverty porn’ fake images? Reda Sadki: Learning to make a difference [Internet]; 2025. Available from: https://doi.org/10.59350/03c4y-r2d18
