Old poison in new bottles

How do we stop AI-generated ‘poverty porn’ fake images?

Reda Sadki | Artificial intelligence, Global health

There is an important and necessary conversation happening right now about the use of generative artificial intelligence in global health and humanitarian communications. Researchers like Arsenii Alenichev are correctly identifying a new wave of "poverty porn 2.0," in which artificial intelligence is used to generate stereotypical, racialized images of suffering, the very tropes many of us have worked for decades to banish.

The alarms are valid. The images are harmful. But I am deeply concerned that, in our rush to condemn the new technology, we are misdiagnosing the cause.

The problem is not the tool. The problem is the user. Generative artificial intelligence is not the cause of poverty porn. The root cause is the deep-seated racism and colonial mindset that have defined the humanitarian aid and global health sectors since their inception.

This is not a new phenomenon. It is a long-standing pattern. In my private conversations with colleagues and researchers …