Image: Retrieved from Plan International, 2025.
Aid organisations are increasingly using AI-generated visuals to depict poverty, children and conflict-affected people. These images show extreme scenes—malnourished children, cracked earth, and suffering survivors—yet they are not real photographs.
Researchers working in global health and communications have raised concerns about these synthetic visuals. One of them, Arsenii Alenichev, has identified more than 100 such AI-generated images used in aid campaigns. He says they reproduce the visual language of “poverty porn”, inviting pity rather than understanding.
Two key drivers: cost and consent. Unlike genuine photographs, AI-generated images don’t require negotiating access, paying subjects, or obtaining consent from those depicted. This makes them an appealing shortcut for budget-stretched organisations. However, the visuals often carry racialised or exaggerated tropes. For example, scenes emphasise children in mud or devastation in ways that reinforce stereotypes about certain regions rather than showing accurate, dignified realities.
Some agencies are now responding. Plan International, for instance, has issued guidance advising against using AI-generated images that depict individual children. The aim: to preserve dignity, ensure consent, and improve the ethics of storytelling.
While AI visuals may offer convenience and lower cost, their use in humanitarian campaigns raises serious ethical questions about representation, dignity and the accuracy of global-development messaging. Agencies face a trade-off: efficient imagery versus responsible storytelling.