Think!

Politics and pop intersect as the debate around the ethics of AI image generators intensifies

AI-powered image generators have been hailed as the future of image creation and creativity, but recent high-profile cases in both the pop world and politics point to a more sinister use of the technology.

Recently, X (formerly Twitter) was forced to shut down searches for pop artist Taylor Swift (arguably the most famous woman in the world) after the site became flooded with AI-generated deepfake porn images of the singer faster than they could be deleted.

And the issue is not confined to Swift. According to an article in the Financial Review, roughly 96% of deepfakes on the internet are pornographic. While not an entirely new occurrence (fake porn has been possible for over two decades thanks to software like Photoshop), AI-powered image generators are making deepfake pornography easy and accessible to any layperson on the internet with the drive and malicious intent, and the results are almost undetectable.

But as the Financial Review notes, what Swift’s case highlights is the creation of “whole new groups of both victims and abusers in a marketplace for unauthorised, sexualised images.” 

“They point to the quieter but no-less damaging way that generative AI has been undermining the dignity of women, churning out images that are sexualised by default, for instance, with the situation worse for women of colour.” 

But while Swift battles sad men in their parents' basements, closer to home a Victorian state politician was forced to publicly take on media giant Nine News after it digitally altered an image of her to expose her midriff and enlarge her breasts.

Animal Justice Party MP Georgie Purcell took to X recently to call out Nine News Melbourne after it used a doctored image of her for a broadcast about duck hunting.

In the post, Purcell pointed out that her chest had been altered to appear larger and that her dress had been changed to a midriff-revealing two-piece.


Nine has since responded to the gaffe and claimed that it was an innocent “automation” mistake.  

In a statement to Crikey, the director of 9News Melbourne apologised to Purcell, stating, “During that process, the automation by Photoshop created an image that was not consistent with the original.”

Which raises the question: who is really steering the ship when it comes to AI?

Australian journalists and media workers have already admitted to using generative AI in their reports and broadcasts, but as Crikey points out, AI has been trained on the internet, and more often than not it displays our unfortunate biases.

“We know that AI reflects our own biases and repeats them back to us. Like the researcher who found that when you asked MidJourney to generate ‘Black African doctors providing care for white suffering children’, the generative AI product would always depict the children as Black, and even would occasionally show the doctors as white. Or the group of scientists who found that ChatGPT was more likely to call men an “expert” and women “a beauty” when asked to generate a recommendation letter.” 

And while congresses and governments scramble to create laws and provisions in response to our growing use of AI technology, it is generally society's most vulnerable who have to deal with the fallout of unchecked media.

While Swift and Purcell have the status, funds (in Swift's case) and platform to address their public shaming, what about the thousands of nameless and voiceless victims of revenge porn and sexualisation, now made all the easier by AI technology?
