Sydney, Australia — In a chilling sign of how artificial intelligence is being weaponized, Australian activist Caitlin Roper has become the latest victim of an AI-generated harassment campaign that used hyper-realistic videos and images to depict her violent death.
The content, produced with powerful generative AI tools, depicted Roper being hanged, burned and assaulted, often wearing her own clothes and set inside her own home, which made the threats all the more distressing.
Experts say the case marks a dark new phase of online abuse, one in which AI turns hateful words into terrifying, believable imagery.
The Rise of AI-Generated Harassment
Not long ago, online abuse meant crude remarks, photoshopped memes or hostile direct messages. Today, with generative AI models able to create lifelike images and videos in seconds, the landscape has changed completely.
Roper, who campaigns against exploitation and misogyny in media with the advocacy group Collective Shout, says the threats began as posts on social media but quickly escalated into synthetic videos of her being murdered.
“When you see yourself dead in your own clothes, in your own house – it feels real,” she told Australian media outlets.
“This is no longer just internet trolling. It’s psychological warfare.”
Cybersecurity specialists say such AI-driven campaigns represent a new wave of online harassment, one that combines personal data, deepfake technology and automated content generation.
How Artificial Intelligence Makes Threats Real
Even digital-safety researchers who reviewed the fabricated scenes were shocked by their realism.
Using as few as one or two publicly available photos, usually scraped from social media or a news report, malicious actors can now produce “death videos” or images that place a person inside scenes of extreme violence.
Producing this material requires neither technical expertise nor expensive equipment: free or low-cost AI generators can turn out surprisingly believable images.
Researchers from the University of New South Wales say that once someone’s likeness is digitised, “it’s only a matter of time before it’s weaponised.”
Because these tools are so readily available, harassment that once required coordination now takes minutes, and perpetrators are shielded by anonymity.
Platforms Struggling to Keep Up
Roper reported the content to several social media companies, with mixed results. Some posts were removed quickly; others stayed up for days. At one point, she says, she was locked out of her own account while reporting the harassment.
The inconsistency points to a growing problem: content-moderation systems were designed to handle text and straightforward imagery, not AI-generated media.
Most social networks rely on reactive moderation, in which users report abuse and moderators investigate. But AI-generated threats are produced far faster than platforms can react.
There is growing agreement that detecting and preventing AI-generated content should now be part of the safety infrastructure of every major platform.
“What platforms need is AI that can recognize other AI,” says Dr. Daniel Maimon, a cybersecurity researcher at Georgia State University. “Otherwise, they are fighting 21st-century attacks with 20th-century tools.”
The Legal Grey Area of AI-Enabled Abuse
Legal experts say the case exposes a regulatory gap around AI-assisted harassment.
Death threats are already illegal, but current legislation says little about synthetic media or the use of AI to create personalised violent content. As a result, prosecutors may struggle to apply traditional statutes to AI-generated material.
Australia’s eSafety Commissioner, Julie Inman Grant, has repeatedly warned that AI is outpacing legislation.
A recent report from her office recorded a 43% rise in reports of image-based abuse involving AI tools in 2025 compared with 2023.
The UK and Canada are already considering dedicated “synthetic threat” legislation, and the EU’s AI Act treats some uses of generative models to harass people as an “unacceptable risk.” Experts warn that Australia must act urgently to close its own policy gap.
AI Harassment: A New Psychological Weapon
What makes AI-generated harassment uniquely harmful is its psychological realism.
Victims are confronted with a realistic depiction of their own harm, which can trigger panic, anxiety and lasting trauma.
Commentators describe this as a virtual form of “stalking with special effects.”
A 2025 study published in the Journal of Online Safety and Psychology found that victims of AI-generated threats were three times more likely to report feeling paranoid and hypervigilant than people exposed to text-based abuse.
Roper says the experience has made her more cautious online:
“It’s used to silence you, to make you back down. And that is why it is so dangerous for activists.”
The Tech Behind the Threat
The attackers are believed to have used text-to-image and image-to-video generators built on open-source diffusion models. These tools can create photorealistic content from natural-language scene descriptions.
Some of the same AI engines that power services like Midjourney or RunwayML can also be misused to produce synthetic violence.
Complicating matters further, many models can be run locally on consumer hardware, leaving investigators with no server logs to examine and no trace to audit.
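To make the “no server logs” point concrete, the sketch below shows how little code a fully local image-generation run involves. It is purely illustrative and assumes the open-source Hugging Face diffusers library and a publicly released Stable Diffusion checkpoint; the model name and prompt are examples, not details from the reported attacks.

```python
# Minimal sketch of local text-to-image generation (illustrative only).
# Assumes the open-source "diffusers" library and a public Stable Diffusion
# checkpoint; nothing here touches a remote service after the model download,
# so there are no server logs to subpoena.
import torch
from diffusers import StableDiffusionPipeline

# Download the model once, then everything runs on the local machine.
pipe = StableDiffusionPipeline.from_pretrained(
    "runwayml/stable-diffusion-v1-5",   # example open-source checkpoint
    torch_dtype=torch.float16,
)
pipe = pipe.to("cuda")                   # a consumer GPU is sufficient

# Generate an image from a plain-language description and save it locally.
image = pipe("a photorealistic street scene at dusk").images[0]
image.save("output.png")                 # no audit trail beyond the local file
```

Because the entire pipeline executes on the user’s own hardware, the only record of what was generated is whatever files the user chooses to keep.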
Cybercrime experts say this is part of a larger trend of AI democratization: the same tools that empower creators also empower abusers.
What Needs to Change
1. Increased Regulation and Responsibility
Governments need legislation that explicitly criminalises AI-assisted threats, impersonation and synthetic harassment.
Existing harassment laws rarely address AI-generated content, which is often anonymous and therefore hard to attribute.
2. AI Detection and Digital Forensics
Platforms must invest in synthetic-media detection systems: algorithms equipped to spot AI fingerprints, image inconsistencies or model signatures.
Shared databases can help platforms identify repeat offenders and recurring threat templates.
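As a rough illustration of what a lightweight first-pass check might look like, the sketch below inspects an image’s embedded metadata for traces that some open-source generation tools are known to leave behind, such as a “parameters” text chunk in PNG files or a telltale EXIF “Software” tag. This is an assumption-laden heuristic, not a detection system: robust forensics depends on model fingerprints and pixel-level analysis, and the absence of metadata proves nothing.

```python
# Illustrative metadata triage for possibly AI-generated images.
# Assumes the Pillow library; the keys checked here are examples of traces
# that some generation front-ends embed, not an exhaustive or reliable list.
from PIL import Image

def synthetic_metadata_hints(path: str) -> list[str]:
    """Return human-readable hints that an image may carry generator metadata."""
    hints = []
    img = Image.open(path)

    # PNG text chunks (e.g. a "parameters" key) left behind by some generation UIs.
    for key, value in (img.info or {}).items():
        if isinstance(value, str) and key.lower() in {"parameters", "prompt", "workflow"}:
            hints.append(f"PNG text chunk '{key}': {value[:80]}")

    # EXIF tag 305 ("Software") sometimes names the tool that produced the file.
    software = img.getexif().get(305)
    if software:
        hints.append(f"EXIF Software tag: {software}")

    return hints

if __name__ == "__main__":
    for hint in synthetic_metadata_hints("suspect.png"):
        print(hint)
```

In practice such checks are only a triage step; determined abusers can strip metadata, which is why the detection systems described above also analyse the image content itself.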
3. Platform Transparency
Social networks need to be more transparent about the way they deal with AI-generated content, including:
- The number of synthetic threats detected and removed
- Response times for user reports
- Coordination with law-enforcement agencies
4. Empowering Users and Victims
Victims should be given fast reporting options, trauma-informed support and direct escalation to human moderators.
Digital-safety training, especially for journalists, activists and influencers, can help reduce exposure and build resilience.
A Call to the Tech Industry
For the technology industry, Roper’s case is a warning, but also an opportunity.
If platforms and AI developers move quickly, they can help shape the ethical frameworks and protective technologies before synthetic abuse becomes mainstream.
“We have to stop thinking of AI safety as an academic problem,” says cybersecurity expert Rachael Falk of the Cyber Security Cooperative Research Centre.
“It has become a human problem: people’s mental health, reputations and lives are at risk.”
Several large AI labs, including OpenAI, Google DeepMind and Anthropic, have already added watermarking and traceability technologies to newer models. But smaller open-source models, which no one oversees, remain the biggest blind spot in the safety landscape.
A Human Story at the Heart of a Tech Crisis
Often lost in policy arguments and debates about AI ethics is the human cost. For Caitlin Roper, the harassment was not theoretical; it was visceral.
Each image of her being killed was a calculated act of intimidation, designed to turn the technology’s realism against her courage.
Her story is now energizing activists, policymakers and technologists alike to call for a coordinated response.
“AI should be a tool for empowerment and innovation, driving creativity and progress, not terror,” she said in a statement. “If technology is not safe to begin with, we’ll pay the price.”
The Bigger Picture: The Digital World We All Live In
What happened to Caitlin Roper is far more than an isolated case; it is an international tipping point in the story of how powerful technologies collide with human vulnerability.
The same machines that can write poetry, script films and design buildings can also manufacture horror.
As society hurtles into the AI age, digital trust will need to be strengthened so that technology serves truth, not terror.
For platforms, regulators, and developers alike, the message is clear:
AI-driven harassment isn’t the future: it’s the present.
And the time to act is now.