Trump signs new bill against AI-generated harassment online

“Anyone who intentionally distributes explicit images without the subject’s consent will face up to three years in prison,” says Trump

By AFP
US President Donald Trump signs the bill during the ceremony for the Take it Down Act, in the Rose Garden of the White House in Washington, D.C., U.S., May 19, 2025. — Reuters

  • Law makes it crime to share intimate images without consent.
  • First Lady calls it national victory to protect children, families.
  • Law requires social media platforms to remove flagged content.


WASHINGTON: US President Donald Trump has signed a new law that makes it a crime to share fake or real intimate images created without someone’s permission.

The law targets videos and pictures made using artificial intelligence — known as deepfakes — which are often used to harass or embarrass people online.

The new rules mean anyone who shares such content without consent could face up to three years in prison.

The Take It Down Act, passed with overwhelming bipartisan congressional support, criminalises the non-consensual publication of intimate images, while also mandating their removal from online platforms.

“With the rise of AI image generation, countless women have been harassed with deepfakes and other explicit images distributed against their will,” Trump said at a signing ceremony in the Rose Garden of the White House.

“And today we’re making it totally illegal,” the president said. “Anyone who intentionally distributes explicit images without the subject’s consent will face up to three years in prison.”

First Lady Melania Trump endorsed the bill in early March and attended the signing ceremony in a rare public White House appearance.

The First Lady has largely been an elusive figure at the White House since her husband took the oath of office on January 20, spending only limited time in Washington.

In remarks at the signing ceremony, she described the bill as a “national victory that will help parents and families protect children from online exploitation.”

“This legislation is a powerful step forward in our efforts to ensure that every American, especially young people, can feel better protected from their image or identity being abused,” she said.

Deepfakes often rely on artificial intelligence and other tools to create realistic-looking fake videos.

They can be used to create falsified explicit imagery of real women, which is then published without their consent and widely shared.

Some US states, including California and Florida, have laws criminalising the publication of sexually explicit deepfakes, but critics have voiced concerns that the Take It Down Act grants the authorities increased censorship power.

The Electronic Frontier Foundation, a non-profit focused on free expression, has said the bill gives “the powerful a dangerous new route to manipulate platforms into removing lawful speech that they simply don’t like.”

The law requires social media platforms and websites to have procedures in place to swiftly remove non-consensual intimate imagery upon notification from a victim.

Harassment, bullying, blackmail

An online boom in non-consensual deepfakes, fuelled by a proliferation of AI tools including photo apps that digitally undress women, is outpacing efforts to regulate the technology around the world.

While high-profile politicians and celebrities, including singer Taylor Swift, have been victims of deepfake videos, experts say women not in the public eye are equally vulnerable.

A wave of AI video scandals has been reported at schools across US states, with hundreds of teenagers targeted by their own classmates.

Such non-consensual imagery can lead to harassment, bullying or blackmail, sometimes causing devastating mental health consequences, experts warn.

Renee Cummings, an AI and data ethicist and criminologist at the University of Virginia, said the bill is a “significant step” in addressing the exploitation of AI-generated deepfakes and non-consensual imagery.

“Its effectiveness will depend on swift and sure enforcement, severe punishment for perpetrators and real-time adaptability to emerging digital threats,” Cummings told AFP.