
U.S. Lawmakers Push for Deepfake Image Criminalization in Wake of Taylor Swift Scandal

United States lawmakers are calling for legislation to criminalize the production of deepfake images, spurred by the widespread circulation of explicit fake photos of Taylor Swift.

These fabricated images made their way onto various social media platforms, including X and Telegram, prompting a call for action.

U.S. Representative Joe Morelle took to X to condemn the spread of the images, calling it appalling. He pointed to the Preventing Deepfakes of Intimate Images Act, a bill he authored that would make sharing non-consensual deepfake intimate images a federal offense, and stressed the urgency of addressing the issue.

Deepfakes use artificial intelligence (AI) to create or alter images and videos, manipulating a person's likeness so they appear to do or say things they never did.

Currently, no federal law specifically addresses the creation or sharing of deepfake images, but some lawmakers are now moving to close that gap.

Representative Yvette Clarke, also posting on X, noted that the Taylor Swift incident is not an isolated case. She said women have endured the repercussions of this technology for years, and that advances in AI have made deepfakes easier and cheaper to create.

On a positive note, X said in a statement that it is actively removing the fabricated images and taking action against the accounts responsible for spreading them.

The platform said it is closely monitoring the situation and will promptly remove any further violating content.

In a parallel development, the United Kingdom made the sharing of deepfake pornography illegal as part of its Online Safety Act in 2023.


A State of Deepfakes report from the same year found that the vast majority of deepfakes posted online are pornographic, with women making up nearly 99% of victims.

The global concerns surrounding AI-generated content have escalated, with the World Economic Forum highlighting the adverse consequences in its 19th Global Risks Report.

This report underscores the intended and unintended negative impacts of AI advancements, including generative AI, on individuals, businesses, ecosystems, and economies.

Even Canada’s primary national intelligence agency, the Canadian Security Intelligence Service, has expressed alarm over disinformation campaigns utilizing AI-generated deepfakes on the internet.

On a broader scale, the United Nations, in a report dated June 12, 2023, identified AI-generated media as a significant and pressing threat to information integrity, particularly on social media.

The U.N. stressed that the risk of online disinformation has surged due to rapid technological advancements, notably in generative artificial intelligence, with a special focus on the troubling proliferation of deepfakes.

