Defeating Deep Fakes

As the use of artificial intelligence technology to develop non-consensual deepfake content spirals out of control, the need for effective legislation has become more apparent than ever.

Westfield High School, a small public high school in New Jersey, made national headlines two years ago after several students fell victim to non-consensual pornographic artificial intelligence (AI)-generated depictions of themselves. Distressed parents implored the district to take disciplinary and preventative action, to no avail. Nothing was done because nothing could be done; no law addressed the circumstance, so law enforcement and school administrators had no way to respond. Unfortunately, this lack of a legal framework has allowed the rate of non-consensual AI-generated content to spiral out of control. Deepfakes, digitally altered or generated (particularly with AI) content that falsely depicts an individual saying or doing something they never did, have become a widespread but undercovered problem. One study found that one in 17 teens reported falling victim to this technology, while one in 10 reported knowing someone who had been impacted by it.

In recent years, this horrifying trend has skyrocketed at the expense of enormous numbers of people, particularly children, across the world. Deepfake crimes have only worsened because of how accessible the technology has become to the public: not only do many report how easy it is to access these harmful tools, but the platforms behind them have faced no legal consequences for their role in creating non-consensual explicit content. In addition, social media, particularly when unregulated, is an abundantly effective vehicle for spreading this content. These factors have culminated in a 550 percent increase in deepfake AI-generated content, nearly 98 percent of which is both pornographic and non-consensual.

Fortunately, after nearly a decade of activism, the House of Representatives passed the Take It Down Act by a vote of 409-2; the bill was championed by Melania Trump and subsequently signed into law by President Donald Trump. In addition to criminalizing those who spread these images, the act allows users to report offending content, giving social media platforms a 48-hour window to remove it before the Federal Trade Commission can hold them in violation. The bill's successful passage is monumental because it represents a major bipartisan victory in the race between rapidly advancing AI and slow-moving regulation.

However, the Take It Down Act is only a band-aid on a bullet hole; while well-intentioned, it is inadequate to address the growing problem of non-consensual deepfake pornography. Through its vague phrasing and its reactive rather than proactive approach, the Take It Down Act falls short of what is needed to combat non-consensual explicit deepfake content, while simultaneously risking unnecessary violations of free speech in other respects.

First, the Take It Down Act places an unfair burden on victims to report deepfaked content. For celebrities, it would be impossible to respond to such a large volume of content, defeating the point of having regulations at all. For an ordinary teenager, being forced to personally report each instance makes the process exponentially more traumatizing. Moreover, the act does not proactively regulate social media companies, meaning that in the absence of a report, companies may allow this content to thrive on their platforms. These are just a few of the many scenarios the bill fails to cover. Given that these companies have proven fully capable of monitoring for child sexual abuse material, they are clearly capable of doing the same for non-consensual deepfake content. Legislation should harness these companies' capabilities to ensure that as little deepfake content as possible is spread.

Second, the bill itself is poorly written, leaving ample room for free speech violations beyond the realm of explicit content itself. While the bill claims to target sexually explicit content, its definitions are too vague to guide consistent enforcement. Given the long track record of internet censorship unfairly targeting non-explicit LGBTQ+ content, experts have raised concerns about unfair censorship. This is only worsened by the fact that social media companies rely on notoriously unreliable automated reporting systems that could easily cave to floods of false reports. Indeed, after signing the bill, Trump remarked, “I’m going to use that bill for myself too, if you don’t mind, because nobody gets treated worse than I do online—nobody,” raising concerns about what misuse of the bill could mean for the free speech of dissenters. Ultimately, as digital rights groups and policy experts across the world have noted, while the Take It Down Act represents progress, it is both too broad and too reactive to tackle this issue properly, since it cannot stop content from being posted in the first place.

Despite the government’s much-needed steps to curb the circulation of non-consensual explicit AI-generated content, it is clear that the Take It Down Act contains countless omissions and inadequacies that severely limit its effectiveness. To be clear, crafting effective policy is not as difficult as it may seem; countries across the world have already begun to succeed in their responses. Through its Online Safety Act, the United Kingdom has not only required social media companies to implement removal procedures for deepfaked content, but also to proactively prevent it from being uploaded in the first place. Similarly, South Korea has pushed for aggressive fines on companies that fail to prevent the initial spread of such content. Even individual U.S. states like New York and California have found relative success with state laws banning pornographic deepfakes. Following these examples, the U.S. must prioritize legislation that is specific, effective, and proactive rather than reactive. That could mean requiring social media companies to detect and remove this content on their own and mandating the construction of more effective monitoring mechanisms to prevent mass false reporting. Given the urgency and potential of these reforms, it is time for the federal government to stand up and do its job to put an end to this dangerous trend, while still safeguarding free speech, once and for all.