Considering the almost daily advances of AI technology in contrast to the development of legislation, it’s easy to view one as the hare and one as the tortoise.
The use and development of generative AI technology have ballooned in recent months, so much so that they have outpaced existing regulation aimed at safeguarding against malicious or unethical uses.
Deepfakes need regulating — but why now?
The recent call for new laws to criminalize the creation of deepfake images, in response to AI-generated explicit images of Taylor Swift circulating online, has shone a light on what many have been saying for years: legislation is lagging behind when it comes to protecting against harmful activity, and it's time lawmakers caught up.
Deepfake images may be fake, but their impact and fallout can be very real. It’s important to highlight that while not everyone who is using deepfake technology is doing so with malicious intent, those who are must face the consequences.
As the recent news surrounding Taylor Swift has shown, deepfaked explicit imagery is among the most serious problems legislators are looking to address. Distressingly, pornography makes up the overwhelming majority of deepfakes. Not only are the images themselves incredibly traumatic for victims, but to make matters worse, they can also be used to extort or blackmail.
While many politicians and regulators are considering their next steps to tackle deepfakes, legislation must go a step further than only addressing explicit images. Why? Because the harmful implications of deepfakes don’t end there. Deepfake laws should also look to prevent:
- The spread of misinformation: Several videos of politicians or well-known public figures, such as Barack Obama and Tom Hanks, have emerged over the last year or so. Often, these videos portray inflammatory views, spout disinformation, or falsely promote products. 2024 is a year of global elections, and convincing deepfake videos during election cycles could erode trust in an online ecosystem already rife with disinformation.
- The erosion of trust: Some experts predict that up to 90% of online content could be synthetically generated within a few years. While President Biden's executive order on AI last year called for certifying legitimate content, this raises yet more questions about what qualifies as 'legitimate'. Does anything that isn't certified invite doubt? Such approaches must be careful not to create two tiers of trustworthy information, nor to feed conspiracy theories. How would we tell the difference between a real video of a politician saying something inflammatory that they don't want made public and a deepfake of the same thing? Certification doesn't solve that problem (see the sketch after this list). The complexities could soon blur the lines of misinformation further, exacerbating an already extremely problematic issue.
- Identity fraud and scams: Fraudsters are increasingly using deepfakes to try to dupe identity verification systems, for example to open illegitimate bank accounts. At Onfido, we've seen a 3,000% increase in deepfakes as part of fraudulent account onboarding attempts. There has also been a rise in scams where fraudsters pose as family, friends, or colleagues to persuade individuals to hand over money. And it doesn't just affect the vulnerable, as we've seen in the case of an energy company CEO scammed into handing over almost $250,000.
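To make the certification question above concrete, here is a minimal, purely illustrative sketch of how signature-based content certification could work in principle. It is a hypothetical example, not any specific standard or the mechanism the executive order envisages; the keys and media payload are placeholders.

```python
# Illustrative sketch only: a publisher signs a hash of a media file, and
# anyone can verify that signature against the publisher's public key.
# This is not any specific certification standard; names are placeholders.
import hashlib

from cryptography.exceptions import InvalidSignature
from cryptography.hazmat.primitives.asymmetric.ed25519 import Ed25519PrivateKey

# Publisher side: sign the SHA-256 digest of the media bytes.
private_key = Ed25519PrivateKey.generate()
public_key = private_key.public_key()

media_bytes = b"...video file contents..."  # placeholder payload
digest = hashlib.sha256(media_bytes).digest()
signature = private_key.sign(digest)

# Verifier side: a valid signature proves the publisher endorsed this file.
try:
    public_key.verify(signature, digest)
    print("Certified: this publisher vouches for the file.")
except InvalidSignature:
    # A failed or absent check proves nothing about authenticity; it only
    # means this particular publisher did not vouch for the file.
    print("Not certified by this publisher.")
```

The limitation is visible in the code itself: a missing or failing signature tells us only that a given publisher didn't vouch for the file, not that the file is fake. That is precisely the two-tier trust problem raised above.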
How are regulators approaching deepfake legislation?
Regulators are taking steps to update legislation to better protect victims of deepfakes. However, with different markets taking different approaches, and regulating deepfakes to varying degrees, current legislation is somewhat fragmented.
Deepfake laws: United States
Federal laws
There are currently no US federal laws that prohibit the creation or sharing of deepfake images, but there is a growing push to change that.
In January 2024, representatives proposed the No Artificial Intelligence Fake Replicas And Unauthorized Duplications (No AI FRAUD) Act. The bill establishes a federal framework to protect individuals against AI-generated fakes and forgeries by making it illegal to create a ‘digital depiction’ of any person, living or dead, without permission. This would include both their appearance and voice.
Other proposed legislation includes:
- The Senate’s Nurture Originals, Foster Art, and Keep Entertainment Safe (NO FAKES) Act would protect the voice and visual likeness of performers.
- The Disrupt Explicit Forged Images and Non-Consensual Edits (DEFIANCE) Act would allow people to sue over faked pornographic images of themselves.
State laws
Some individual US states have already implemented, or are in the process of implementing, deepfake legislation. However, the current laws vary considerably from state to state, including in how they define deepfakes and the type of liability they impose.
States with legislation that specifically targets deepfake content include:
- Florida
- Georgia
- Hawaii
- Illinois
- Minnesota
- New York
- South Dakota
- Tennessee
- Texas
- Virginia
California deepfake law
California is at the forefront of AI regulation in the US. The California deepfake law was one of the first in the country to take effect, back in 2019. The legislation not only criminalizes non-consensual deepfake pornography but also gives victims the right to sue those who create images using their likenesses (Assembly Bill 602) and bans the use of AI deepfakes during election campaign season (Assembly Bill 730).
Texas deepfake law
Texas was one of the first states in the country to pass a law prohibiting the creation and distribution of videos intended to harm or influence elections (Texas Senate Bill 751). Since then, Texas has introduced the Unlawful Production or Distribution of Certain Sexually Explicit Videos law, making it a criminal offense to produce explicit deepfake videos without the depicted person’s permission.
Deepfake laws: UK
The UK Online Safety Act, passed in 2023, made it illegal to share explicit images or videos that have been digitally manipulated. However, this only applies where the sharer has intentionally or recklessly caused distress to an individual. The Act does not prohibit the creation of pornographic deepfakes, nor their sharing where intent to cause distress cannot be proved.
The amendments also don’t make it an offense to create other types of AI-generated media without the subject's consent. In these instances, those whose deepfaked likeness has been used to cause harm can only seek redress through defamation, privacy and harassment, data protection, IP, or other criminal laws, claims which can be complicated and difficult to establish.
Deepfake laws: EU
In the EU, deepfakes will be regulated by the AI Act, the world’s first comprehensive AI law. The Act will not bar the use of deepfakes outright, but attempts to regulate them through transparency obligations placed on creators under Article 52(3).
Negotiators from the European Parliament and Council Presidency came to a surprise agreement on the EU AI Act in December 2023 — it’s likely regulators will finalize the text of the Act in the first quarter of 2024.
The tip of the iceberg — how far should deepfake legislation go?
Even taking into account new proposals, current legislation only addresses the tip of the iceberg.
Many existing rules are aimed only at political misinformation or sexually explicit deepfakes, yet deepfakes are also helping criminals to open bank accounts, manipulate or blackmail individuals, and cause distress more broadly. Criminals and fraudsters will always seek loopholes and new ways to leverage technology to exploit existing systems. Deepfake legislation needs to close those loopholes where it can.
But this doesn’t come without challenges. For one thing, the worst abusers of the technology are the most difficult to catch: they operate anonymously, share information via borderless online platforms, and continually adapt their tactics.
Law enforcement officials have also pointed out that the industry still struggles to detect deepfakes, making it much harder to monitor and prosecute any malicious use of the technology when laws are broken. We’re in the midst of a technological arms race between deepfake creators and deepfake detectors. Robust deepfake detection technology will be crucial to implementing effective legislation.
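For a sense of what the detector side of that arms race involves, below is a heavily simplified, hypothetical sketch of a deepfake classifier scoring a single video frame. The architecture, weights, and threshold are illustrative placeholders, not Onfido's production system or any published detector.

```python
# Hypothetical sketch: a tiny binary classifier that scores an image frame
# as real vs. synthetic. Untrained placeholder weights; illustration only.
import torch
import torch.nn as nn

class TinyDeepfakeDetector(nn.Module):
    def __init__(self):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(3, 16, kernel_size=3, padding=1), nn.ReLU(),
            nn.MaxPool2d(2),
            nn.Conv2d(16, 32, kernel_size=3, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1),  # collapse to one 32-dim descriptor
        )
        self.classifier = nn.Linear(32, 1)  # one logit: "how synthetic?"

    def forward(self, x):
        h = self.features(x).flatten(1)
        return torch.sigmoid(self.classifier(h))  # score in [0, 1]

detector = TinyDeepfakeDetector().eval()
frame = torch.rand(1, 3, 224, 224)  # stand-in for a preprocessed frame

with torch.no_grad():
    score = detector(frame).item()

# In practice the threshold is tuned on labeled data; 0.5 is a placeholder.
print(f"synthetic score: {score:.2f} ->",
      "flag for review" if score > 0.5 else "pass")
```

Real detectors combine many such signals and are only as good as the data they were trained on, which is why detection produces a probabilistic score rather than a verdict, and why false positives and false negatives complicate monitoring and prosecution.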
To this end, regulation not only needs to protect victims, but must also permit innovation and the right data flows, aligned with data protection law, so that cutting-edge AI deepfake detection solutions like Onfido's Fraud Lab can be developed.
Without effective legislation, no one wins the race, least of all the victims.
Fraud Lab keeps us at the cutting edge of identity fraud prevention: we use purpose-built AI to identify increasingly sophisticated attack vectors, like deepfakes.