What Are Deepfakes – How Can They Cause Damage and How Does the Law Apply?
The digital era has been transformed by the rise of artificial intelligence. Generative AI tools are now readily available, capable of producing documents, artwork and even videos in a matter of seconds. These tools are powerful, but not everyone uses them responsibly.
Among the most disconcerting abuses is the creation of deepfakes.
A deepfake is audio-visual material, edited or generated by AI, in which a person appears to do or say something they never did. The software learns from existing images, videos and audio clips to mimic a person’s face, voice and expressions, then reuses that data to produce fake but convincing content. In theory, anyone with internet access can create a deepfake, and the results can be incredibly realistic.
Most often, deepfakes target celebrities, politicians and public figures. Unfortunately, many are also used to disseminate misinformation. For example, one deepfake video in 2023 featured a fake explosion near the Pentagon. Another showed a fake Elon Musk promoting a hoax. There has been fake celebrity clickbait containing explicit photos, and a deepfake of President Zelenskyy instructing his soldiers to surrender. These are just the beginning.
Deepfakes and Defamation
Deepfakes are not just deceptive. They can cause serious reputational damage, along with emotional, financial or even political harm. In terms of UK law, a deepfake can be libellous. Under Section 1(1) of the Defamation Act 2013, a statement is libellous when it is false, published to a third party, and causes or is likely to cause serious harm to someone’s reputation.
If a deepfake lowers someone in the eyes of the public, it can cross that line. If the person is mocked, hated or ostracised by others because of a deepfake, the damage could be actionable at law.
Can a Viewer Know It’s Fake?
One of the questions people are asking is whether the typical viewer can tell that a deepfake is fake – and if they can, whether it is nevertheless defamatory.
Unfortunately, AI is improving fast, and deepfakes are becoming harder to detect. Most people viewing them online won’t know they’re fake.
In legal terms, the “reasonable viewer” is not expected to fact-check content. They are presumed to interpret videos in their natural and ordinary meaning. Courts assume that people are not overly suspicious and won’t jump to the worst conclusion, but also that they take what they see at face value – unless it is openly satire or an obvious fake. The UK Supreme Court applied this position in Lachaux v Independent Print Ltd [2019]. Hence, unless a deepfake is clearly false, it may still cause reputational harm under the law.
Privacy and Data Protection
Deepfakes do not only destroy reputations. They can also violate privacy rights. Producing or sharing a deepfake based on a subject’s face, voice or identity without their consent can infringe their right to privacy and lead to an action for misuse of private information. Even if the content is not authentic, the courts have recognised that it can still invade privacy. The seminal case Campbell v MGN Ltd [2004] established this. The court decided that false or partially false information can nevertheless invade privacy – especially if the subject had a reasonable expectation that the information would be kept private and the story was not in the public interest. Deepfakes depicting intimate, sensitive or private acts – albeit false – may well meet that test.
Data Protection and Deepfakes
Deepfakes may also breach data protection law under the UK General Data Protection Regulation (UK GDPR). Why? Because creating a deepfake will likely involve processing personal data – images, voice or biometric features – without a lawful basis such as consent. This is especially serious when the content is published online. Facial imagery, the Information Commissioner’s Office (ICO) states, is “special category data” and requires a higher level of protection. Creating and sharing a deepfake without consent may therefore trigger data protection claims, especially if the content causes harm.
In Bridges v Chief Constable of South Wales Police [2020], the Court of Appeal affirmed how seriously the law treats biometric data such as facial recognition data. Deepfakes fall into that category when they mimic real people.
What to Do If You’re Deepfaked
If you are a victim of a deepfake, act quickly. Save the evidence first: take screenshots, download the videos and keep a record of who posted the material. Then report the material to the platform. Most social media platforms allow you to report abusive or deepfake content directly. The Online Safety Act 2023 came into force in January 2024, creating new offences for intimate image abuse, including deepfakes. Publishing or threatening to publish explicit deepfake images without consent is a criminal offence, whether or not the image was intended to cause distress. Large platforms such as Facebook, TikTok and Instagram must provide easy ways of reporting this type of abuse.
Finally, consider taking action through the courts. Victims of deepfakes may be able to sue for:
- Defamation
- Misuse of private information
- Infringement of data protection rights
The number of these claims is increasing rapidly in the UK. Courts are recognising the serious harm that deepfakes can cause.
How We Can Help
Taylor Hampton Solicitors are specialists in these matters. We advise on defamation, privacy and data protection issues. If you’ve been targeted by a deepfake or need legal advice on removing harmful content, please get in touch – we are here to help. Contact enquiries@taylorhampton.co.uk or call +44 20 7427 5970.