Durham Pro Bono Blog

Deepfake: Can the Law Limit its Political Impact?

Disclaimer: The views expressed are that of the individual author. All rights are reserved to the original authors of the materials consulted, which are identified in the footnotes below.


By Alex Roper


Since the invention of the camera, there has been a human tendency to believe what one sees; for a period approaching 200 years, photographs and videos have been treated as reliable sources of information, with a unique ability to mould opinion and stir emotion. Yet even in this period, visual media was routinely staged and manipulated in pursuit of an agenda, political or otherwise.


However, in the ‘fake news’ era, a more alarming threat to political discourse has arisen in the form of ‘deepfake,’ a technological development with the capacity to distort how we view images and videos to a far greater extent.


‘Deepfake’ is the common term for technologies that use ‘deep learning artificial intelligence to replace the likeness of one person with another in video and other digital media.’[1] These ‘deepfakes’ use ‘neural networks involving autoencoders’[2] in order to perform this replacement of likeness, studying the original video clips and then ‘mapping the person onto the individual in the target video by finding common features.’[3] Generative Adversarial Networks are then used to modify the videos in an attempt to avoid detection.[4]
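The shared-encoder, twin-decoder design described above can be sketched in outline. The toy NumPy code below is purely illustrative: the weights are random and untrained, the "faces" are stand-in vectors rather than images, and all names are hypothetical. It shows only the architectural idea, namely that a single encoder learns features common to both people, while a per-identity decoder renders each likeness, so decoding person A's frame with person B's decoder performs the swap.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy dimensions: flattened 8x8 "face" frames and a small latent space.
FACE_DIM, LATENT_DIM = 64, 16

# Shared encoder weights (in a real deepfake these are learned jointly
# on footage of both people, so the latent code captures common features
# such as pose and expression).
W_enc = rng.normal(scale=0.1, size=(LATENT_DIM, FACE_DIM))

# One decoder per identity, each trained only on that person's footage.
W_dec_a = rng.normal(scale=0.1, size=(FACE_DIM, LATENT_DIM))
W_dec_b = rng.normal(scale=0.1, size=(FACE_DIM, LATENT_DIM))

def encode(face):
    """Map a frame to the shared latent representation."""
    return np.tanh(W_enc @ face)

def decode(latent, W_dec):
    """Reconstruct a frame from the latent code with one identity's decoder."""
    return W_dec @ latent

def face_swap(face_a):
    """Encode person A's frame, then decode with person B's decoder:
    B's likeness is rendered with A's pose and expression."""
    return decode(encode(face_a), W_dec_b)

frame_a = rng.normal(size=FACE_DIM)   # stand-in for a video frame of person A
swapped = face_swap(frame_a)
print(swapped.shape)  # (64,)
```

In production systems the weights are deep convolutional networks trained on thousands of frames, and, as the article notes, a Generative Adversarial Network is then used to refine the output until a discriminator can no longer tell it from genuine footage.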



This is a sophisticated but potentially sinister use of machine learning that could have wide-ranging implications, particularly as the technology involved develops. Whilst deepfake programmes are still in their infancy, the ‘technology is improving at a breath-taking pace’[5] according to a Guardian report, with experts predicting the impossibility of distinguishing real from deepfake images by eye in the near future.[6]


Whilst this article explores the political implications of the emerging technology, it is noteworthy that the initial use of deepfake has centred predominantly on pornography. Forbes estimated that 96% of deepfake material online was pornographic in 2019,[7] often involving female celebrities and almost exclusively without the consent of the person depicted. This is a similarly sinister utilisation of the technology that must be urgently addressed in itself by the criminal law, as it poses significant threats to dignity, privacy, and intellectual property rights.


Regarding the existing political use of deepfake, there have been some tentative steps to employ the technology in US politics. There have been notable deepfake videos depicting, for example, former US President Barack Obama ‘using an expletive to describe’ then-President Donald Trump[8]; the speaker of the US House of Representatives Nancy Pelosi drunkenly slurring throughout a speech[9]; and Mark Zuckerberg stating Facebook’s goal is to exploit its users.[10]


Whilst these videos were noticeably inauthentic, they gained significant traction on social media: then-President Donald Trump retweeted the deepfake of Nancy Pelosi to his millions of followers. This displays the potential for misinformation arising from the technology, a potential that will only grow as it becomes more sophisticated.


Additionally, the presence of deepfake technology casts doubt on the authenticity of legitimate digital media. In Gabon, a deepfake conspiracy surrounding a video of the country’s President was central to the political destabilisation and resulting military coup in the region.[11]


However, deepfake technology has the capacity to have a far greater political impact. US Senator Marco Rubio warned of deepfakes being used to sway public opinion on the eve of an election, leaving experts too short a period to analyse the video’s authenticity.[12] The technology could also be used to create videos of politicians threatening foreign states, or making announcements on changes to law. If deepfake technology is left unchecked, the possibilities for its misuse are limitless; it could change the political landscape entirely and make it increasingly difficult for the public to ascertain the authenticity of the media they consume online.


The current UK law on deepfake is ‘wholly inadequate at present’[13] to counter the threat it poses to political discourse, though there are a number of existing legal principles that may mitigate the technology’s effect to a minor extent. For example, intellectual property laws (trademark infringement), the tort of defamation, harassment, and data protection laws[14] may provide some protection for individuals affected, though only in civil law.


Nonetheless, it is accurate to say that ‘the law is lagging two steps behind technology,’[15] and existing laws are neither specific to, nor in many cases even applicable to, political uses and misinformation. The UK could adopt a similar approach to the proposed change of law in the US, where a number of states are working towards criminalising the ‘malicious creation and distribution of deepfakes.’[16]


However, specific laws on political applications of the technology have a number of disadvantages. Firstly, deepfakes are often created by individuals who are overseas, who may be extremely difficult to identify.[17] Additionally, those attempting to undermine democracy in the UK are unlikely to be deterred by legislation, reducing the preventative effect of potential laws.[18] Finally, the legal system simply cannot react fast enough to the creation and distribution of deepfakes; legislation would be futile against a video shared widely the night before an election or major political event. This suggests the law is not capable of limiting the political impact of the emerging technology.


Dealing with the issues arising from deepfake will instead fall to social media companies, such as Twitter and Facebook. Platforms will need to become much faster at detecting widely shared deepfakes and communicating the authenticity of digital media clearly to users. Experts predict that machine learning programmes that aim to detect deepfakes will ‘adapt quickly as new deepfake technology emerges.’[19] ‘Big Tech’ companies will be forced to invest heavily in these detection technologies; the government may need to regulate to enforce this investment and implementation.


Overall, it is clear that deepfake technology, as it becomes more advanced and less expensive, will blur the lines of authenticity in digital media. Whilst the legal system is perhaps not best placed to mitigate its political impact, regulation could require social media companies to become more active in policing misinformation, which could limit the political effect of the technology.


 

[1] Dave Johnson, ‘What is a deepfake? Everything you need to know about the AI-powered fake media’ (Business Insider, 22 January 2021) <https://www.businessinsider.com/what-is-deepfake?r=US&IR=T> accessed 12 June 2021.

[2] Ibid.

[3] Ibid.

[4] Ibid.

[5] Simon Parkin, ‘The rise of the deepfake and the threat to democracy’ (The Guardian, 22 June 2019) <https://www.theguardian.com/technology/ng-interactive/2019/jun/22/the-rise-of-the-deepfake-and-the-threat-to-democracy> accessed 13 June 2021.

[6] Ibid.

[7] Rob Toews, ‘Deepfakes Are Going To Wreak Havoc On Society. We Are Not Prepared.’ (Forbes, 25 May 2020) <https://www.forbes.com/sites/robtoews/2020/05/25/deepfakes-are-going-to-wreak-havoc-on-society-we-are-not-prepared/?sh=5280d29a7494> accessed 13 June 2021.

[8] Ibid.

[9] Parkin (n 5).

[10] Toews (n 7).

[11] Ibid.

[12] Parkin (n 5).

[13] Carlton Daniel and Ailin O’Flaherty, ‘The Rise of the “Deepfake” Demands Urgent Legal Reform in the UK’ (National Law Review, 23 March 2021) <https://www.natlawreview.com/article/rise-deepfake-demands-urgent-legal-reform-uk> accessed 14 June 2021.

[14] Ibid.

[15] Ibid.

[16] Parkin (n 5).

[17] Ibid.

[18] Ibid.

[19] Ibid.
