Maria Angela Maina
In March 2023, a workshop hosted by WITNESS in Nairobi, Kenya, brought together individuals from six African countries to discuss the implications of generative AI and synthetic media on human rights, journalism, and misinformation. The event highlighted the potential opportunities and alarming threats posed by Artificial Intelligence (AI)-generated deepfakes in Africa.
Deepfakes involve highly realistic video manipulation, while “cheap fakes” are quicker, less resource-intensive, and often less realistic. Cheap fakes include videos taken out of context, simple edits, face-swapping, and lip-syncing.
While deepfakes have garnered significant attention globally, they are increasingly being used to spread misinformation in African states. This article explores the evolving landscape of AI-generated deepfakes in Africa, delves into case studies, examines the legal issues, and concludes with a call to address these challenges.
Case Summaries: The Emergence of Deepfakes in Africa
AI deepfake technology allows users to create fabricated personas that can disseminate any message, even one that opposes their true beliefs. The use of deepfake technology for disinformation campaigns is a global problem, and its impact in Africa is growing as internet accessibility expands across the continent.
1. Burkina Faso
In January, unusual videos started circulating on social media platforms in Burkina Faso. These videos featured diverse individuals, seemingly endorsing the military junta. However, their movements were stiff, and their lips were out of sync with their words — a classic characteristic of deepfakes.
2. Gabon
President Bongo had been out of the country for medical treatment, leading to suspicions about his well-being. When a video of the president was eventually released, its unusual appearance raised questions about whether it was genuine.
3. African Union Commission (AUC) Chairperson, Moussa Faki
Fraudsters impersonated the Chairperson of the AUC, Moussa Faki, by creating realistic fake videos and audio recordings using deepfake technology. They even engaged European leaders in video calls under his guise.
These synthetic videos, generated and manipulated with AI software, have raised concerns in Burkina Faso, Gabon, the AUC, and beyond.
The Interconnected Legal Issues in Contention
The rise of deepfake technology in Africa poses several complex legal challenges:
- Infringement of data privacy: The creation and spread of deepfakes can infringe upon individuals’ rights to privacy. AI deepfakes can also infringe on an individual’s right to dignity when a fabricated video damages that person’s reputation or standing in a community, since such videos often appear genuine to viewers untrained in deepfake detection. A recent KnowBe4 survey across five African countries reports that just over 50% of respondents knew what deepfakes are, while 48% had little understanding of them or were unsure, making this risk a potential reality.
- Spread of misinformation: Deepfakes can undermine trust in official communications and exacerbate political instability. In some cases, as seen in Gabon in 2018, suspicion surrounding a possible deepfake video can contribute to political crises, including coup attempts. Ensuring national security in the face of deepfake-induced misinformation is a growing concern. The KnowBe4 survey further reveals that 74% of respondents admitted to having believed a communication (an email, direct message, or piece of media content such as a photo or video) that was in fact a deepfake. Because deepfake technology uses AI and machine learning to manipulate data and images, fakes are difficult to spot.
- Regulation and accountability of AI companies: The regulatory framework surrounding deepfakes in Africa is in its infancy. Legal structures are needed to hold perpetrators accountable for their actions. Companies are developing AI-powered tools to detect AI-generated content, but they have limitations. For instance, OpenAI’s detection tool had a low success rate in identifying AI-generated text.
- Hinders the administration of justice: A 2022 journal article identifies a further harm of AI deepfake technology: it threatens the justice system. Courts often rely on videos and photographs as evidence, and if these materials are manipulated or forged, courts can reach erroneous decisions. The article cites cases in which manipulated videos were presented in court to create a false impression of guilt, risking miscarriages of justice and incorrect verdicts.
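One of the tells mentioned above, lips that do not track the spoken audio, can be illustrated with a toy consistency check. The sketch below is purely illustrative and hypothetical: real deepfake detection relies on trained forensic models, not a hand-rolled correlation, and the function names and thresholds here are assumptions for the example, not any real tool's API.

```python
# Toy sketch of a lip-sync consistency check. Hypothetical and
# illustrative only; real detectors use trained neural networks.
# Idea: in a genuine clip, per-frame mouth openness loosely tracks
# per-frame audio energy; in many deepfakes it does not.

def pearson(xs, ys):
    """Plain Pearson correlation between two equal-length sequences."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sx = sum((x - mx) ** 2 for x in xs) ** 0.5
    sy = sum((y - my) ** 2 for y in ys) ** 0.5
    return cov / (sx * sy) if sx and sy else 0.0

def flag_lip_sync_mismatch(mouth_openness, audio_energy, threshold=0.3):
    """Flag a clip when mouth movement barely correlates with audio
    energy (threshold is an arbitrary illustrative cutoff)."""
    return pearson(mouth_openness, audio_energy) < threshold

# Toy per-frame measurements: a genuine clip vs a badly lip-synced one.
genuine_mouth = [0.1, 0.8, 0.9, 0.2, 0.7, 0.1]
genuine_audio = [0.2, 0.7, 0.8, 0.1, 0.6, 0.2]
fake_mouth = [0.5, 0.5, 0.4, 0.5, 0.5, 0.4]

print(flag_lip_sync_mismatch(genuine_mouth, genuine_audio))  # False
print(flag_lip_sync_mismatch(fake_mouth, genuine_audio))     # True
```

In practice, extracting mouth-openness and audio-energy signals from a video requires computer-vision and audio tooling well beyond this sketch; the point is only that lip-audio desynchronization is a measurable signal, which is why it remains a common tell for manual spotting.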
Conclusion: Crucial Need for Legislation, Digital Literacy and Accountability for AI-Deepfakes
In the face of an evolving deepfake landscape, it is crucial to focus on public awareness, accountability of AI companies and legal reform to mitigate the risks while harnessing the potential benefits of AI-generated deepfakes.
We need stronger advocacy to raise awareness and improve digital literacy. Efforts to educate the public about the existence and implications of synthetic media must be intensified. Media literacy campaigns can empower individuals to critically assess information and to engage with governments and civil society in developing inclusive policies. Training and awareness campaigns should also educate professionals, including journalists and forensic experts, on how to detect deepfakes.
Technology companies, legislators, and digital platforms must also be held accountable for the harm they cause or fail to prevent as synthetic media tools are deployed. Regulations that require comprehensive human rights assessments before deploying AI models and tools are essential.
Above all, governments and policymakers should establish laws governing the creation, distribution, and use of deepfake technology, criminalize malicious activities, and ensure accountability.
Finally, it is worth contemplating the question raised by a 2020 BBC article that now appears ahead of its time: are deepfakes a threat to democracy or just a bit of fun? I share the author’s view that, in developing countries, deepfakes are a threat to democracy. They could be especially dangerous where digital literacy is limited, and the spread of manipulated content could have severe societal consequences, including inciting violence.