Rising Deepfake concerns
Deepfake technology has become a rising concern in recent times, primarily due to advances in AI and machine learning that have made fake content easier to create and more convincing than ever before. These improvements have enabled highly realistic, difficult-to-detect fake videos and images. This growing realism and accessibility heighten the risks of misinformation, privacy violations, and malicious use in politics, personal attacks, and fraud. In this section, we will discuss what Deepfake is, some real-world examples, its detrimental impact on society, and what we can do to mitigate it.
Figure 9.2 – A face covered by a wireframe, which is used to create Deepfake content
What is Deepfake?
Deepfake is a technology that uses artificial intelligence to create or alter video, images, and audio recordings, making it seem as if someone said or did something they did not. It typically involves manipulating someone’s likeness or voice.
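The classic face-swap approach behind many early Deepfakes pairs one shared encoder with two person-specific decoders: the encoder learns pose and expression common to both identities, and each decoder learns to render one person's face. Swapping decoders at inference time re-renders person A's expression with person B's identity. The following is a toy sketch of that decoder-swap idea only; the random weights stand in for trained networks, and the sizes are illustrative assumptions, not a real model:

```python
import numpy as np

# Toy sketch of the shared-encoder / two-decoder Deepfake face-swap
# architecture. Random weights stand in for trained networks; LATENT and
# PIXELS are illustrative sizes, not values from any real system.
rng = np.random.default_rng(0)

LATENT = 8    # size of the shared latent representation (toy value)
PIXELS = 64   # flattened "image" size (toy value)

W_enc = rng.standard_normal((LATENT, PIXELS)) * 0.1    # shared encoder
W_dec_a = rng.standard_normal((PIXELS, LATENT)) * 0.1  # decoder: person A
W_dec_b = rng.standard_normal((PIXELS, LATENT)) * 0.1  # decoder: person B

def encode(face):
    # Shared across both identities: captures pose/expression.
    return np.tanh(W_enc @ face)

def decode(latent, W_dec):
    # Identity-specific: renders the latent as one person's face.
    return W_dec @ latent

face_a = rng.standard_normal(PIXELS)  # a "frame" of person A

# The swap: encode A's frame, but decode it with B's decoder.
swapped = decode(encode(face_a), W_dec_b)
print(swapped.shape)  # same shape as the input frame: (64,)
```

In a trained system, each decoder would be optimized to reconstruct its own person's faces from the shared latent space, which is what makes the cross-decoding step produce a convincing identity swap.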
Some real-world examples of Deepfake
The following are some early real-world examples of Deepfakes that have raised significant concerns and underscored the need for prevention:
- In 2019, a UK-based energy firm’s CEO was tricked into transferring EUR 220,000 after receiving a phone call from what he believed was his boss. The caller used Deepfake technology to imitate the boss’s voice, convincing the CEO of the legitimacy of the request (https://www.forbes.com/sites/jessedamiani/2019/09/03/a-voice-deepfake-was-used-to-scam-a-ceo-out-of-243000/?sh=4721eb412241).
- Videos and speeches of public figures have also been Deepfaked. For instance, a manipulated video of Facebook’s Mark Zuckerberg talking about the power of having billions of people’s data and a fake speech by Belgium’s prime minister linking the coronavirus pandemic to climate change are both examples of Deepfake usage (https://www.cnn.com/2019/06/11/tech/zuckerberg-deepfake/index.html).
- Concerns regarding the objectification of women due to Deepfake adult videos have been rising. The prevalence of AI-generated pornographic content that unlawfully uses women’s faces without their consent is increasingly troubling, particularly in the online world of notable influencers and streamers. This issue came to light in January 2023, when “Sweet Anita,” a prominent British live streamer with 1.9 million Twitch followers, discovered that a collection of fake explicit videos illegitimately featuring the faces of various Twitch streamers was being shared online. Sweet Anita is well known on Twitch for her gaming content and interactive sessions with her audience (https://www.nbcnews.com/tech/internet/deepfake-twitch-porn-atrioc-qtcinderella-maya-higa-pokimane-rcna69372).
- In early 2024, AI-generated Deepfake images of Taylor Swift, some of them sexually explicit, spread across social media and were seen by millions. Platforms such as X (formerly Twitter) responded by blocking searches for her name, and the incident renewed calls for stronger legal and regulatory responses to the misuse of AI technologies.