Deepfake

The camera became a competitive feature of mobile phones in the mid-2000s. At the same time, social media platforms such as Facebook began gaining popularity. The combination of mobile cameras and social media radically expanded who could distribute what kinds of content, to whom, and at what speed. As a result, the amount of fake content, including deepfakes, increased as well.

Cheap fakes

In contrast to deepfakes, cheap fakes – also known as shallow fakes – are audiovisual (AV) manipulations created with cheaper, more accessible software (or none at all). Such photos, audio clips, and videos do not rely on machine learning; they are created with simple techniques available in any video recording or editing software. Common techniques include photoshopping, lookalikes, re-contextualization, and speeding or slowing moving images.

Photoshopping

Photoshop and similar image-manipulation software can be used to alter photographic media in countless ways, but the stakes of such manipulations are clearest in cases of faked pornography. People have manipulated pornographic photos and films since the invention of those technologies. The distribution of faked pornographic images has become more widespread since the mid-2000s, as Photoshop and related digital editing tools became more accessible.

For instance, in 2016 a 17-year-old Australian woman, Noelle Martin, found her face photoshopped onto pornographic images circulating on the internet. Martin is hardly alone. The internet at once facilitates access to pornographic material made with and without consent, and provides an unchecked, unregulated atmosphere that normalizes misogyny, hurtful sexuality, and racism. The debate and legislation over this phenomenon continue today and set the background for most legal discussion around deepfake pornography.

Lookalikes

Image-based sexual violence can be used to inflict other types of harm, such as the suppression of the press, civil society, or political opposition. Many of the most public examples have been directed at female politicians or activists, often by simply using similar-looking actors to depict them in sexualized or pornographic footage. This strategy is a type of in-camera editing that requires no technical skill, only the desire and ability to find a lookalike of the target. A large amount of such content circulates on the internet.

Re-contextualizing

Perhaps the easiest mode of producing a cheap fake is simply cutting together existing footage and spreading it under false pretenses. For example, in April 2018, a falsified BBC report began circulating on WhatsApp, presenting a false story of nuclear escalation between NATO and Russia. The shared clip was four minutes long and featured nuclear mushroom clouds, the Queen’s evacuation from Buckingham Palace, a Russian ship firing missiles, and NATO aircraft being launched in retaliation. As the clip spread, alarmed viewers began sending messages to the BBC, which issued a statement explaining that the clip was cut together from YouTube footage uploaded in 2016 by the Benchmarking Assessment Group, an Irish marketing company. The original footage was a 30-minute video designed as a psychometric test for reactions during a disaster scenario. Outside of the focus group, the video was never intended to be taken seriously. But a still-unknown manipulator was able to re-contextualize it and disseminate it through WhatsApp as an imminent threat.

Speeding and Slowing

The speed of a video has a large effect on its interpretation. We were reminded of this in May 2019, when a faked video that appeared to show House Speaker Nancy Pelosi drunkenly talking about Donald Trump caused waves of public outrage. In fact, the recording had been slowed to around 75% of its original speed, while the pitch was raised so the voice still sounded natural. The result: viewers were given the plausible impression that Pelosi was drunk. The video was shared millions of times on social media, showing how even the simplest forgery can distort reality and be exploited for political purposes.
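To see just how low the technical bar is, here is a minimal sketch of such a speed edit in Python, calling the ffmpeg command-line tool. It assumes ffmpeg is installed and on the PATH; the file names and the 0.75 factor are illustrative examples, not details taken from the Pelosi clip. The setpts filter stretches the video timestamps, while atempo slows the audio without lowering its pitch, approximating the slowed-but-natural-sounding effect described above.

# Minimal sketch: slow a clip to 75% of its original speed while keeping
# the voice at a natural pitch. Assumes ffmpeg is installed; the file
# names and the 0.75 factor are illustrative, not from the original clip.
import subprocess

def slow_video(src: str, dst: str, factor: float = 0.75) -> None:
    # setpts multiplies each video timestamp, stretching playback time;
    # atempo changes audio tempo while preserving pitch (valid range 0.5-2.0).
    video_filter = f"setpts={1 / factor:.4f}*PTS"
    audio_filter = f"atempo={factor}"
    subprocess.run(
        ["ffmpeg", "-i", src, "-vf", video_filter, "-af", audio_filter, dst],
        check=True,
    )

if __name__ == "__main__":
    slow_video("speech.mp4", "speech_slowed.mp4")

A few lines like these – or the equivalent sliders in any consumer editing app – are all a manipulation of this kind requires.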

Conclusion

All the examples above show how dangerous audiovisual manipulations in the form of cheap fakes can be. Even the simplest fake can distort the truth. Each of the tool sets described was once available only to experts, but with technological advancement and the widespread use of social media, these tools are now accessible to amateurs. It was always very difficult to falsify content – to make a subject perform completely different movements or speak completely different words than in the original video. Until now. Nowadays it is a piece of cake.
