Generative AI and Disinformation: A History of Synthetic Media
In recent years, the rise of generative artificial intelligence (AI) has transformed the media landscape. Generative AI refers to machine-learning systems that learn patterns from existing data and use them to produce new content. The technology has changed how digital media is created and consumed, making it possible to produce more realistic images, videos, audio clips, and text than ever before.
But the same technology can also be put to nefarious ends, namely the creation of synthetic media, commonly known as "deepfakes." Deepfakes are digitally manipulated images, videos, or audio that appear genuine but have been fabricated or altered using generative AI techniques. They can be used to spread false information or manipulate public opinion by making it seem that someone said or did something they never actually said or did.
The history of deepfakes dates back to late 2017, when an anonymous Reddit user posting under the name "deepfakes" began sharing face-swapped videos made with machine-learning tools. The technique reached a mass audience in 2018, when BuzzFeed and the filmmaker Jordan Peele released a widely shared video of former US President Barack Obama appearing to say things he never said. Since then, deepfake technology has grown steadily more sophisticated, with improved face-mapping algorithms and computer-graphics capabilities enabling more convincing fakes than ever before. The research firm Deeptrace Labs counted nearly 15,000 deepfake videos online in 2019, almost double the roughly 8,000 it had found a year earlier, and the numbers have kept climbing since.
This rapid growth in synthetic media poses serious risks for society, because it becomes easier for malicious actors to run disinformation campaigns without detection. In the run-up to the 2020 US presidential election, for example, researchers and journalists repeatedly warned that manipulated political media circulating on platforms such as Facebook and Twitter could significantly influence voter behavior if left unchecked.
Governments around the world have started taking steps to regulate this space. France, for instance, passed a law against the manipulation of information in 2018 that allows judges to order the removal of demonstrably false content during election campaigns, and several US states have enacted statutes specifically targeting deepfakes. Tech companies, for their part, have begun to act against these manipulations by adopting policies that ban certain kinds of deceptive synthetic media from their platforms altogether.
Despite these efforts, however, much work remains if we want to protect ourselves against the potential misuse of generative AI technologies. We need better methods for detecting fake content, stronger regulations governing its use, greater transparency about who is behind any given piece of content, and better public education so that consumers know what signs to look for when judging whether something is genuine. Only then will we be able to truly combat the threat posed by disinformation campaigns that leverage generative AI.