bytefeed

Credit: NPR

The Start of a Deepfake: 8 Minutes and a Few Dollars Away

It’s becoming easier and cheaper than ever to create a deepfake. With just a few dollars and eight minutes, anyone can make one. And that’s only the start of the potential problems posed by this technology.

Deepfakes are videos or images that have been manipulated using artificial intelligence (AI) to make it appear as if someone said or did something they didn’t actually say or do. They’re created by feeding AI algorithms with large amounts of data, such as photos and videos of people talking, which allows them to learn how to mimic their movements and expressions in order to generate realistic-looking fake footage.
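One common face-swap architecture trains a single shared encoder with a separate decoder per person, so an expression captured from person A can be rendered with person B's face. The sketch below illustrates only that data flow; the names, dimensions, and random weights are illustrative assumptions, and a real system would train these weights on thousands of aligned face crops.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy "shared encoder, two decoders" face-swap sketch. All shapes and
# weights are illustrative assumptions: the weights are random, so the
# output is meaningless, but the data flow mirrors the real technique.

DIM_FACE = 64 * 64   # flattened 64x64 grayscale face crop
DIM_CODE = 128       # latent pose/expression representation

encoder   = rng.normal(size=(DIM_CODE, DIM_FACE)) * 0.01  # shared across both people
decoder_a = rng.normal(size=(DIM_FACE, DIM_CODE)) * 0.01  # reconstructs person A
decoder_b = rng.normal(size=(DIM_FACE, DIM_CODE)) * 0.01  # reconstructs person B

def encode(face):
    """Map a face crop to the shared latent representation."""
    return np.tanh(encoder @ face)

def swap_to_b(face_of_a):
    """Encode A's expression and pose, then render it with B's decoder."""
    return decoder_b @ encode(face_of_a)

face_a = rng.uniform(size=DIM_FACE)
fake_b = swap_to_b(face_a)
print(fake_b.shape)  # same shape as the input face crop
```

Because the encoder is shared while the decoders are person-specific, training teaches the encoder to capture pose and expression rather than identity, which is what makes the swap possible.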

The technology has become increasingly accessible over the past few years due to advances in computing power and open source software tools like DeepFaceLab, which allow users with minimal technical knowledge to create convincing deepfakes quickly and easily. In fact, some experts estimate that it now takes less than $10 and 8 minutes for an average person with no prior experience in video editing or AI programming to produce a basic deepfake video from scratch.

This ease of access is concerning because it means anyone can use the technology for malicious purposes, such as creating false evidence against innocent people or spreading misinformation about political candidates during election season, without any specialized skills or resources at their disposal. It also raises questions about our ability to trust what we see online, since there is no way for us to know whether something is real without verifying its authenticity firsthand.

Furthermore, deepfakes could give criminals new ways of committing fraud: they could be used to impersonate someone else on social media in order to steal money from unsuspecting victims through phishing scams or other cybercrime. This is especially dangerous given how difficult it can be for law enforcement agencies to detect these kinds of crimes, since many lack expertise in digital evidence cases involving sophisticated AI-generated content.

The implications go beyond criminal activity, though. Even seemingly harmless uses, like creating celebrity lookalikes, could lead us down a slippery slope where our perception of reality becomes distorted because we can no longer tell what is real from what isn't. As more people gain access to these powerful tools, greater public awareness around responsible usage will be needed so that everyone understands the risks of creating and manipulating digital content.

Fortunately, researchers are already developing methods to detect deepfakes before they spread too far across the internet. For example, Google recently released a tool called "Assembler" that uses machine learning algorithms to analyze videos and identify signs of tampering. Other companies have developed similar solutions based on audio analysis, helping to spot discrepancies between original recordings and faked versions. While these efforts may not completely solve the problem yet, they provide a promising starting point for protecting ourselves against malicious misuse of this technology.
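The ML-based detectors described above are far more sophisticated than anything shown here, but one simple detection idea can be sketched directly: crude tampering often leaves statistical discontinuities between consecutive video frames. The clip, threshold, and function names below are illustrative assumptions, not any real detector's method.

```python
import numpy as np

def frame_diff_scores(frames):
    """Mean absolute pixel change between each pair of consecutive frames."""
    frames = np.asarray(frames, dtype=float)
    return np.abs(np.diff(frames, axis=0)).mean(axis=(1, 2))

def flag_suspect_frames(frames, z_thresh=3.0):
    """Flag frames whose change from the previous frame is a statistical
    outlier relative to the rest of the clip (z-score above z_thresh)."""
    scores = frame_diff_scores(frames)
    z = (scores - scores.mean()) / (scores.std() + 1e-9)
    return np.nonzero(z > z_thresh)[0] + 1  # index of the suspicious frame

rng = np.random.default_rng(1)
video = rng.uniform(size=(30, 8, 8)) * 0.05 + 0.5  # smooth synthetic clip
video[20] += 0.8  # simulate a crudely spliced frame
print(flag_suspect_frames(video))
```

A splice disturbs both the transition into and out of the altered frame, so both neighboring transitions tend to be flagged; real detectors combine many such cues with learned features rather than a single threshold.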

Original source article rewritten by our AI: NPR
