Mark Cuban, the billionaire entrepreneur and owner of the Dallas Mavericks, recently spoke out about how he believes artificial intelligence (AI) technology will make internet misinformation worse. He made his comments during a recent interview with CNBC’s “Squawk Box” program.
Cuban said that AI is making it easier for people to spread false information online, which can have serious consequences for society. He noted that AI-generated content has become increasingly sophisticated in recent years and can be used to create convincing fake news stories or manipulate public opinion on social media platforms.
He also warned that this type of technology could be used by malicious actors to target vulnerable populations or influence elections. Cuban argued that governments need to take steps now to regulate the use of AI before it becomes too widespread and difficult to control.
The billionaire went on to explain why he thinks AI is so dangerous when it comes to spreading misinformation: “It’s not just about creating fake news stories; it’s about manipulating public opinion through targeted campaigns using deepfakes [manipulated videos] or other forms of manipulation.” He added that these tactics are becoming more common as AI technology advances, making them harder for people to detect and combat effectively.
Cuban suggested several ways governments could address this issue, including increasing transparency around who is behind certain campaigns and requiring companies like Facebook and Twitter to disclose any automated accounts they may have created using AI tools. Additionally, he called for greater investment in research into how best to identify manipulated content online so users can better protect themselves from being misled by false information generated by machines rather than humans.
In conclusion, Mark Cuban believes that if we don’t act soon, artificial intelligence will only make internet misinformation worse over time, because of its ability to generate convincing yet false content quickly and at scale, faster than human reviewers alone can detect it. Left unchecked, he warned, this could have devastating social and political consequences. To prevent that outcome, he recommends stronger regulation of these technologies and greater investment in research on methods that would help users defend themselves against such machine-generated threats. Many experts agree the issue deserves serious attention at a time when trust in what we read online has never been more important, or more fragile.