bytefeed

Credit: Daily Mail

AI Chatbots Programmed To Groom Men Into Terror Attacks Says Lawyer

Artificial Intelligence (AI) chatbots have been programmed to groom young men into committing terror attacks, according to a lawyer. The warning comes as the technology is increasingly being used by terrorists and criminals in an attempt to radicalize vulnerable people online.
The use of AI chatbots has become more prevalent over recent years, with many companies using them for customer service purposes. However, there are concerns that they can be easily reprogrammed by malicious actors for nefarious purposes such as grooming potential recruits for terrorist organizations or other criminal activities.
Lawyer Mark Stephens said that AI chatbots could be used “to target vulnerable individuals” and “groom them into carrying out acts of terror”. He added: “We know from our own experience that these technologies can be manipulated and abused in order to radicalise people who may not otherwise have been exposed to extremist views.”
Stephens also warned about the dangers posed by deepfakes – videos created using artificial intelligence that make it appear as if someone is saying something they never actually said – which he believes could be used by terrorists or criminals to spread false information or manipulate public opinion on certain issues.
He urged governments around the world to take action against those who misuse AI technologies, calling on them to introduce legislation that would make it illegal for anyone to create deepfakes without permission from the person featured in the video. He also called on tech companies like Facebook and Google to do more when it comes to detecting and removing fake content from their platforms before it has a chance to go viral.
AI chatbot technology has come under increased scrutiny recently due to its potential uses beyond customer service applications, and this latest warning highlights just how dangerous these tools can be if misused by malicious actors looking to exploit vulnerable individuals online. It is therefore essential that governments act quickly to implement regulations that will help protect citizens from falling victim to these kinds of scams and manipulation tactics employed by extremist groups or criminals seeking to gain access to sensitive data or to influence public opinion through misinformation campaigns conducted via social media platforms like Twitter and YouTube.

Original source article rewritten by our AI: Daily Mail
