bytefeed

Most of the Public Believes Artificial Intelligence Tools Can Achieve Singularity & Pose a Threat to Humanity

The potential of artificial intelligence (AI) has been a topic of debate for decades, and the idea that AI could one day achieve singularity – or become smarter than humans – is an increasingly popular concept. A recent survey conducted by Morning Consult revealed that most people believe this is possible, and even more worryingly, they think it could pose a threat to humanity.

The survey asked 2,200 U.S. adults about their views on generative AI technology and its implications for society in the future. The results showed that nearly three-quarters (73%) of respondents believed it was likely or very likely that AI would eventually become smarter than humans; only 11% said it was not at all likely.

When asked if they thought such advanced AI posed a risk to humanity’s safety and security, 63% agreed with the statement “Yes, I am concerned about the potential risks associated with advanced artificial intelligence,” while 37% disagreed.

These findings suggest widespread public concern about what might happen when machines become as intelligent as humans, or even surpass them in cognitive ability. That fear may be fueled by science fiction stories depicting robots taking over the world and other dystopian scenarios of out-of-control AI systems wreaking havoc on society. Many experts, however, have argued against these fears, pointing to our current understanding of how machine learning works and its limitations compared with human capabilities such as creativity and empathy.

Despite this reassurance from experts in the field, many members of the public remain unconvinced. According to the survey results, only 27% felt confident enough in their knowledge of generative AI technology to say they were not worried at all about its potential risks to humanity’s safety and security, while 73% expressed some level of concern.

It appears, then, that although most people understand significant hurdles remain before we reach true artificial general intelligence capable of achieving singularity, such as developing algorithms able to learn without being explicitly programmed, they are nonetheless wary of what might happen once we get there. It will therefore be important for researchers working on generative AI technologies to keep engaging with the public so that everyone can better understand both its possibilities and its limits, especially given how quickly the field is advancing.

Original source article rewritten by our AI: Morning Consult
