bytefeed

Credit:
"ChatGPT AI Tech Called Out by Computing Expert for Fabricating Responses" - Credit: CNET


Computing Guru Criticizes ChatGPT AI Tech for Making Things Up

Artificial intelligence (AI) technology is becoming increasingly popular in the world of computing, but one computing expert has recently criticized a prominent AI chatbot known as ChatGPT. According to this expert, ChatGPT can be dangerous because it often makes things up when responding to queries.

ChatGPT is a natural language processing (NLP) system developed by OpenAI. It uses deep learning to generate responses based on patterns in the large volumes of text it was trained on. The idea behind it is that it can provide more direct, conversational answers than traditional search engines like Google or Bing, which rely on keywords and phrases for their results.

However, according to computer scientist Dr. David Evans from the University of Virginia, there are serious flaws in ChatGPT's approach. He believes that while the technology may accurately answer simple questions such as "What time is it?" or "How old are you?", its tendency to make up information when responding to more complex queries could lead people astray, and could cause them harm if they take its advice seriously without first verifying its accuracy.

Dr Evans explains: "The problem with these systems is that they don't know what they're talking about – so they just make stuff up." He goes on to say: "If someone asks 'what's the best way to invest my money?' then a system like this might give them bad advice because it doesn't understand finance." This means users need to be aware of how much trust to place in any response generated by an AI system before acting on it – something many people may not realise until after they have acted on incorrect information from an AI chatbot or similar program.

In addition, Dr Evans points out another potential issue with NLP systems such as ChatGPT: their tendency towards bias. Because these systems are trained on particular data sets, which may contain inherent biases themselves, their output can carry inaccurate assumptions about certain topics or groups of people, depending on what data was used during training. As he puts it: "These systems learn from whatever data set you give them — so if your dataset contains gender stereotypes then those will show up in your output".
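The mechanism Dr Evans describes can be illustrated with a deliberately tiny toy model. The sketch below is not how ChatGPT works internally; it is a minimal, assumed example where a trivial "model" counts which pronoun follows a word in a made-up, skewed corpus, and its "prediction" simply mirrors the skew in its training data:

```python
from collections import Counter

# Toy corpus with a built-in skew: in 9 of 10 sentences the pronoun
# after "said" is "she". (Illustrative data, not a real dataset.)
corpus = (
    ["the nurse said she was tired"] * 9
    + ["the nurse said he was tired"] * 1
)

# "Train" a trivial model: count which word follows "said".
next_word_counts = Counter()
for sentence in corpus:
    words = sentence.split()
    for i, word in enumerate(words[:-1]):
        if word == "said":
            next_word_counts[words[i + 1]] += 1

# The model's most likely continuation reproduces the training skew.
prediction = next_word_counts.most_common(1)[0][0]
print(prediction, dict(next_word_counts))  # prints: she {'she': 9, 'he': 1}
```

However simplistic, the same principle scales up: a statistical model has no view of the world beyond its training data, so imbalances in that data become imbalances in its output.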

Ultimately, despite his criticisms, Dr Evans does believe there are still plenty of useful applications for NLP technologies such as ChatGPT, particularly in customer service scenarios, where automated responses can reduce the cost of providing support staff 24/7 while still delivering satisfactory levels of customer satisfaction through quick and accurate replies. However, he cautions against relying too heavily on these kinds of technologies without proper oversight and verification processes in place, especially when dealing with sensitive matters such as financial investments, where even small errors could have large consequences further down the line.

All in all, while artificial intelligence technologies offer great potential benefits across various industries, including healthcare and finance, caution must always be taken when trusting any machine-generated output, both because of inaccuracies caused by fabricated answers and because of potential biases in the underlying datasets used during training. By following this advice, we can keep our use cases safe while still enjoying the advantages offered by modern AI.

Original source article rewritten by our AI:

CNET
