AI-generated text is becoming increasingly common, and telling it apart from human writing is genuinely difficult. As a result, many websites have sprung up claiming to detect AI-generated text. Unfortunately, most of these sites fail spectacularly at the task.
A recent study by researchers at the University of Washington found that most services claiming to catch AI-written text are not accurate enough for practical use. The team tested eight different services designed to detect machine-generated text and found that none could identify AI-generated texts with any useful degree of accuracy or consistency.
The researchers tested each service on a dataset containing both human-written articles and articles generated with GPT-2 (Generative Pre-trained Transformer 2), an open-source language model released by OpenAI in 2019. They then compared each service's results against a baseline set that humans had manually labeled as either “human” or “machine” written.
The results were striking: none of the services performed better than random guessing at detecting machine-written text. Even more concerning, some services actually performed worse than random guessing, meaning they were less accurate than someone who simply flipped a coin without reading the article at all. This suggests that these services are unreliable for identifying AI-generated content.
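To make the comparison concrete, here is a minimal sketch of the kind of evaluation the study describes: scoring a detector's predictions against human-assigned labels and against a coin-flip baseline. The labels and predictions below are made-up illustrative values, not data from the study.

```python
import random

# Hypothetical ground-truth labels: 1 = machine-written, 0 = human-written.
labels = [1, 0, 1, 1, 0, 0, 1, 0]
# Hypothetical predictions from one detection service on the same texts.
predictions = [0, 0, 1, 0, 0, 1, 1, 1]

def accuracy(preds, truth):
    """Fraction of texts labeled correctly."""
    return sum(p == t for p, t in zip(preds, truth)) / len(truth)

detector_acc = accuracy(predictions, labels)

# Random-guessing baseline: flip a fair coin for every text.
random.seed(0)
random_preds = [random.randint(0, 1) for _ in labels]
baseline_acc = accuracy(random_preds, labels)

print(f"detector: {detector_acc:.2f}, random baseline: {baseline_acc:.2f}")
```

A detector whose accuracy hovers around the baseline (roughly 0.5 on a balanced dataset) is providing no real signal; one that falls below it is actively misleading.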
So why do so many websites claim they can accurately detect AI-written text? Part of the answer is that, until recently, there had been very little research into how best to distinguish human from machine writing styles, leaving companies scrambling for solutions without much guidance on what works. And because there is no single standard yet for measuring accuracy across different AI models, companies may be tempted to make overly optimistic claims based on limited datasets or internal testing rather than externally validated methods like those used in this study.
This lacklustre performance highlights how far we still are from reliably distinguishing machine-produced language from human writing, something that will only grow in importance as more businesses rely on automated systems for tasks such as customer support and marketing automation, where trustworthiness is paramount. Until then, consumers should remain wary of any website claiming it can spot AI-generated writing with certainty. After all, if you can’t trust them with something this simple, why would you trust them with anything else?