On Tuesday, OpenAI, the company behind the viral chatbot ChatGPT, released a tool that tries to detect whether a chunk of text was written by an AI or a human. Unfortunately, it is accurate only about one time in four.
“Our classifier is not completely reliable,” the company wrote in a blog post on its website. “We are making [it] publicly available to get feedback on whether such imperfect tools are useful.”
OpenAI found that its detection tool correctly identified 26% of AI-written text as “likely written by AI,” while incorrectly labeling human-written text as AI-written 9% of the time.
Since its launch in November, ChatGPT has become a worldwide sensation, responding to all kinds of questions with seemingly intelligent answers. Last week, it was reported that ChatGPT passed a final exam for the Wharton School MBA program at the University of Pennsylvania.
The bot has caused particular concern among academics, who worry that high school and college students will use it to complete their homework and assignments. One developer became a darling of professors everywhere after setting up a website that can detect whether a sentence was created using ChatGPT.
OpenAI seems to be aware of the problem. “We are working with US educators to learn what they are seeing in the classroom and discuss ChatGPT’s capabilities and limitations. As we learn, we will continue to expand our outreach,” the company wrote in a statement.
Still, by OpenAI’s own admission and in BuzzFeed News’ completely unscientific testing, no one should rely solely on the company’s detection tool.
We asked ChatGPT to write 300 words each about Joe Biden, Kim Kardashian, and Ron DeSantis, then ran the text through OpenAI’s own tool to see whether it would detect that an AI had written it. We got three different results. The tool said it was “very unlikely” that the Biden text was AI-generated, that the Kardashian text was “possibly” AI-generated, and that it was “unclear” whether the ChatGPT-written DeSantis text was AI-generated.
Others who played with the detection tool found that it messed up pretty spectacularly. When Sam Biddle of The Intercept pasted in a chunk of text from the Bible, OpenAI’s tool said it was “highly likely” that the text had been generated by AI.