OpenAI scuttles AI-written text detector over ‘low rate of accuracy’

OpenAI has shut down its AI classifier, a tool that claimed to determine the likelihood that a text passage was written by another AI. While many used it, and perhaps unwisely relied on it, to catch low-effort cheats, the company has retired it over its widely criticized “low rate of accuracy.”

The theory that AI-generated text has some identifying feature or pattern that can be detected reliably seems intuitive, but so far it has not really been borne out in practice. Although some generated text may have an obvious tell, the differences between large language models and the rapidity with which they have developed have made those tells all but impossible to rely on.
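
For a sense of what these detectors actually look at, here is a toy sketch of the perplexity heuristic that tools like GPTZero are reported to lean on. To be clear, this is not OpenAI’s method; the model choice and interpretation are illustrative only. The idea is that text a language model finds highly predictable (low perplexity) is more likely to have been produced by one.

```python
# Toy perplexity check: score how "predictable" a passage is to a small
# language model. The model (gpt2) and any threshold are illustrative,
# not what any shipped detector actually uses.
import torch
from transformers import GPT2LMHeadModel, GPT2TokenizerFast

tokenizer = GPT2TokenizerFast.from_pretrained("gpt2")
model = GPT2LMHeadModel.from_pretrained("gpt2")
model.eval()

def perplexity(text: str) -> float:
    ids = tokenizer(text, return_tensors="pt").input_ids
    with torch.no_grad():
        # Passing labels=ids makes the model return the mean
        # cross-entropy over the passage's tokens.
        loss = model(ids, labels=ids).loss
    return torch.exp(loss).item()

print(f"perplexity: {perplexity('The quick brown fox jumps over the lazy dog.'):.1f}")
```

The brittleness is easy to see: a light paraphrase, an idiosyncratic human writing style, or output from a newer model than the detector’s reference can all push the score across any fixed threshold, which is exactly the unreliability described above.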

TechCrunch’s own test of a gaggle of AI writing detectors concluded that they are at best hit-or-miss, and at worst totally worthless. Of seven generated text snippets given to a variety of detectors, GPTZero correctly identified five; OpenAI’s classifier caught only one. And that was with a language model that was not cutting-edge even at the time.

But some took the claims of detection at face value, or rather well above it, since OpenAI shipped the classifier with a list of limitations significant enough that one wondered why the company released it at all. People worried that their students, job applicants, or freelancers were submitting generated text would run that text through the classifier, and while the results should not have been trusted, they sometimes were.

Given that language models have only improved and proliferated, it seems someone at the company decided it was time to take this fickle tool offline. “We are working to incorporate feedback and are currently researching more effective provenance techniques for text,” reads a July 20 addendum to the classifier announcement post. (Decrypt seems to have been the first to notice the change.)

I asked about the timing and reasoning behind shuttering the classifier, and will update if I hear back. But it’s curious that it should happen around the time OpenAI joined several other companies in a White House-led “voluntary commitment” to develop AI ethically and transparently.

Among the commitments the companies made is developing robust watermarking and/or detection methods. Or attempting to do so, anyway: despite every company making noises to this effect over the last six months or so, we have yet to see any watermark or detection method that is not trivially circumvented.
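
To see why circumvention is so easy, consider a sketch of one widely discussed academic proposal, the “green list” watermark of Kirchenbauer et al. (2023). This is not any company’s shipped system, and the vocabulary size and hash are illustrative: each token’s predecessor seeds a pseudo-random split of the vocabulary, generation quietly favors the “green” half, and a detector checks whether a passage lands on its green lists far more often than chance.

```python
# Minimal sketch of green-list watermark detection (Kirchenbauer et al. 2023).
# Illustrative only: real schemes key the hash on a secret and bias the
# model's logits during generation; this shows just the detection side.
import hashlib

def is_green(prev_token: int, token: int) -> bool:
    # Deterministic pseudo-random partition of the vocabulary,
    # keyed on the preceding token: ~half the vocab is "green".
    digest = hashlib.sha256(f"{prev_token}:{token}".encode()).digest()
    return digest[0] % 2 == 0

def green_fraction(tokens: list[int]) -> float:
    # Unwatermarked text should hover near 0.5; watermarked text,
    # whose generator favored green tokens, skews well above it.
    hits = sum(is_green(prev, tok) for prev, tok in zip(tokens, tokens[1:]))
    return hits / max(len(tokens) - 1, 1)
```

The weakness follows directly from the construction: paraphrasing, translation, or even re-tokenizing the text scrambles the (previous token, token) pairs the detector counts, washing the signal out, which is why no scheme of this kind has yet survived contact with a motivated evader.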

No doubt the first to accomplish this feat will be richly rewarded (any such tool, if truly reliable, would be invaluable in countless circumstances), so it is probably superfluous to make it a part of any AI accords.
