Detecting AI-Written Content
When ChatGPT hit academia hard at the start of this year, there was much fear from teachers at all grade levels. I saw articles and posts saying it would be the end of writing. Edward Tian, a Princeton University senior majoring in computer science, built an app that helps detect whether a text was written by a human being or by an artificial intelligence tool like ChatGPT. He has said that the algorithm behind his app, GPTZero, can "quickly and efficiently detect whether an essay is ChatGPT or human written."
GPTZero is at gptzero.me. Now that the app has been released as a free and paid product, I was able to attend an online demo and to communicate with Tian.
Because ChatGPT has exploded in popularity, it has drawn interest from investors. The Wall Street Journal reported that its maker, OpenAI, could attract investments valuing the company at $29 billion. But ChatGPT has also raised fears that students are using it to cheat on writing assignments.
GPTZero examines two variables in any piece of writing it analyzes. The first is a text's "perplexity," which measures its randomness: human-written texts tend to be more unpredictable than bot-produced work. The second is "burstiness," which measures variance, or inconsistency, within a text; human-generated writing tends to show far more of it.
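For readers who want to experiment, here is a minimal sketch of how those two signals could be computed. It assumes GPT-2 (via the Hugging Face transformers library) as the scoring model and uses the variance of sentence-level perplexities as a stand-in for burstiness; GPTZero's actual model and scoring are not public, so these details are illustrative assumptions, not Tian's method.

```python
# Minimal sketch: perplexity and a rough burstiness measure using GPT-2.
# The choice of GPT-2 and the sentence-splitting heuristic are assumptions
# for illustration only.
import math
import torch
from transformers import GPT2LMHeadModel, GPT2TokenizerFast

tokenizer = GPT2TokenizerFast.from_pretrained("gpt2")
model = GPT2LMHeadModel.from_pretrained("gpt2")
model.eval()

def perplexity(text: str) -> float:
    """Perplexity of `text` under GPT-2: lower means more predictable."""
    enc = tokenizer(text, return_tensors="pt")
    with torch.no_grad():
        # Passing the input ids as labels yields the average token loss.
        loss = model(enc.input_ids, labels=enc.input_ids).loss
    return math.exp(loss.item())

def burstiness(text: str) -> float:
    """Variance of sentence-level perplexities across the text."""
    sentences = [s.strip() for s in text.split(".") if s.strip()]
    scores = [perplexity(s) for s in sentences]
    mean = sum(scores) / len(scores)
    return sum((s - mean) ** 2 for s in scores) / len(scores)

sample = "The cat sat on the mat. It was, improbably, reciting Keats."
print(perplexity(sample), burstiness(sample))
```

Texts with uniformly low perplexity and little sentence-to-sentence variance would, on this rough logic, look more machine-like.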
Unlike plagiarism checkers such as Turnitin.com, the app does not tell you the source of the writing. That is because of an odd situation: writing produced by a chatbot doesn't come from any particular source.
There are other tools for detecting AI writing; see https://www.pcmag.com/how-to/how-to-detect-chatgpt-written-text
Language models themselves can also be trained to spot AI-generated writing. Given two sets of text, one produced by AI and the other written by people, you could in theory train a model on those labels to recognize and flag AI writing, as in the sketch below.
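Here is a toy sketch of that two-corpus idea. As a simple stand-in for fine-tuning a language model, it trains a TF-IDF plus logistic regression classifier on a handful of invented placeholder texts; a fine-tuned language model would follow the same train-on-labels pattern with a much stronger feature extractor.

```python
# Toy sketch: train a classifier on texts labeled human vs. AI.
# TF-IDF + logistic regression stands in for a fine-tuned language model;
# the example texts are invented placeholders, not real training data.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

human_texts = ["I scribbled this on the train, half asleep.",
               "Honestly, the ending of that book annoyed me."]
ai_texts = ["In conclusion, there are several important factors to consider.",
            "Overall, this topic has many advantages and disadvantages."]

X = human_texts + ai_texts
y = ["human"] * len(human_texts) + ["ai"] * len(ai_texts)

clf = make_pipeline(TfidfVectorizer(), LogisticRegression())
clf.fit(X, y)

print(clf.predict(["It is important to note that many aspects are relevant."]))
```

With a large, representative pair of corpora, the same setup becomes a serious detector; with a toy sample like this, it only illustrates the mechanism.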