
This Was (Not) Written by AI

How to Determine Whether an Assignment Was Authored by ChatGPT

By Becks Simpson for Mouser Electronics

(Image Source: Brian/Stock.adobe.com)

As natural language processing (NLP) models from the generative pre-trained transformer (GPT) family stun the world with their ability to produce human-like content, answer complex questions, and debate various topics with deftness, the battle to prevent their misuse is also intensifying. In particular, while these models may be useful for many writing applications, using them for school writing assignments circumvents student learning. Educators are now looking for ways to detect whether a piece was written by artificial intelligence (AI) both through manual means, such as looking at the content itself for signs, and through automatic means, such as using text classifiers built from AI. Others are experimenting with ways to structure the assignments so that students either are unable to do them with AI or feel encouraged to do the work themselves. Of all the methods, restructuring assignments seems to be having the biggest impact.

Using a (Not Always) Helpful Tool

New AI models in NLP, particularly those of the GPT variety (ChatGPT, GPT-3, etc.), are completely changing the dynamic between people and writing. Having access to such powerful technology means that writers can produce content faster than ever before, generating whole articles from a single idea or a series of bullet points. They can defeat writer’s block by getting helpful prompts and fresh ideas to inspire their writing. This recent generation of NLP models has been embraced as a writing tool by experienced and inexperienced writers alike, with the latter now far freer to turn their ideas into eloquent articles and stories than they were before these tools became so readily available.

However, a downside exists to using these AI models as writing aids. In certain settings, such as secondary and postsecondary education, the goal of writing exercises is to learn how to write well and how to formulate and express ideas and arguments persuasively. Writing assignments are also a vehicle for learning how to research evidence to support points made in the piece and for honing critical thinking skills. Relying entirely on an AI model to do the work circumvents that process, with students failing to learn the important skills. As a result, efforts are underway to understand how one can determine whether something was written by AI.

Identifying Telltale Signs: How AI Gives Itself Away

As AI-authored articles become more mainstream, people are exposed to more AI-generated content, giving them a window into the type of writing these models produce. That exposure has helped immensely in identifying the features that reveal whether the author was, in fact, a machine. Interestingly, most of these signs relate to the overall flow and feel of the prose rather than the fine details of grammar and spelling—though factual accuracy is a more specific indicator. However, for highly popular topics where a clear answer or common set of facts is usually cited, even human-written content is very likely to resemble and overlap with content written by AI. This is especially true for shorter texts, which tend to look and sound the same regardless of who wrote them. For this reason, trying to identify AI-written assignments only works if the text is sufficiently long.

With that in mind, AI-generated articles tend to repeat certain pieces of content, especially as the article gets longer. For models like ChatGPT, each predicted token is conditioned on all the tokens predicted before it, so the more often certain words appear early in the generated text, the more likely they are to reappear later. Another sign that writing was produced by AI is a lack of voice, where the content seems flat and devoid of any particularly strong emotion or opinion. This is particularly evident if the reader knows the writer’s personality, as is the case with teachers and students. AI-generated text may also be easier to spot if other examples of text from the same author are available, especially samples produced under supervised conditions like an exam, which are likely to be less polished and more representative of the author’s true voice. If the common expressions and language differ significantly between two pieces, they likely weren’t written by the same person.
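To make the repetition signal concrete, a rough redundancy check can be scripted in a few lines of Python. The function below is a minimal sketch, not a validated detector; the n-gram size and the sample text are illustrative assumptions. It simply reports what fraction of word n-grams occur more than once, a figure that tends to creep upward in longer machine-generated passages.

# Minimal sketch: repeated n-grams as a rough redundancy signal.
# The n-gram size and sample text below are illustrative assumptions.
from collections import Counter
import re

def repeated_ngram_ratio(text: str, n: int = 4) -> float:
    """Fraction of word n-grams that occur more than once in the text."""
    words = re.findall(r"[a-z']+", text.lower())
    ngrams = [tuple(words[i:i + n]) for i in range(len(words) - n + 1)]
    if not ngrams:
        return 0.0
    counts = Counter(ngrams)
    repeated = sum(c for c in counts.values() if c > 1)
    return repeated / len(ngrams)

sample = ("The policy improves water quality for residents. "
          "The policy improves water quality for local wildlife as well.")
print(round(repeated_ngram_ratio(sample), 2))  # higher values suggest more recycled phrasing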

Another telltale sign is whether facts are accurate and properly cited. Generative models like GPT-3 and ChatGPT are known to have a difficult time producing the right answers—even though their writing may confidently assert otherwise—because although they were trained on large swaths of the internet’s textual information, they don’t actually have everything memorized. These models hold only an approximate knowledge of things and, instead of regurgitating trivia, use what they’ve seen to reproduce content that merely sounds like human language. Many of these models are not connected to anything that can check facts, so an assignment littered with errors that a human doing sufficient research would have caught is a good indicator that it might be AI-derived. This is even more likely if the content relates to current events, people, or places, because the most recent versions of publicly available models were only trained on data up to a certain date, typically 2021 or 2022. When evaluating for this specific sign, though, it is important to check whether newer model versions with more up-to-date knowledge have been released.

AI Checking for AI

As AI becomes increasingly sophisticated, some of these signs will disappear or become so subtle that humans will struggle to detect them. For example, some newer large language models (LLMs) are being trained to know when to call external sources to retrieve factual information or run mathematical calculations, which weakens the factual-error signal described above. To compensate, AI models that determine the provenance of content are being released more frequently. Even the producer of ChatGPT, OpenAI, has released a tool that lets educators classify whether something was written by an LLM. On its test set, the tool correctly identified 26% of AI-written text but had a false positive rate of 9%, mislabeling human-written text as AI-written. Despite the accuracy issues, OpenAI hopes these imperfect tools will still help in the fight against academic misconduct using AI.
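OpenAI’s classifier was offered as a hosted web tool rather than a code library, but the general workflow of running a provenance classifier over a piece of text can be sketched with a publicly available stand-in. The snippet below is an illustration only: it assumes the Hugging Face transformers library and uses the older GPT-2 output detector published on the model hub (roberta-base-openai-detector), which is a different model from OpenAI’s classifier, with its own label names and accuracy characteristics.

# Illustration only: scoring a passage with an off-the-shelf detector from the
# Hugging Face model hub. This is the older GPT-2 output detector, not the
# OpenAI classifier described above; its labels and accuracy differ.
from transformers import pipeline

detector = pipeline("text-classification", model="roberta-base-openai-detector")

passage = ("Water quality is influenced by a wide range of policies, and it is "
           "important to consider all relevant factors when evaluating outcomes.")
result = detector(passage)[0]
print(result["label"], round(result["score"], 3))  # interpretation depends on the model card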

Combining classifiers is another approach that can improve the success rate of detecting AI-written text. Additional document classifiers like GPTZeroX and DetectGPT look at perplexity and burstiness. Perplexity measures how predictable a piece of text is to a language model, with the expectation that human writing is more surprising; burstiness measures how perplexity varies from sentence to sentence, with the expectation that AI tends to keep it uniform. Although these classifiers come with limitations (for example, DetectGPT only works for certain GPT models), combining the results from several of them may better indicate AI-written text.
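As a rough sketch of how these two metrics can be estimated in practice, the code below computes per-sentence perplexity with GPT-2 and uses the spread of those values as a simple burstiness score. The choice of the Hugging Face transformers library, GPT-2 as the scoring model, and the sample sentences are all assumptions for illustration; this is not how GPTZeroX or DetectGPT are implemented internally.

# Minimal sketch: per-sentence perplexity and a simple burstiness score,
# assuming the Hugging Face transformers library with GPT-2 as the scoring model.
import math
import torch
from transformers import GPT2LMHeadModel, GPT2TokenizerFast

tokenizer = GPT2TokenizerFast.from_pretrained("gpt2")
model = GPT2LMHeadModel.from_pretrained("gpt2")
model.eval()

def sentence_perplexity(sentence: str) -> float:
    """Perplexity of one sentence under GPT-2 (lower = more predictable)."""
    ids = tokenizer(sentence, return_tensors="pt").input_ids
    with torch.no_grad():
        loss = model(ids, labels=ids).loss  # mean cross-entropy per token
    return math.exp(loss.item())

def burstiness(sentences):
    """Standard deviation of per-sentence perplexity; human writing tends to vary more."""
    ppls = [sentence_perplexity(s) for s in sentences]
    mean = sum(ppls) / len(ppls)
    return (sum((p - mean) ** 2 for p in ppls) / len(ppls)) ** 0.5

sample = ["The mitochondria is the powerhouse of the cell.",
          "My grandmother, though, swore the real powerhouse was strong coffee."]
print([round(sentence_perplexity(s), 1) for s in sample])
print(round(burstiness(sample), 1))

A wide spread in per-sentence perplexity (high burstiness) is weak evidence of human authorship, while a flat profile is weak evidence of machine-generated text, which is exactly why combining several such signals is recommended.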

Avoiding the Problem Altogether

If this seems like a circular chase of AI trying to beat AI, in a way it is! As the checking AI improves, the writing AI will eventually be taught how to circumvent it. The cycle will continue until at some point it may become nearly impossible to determine whether a piece was written by AI just by examining the writing alone. This has led educators to experiment with assignment structure and tasks that make using AI either difficult or undesirable—for example, by choosing niche topics for the assignments (e.g., the local community or lesser-known historical figures), having students write about something personal, or focusing more on project-based learning.

Choosing niche topics is useful because typically AI will have less knowledge about them, and students will have to do their own research to find the right answers. They may still use the models to produce the written content, but at least they will have learned the process of researching and assessing the relevance of information. Requiring students to write about personal topics works more for psychological reasons because people are much more inclined to do the work themselves when they are the focus. Teachers have found that students are much more open and excited to work on personal topics than more external-facing topics. Finally, project-based learning helps because it encompasses multiple tasks of varying complexity, often with a practical aspect that is impossible for AI to accomplish. For example, one project across school districts involved looking at how policies affect water quality, which meant that students needed to not only research the policies but also go to the field to take water quality measurements. They also needed to present their findings in the form of graphs, charts, and a story around the data, something that ChatGPT would find difficult to do.

Conclusion

While the GPT family of models may be immensely helpful in certain domains, in others, such as secondary or postsecondary education, these models should be used sparingly to avoid both overreliance and disregard for learning outcomes. As such, educators are finding ways to determine whether something was written by AI by examining the text itself for signs like poor factual recall, unoriginal or uninspired writing, and excessive repetition. Others are looking to AI-based detection tools for help. However, because both of these approaches may be insufficient—or worse, produce false positives—there is a growing push to structure assignments so that AI can’t do them or so that students won’t want to use AI to do them. In the end, the latter might be the best approach, since it largely removes the desire to use AI in the first place and keeps students more engaged in the long run.

Author Bio

Becks is a full-stack AI lead at Rogo, a New York-based startup building a platform that allows anyone to analyze and gain insights from their own data without a background in data science. In her spare time, she also works with Whale Seeker, another startup using AI to detect whales so that industry and these gentle giants can coexist profitably. She has worked across the deep learning and machine learning spectrum, from investigating novel deep learning methods and applying research directly to real-world problems, to architecting pipelines and platforms for training and deploying AI models in the wild, to advising startups on their AI and data strategies.


