Detecting AI-Created Content

ITS and Online Learning Services do not currently support or recommend any tool for the detection of AI-generated writing.

All available research to date indicates that such tools are easily fooled and risk harming students by incorrectly flagging human-authored writing as AI-generated.

Where we stand — February 2024

ITS and Online Learning Services are acutely aware that artificial intelligence tools like ChatGPT have been disruptive to the assessment methods used by many instructors. We have long been aware of the risk of ghostwriters and creative solutions for cheating on exams, but the ease with which these new AI tools can generate believable content has led to a sharp increase in questions about how to determine that the work submitted by students is original (Mills 2023). The detection of AI content is notoriously difficult (Edwards 2023; Heikkilä 2022).

To date, no tool can reliably detect AI-generated writing (Weber-Wulff et al. 2023). Multiple independent studies have found that while some AI detectors can identify raw output copied and pasted directly from ChatGPT at rates above 90%, even the most accurate detectors are easily fooled by basic manipulation of AI-generated writing, such as light paraphrasing, rewording by a human, or a second pass through another AI writing tool. When text is subjected to these minor alterations before submission, the rate of "false negatives" (AI-generated text incorrectly attributed to a human) approaches 50% in testing.

In July 2023, OpenAI, the company behind ChatGPT, disabled public access to its AI classifier tool (Hendrik Kirchner et al. 2023). The announcement cited the classifier's "low rate of accuracy," even on text generated by OpenAI's own ChatGPT service. While it was available, the classifier incorrectly identified human-written text as AI-generated (a "false positive") in 9% of cases in OpenAI's own analysis. Independent testing found even higher rates (Elkhatat, Elsaid, and Almeer 2023).
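To put that figure in concrete terms, the following sketch estimates how many honest students a detector with a 9% false-positive rate would wrongly flag. The 9% rate is the one OpenAI reported for its retired classifier; the class size is a hypothetical chosen for illustration.

```python
# Illustrative arithmetic only. The 9% false-positive rate is the figure
# OpenAI reported for its own (now retired) classifier; the class size
# used below is hypothetical.

def expected_false_flags(num_students: int, false_positive_rate: float = 0.09) -> float:
    """Expected number of honest, human-written submissions wrongly flagged as AI."""
    return num_students * false_positive_rate

# In a hypothetical 100-student course where every submission is human-written,
# a 9% false-positive rate wrongly flags, on average, about 9 students
# per assignment.
print(round(expected_false_flags(100)))  # prints 9
```

Even a single-digit false-positive rate scales into many wrongful accusations across a large course, which is why false positives, rather than missed AI text, are the more serious risk in an academic integrity context.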

In the context of academic integrity, the risks of false positives are significant (Klee 2023; Fowler 2023). Unreliable AI detection not only fails to improve academic integrity but may deepen existing inequalities. Non-native English speakers are flagged by AI detection tools at a disproportionate rate (Myers 2023). Other tools with legitimate academic applications, such as Grammarly, which are particularly valuable for writers with dyslexia and other learning disabilities, also increase the likelihood that writing will be flagged by AI detectors (Shapiro 2018; Steere 2023).

What to do?

All this leaves instructors in a challenging position, where the best recommendation on offer is to redesign their assessments. Redesigning assessments is difficult and time-consuming, and the new assessment methods often require more time to grade. Just as AI tools are beginning to make the process of writing faster and easier for everybody, it feels unfair that teachers of writing must spend more of their own precious time addressing the downsides and potential misuse of these tools.

This change in the digital writing landscape has been foisted upon us suddenly and leaves us all scrambling to respond. Even so, these tools are available to learners and there is no way to prevent students from using them — the chat is already out of the bag, so to speak. Any response will consume our time and energy, so it is important our efforts are spent in ways that will genuinely address the problem. Time spent chasing false positives created by inadequate and biased tools is time wasted and puts at risk our relationship with our students. Our time is better spent adapting our teaching and assessments to reflect the changing landscape of writing technology.

The most optimistic stance in the face of this challenge is to recognize that the vast majority of our students are honest and deeply invested in genuine learning. As with all academic dishonesty, we should resist letting the actions of a few bad actors color our impression of our extraordinary student population. Still, the temptation remains for well-meaning students to use AI to cut what appear to them to be minor corners. Students are at risk of harm if they are not educated about responsible use of these new technologies. If they use AI for coursework, one risk is that learners miss an opportunity to internalize important information or fail to master the topics at hand. Worse, an AI may feed them misinformation that they are unable to distinguish from research-backed conclusions. Either problem can lead to costly mistakes down the road (Brodkin 2023). AI is rapidly becoming pervasive in the world beyond the university's walls, and students deserve to be taught how to use AI tools thoughtfully and effectively.

One place to begin is to formulate a statement on the use of AI in your course and communicate it clearly to students. Syracuse University's Provost Office has published guidance and boilerplate language to include in course syllabi ("Syllabus Recommendations - Center for Learning and Student Success – Syracuse University," n.d.). Instructors across the globe are contributing syllabus policies to a shared repository (Eaton 2023). Addressing the issue directly and discussing it openly can help students make responsible decisions about using AI in their coursework. Ideally, an instructor would help students understand where AI can be helpful and where it can be harmful in their specific discipline: where it can speed up work and generate ideas, and where it is likely to lead to faulty conclusions.

Units across campus will continue to provide forums for faculty to discuss the implications of AI and approaches to take in response. With no reliable detection tools on the horizon, these conversations, both on campus and off, represent our best avenue to authentic assessment of our students and their work (McMurtrie 2023; “Authentic Assessment,” n.d.).

Online Learning Services will continue to evaluate new teaching and learning technologies and remains available to consult with faculty on teaching and technology. ITS will continue to provide access to effective tools where they are available. In addition to technological considerations, the Center for Teaching and Learning Excellence has pedagogical and policy resources for instructors on strategies they might take to improve their assessments (CTLE, n.d.).

AI-created content detection and Turnitin

In April 2023, Turnitin released an AI writing detector. This tool was enabled in the Syracuse University Turnitin system as a preview, during which there were no fees associated with its use. Turnitin initially reported low rates of false positives, but those claims have since been called into question (Chechitelli 2023; D'Agostino 2023). The detector's false negative rate was close to 40–50% in tests where AI-generated text was reworded by a human or by a separate AI paraphrasing tool (Weber-Wulff et al. 2023).

At the end of the free preview on December 31, 2023, Turnitin announced that it would begin charging an additional license fee for the AI detection tool. Given the concerns about its effectiveness, ITS elected not to license it. We are not alone in this choice; multiple R1 universities have made similar decisions (Brown 2023; Coley 2023; "Known Issue – Turnitin AI Writing Detection Unavailable – Center for Instructional Technology | The University of Alabama" 2023).

We are also unable to recommend any alternative technological solution. None of the AI detection tools currently available online are accurate enough to provide credible evidence in academic integrity investigations. The risk of misleading results harming students who are acting in good faith is too great. ITS is committed to thorough and transparent vetting of any new tools that emerge in the future. If a reliable tool for AI detection becomes available, ITS will evaluate the tool and consider recommending it to the Syracuse University academic community.


“Authentic Assessment.” n.d. Center for Innovative Teaching and Learning. Accessed February 27, 2024.

Brodkin, Jon. 2023. “Lawyer Cited 6 Fake Cases Made up by ChatGPT; Judge Calls It ‘Unprecedented.’” Ars Technica. May 30, 2023.

Brown, Joseph. 2023. “Why You Can’t Find Turnitin’s AI Writing Detection Tool.” The Institute for Learning and Teaching. April 26, 2023.

Chechitelli, Annie. 2023. “AI Writing Detection Update from Turnitin’s Chief Product Officer.” May 23, 2023.

Coley, Michael. 2023. “Guidance on AI Detection and Why We’re Disabling Turnitin’s AI Detector.” Vanderbilt University. August 16, 2023.

CTLE. n.d. “CTLE and CLASS Tips and Strategies for Faculty and Instructors: What We Know About ChatGPT and Options for Responding - CTLE Resources - Answers.” Accessed February 27, 2024.

D’Agostino, Susan. 2023. “Turnitin’s AI Detector: Higher-Than-Expected False Positives.” Inside Higher Ed. June 1, 2023.

Eaton, Lance. 2023. “Syllabi Policies for AI Generative Tools.” Google Docs. January 16, 2023.

Edwards, Benj. 2023. “Why AI Detectors Think the US Constitution Was Written by AI.” Ars Technica. July 14, 2023.

Elkhatat, Ahmed M., Khaled Elsaid, and Saeed Almeer. 2023. “Evaluating the Efficacy of AI Content Detection Tools in Differentiating between Human and AI-Generated Text.” International Journal for Educational Integrity 19 (1): 17.

Fowler, Geoffrey A. 2023. “Analysis | We Tested a New ChatGPT-Detector for Teachers. It Flagged an Innocent Student.” Washington Post, April 14, 2023.

Heikkilä, Melissa. 2022. “How to Spot AI-Generated Text.” MIT Technology Review. December 19, 2022.

Hendrik Kirchner, Jan, Lama Ahmad, Scott Aaronson, and Jan Leike. 2023. “New AI Classifier for Indicating AI-Written Text.” January 31, 2023.

Klee, Miles. 2023. “She Was Falsely Accused of Cheating With AI -- And She Won’t Be the Last.” Rolling Stone (blog). June 6, 2023.

“Known Issue – Turnitin AI Writing Detection Unavailable – Center for Instructional Technology | The University of Alabama.” 2023. August 1, 2023.

McMurtrie, Beth. 2023. “How ChatGPT Has Shaped Teaching — So Far.” The Chronicle of Higher Education. December 21, 2023.

Mills, Anna R. 2023. “Advice | ChatGPT Just Got Better. What Does That Mean for Our Writing Assignments?” The Chronicle of Higher Education. March 23, 2023.

Myers, Andrew. 2023. “AI-Detectors Biased Against Non-Native English Writers.” May 15, 2023.

Shapiro, Lisa Wood. 2018. “How Technology Helped Me Cheat Dyslexia.” Wired, June 18, 2018.

Steere, Elizabeth. 2023. “The Trouble With AI Writing Detection.” Inside Higher Ed. October 18, 2023.

“Syllabus Recommendations - Center for Learning and Student Success – Syracuse University.” n.d. Accessed February 27, 2024.

Weber-Wulff, Debora, Alla Anohina-Naumeca, Sonja Bjelobaba, Tomáš Foltýnek, Jean Guerrero-Dib, Olumide Popoola, Petr Šigut, and Lorna Waddington. 2023. “Testing of Detection Tools for AI-Generated Text.” International Journal for Educational Integrity 19 (1): 26.
