Making Courses Resistant to ChatGPT Plagiarism

by Dr. Sarah Ruth Jacobs

Since its public release by OpenAI in November 2022, ChatGPT has proven itself an advanced tool for gathering research and synthesizing text, though the exact sources it draws on are unknown. Despite its tendency to hallucinate, the chatbot has already been listed as an author on research papers and on over 200 Amazon Kindle titles. While OpenAI grants ownership of all ChatGPT output to the user, it is unclear whether the original authors of the chatbot’s source material could advance copyright claims against its output. ChatGPT’s ability to parse meaning and explain concepts using novel phrasing has started a new anti-plagiarism arms race. In his ultimately optimistic article “The College Essay Is Dead,” Stephen Marche gestures toward a future in which the humanities become inseparable from the sciences. Will entire curricula need to be reworked to evade the skills of ChatGPT, only for faculty to face the same process again when a new iteration of the chatbot is released?

Unfortunately, subtle forms of plagiarism with ChatGPT are untraceable. The chatbot might be used to organize topics, bypass traditional research methods, or generate ideas, leaving little hope for faculty seeking to determine the level of student input. As of this writing, ChatGPT seems capable of editing student writing without the output being detectable as AI-generated. For example, ChatGPT rewrote a portion of this article to contain more “varied sentences,” and the resulting text was deemed entirely human by the AI text detector ZeroGPT and “very unlikely AI-generated” by OpenAI’s text classifier. Even more alarming, when ChatGPT was asked to give a feminist analysis of a novel and the output was rephrased using Quill, a rephrasing tool, the resulting text was deemed entirely human by ZeroGPT, while OpenAI’s classifier said the text’s origin was “unclear.” Sophisticated students who use ChatGPT will be able to evaluate their own work for AI plagiarism and adjust it accordingly.

Sam Altman, the CEO of OpenAI, has confirmed that the company plans to help schools more definitively spot plagiarism, perhaps by “watermarking” text: embedding subtle word patterns that signal the text is AI-generated. In the same article, however, he acknowledges that those who are “determined” will always find ways to escape detection.
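To make the watermarking idea concrete, here is a toy sketch in Python. It is loosely modeled on published academic proposals (so-called “green list” token schemes), not on any scheme OpenAI has described; the function name, the hashing rule, and the 0.5 green ratio are all illustrative assumptions.

```python
import hashlib
import math

def green_fraction(tokens, green_ratio=0.5):
    """Toy watermark detector (illustrative only). Each token is assigned
    to a pseudorandom 'green' set seeded by the token before it; a
    watermarking generator would quietly prefer green tokens, so ordinary
    human text should score near green_ratio while watermarked text
    scores well above it."""
    hits = 0
    for prev, cur in zip(tokens, tokens[1:]):
        # Hash the (previous token, current token) pair; the current token
        # counts as "green" if the hash lands in the lower green_ratio
        # slice of the hash space.
        digest = hashlib.sha256((prev + "\x00" + cur).encode()).digest()
        if digest[0] < 256 * green_ratio:
            hits += 1
    n = len(tokens) - 1
    # z-score against the null hypothesis that unwatermarked text hits
    # the green set at the base rate purely by chance
    z = (hits - green_ratio * n) / math.sqrt(n * green_ratio * (1 - green_ratio))
    return hits / n, z

fraction, z = green_fraction("the quick brown fox jumps over the lazy dog".split())
print(f"green fraction: {fraction:.2f}, z-score: {z:.2f}")
```

A generator that quietly biases its sampling toward the green set leaves no visible trace in the prose, yet a detector that knows the seeding rule can flag a statistically improbable green fraction over a few hundred tokens. As Altman’s caveat suggests, though, paraphrasing the output can wash much of that signal back out.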

There are some approaches that faculty can take to protect academic integrity now that Pandora’s box has been opened:

  • Openly discuss the ethics of using ChatGPT in a variety of ways. A course policy that specifies allowable uses of the chatbot — and/or penalties for using it — can help to manage everyone’s expectations. For example, a course might allow for certain uses of ChatGPT, as long as the work includes the chat transcript and an explanation of how the chatbot’s output was evaluated, coordinated with other research, and fact-checked.
  • Test assignments with ChatGPT. While the chatbot is excellent at summarizing well-established concepts, it is less proficient at complex, novel analysis of individual texts or sections of texts. Entering an assignment prompt into ChatGPT can be a great way for faculty to gauge and then leverage the tool’s blind spots. If a student’s work does not deeply engage with an assignment, it may not be possible to prove that the work is AI-generated, but the student can still lose significant credit, which should hopefully encourage him or her to do the needed deep work.
  • Flip the classroom. Having students do assignments in the classroom creates an opportunity for faculty to assist them, and it also establishes certain baselines for each student’s work. When a student’s in-class work bears little resemblance to his or her out-of-class work, perhaps a non-punitive, open-minded dialogue will help the student and the faculty member to determine if the student is working in a way that serves his or her best interests (and that is consistent with course expectations).
  • Assign more original research, timely issues, hands-on work, or projects requiring personalized/localized knowledge. By asking students to apply course concepts via appropriately challenging original research, faculty can breathe more life into the course material and reduce the potential for plagiarism. Additionally, ChatGPT’s training data ends in September 2021, so asking students to apply concepts to recent events or publications will, at least currently, stump the chatbot.
  • Use AI text detection tools with caution. Unlike traditional plagiarism, which usually involves taking exact wording from or lightly rephrasing a source, AI plagiarism is often not provable, and faculty members and administrators may find themselves on the defensive when they cannot point to any original sources or wording, even when AI detection tools are on their side. Students may have complex reasons for using AI, such as a sense of inadequacy or life circumstances that make completing work on time very difficult. A compassionate approach that seeks to understand and address the root cause of the suspected AI use, rather than a one-size-fits-all lecture or punishment, most likely offers the best hope for positive change.

When it comes to ChatGPT, the Cassandras seem to outnumber the Pollyannas, and probably rightly so. Prominent among the naysayers, Noam Chomsky is quoted as calling the current version of ChatGPT “high-tech plagiarism” and “a way of avoiding learning.” Numerous school districts have already banned the chatbot, which is admittedly prone to misinformation, baked-in human biases, and other problems. Other scholars, such as Cathy N. Davidson, acknowledge that faculty cannot escape this new technology, so the best they can do is collaborate with students to interrogate and explore ChatGPT as a flawed tool. Provost Andrew A. Workman of Widener University states that “to the extent that these tools can improve productivity, we have had a robust discussion on campus about how to use this and other AI chatbots to enhance our students’ abilities to learn and function in their lives after graduation. This will include teaching how the technology works, its limitations, and also its best use in accomplishing intellectual work.” Will there be a day when an AI chatbot can at least acknowledge and inform users of its own biases, even if it cannot remedy them? Will AI one day play a part in educating people and lifting them out of poverty? Or will it usher humanity further down a path of increasing inequality? These are large questions, but they resonate in the context of higher education. Students who are taught to critically engage with tools like ChatGPT will be far better equipped for the future than those who are simply told, “don’t touch.”


Disclaimer: HigherEdJobs encourages free discourse and expression of issues while striving for accurate presentation to our audience. A guest opinion serves as an avenue to address and explore important topics, for authors to impart their expertise to our higher education audience and to challenge readers to consider points of view that could be outside of their comfort zone. The viewpoints, beliefs, or opinions expressed in the above piece are those of the author(s) and don’t imply endorsement by HigherEdJobs.


