Overview:
Generative AI is reshaping writing instruction, raising questions about academic integrity, authorship, and the ethical integration of technology in the classroom.
Imagine that you’ve just assigned your students a reflective essay on something they’ve read or on a project they recently completed. As you sift through the submissions, you notice some marked differences from previous years. For instance, several essays are polished but generic, with an overly refined tone that doesn’t match how the students communicate verbally in class.
One student reluctantly admits to using ChatGPT to “get started,” and others follow suit with admissions of their own. They argue it’s still their work since they modified GPT’s outputs, which raises a dilemma: such work blurs the lines of both authorship and academic integrity.
In recent years, the integration of artificial intelligence (AI) tools like ChatGPT into educational settings has sparked heated debate among educators, many of whom worry about what this technology means for students and how to navigate the ethical gray areas it presents.
The Rise of Generative AI in Education
Initially, I believed that the specialized nature of my writing assignments would render AI assistance less effective. However, as the tool evolved, I began noticing subtle yet significant changes in student submissions. Assignments exhibited fewer grammatical errors and a more polished structure. At the same time, some writing lacked the nuance of a “human touch,” presenting information that felt impersonal and formulaic. Instances of incorrect or fabricated content also emerged, either without proper citations or with citations to sources that were themselves fabricated. These anomalies led me to consider the extent to which students were relying on AI tools like ChatGPT.
Challenges in Detection and Academic Integrity
One of my biggest concerns as an educator was how the use of such tools fit within the confines of academic integrity policies. In response to such concerns, AI-detection tools emerged; however, many of them struggled to reliably identify AI-generated content. Turnitin, the popular plagiarism-detection software company, released an AI-detection tool in April 2023, boasting a 98% accuracy rate. While such tools initially seemed promising, concerns about false positives and biases in that accuracy rate grew. In fact, a story published in The Washington Post details educator tests of the tool, which flagged content that had not been generated by AI.
OpenAI, the company that created ChatGPT, even offers its own AI-detection tool; however, like many of the others, it carries a 10% chance of false positives, highlighting the ongoing challenge of accurately discerning AI-assisted writing.
Using AI as a Learning Tool
While concerns about AI tools like ChatGPT are certainly valid, it’s essential to recognize their potential benefits. We also likely need to come to terms with the fact that Generative AI is not going away; instead, we may need to consider how best to integrate such tools into the classroom. For instance, I have long recommended tools like Grammarly to my students for improving grammar and style. In similar ways, AI can enhance learning when used responsibly.
Some educators advocate for integrating AI into the learning process. For example, according to a New York Times article, an English teacher from Oregon utilized ChatGPT to generate essay outlines for “The Yellow Wallpaper,” which students then expanded upon in handwritten essays. This approach allowed students to refine their ideas before drafting, demonstrating a constructive use of AI in education.
However, the capabilities of AI tools distinguish them from traditional writing aids, raising concerns about their impact on various professions. A research paper released in March 2023 by OpenAI and the University of Pennsylvania identified fields such as tax preparation, writing, web design, mathematics, and financial analysis as highly susceptible to automation by AI, with exposure rates nearing 100%.
Shocking as these findings may seem, they underscore the need for a balanced approach to AI integration, considering both its advantages and drawbacks. We must understand, too, that ethical considerations are paramount, particularly regarding the extent to which automation should replace human roles.
In my courses at Carnegie Mellon University, I have tried to address these challenges by incorporating prompt engineering into the curriculum along with discussions on ethical use. By equipping students with these skills, we, as educators, aim to prepare them for a future where AI plays an integral role across industries. In the past few years, it has been crucial for me to foster a mindset that embraces technological advancements while upholding ethical standards, ensuring that AI serves as a tool for enhancement.
However, my embrace of Generative AI in the classroom has not come easily. I was initially resistant, worried about its implications for academic integrity and student engagement; in some ways, I still am. And despite my efforts to teach students how to use AI ethically and effectively, challenges remain. As the technology evolves, I encounter new hurdles, from keeping up with AI advancements to refining assignments that encourage critical thinking rather than reliance on AI.
Ultimately, educators and institutions must remain vigilant, adapting curricula and policies to navigate the complexities introduced by these technologies. Through responsible integration and ethical guidance, we can harness the benefits of AI while mitigating its risks, preparing students to thrive in an increasingly automated world.
Haylee Massaro is an Associate Teaching Professor at Carnegie Mellon University’s Heinz College of Information Systems and Public Policy, where she teaches graduate-level courses in writing and communications, with research focused on communication best practices in Information Systems Management as well as emerging ideas in Generative AI. Prior to joining Carnegie Mellon, Haylee was a writing instructor working primarily with deployed military service members through the University of Maryland’s distance learning program. She also has several years’ experience in arts management, working within Carnegie Museums of Art and Natural History’s Visitor Experience Department. Previously, she dedicated most of her time to the field and study of education, working with adjudicated youth and as an active participant in literacy programming through AmeriCorps. Beyond her work and research in education and communications, Haylee has spent time as a freelance writer and editor, contributing to textbooks, journals, and education-related platforms.