Marley Stevens, a junior at the University of North Georgia, found herself at the center of a viral storm when she was accused of academic misconduct due to her use of grammar-checking software. Her story, which she described as a “debacle,” began when a professor flagged her paper as robot-written after it was processed through an AI-detection system. Despite her insistence that the work was solely her own, Stevens faced severe consequences, including a zero grade on the paper, which led to a decline in her final course grade, threatening her HOPE Scholarship eligibility and landing her on academic probation.
Key takeaways:
- The use of grammar-checking tools alongside AI detection systems raises concerns about fairness in college.
- Marley Stevens’ case illustrates the serious consequences of being wrongly accused by AI software.
- Universities need clearer rules about when and how students can use AI tools to avoid misunderstandings.
- Tech companies, schools, and students need to work together to solve issues with AI detection and academic honesty.
Stevens, determined to make the story public, took it to TikTok, EdSurge reports, posting a video warning fellow students about the potential traps of using grammar-checking software alongside AI-detection systems. Her initial video garnered over 5.5 million views and sparked a series of follow-up clips in which she chronicled her fight against what she viewed as flawed AI-detection tools increasingly adopted by colleges and professors.
Facing the Fallout: Academic Penalties Revealed
Stevens suffered severe consequences as a result of the incident. Beyond the academic penalties, she also incurred financial costs: she was required to attend a seminar on cheating and pay a $105 fee for it. The university declined to reveal its AI-detection rules but restated its commitment to academic integrity.
The case raises pressing questions about the role of AI in education. As grammar-checking features become more deeply integrated into everyday writing tools, distinguishing legitimate assistance from academic misconduct grows ever trickier. Stevens questioned the fairness of penalizing students for using widely available tools like Grammarly, which many professors themselves recommend:
“I’ve had other teachers at this same university recommend that I use [Grammarly] for papers. So are they trying to tell us that we can’t use autocorrect or spell checkers or anything? What do they want us to do, type it into, like, a Notes app and turn it in that way? My whole thing is that AI detectors are garbage and there’s not much that we as students can do about it. And that’s not fair because we do all this work and pay all this money to go to college, and then an AI detector can pretty much screw up your whole college career.”
Industry’s Response to the Situation
Stevens likely didn't anticipate such a powerful reaction to her video. The student community responded immediately, and the parties involved, including the college and Grammarly, moved quickly to address the dispute and ease the tensions surrounding the incident.
In response to the controversy, the University of North Georgia issued a cautionary email to all students, highlighting the potential consequences of using generative AI tools like Grammarly.
The professor involved in Stevens' case cited Copyleaks, another AI-based detection tool, which also flagged her paper as bot-written. Stevens, however, says that when she later ran the same work through Copyleaks, the tool reversed its initial judgment and labeled the text human-generated. When asked for comment, Copyleaks' representatives declined to offer any assessment.
Stevens' plight resonated with students across the country who shared similar experiences of being falsely accused of cheating by AI-detection software. Moved by her story, supporters rallied behind her, contributing to a GoFundMe campaign intended to cover her scholarship loss and potential legal expenses. Notably, Grammarly, the very tool implicated in the incident, donated $4,000 to her cause and offered her a role as a student ambassador, signaling a shift in its approach to the issue.
Grammarly's motives are fairly transparent. Along with helping a student who inadvertently got into trouble for using its tool, the company is clearly trying to repair its reputation and walk back the warning that made the video go viral in the first place:
“If you have a paper, essay, discussion post, anything that is getting submitted to TurnItIn, uninstall Grammarly right now.”
The debate over AI-detection tools extends beyond Stevens' case. Turnitin, the major plagiarism-detection service involved in the incident, has acknowledged the limits of AI in accurately identifying bot-written content. Annie Chechitelli, Turnitin's chief product officer, emphasized that teachers should exercise discretion and engage students in dialogue rather than rely solely on AI-generated flags:
“A lot of institutions at the faculty level are unaware of how often these AI-detection services are wrong. We want to make sure that institutions are aware of just how dangerous having these AI detectors as the single source of truth can be. We very much had to train the teachers that this is not proof that the student cheated. We’ve always said the teacher needs to make a decision.”
Despite efforts to mitigate the impact of AI-detection errors, concerns about their reliability keep growing, putting students, educators, and, inevitably, the companies themselves in a difficult position. While some demand answers, others try to offer explanations and fixes.
According to a Turnitin spokesperson, common grammar-checking tools do not trigger alarms in the company's internal testing. Jenny Maxwell of Grammarly notes that even a detection system that is correct 98% of the time wrongly flags roughly 2% of papers. Given that a single university may receive 50,000 student papers a year, if every instructor ran an AI detector, about 1,000 papers would be mistakenly labeled as cheating.
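The arithmetic behind Maxwell's estimate can be sketched in a few lines. This is only an illustration of the figures quoted above (98% accuracy, 50,000 papers per year); the function name is hypothetical, and the model assumes every submission is genuinely human-written and that errors fall uniformly:

```python
def expected_false_flags(papers_per_year: int, detector_accuracy: float) -> int:
    """Estimate how many human-written papers a detector would wrongly flag,
    assuming all submissions are genuine and errors are uniformly distributed."""
    false_positive_rate = 1.0 - detector_accuracy
    return round(papers_per_year * false_positive_rate)

# Figures cited in the article: 50,000 papers per year, 98% accuracy.
print(expected_false_flags(50_000, 0.98))  # → 1000
```

Even a seemingly small 2% error rate scales into four-digit numbers of wrongful accusations once applied campus-wide, which is the crux of Maxwell's point.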
Grammarly initiated a meeting with the University of North Georgia to discuss the concerns raised by Stevens' case. Although the university initially removed Grammarly from its list of recommended resources after the viral TikTok videos, representatives from both sides met to seek common ground on the appropriate use of grammar-checking tools alongside AI-detection systems. Ultimately, the University of North Georgia reinstated Grammarly on the list. Maxwell sounded encouraged by the outcome.
The outcome reflects a recognition of the valuable role Grammarly and similar tools play in supporting student writing, along with the need for clearer guidelines and communication about their use in academic contexts.