Usually, students are the ones breaching academic integrity rules by using AI irresponsibly. But “how the turntables!” One student noticed that the feedback on their paper was suspiciously vague and, after a closer look, concluded that it had been written by ChatGPT. The catch: the grade was based on this AI assessment rather than on the professor’s own judgment.


Key Takeaways:

  • One professor used ChatGPT to assess papers and copy-pasted the results, as revealed by a GPTZero report.
  • Teachers can certainly use AI to streamline their work routine, but they should keep the human touch in their assessments.
  • Educators across all types of institutions should receive at least basic AI training, both to implement it effectively and to teach students how to use it responsibly.

Recently, a student revealed a surprising discovery: their professor was using ChatGPT to grade assignments. The student noticed repeated patterns and generic feedback in their graded essays, which led to the suspicion, and eventual confirmation, that an AI was involved in the evaluation process. As such stories usually do, this one sparked a heated debate on Reddit.

Is It Okay for Teachers to Use AI?

On one hand, using AI can help teachers manage their workload and streamline daily tasks. However, the issue arises when the AI-generated feedback is vague and abstract, as noted by the student.

“The feedback is a whole load of nothing minus the punctuation error in that one sentence. Everything else was both vague and didn’t actually need improvement. As for the mark itself, while not terrible, I was not happy as I delivered a very well structured report.”

The professor’s reliance on ChatGPT for grading does not necessarily reflect the quality of the student’s work. Comments like “Your essay is well-written but could use more depth” don’t provide actionable guidance for improvement. Another Reddit commenter expressed frustration, saying:

“This is going to accelerate the dead education theory, where students will us AI to make assignments, and professors are going to use AI to grade them.”

AI can be an efficient assistant, helping with repetitive tasks and providing initial assessments. Yet this approach becomes problematic if it turns into the primary method of evaluation. The nuances of a student’s argument, the creativity in their expression, and the subtlety of their analysis are elements that AI might not (or definitely will not) fully grasp. This lack of personalized feedback can leave students feeling shortchanged and underappreciated, which was the case in the original thread.

If It’s Okay – Then What’s the Problem?

The main concern is responsible use of AI. While it’s acceptable to use AI as a supportive tool, overreliance on it can lead to problems. For instance, AI might miss the nuances and creativity that a human teacher would catch. Students might feel undervalued if they think their hard work is being judged by a machine. The lack of personal engagement and constructive criticism can hinder students’ growth and learning experience.

Another Reddit user pointed out:

“I know it’s hard when you’ve put so much work into something & it feels like it’s only been looked at for a few minutes. Unfortunately, this is standard across the industry (particularly in undergraduate degrees). People marking above & beyond that are usually doing extra unpaid hours to fit it all in.”

What’s clear here is the need for meaningful, personalized feedback that AI often cannot provide. The absence of a human touch in grading can make students feel disconnected from their educators and diminish the educational experience. AI should improve the educational process, not replace the human element that learning and development depend on.

What Should Schools and Institutions Do Then?

First, AI should not be a taboo topic in education. Transparency is key. Schools should openly discuss how AI can be integrated into the learning process. Teachers need proper training to understand these tools and use them effectively. They should also teach students how to use AI responsibly for their own learning. Educational institutions can leverage AI without compromising the quality of education, which has been the central concern ever since ChatGPT became popular.

One of the professors spoke up under the thread, offering another perspective that perfectly summarizes the path forward:

“I’m a high school teacher and I find AI is an outstanding teaching assistant. I use it to help draft feedback, assist in checking assignments – although – I never get it to grade them for me. I am the human.
I grade the assignment based on the students work, but I do use AI to help write the feedback. I then modify the feedback in my own words – but it definitely saves so much time, and also allows the student to get more personalised feedback than would have ever been possible in the past. (…) AI isn’t going away, it’s here to stay. So I am teaching my students about the world they are entering – and many of them need to be aware of how to use AI in an ethical and appropriate manner.”

Educators should be encouraged to use AI as an aid rather than a replacement for their judgment. AI can handle initial assessments and provide a foundation, but the final evaluation should come from a teacher who understands the context and the student’s unique perspective – this is the baseline. Proper training can help teachers use AI tools wisely and still provide valuable insights into their students’ work. Schools can develop clear guidelines on how AI should be used in grading and feedback to maintain transparency and trust in the process. As the commenter above said, “AI is here to stay,” and institutions had better learn how to deal with it.


