The rapid evolution of generative AI last winter turned academia upside down. Many educators feared the new tools would lead to rampant cheating, so they took measures to prevent that nightmare. In some cases, AI tools like ChatGPT were simply banned from school networks; in others, strict AI detection tools were introduced to handle the situation. These technologies, however, turned out to be far less reliable than expected, and many students were falsely accused of using AI in their academic writing. This trend, along with a fair share of student complaints, forced universities to rethink their policies and find a new approach to AI in written assignments.
- AI detectors proved to be inaccurate, with many students reporting that they were falsely accused of cheating and AI use in their work
- More universities are declining to use AI detection tools in the educational process, trying instead to incorporate AI ethically so that it benefits the curriculum
- Educators are advised to redesign their courses and teach students how to use AI as an aid rather than as a way to complete entire assignments
Previously, when AI use was frowned upon by the academic community, special tools designed to detect it began to flourish. As ChatGPT and similar technologies kept emerging, teachers saw them as a threat to academic writing. What student wouldn't be tempted by the opportunity to have their homework completed by AI? AI detectors therefore came into wide use to catch cheaters benefiting from recent technological advancements.
The Downfall of AI Detection
As time passed, though, it turned out that AI detection wasn't the most reliable technology and suffered from much the same problems as the well-known ChatGPT. The detectors produced a lot of false positives, as students reported their work being labelled "AI-generated" despite having written every word themselves.
False accusations aren't new – we have covered a few situations of the sort. There were stories of a student being kept at university for three hours because professors thought they had used ChatGPT to cheat; even though the student tried to talk things over, no one believed them. In another case, students were given a zero simply for using Grammarly in their assignments. It turned out that detectors could flag almost anything as AI-generated.
Due to many conflicts like these, OpenAI even shut down its own tool, AI Classifier, because it fell short of expectations. Many universities moved in the same direction. The University of Pittsburgh, for example, disabled the AI detection feature included in its Turnitin package. Instead, it created a page dedicated to teaching educators how to use ChatGPT and similar resources in the academic process.
Vanderbilt University took the same step and discontinued Turnitin's new AI detection feature. It emphasized that, since reliable AI detection wasn't yet possible, it's important to keep a balance between "mitigating inappropriate AI usage while also being mindful of AI's benefits in the teaching and learning process."
AcademicHelp asked Jon Gillham, Founder of Originality.AI, and Andrew Rains, Co-Founder of Passed.ai, what they think of these changes and what their tools are going to do in light of the new policies.
A*Help: Do you think that these updates from universities were necessary?
Jon: We understand that false positives lead to false accusations for students, which is the very reason we created our secondary verification tools. Our aim is not merely to provide an opaque score, but to give educators as much data as possible about a student's work, allowing them to make an informed decision based on the available data and their own intuition and expertise.
A*Help: Taking into account that OpenAI shut down its AI detection tool, and that universities are closing down their AI detection features, will other detection services follow the same pattern?
Andrew: The text classifier provided by OpenAI was never a 100% solution. As for Originality.AI and Passed.AI, we continue to invest heavily in both. We're collectively striving to solve AI detection with secondary verification and to offer the tools that educators absolutely need going forward.
A*Help: If newly introduced policies state that AI detectors are not foolproof then how can integrity be maintained in the classroom?
Jon: It's hard to answer, because different educational institutions have different experiences with certain AI detectors. For my part, I can say that Originality has invested heavily in training and re-training the AI model to reduce false positives and improve accuracy. At Passed.AI, we're focused on providing best-in-class secondary data verification on top of the AI detection. We believe the combination of our AI detection, the document audit, and the document replay gives the educator as close to full transparency as possible.
A*Help: What does it take for AI detectors to prove that they can protect the authenticity of the written work?
Andrew: To gain trust as a protector of authenticity in written work, an AI detector needs to stand up to rigorous testing against a large data set.
What is the Direction Now?
As of today, universities tend to look into incorporating AI technologies into the learning process rather than simply ignoring them. Even though this process is not easy, as educators are forced to rework their usual teaching methods as well as their curricula, it is seen as a much-needed adaptation. Tools like ChatGPT are not going anywhere, and since there's no effective way to prevent students from using them, the better call is to turn them to students' advantage.
This change of course has led to many discussions about the best way to implement AI in studying. Of course, many point out that this needs to be done with caution. Most teachers are now turning to tasks focused on critical thinking: artificial intelligence can be used to inspire students and give them ideas instead of doing all the work for them.
Recommendations for AI Use in the Classroom
Aside from focusing more on unique and thought-provoking assignments, there are a number of other recommendations that can help integrate AI into the classroom while keeping academic integrity in place:
- Change the format of written tasks. Have students write on topics that were discussed in class that same day.
- Teachers can also assign writing tasks directly in the classroom. Doing this at the beginning of the semester lets them get accustomed to each student's style, giving educators an opportunity to quickly recognize cheating if it happens later.
- Discuss current topics in the classroom. AI chatbots do not usually have access to the most recent data, so this minimises the possibility of their use in such a setting.
- Educators should hold conversations with students about the ethical use of AI. If anyone is caught cheating, it's also advisable to talk the situation over rather than simply accusing students and giving them a zero on their assignments.
The rapid evolution of generative AI and its subsequent influence on the academic community has been nothing short of transformative. While initial fears pointed towards the potential misuse of such technologies, recent shifts in university policies signal a change in perspective. As the landscape of education continues to be reshaped by technology, an emphasis on academic integrity, combined with a proactive approach towards AI, offers a balanced pathway for the future. Embracing the technology and leveraging its strengths, while fostering an environment of critical thinking, seems to be the golden ticket for the modern world of academia.