Imagine you’ve just spent months, maybe even years, sweating over a groundbreaking paper. You’re convinced it’s going to shake things up in your field. You hit “submit,” and then you wait. And wait, until finally the feedback comes in. Three reviewers are okay with it, but there’s this one odd comment. It makes you feel like someone wrote it without thinking much about your ideas or their contribution to the field. This is where our story begins.


Key Takeaways:

  • AI may influence peer reviews, leading to generic feedback.
  • Address vague feedback by seeking clarity and consulting mentors.
  • Engage editors for clear guidance on non-specific or unusual reviews.

Peer review, the required checkpoint in academic publishing, is supposed to make sure that only the right and worthy stuff makes it through. It’s like the bouncer at the club, only letting in cool ideas. But what happens when the bouncer might be a bot? That’s the oddball situation one Reddit user found themselves in. After submitting a paper to a prestigious Elsevier journal, they received a review so vague and generic that it seemed like it might have been created by AI.

The author had reason to think so. They took their paper, ran it through an AI tool designed to imitate expert reviews, and got back comments eerily similar to the mysterious review. “So what?” you may think. The problem is that following the AI’s advice led the author to add 2,000 more words to the paper, which they believed only weakened their argument.

“My problem is that, if I try to address these comments from an AI, I have to add a lot of extra content in the manuscript because the comments are not specific, which I did and I have an extra 2k words and I personally believe that they are useless and damage the academic integrity of the paper.”

The situation didn’t sit right with our hero, who wondered if academic integrity took a backseat when robots joined the editorial board. Their story opens the can of worms around artificial intelligence and academic rigor even wider, since this isn’t the first time AI has left its mark on scholarly journals. Is the future of peer review automated, and if so, is that necessarily a bad thing? It’s already a spicy topic, and this Reddit post only adds fuel to the fire.

Shared Struggles

To their surprise (or perhaps as they anticipated), the Reddit post author wasn’t alone in trying to make sense of that bizarre review. Quite a few discussion participants shared that they’d had similar experiences.

One commenter suggested a clever way out: “If the comments are not specific, then address them in a specific way that saves you time and effort.” This basically means “toss in a couple of sentences to acknowledge it and move on.” It’s like when you’re asked about your plans for the future at a Thanksgiving dinner. You don’t present your five-year plan. You say, “Just exploring my options,” and pass the potatoes.

Another commenter shared a strange encounter of their own: an academic referee who suspected their counterpart of using AI.

“I recently refereed a paper where I’m almost positive the other referee used AI (the editor sent both of us a copy of each other’s reports). The summary sounded very AI-esque so I did some similar sleuthing to you and discovered that Claude (but not ChatGPT) popped out something akin to what they wrote. Imagine if both referees used it! On the other hand, getting two almost identical reports would be easier to deal with…”

Why not? If both reports were AI-generated and nearly identical, at least there would be less to reconcile.

Handling AI-Generated Reviews: What Redditors Think

While some of the replies were general musings and discussion, other users offered practical advice on how to respond and what to do next. One voice of reason says,

“So email the editor and say ‘I appreciate Reviewer 4’s feedback, but I’m having difficulty implementing some of their suggestions due to lack of specificity. Would it be possible for you and I to discuss their suggestions?’”

This sounds like really good advice. If your review feels like it was written by Siri’s less helpful cousin, it’s okay to ask for clarity. This approach can help you find the right solution while also presenting yourself as a diligent and hardworking scholar.

Some folks cautioned against jumping to the AI blame game too quickly. After all, human reviewers, not just ChatGPT, can be as unpredictable as a twist in a “Game of Thrones” episode:

“The thing is, the review being generic isn’t strong evidence that this was ChatGPT. It’s unfortunate, but some reviewers do half-ass their reviews. Personally, I would drop the whole ChatGPT angle. You don’t have strong enough evidence for your suspicions and it will be extremely difficult for the journal to find out for sure. If the feedback is so vague that you can’t respond to it, then you can ask the editor for guidance or additional clarification. As others have said, if you can come up with a way to address the generic feedback then you should do just that.”

The advice is clear: focus on what you can control, like responding to the feedback, even if it’s vague. And before you do anything drastic, like rewriting half your paper, maybe have a chat with someone who’s been in the trenches longer than you, like a mentor or advisor.

“If you have substantive disagreements with any of the critiques you can also address that in your response to the editor; for example if you are accused of not doing something you did do, you can point out where you did it. Though usually in that case it’s smart to throw the reviewer a bone and do something small to address their comment. Also, don’t do anything without talking to your advisor first.”

Lastly, if you can make sense of the review, even if it requires some effort and creativity, why not just roll with it? Sometimes, the path of least resistance is also the path to getting your work out there, even if it means dealing with reviews that leave you scratching your head.

The Main Point

So, here we are, at the crossroads of innovation and tradition, wondering if our academic gatekeepers might soon include algorithms alongside humans. It’s a wild thought, but aren’t we living in times where cars drive themselves and fridges can order groceries? Why not have an AI that thinks it can critique academic papers? The future is now, and it’s as unpredictable as ever. But one thing’s for sure: it’s never going to be dull. Even in higher education.

