Blog Feb 23, 2026 | Disruption to Direction

From Disruption to Direction: Reasserting Human Judgment in Peer Review in the Age of AI


Ashutosh Ghildiyal, Vice President – Growth and Strategy

Peer review faces unprecedented pressure. Submission volumes continue to rise, reviewer fatigue is widespread, integrity risks are multiplying, and AI is now embedded across editorial workflows. In this environment, it is tempting to frame the future of peer review as a race toward automation, efficiency, and throughput.

That would be a mistake.

In the age of AI, human peer review is non-negotiable—but it must also evolve.

The central question facing publishers today is no longer whether AI should be used in peer review (it already is), but whether we are designing systems that protect and amplify human attention and judgment—or quietly undermine them through overload, fragmentation, and misplaced automation.

When it comes to content—especially content intended to inspire, inform, and build trust—review is the most critical step, whether that content is created by humans, by AI, or through AI with human guidance. In the age of AI, research integrity ultimately depends on whether we have truly applied ourselves. At its best, peer review protects this depth of human engagement with research. The future of publishing lies in a hybrid approach: AI as an assistant handling the mechanical aspects of peer review within structured workflows, and humans as the context-givers, readers, and reviewers. Rethinking peer review in the AI era means embracing innovation without surrendering the human values that define our work.

Ashutosh Ghildiyal, Vice President – Growth and Strategy, Integra

1. AI Accelerates—but Cannot Replace Human Attention

AI can accelerate checks, surface patterns across large submission volumes, and flag risks in text, images, references, and metadata. Used well, it removes friction from the review process and frees humans from repetitive, low-value tasks.

What AI cannot do is replace human attention.

It cannot interpret nuance, understand disciplinary context, or take responsibility for editorial decisions. It cannot weigh novelty against rigor, assess coherence across an argument, or judge whether something “feels right” within a field. Most importantly, it cannot be accountable.

The credibility of scholarly publishing still rests on expert reviewers applying informed reasoning, contextual understanding, and responsibility for editorial decisions. That foundation has not changed, even as the tools around it have.

In a system increasingly optimized for speed, attention itself has become the scarcest and most valuable resource.

2. Human Peer Review Must Evolve—not Be Romanticized

Defending human peer review does not mean defending how it has always been done.

Traditional peer review systems were designed for a different era: lower submission volumes, slower publication cycles, and implicit trust. Today, those assumptions no longer hold. Reviewer identities can be manipulated, conflicts of interest are harder to detect, and editorial teams are stretched thin.

At the same time, peer review often fails to engage reviewers meaningfully. Reviewing is largely voluntary, under-recognized, and poorly supported—treated as an act of academic goodwill rather than a core professional activity requiring structure, training, and cognitive support.

What must evolve is the environment in which human judgment operates.

Human peer review must now function within controlled, well-designed systems that:

  • Safeguard against manipulation, conflicts of interest, and reviewer fraud
  • Provide structured workflows rather than informal, fragmented handoffs
  • Enable transparency and auditability across editorial decisions
  • Reduce cognitive overload, allowing reviewers to focus on deep evaluation rather than administrative noise

This evolution is not about constraining editors and reviewers. It is about protecting the conditions under which good judgment, sustained focus, and meaningful engagement are possible.

3. Designing for Focus, Not Just Participation

Much of the current conversation about peer review focuses on participation: how to recruit more reviewers, reduce declines, and shorten turnaround times. While important, this framing misses a deeper issue.

The problem is not only reviewer scarcity—it is the failure to support reviewers as knowledge workers whose effectiveness depends on sustained attention and cognitive flow.

Quality peer review depends on focus. Yet today’s review environments are fragmented, interruption-heavy, and cognitively taxing. Reviewers are asked to perform deep intellectual work in systems that actively work against concentration.

AI, when thoughtfully designed, can help reverse this dynamic—not by replacing judgment, but by supporting the mental conditions required for it. This means minimizing distractions, removing administrative friction, and enabling reviewers to engage with manuscripts in structured, intentional bursts of deep work rather than scattered, reactive sessions.

4. AI as Signal-Giver, Not Decision-Maker

One of the most important design principles for the future of peer review is clarity of roles.

AI should function as an assistive layer—a signal-giver rather than a decision-maker or opinion-giver. Its role is to highlight where attention is needed, not to determine outcomes.

Used well, AI can:

  • Perform mechanical checks consistently and transparently
  • Surface anomalies or risks that warrant human review
  • Support communication, organization, and workflow coordination

Used poorly, AI:

  • Flattens judgment
  • Obscures accountability
  • Creates the illusion of rigor without its substance

When AI begins to shape decisions rather than support them, trust erodes—quietly at first, then all at once.

5. Integrity at Scale Requires System Design, Not Heroics

For too long, peer review has relied on individual effort and professional goodwill to compensate for weak systems. Editors and reviewers have been expected to defend integrity through vigilance alone.

That model is no longer sustainable.

We cannot scale trust by exhausting people.

Integrity must be designed into workflows, not enforced through burnout. Controlled environments matter because they make good behavior easier and bad behavior harder. They provide traceability when decisions are questioned, enable learning and improvement over time, and reduce reliance on individual heroics.

This is how peer review scales responsibly: not by replacing humans, but by supporting them with systems worthy of their expertise.

6. From Efficiency to Trust: Reframing the Objective

Much of the AI discourse in peer review is framed around efficiency—faster decisions, lower costs, higher throughput.

But efficiency is not the ultimate objective of peer review. Trust is.

  • Trust from authors that their work is evaluated fairly and thoughtfully
  • Trust from reviewers that their time, focus, and expertise are respected
  • Trust from readers that published research is credible and meaningful

Efficiency that undermines trust is not progress. Efficiency that reinforces trust is. That distinction should guide every decision publishers make about AI adoption.

7. From Disruption to Direction

The future of peer review will not be determined by how aggressively AI is deployed, but by how thoughtfully it is governed and integrated into human-centered systems.

The path forward is clear:

The future of peer review is human judgment, supported by systems that perform mechanical checks, reduce cognitive load, and enable focused, accountable decision-making within controlled environments.

This is not a call to slow innovation. It is a call to direct it with intention—away from blind automation and toward designs that respect attention, preserve accountability, and strengthen trust.

Further Reading

I have explored these themes in depth over the past year, focusing on the role of human attention, system design, and responsible AI in peer review.

About the Author

Ashutosh Ghildiyal is Vice President – Growth & Strategy at Integra, where he works at the intersection of scholarly publishing, peer review, research integrity, and AI-enabled workflows. With nearly two decades of experience across global publishing markets, he writes and speaks extensively on editorial trust, human-centered system design, and responsible innovation in scholarly communication.

ORCID: 0000-0002-6813-6209
LinkedIn: https://www.linkedin.com/in/ashutoshconsult/
