In Part One of our conversation, Ashutosh Ghildiyal spoke about purpose and people as the foundation of meaningful transformation in scholarly publishing. In this continuation, we explore how vision translates into execution, how technology and humanity intersect, and what kind of leadership this moment demands. Ashutosh’s perspective, shaped by years leading innovation initiatives, offers a systems-oriented, human-centered compass for navigating change.
Q: Ashutosh, in our last conversation you spoke extensively about purpose and people. But transformation also demands systems, processes, and operational excellence. How do you balance vision with execution?
A: It’s a delicate balance, and honestly, it’s one I think about constantly. Vision without execution is just daydreaming—inspiring perhaps, but ultimately inconsequential. But execution without vision is equally problematic—it’s mere efficiency for its own sake, activity without direction, motion that may not be progress.
At Integra, we’ve become quite disciplined about focusing on the “why” first—understanding the problem we’re actually trying to solve, the outcome we’re trying to create, the value we’re trying to deliver. Only then do we translate that into systems, workflows, routines, and metrics that deliver meaningful outcomes rather than just measurable outputs.
For example, we embed continuous feedback loops—both with clients and internally within our teams—so that strategy isn’t static; it keeps revealing better options as we learn what actually works in practice versus what looks good in planning documents.
These feedback loops become what I call flywheels: small, consistent improvements that compound into real momentum. You make one process 5% better, then another 3% more reliable, then you eliminate a friction point that’s been annoying everyone for months. Individually, these seem minor. Collectively, they transform capability.
The key is that the improvements are directional—they’re all pointed toward the same north star rather than being random optimizations. That’s how vision and execution connect.
Q: Much of your work involves artificial intelligence and advanced automation, yet you also run deeply human-centered services like editorial support and peer review management. What’s your philosophy on integrating these seemingly opposing forces?
A: I don’t see them as opposing—I see them as complementary when properly designed. But yes, the integration requires thoughtfulness about what each does best.
My core belief is straightforward: technology should elevate human potential, not replace it. I actually prefer the term “assisted intelligence” over “artificial intelligence” in our context—it more accurately describes tools that support and enhance editorial decision-making rather than attempting to override or automate it entirely.
Take our EditorialPilot system as an example. It analyzes manuscripts and flags potential risks—similarity to published work, integrity concerns, structural issues, methodological weaknesses. It can recommend next steps based on patterns it’s learned from thousands of previous editorial decisions. But it never dictates final decisions. It never says “reject this manuscript”—it says “here are factors an editor should consider.”
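To make that advisory pattern concrete, here is a minimal sketch of what such a screening step might look like. The class names, risk categories, and thresholds are illustrative assumptions, not EditorialPilot’s actual interface; the point is simply that the output is a set of flagged factors, never a verdict.

```python
from dataclasses import dataclass, field

@dataclass
class ScreeningReport:
    """Advisory output only: factors for an editor to weigh, never a decision."""
    manuscript_id: str
    flags: list = field(default_factory=list)

    def add_flag(self, category: str, detail: str) -> None:
        self.flags.append({"category": category, "detail": detail})

def screen_manuscript(manuscript_id: str, similarity_score: float,
                      image_anomalies: int, missing_sections: list) -> ScreeningReport:
    """Flag potential risks for human review; all thresholds are illustrative."""
    report = ScreeningReport(manuscript_id)
    if similarity_score > 0.30:  # hypothetical cut-off for "worth a closer look"
        report.add_flag("similarity", f"Overlap with published work: {similarity_score:.0%}")
    if image_anomalies > 0:
        report.add_flag("integrity", f"{image_anomalies} image region(s) warrant inspection")
    for section in missing_sections:
        report.add_flag("structure", f"Expected section appears to be missing: {section}")
    # Deliberately no accept/reject field: the final decision stays with the editor.
    return report
```

The absence of any decision field is the design choice that matters: the tool narrows the editor’s attention, it does not substitute for it.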
The same philosophy applies throughout our research integrity work. AI is remarkably good at detecting patterns—unusual image manipulations, statistical anomalies, citation networks that suggest manipulation. But interpreting those patterns in context, understanding intent, weighing severity, determining appropriate responses—these require human judgment informed by experience, ethics, and understanding of scientific culture.
The goal is liberating people to focus on uniquely human work: the interpretation, the judgment calls, the empathetic understanding of author intent, the assessment of broader significance. Technology handles the scalable pattern-matching that would exhaust human capacity.
When we get this balance right, editors tell us they feel more empowered, not deskilled. They’re making better decisions because they have better information, not fewer decisions because machines are deciding for them.
Q: You’re known for moving quickly and driving innovation, yet you recently mentioned that you’ve been “thinking about slowness.” Can you explain that apparent paradox?
A: Yes, this has been an important evolution in my thinking. I spent years optimizing for speed—faster development cycles, quicker decisions, rapid iteration. Speed has genuine value, especially in competitive markets and when solving urgent problems.
But I’ve come to appreciate that depth, trust, and lasting alignment are often born in slower moments. The moments when you ask one more clarifying question instead of rushing to a solution. When you listen longer to understand not just what someone is saying but why they’re saying it. When you pause to rethink assumptions that everyone has accepted but no one has recently questioned.
Some of our best product decisions have come from pausing—sometimes uncomfortably—to clarify the problem we’re actually solving. We thought we understood a publisher’s need, but when we slowed down to ask deeper questions, we discovered the real problem was different from what they initially articulated. Solving the real problem, even if it took longer to understand, created far more value than quickly solving the stated problem.
I think of it this way: there’s tactical speed—how quickly you execute once direction is clear—and strategic slowness—the deliberate pace at which you ensure direction is actually clear and correctly oriented.
As Integra scales and takes on more complex challenges, protecting space for that thoughtful slowness becomes essential. It’s the difference between moving fast and moving fast in the right direction.
Q: Integra works with publishers across multiple continents, cultural contexts, and publishing traditions. What does truly global collaboration mean in practice, beyond the buzzwords?
A: Global collaboration, done well, means mutual respect and shared ambition—but it starts with a crucial acknowledgment: we never assume we know best.
This might sound obvious, but it’s surprisingly easy to fall into the trap of thinking “we’ve solved this problem for Publisher A, so we can deploy the same solution for Publisher B.” Publishing looks deceptively similar across contexts, but the differences matter enormously.
So in practice, global collaboration means investing time upfront to truly understand local context: What are the actual constraints this publisher faces? What does success look like in their ecosystem? What are the cultural norms around authorship, peer review, and editorial authority in their region? What technological infrastructure can they realistically support?
Then we co-create solutions—and I mean genuinely co-create, not “we build and they provide feedback.” Their editorial teams, their workflow experts, their technical staff are partners in design, not just recipients of solutions.
This approach also requires building intentionally diverse teams on our side—diverse in editorial background, technical expertise, linguistic capability, cultural perspective, and program management approach. When our teams reflect global diversity, we’re better equipped to understand and respect local nuance rather than imposing one-size-fits-all solutions.
It’s harder than standardization. It’s slower than templating. But it builds partnerships that last and solutions that actually work in the real complexity of global scholarly publishing.
Q: There’s enormous hype around AI in publishing right now. What’s one pervasive myth about AI you’d most like to debunk?
A: That AI will solve everything—or even most things—on its own.
The hype cycle around AI encourages this magical thinking: implement AI and suddenly all your problems disappear, your processes become effortless, your costs plummet while quality soars. It’s seductive but fundamentally misleading.
AI is genuinely powerful, but it’s also fundamentally limited. It needs high-quality, relevant training data—which is often harder to obtain than people realize. It needs deep domain knowledge to be deployed appropriately—you can’t just apply generic language models to specialized scientific communication and expect reliable results. It requires constant human oversight to catch errors, bias, and inappropriate applications. And it demands robust ethical guardrails to prevent misuse and maintain trust.
The biggest opportunities—the ones I get excited about—lie in augmentation rather than automation. Helping editors make better-informed decisions by surfacing relevant information they might have missed. Helping readers find credible research faster by improving search and recommendation systems. Helping authors understand how their work fits into existing literature by mapping citation networks and conceptual relationships.
These augmentation use cases create genuine value. They make human expertise more effective rather than trying to replace it.
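As one small illustration of the augmentation pattern, surfacing candidate related work by plain text similarity is the kind of scalable assistance that leaves the relevance judgment with people. This is a minimal sketch, not a description of Integra’s systems; it assumes scikit-learn is available and uses TF-IDF only because it is the simplest thing that works.

```python
# Minimal sketch of one augmentation pattern: surface candidate related work
# by text similarity, and leave the judgment of actual relevance to a person.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.metrics.pairwise import cosine_similarity

def suggest_related_work(abstract: str, corpus: dict, top_n: int = 3) -> list:
    """Rank a corpus of {title: abstract} entries by similarity to a new abstract."""
    titles = list(corpus.keys())
    vectorizer = TfidfVectorizer(stop_words="english")
    matrix = vectorizer.fit_transform([abstract] + [corpus[title] for title in titles])
    scores = cosine_similarity(matrix[0:1], matrix[1:]).ravel()
    ranked = sorted(zip(titles, scores), key=lambda pair: pair[1], reverse=True)
    return ranked[:top_n]  # suggestions with scores, not a citation mandate
```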
The automation fantasy—AI taking over entire workflows without human involvement—consistently disappoints because publishing is fundamentally a judgment-intensive enterprise. You can automate steps within workflows, but you can’t automate the judgment that determines whether the workflow is pointed in the right direction.
So my advice is: be enthusiastic about AI’s potential but realistic about its limitations. Invest in augmentation use cases with measurable value. Be deeply skeptical of anyone promising that AI alone will solve your strategic challenges.
Q: You’ve written thoughtfully about peer review and the question of “who should hold the gate: humans, AI, or both?” What’s at the core of that argument?
A: The core is rejecting false dichotomies.
The publishing industry has a tendency to swing between extremes. One camp denounces peer review as fundamentally broken, slow, biased, and unsustainable—and wants to tear it down or bypass it entirely through preprints and post-publication review. Another camp wants to automate peer review entirely, as if matching algorithms and AI quality checks could replace expert human evaluation.
Both extremes miss the mark. The right path, I believe, is building a thoughtful editorial framework where humans and machines collaborate strategically.
This means preserving editorial judgment and accountability—the buck stops with human editors who are accountable to their communities, not with algorithms that can’t be held responsible. But it also means using AI strategically for triage, detection, and efficiency improvements that make human review more effective.
For example, AI can screen for technical completeness—does the manuscript include all required sections, appropriate statistical reporting, proper figure legends? It can flag potential integrity issues that warrant closer examination. It can help match manuscripts to reviewers based on expertise and availability. It can identify relevant prior work that should be cited.
All of these applications make the human peer review process more thorough and efficient. They don’t replace the fundamental task of peer review: expert evaluation of novelty, significance, methodological soundness, and contribution to knowledge.
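A rule-based sketch of that completeness screen shows how modest the triage step can be while still saving reviewers time. The required sections and patterns below are hypothetical; real checklists vary by journal and discipline, and production systems would be considerably more nuanced.

```python
import re

# Hypothetical checklist; real requirements vary by journal and discipline.
REQUIRED_SECTIONS = ["abstract", "methods", "results", "discussion", "references"]

def completeness_check(manuscript_text: str) -> dict:
    """Produce a triage checklist for an editor; nothing here is a decision."""
    text = manuscript_text.lower()
    return {
        "missing_sections": [s for s in REQUIRED_SECTIONS if s not in text],
        "reports_p_values": bool(re.search(r"p\s*[<=>]\s*0?\.\d+", text)),
        "figures_without_legend_text": "figure" in text and "figure legend" not in text,
    }
```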
But here’s the crucial part: strengthening peer review isn’t just about better tools. It’s also about investing in the human infrastructure—reviewer diversity, recognition, training, and editorial support. We need to make reviewing more sustainable, more rewarding, and more developmental for reviewers. We need to combat reviewer fatigue and reviewer concentration, where the same overextended experts are asked to review again and again.
Technology helps with some of this—better matching, reduced administrative burden, improved communication—but it doesn’t solve the fundamental challenge that expert peer review is time-consuming intellectual labor that we systematically undervalue and under-support.
Q: Following up on that—is there something you believe should never be delegated to AI in peer review, regardless of how sophisticated the technology becomes?
A: Yes, absolutely: meaning-making.
Machines can summarize findings. They can identify technical flaws. They can flag statistical issues or detect image manipulation. They’re getting better at these pattern-recognition tasks.
But interpreting a study’s significance—understanding why it matters, what it changes, how it advances a field—is fundamentally human work. So is assessing the ethical dimensions of research: Are the right consent procedures in place? Are vulnerable populations adequately protected? Are potential conflicts of interest properly disclosed and managed?
Judging broader implications requires understanding not just what a paper says, but how it fits into ongoing scientific conversations, how it might be used or misused, what questions it opens up for future research, whether it challenges or reinforces existing power structures in knowledge production.
These are interpretive tasks that require situated knowledge, cultural understanding, ethical reasoning, and professional judgment shaped by years of experience in a field. They’re exactly the kinds of tasks that statistical pattern matching, no matter how sophisticated, cannot reliably perform.
Editorial discernment is the backbone of credibility in scholarly publishing. It’s what makes peer-reviewed publication meaningful—the assurance that expert humans have judged this work worthy of the scholarly record. If we delegate that discernment to algorithms, we undermine the very foundation of trust that makes scholarly communication valuable.
So yes, I’m enthusiastic about AI tools that support editors. But I’m unequivocal that final editorial judgment must remain with accountable human experts.
Q: You’ve described what you call a “crisis of cognitive empathy” in peer review. That’s a striking phrase. What do you mean by it?
A: We’re facing a broader attention crisis that’s affecting how we engage with complex ideas, and peer review is particularly vulnerable to it.
Effective peer review requires what I call intellectual empathy—the capacity to step into an author’s argument, understand what they’re trying to accomplish, test the internal logic, consider alternative interpretations, and ultimately ask: Does this work matter? Does it advance knowledge in a meaningful way?
This kind of engagement is cognitively demanding. It requires sustained attention, charitable interpretation, and genuine intellectual curiosity. You have to be willing to be surprised, to have your assumptions challenged, to learn something you didn’t expect.
But we’re increasingly incentivized toward transactional engagement—checking boxes, processing quickly, moving on to the next task. Reviewing becomes something you squeeze into gaps between other obligations rather than serious intellectual work deserving focused time.
When reviewing becomes merely mechanical—checking whether the methods section exists, whether the references are formatted correctly, whether statistical tests are nominally appropriate—we lose the empathetic engagement that identifies truly novel insights, that recognizes significance even when it’s unconventionally presented, that helps authors strengthen arguments rather than just accept or reject them.
This empathy crisis isn’t caused by individual reviewers being lazy or uncaring. It’s a systemic problem driven by perverse incentives: reviewing isn’t recognized in promotion decisions, isn’t compensated, takes time away from activities that do count toward career advancement. We’ve created conditions that make empathetic engagement increasingly difficult to sustain.
So part of fixing peer review is creating conditions that make intellectual empathy possible again—manageable workloads, meaningful recognition, training in developmental reviewing, editorial support that helps reviewers see their contribution’s value.
Technology can help with some of this—reducing administrative burden, surfacing relevant context—but the fundamental solution is cultural and structural, not technological.
Q: You’ve thought deeply about leadership in times of disruption. What kind of leadership does this particular moment in scholarly publishing require?
A: This moment requires what I call leadership from inner clarity.
Here’s what I mean: people resist change when it threatens their sense of identity and competence. An editor who’s spent decades developing expertise in manuscript evaluation might resist AI-assisted screening—not because they’re stubborn or afraid of technology, but because they worry it devalues their expertise or changes their role in ways that feel like a loss of professional identity.
Leaders who recognize this don’t dismiss resistance as irrational or try to overcome it through force. Instead, they enable transformation by creating psychological safety, holding space for discomfort, and modeling curiosity rather than certainty.
Psychological safety means people can voice concerns, ask questions that might sound naive, admit confusion, and experiment with new approaches without fear of judgment or punishment. It means “I don’t know” and “I’m not sure this is working” are acceptable statements, not signs of weakness.
Holding space for discomfort means acknowledging that transformation is unsettling—it’s supposed to be—rather than pretending it’s easy or painless. People need permission to struggle with change, not just pressure to adapt quickly.
Modeling curiosity means leaders demonstrate genuine openness to learning, willingness to revise their own thinking, and comfort with uncertainty. When leaders say “I don’t have all the answers, let’s figure this out together,” it creates permission for others to be in that exploratory space as well.
I’ve learned that the most lasting transformations come from presence and personal example, not from top-down mandates or change management programs. People change when they see leaders they respect embodying new ways of working—taking risks, learning publicly, admitting mistakes, staying curious.
This is harder than directive leadership. It’s slower. It requires more self-awareness and emotional labor. But it builds capability that survives beyond any individual leader’s tenure, because you’re developing people’s capacity to navigate change themselves rather than just implementing your specific change initiative.
Q: Impostor syndrome is remarkably common in high-performance environments, including publishing. How do you think about and address it—both personally and in the teams you lead?
A: I’ve definitely experienced impostor syndrome throughout my career—that persistent feeling that you don’t really belong, that you’re going to be exposed as not knowing enough, that your success is somehow accidental rather than earned.
What I’ve learned is that impostor syndrome often signals you’re doing something challenging and meaningful—you’re stretching beyond what feels comfortable, which is exactly where growth happens. So in some ways, its presence can be a useful indicator that you’re pushing boundaries.
But it’s also genuinely debilitating when it prevents people from contributing their full capabilities or speaking up with valuable insights because they don’t feel qualified.
In teams I lead, I draw on three practices:
Soshin—beginner’s mind: Creating space where not knowing is acceptable, even valuable. Some of the best questions come from people new to a domain who haven’t yet learned what’s “obvious” and therefore ask questions experts have stopped asking. When we treat beginner’s mind as an asset rather than a deficit, it reduces the pressure to appear expert in everything.
Kaizen—continuous improvement: Emphasizing that we’re all always learning and developing. There’s no state of “finished expertise” where you’ve arrived and just maintain. Everyone, regardless of experience level, is continuously improving. This makes learning visible and normal rather than a sign of inadequacy.
Gratitude practices: Regularly acknowledging contributions and helping people see the impact of their work. Impostor syndrome often stems from disconnection between effort and impact—you’re working hard but can’t see that it matters. Making impact visible helps people recognize their genuine value.
Perhaps most importantly, I try to create environments where people can experiment, fail safely, and learn—because you don’t need certainty to be useful. Some of the most valuable contributions come from people trying something they’re not sure will work, sharing an idea they’re not confident about, or asking for help with something they find difficult.
The goal isn’t eliminating impostor syndrome entirely—I’m not sure that’s possible or even desirable. It’s building cultures where it doesn’t prevent people from contributing their gifts.
Q: You’ve also written about integrity and communication as inseparable in scholarly publishing. How does systems thinking apply there?
A: Integrity and communication are indeed inseparable, though we often treat them as separate concerns.
Integrity ensures research is worthy of trust—that it’s honest, rigorous, properly attributed, and conducted ethically. Communication makes that trust visible and usable—it helps people find reliable research, understand its implications, and apply it appropriately.
You can have highly rigorous research that no one can find or understand—that’s an integrity success but a communication failure. Or you can have beautifully communicated research that’s methodologically flawed—that’s a communication success but an integrity failure. Both undermine the social value of research.
Systems thinking means embedding integrity and communication throughout the research lifecycle, not treating them as add-ons or final checks.
This requires shared responsibility:
- Training researchers in transparency practices—not just telling them to be transparent, but teaching specific practices like preregistration, open data sharing, clear reporting standards
- Supporting publishers with integrity workflows built into editorial systems—automated checks for common issues, clear policies, efficient investigation processes
- Investing in communication channels that contextualize findings for broader audiences—not just press releases, but ongoing engagement with science communicators, journalists, and public intellectuals who help translate research
When integrity and communication are both embedded from the start, research becomes more trustworthy and more accessible. That’s the systems approach: recognizing that these aren’t competing priorities or separate functions, but mutually reinforcing capabilities that together determine whether research achieves social impact.
Q: As we close, what would you say to the next generation of publishing leaders—people early in their careers who will shape scholarly communication over the coming decades?
A: Several things, though I’m conscious this risks sounding like advice I should follow better myself.
Be connectors. The future belongs to people who can bridge domains—editorial and technical, commercial and mission-driven, global and local, traditional and innovative. Publishing increasingly requires translating between different languages and worldviews. Develop that capacity.
Put readers at the center of every decision. It’s easy to optimize for what’s convenient for publishers, or what researchers want, or what technology enables. But ultimately, publishing serves readers—current and future—who need access to reliable knowledge. When decisions are difficult, asking “what serves readers better?” often clarifies.
Keep an ecosystem view. Understand your role in the broader value chain and how your work ripples outward. Publishing doesn’t exist in isolation—it’s embedded in systems of research funding, institutional evaluation, knowledge dissemination, and public understanding. Know where you fit and how others depend on you.
Protect editorial judgment. As technology becomes more sophisticated and commercial pressures intensify, there will be constant temptation to let efficiency or automation override editorial discernment. Resist. Editorial judgment—human expert evaluation of what deserves publication and how to present it well—is what makes publishing valuable. Guard it zealously.
Use AI thoughtfully, not reflexively. Embrace tools that genuinely augment human capability. Be skeptical of automation that reduces judgment or creates new risks. Always ask: who benefits from this technology, and who might be harmed?
Prioritize communication. Help stakeholders understand your contribution and value. Publishing’s vital role in knowledge creation is often invisible or taken for granted. Make it visible. Tell the story of how careful editing, rigorous review, and professional publishing serve research and society.
Build what’s worth building. You’ll face constant pressure to do what’s expedient, profitable, or impressive. Develop the clarity and courage to focus on what’s genuinely valuable—work that serves knowledge creation, expands access, maintains integrity, and advances human understanding.
That’s the work that will matter twenty years from now, when specific technologies and business models have evolved but the fundamental mission of scholarly communication endures.
Closing Thoughts
Ashutosh Ghildiyal’s leadership offers an important reminder that innovation is not only about algorithms, automation, and operational efficiency—though those matter. It’s equally about discernment, attention, presence, and the wisdom to know what should change and what should be preserved.
As Integra continues exploring new frontiers in scholarly publishing—from AI-assisted peer review to systems-level integrity workflows to next-generation editorial tools—Ashutosh’s perspective provides a practical compass: grounded in operational reality, oriented toward systems-level change, and anchored in humanistic values.
The transformation of scholarly publishing will be shaped by many forces: technological capability, market dynamics, funder mandates, institutional pressures. But it will ultimately be determined by the choices individual leaders make about how to apply technology, what values to embed in systems, and what vision to pursue.
Ashutosh’s contribution is showing that these choices need not be between efficiency and integrity, between innovation and humanity, between speed and thoughtfulness. The right question isn’t which to choose, but how to integrate them—building systems that are simultaneously more capable and more humane, more efficient and more trustworthy, more innovative and more purposeful.
That integration—difficult as it is to achieve—is what defines meaningful progress in scholarly communication.
About Ashutosh Ghildiyal
Ashutosh Ghildiyal is Vice President, Growth and Strategy at Integra, where he leads innovation initiatives spanning peer review, research integrity, editorial systems, and publishing transformation. His work focuses on building human-centered technology solutions that strengthen scholarly communication while preserving editorial judgment and community trust. Ashutosh has contributed thought leadership to venues including The Scholarly Kitchen and speaks regularly on AI in publishing, systems approaches to research integrity, and the future of peer review. He brings a distinctive perspective shaped by both operational publishing experience and systems-level strategic thinking.
Connect with Ashutosh: LinkedIn | ashutosh.ghildiyal@integra.co.in
Learn more about Integra’s innovation initiatives: Visit our Services Page
