AI in Pathology

Pathologists have been hearing about artificial intelligence for years: image classification tools, tumor detection, pattern recognition. Most of us have seen the demos, but over the past two years something has shifted. AI in pathology is no longer a theoretical tool being explored at academic centers or tech conferences. It’s showing up in workflows, procurement cycles, and regulatory discussions.

This guide is not written for those new to AI. It’s for pathologists who’ve been practicing long enough to remember when digital pathology itself was the experiment. If you’re asking what has changed, and whether AI should now be part of your day-to-day practice or lab planning, this piece is for you.

What’s Different Now

Digitization is finally infrastructure, not aspiration

We’ve talked about whole slide imaging for more than a decade. But for most labs, it remained limited to research, tumor boards, or specialist consultations. That’s no longer the case. Reimbursement is starting to materialize, vendors are producing systems with better integration, and the pressure to deal with workload imbalances is real.

According to a 2025 market report, only 13% of US academic centers and 2% of community hospitals had achieved full digital integration, but adoption is accelerating as AI-enabled tools come online and slide archives become searchable in new ways (Newswire, 2025).

AI is moving beyond classification

Many pathologists still associate AI with tools that tell you “tumor vs no tumor” or try to match a Gleason score. But recent systems are doing more: flagging image quality issues, generating draft reports, and integrating with genomic and radiologic data. The emergence of large-scale foundation models trained on millions of slides is what’s driving this shift (arXiv, 2024).

These models don’t need to be retrained for each new task and can be adapted to local workflows. They’re also being built with feedback from working pathologists—something that wasn’t always the case early on.

Where the Value Is Showing Up

Clinical performance is improving

A deep learning tool for prostate core biopsies reduced IHC usage by up to 44% while maintaining false negatives at zero. AUC values across validation sets ranged from 0.95 to 0.99 (BMJ J Clin Pathol, 2024). This means pathologists can move through cases faster and with more confidence in borderline scenarios.

In GI pathology, an AI model developed for coeliac disease matched expert pathologist accuracy and generated diagnoses in seconds (The Guardian, 2025). That kind of turnaround could reshape how labs manage triage and routine screening.

Interobserver variability is tightening

In HER2 scoring, an AI tool presented at ASCO improved agreement across pathologists from 73.5% to 86.4%. Misclassification rates dropped significantly, helping to match patients to the right HER2-low therapies (Pathology News, ASCO 2025).

That matters because scoring concordance directly affects patient selection and access to treatment.

It’s helping with real-world workload

AI isn't solving workforce shortages, but it’s buying us time. At centers using Deciphex or similar services, productivity gains of 30 to 40% have been reported, especially for high-volume triage cases (The Times UK, 2025).

What You Need to Watch Closely

Validation must go beyond internal testing

Most early papers were proof-of-concept with training and test data drawn from the same institution. Today, there’s more scrutiny around generalizability. If you’re evaluating a tool, ask where it was validated, how many labs were involved, and whether it has seen data that looks like yours. External validation should be standard, not optional (Nature News, 2025).
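For labs with informatics support, here is a minimal sketch of what that check can look like in practice. It is hypothetical and not tied to any vendor’s actual API: it simply compares a model’s AUC on the developer’s internal test set against the same model’s scores on a cohort of your own lab’s slides. All values are placeholder data for illustration.

```python
# Hypothetical sketch: comparing internal vs. external discrimination for an
# AI slide classifier. All data below are made-up placeholders; in practice
# the "external" cohort would be slides scanned and labeled at your own lab.
import numpy as np
from sklearn.metrics import roc_auc_score

# Ground-truth labels (1 = tumor present) and model scores on the vendor's
# internal test set (illustrative values only).
internal_labels = np.array([1, 0, 1, 1, 0, 0, 1, 0, 1, 0])
internal_scores = np.array([0.93, 0.08, 0.88, 0.79, 0.22, 0.05, 0.91, 0.17, 0.84, 0.30])

# The same model run on an external cohort from a different lab, scanner,
# and staining protocol (again, illustrative values only).
external_labels = np.array([1, 0, 0, 1, 1, 0, 1, 0, 1, 0])
external_scores = np.array([0.61, 0.44, 0.58, 0.66, 0.42, 0.38, 0.80, 0.52, 0.55, 0.64])

internal_auc = roc_auc_score(internal_labels, internal_scores)
external_auc = roc_auc_score(external_labels, external_scores)

# A large gap between the two numbers is the signature of a tool that has not
# generalized to data that looks like yours.
print(f"Internal AUC: {internal_auc:.2f} | External AUC: {external_auc:.2f}")
```

The specific numbers do not matter; the question does: does the performance reported in the validation paper survive contact with your scanners, stains, and case mix?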

New model types, new complexity

Generative and foundation models have broader capabilities but also more layers to audit. If you can’t understand how a tool arrives at its recommendation, that’s not just a black-box problem; it’s a clinical risk. These models aren’t plug-and-play. They need institutional oversight, version tracking, and clear handoffs between AI and human review (Modern Pathology, 2025).

Regulation is still catching up

While the FDA and other bodies are pushing forward, there’s a gap between what’s available commercially and what’s been thoroughly vetted. Groups like the CAP, RCPA, and the FDA’s SaMD program are beginning to define expectations, but for now, pathologists will need to take the lead in vetting tools and ensuring clinical governance is in place (CAP AI Resource Center, RCPA, 2025).

Final Thoughts

If you’re practicing in a lab that is seeing growing caseloads, delayed turnaround times, or inconsistent scoring across sub-specialists, AI is likely to be on the table soon. The infrastructure, evidence, and tools are reaching a level that requires engagement, not just observation.

Experienced pathologists are needed now more than ever, not just to review AI results, but to shape the questions we ask of these systems, oversee how they’re validated, and make sure they’re embedded into practice safely.

Next

Operational Readiness and Real-World Implementation