The story of artificial intelligence in medical devices has, for most of the past decade, been told through the lens of radiology. Hundreds of cleared CADe and CADx algorithms, a mature SaMD regulatory vocabulary, and a steady drumbeat of 510(k)s have made the radiology reading room the default mental model for "AI in healthcare." But as of this writing, at the end of April 2026, a quieter and more structurally interesting story is unfolding one floor down, in the pathology lab. Pathology AI has not produced hundreds of clearances. It has produced fifty-one. And the composition of those fifty-one, the pathways they took, and the office that reviewed them together reveal why digital pathology is not simply a slower version of digital radiology. It is a fundamentally different regulatory animal.
This article is a snapshot, not a year-in-review. Instead of cataloguing what happened in a calendar year, it captures where the FDA-authorized pathology AI landscape actually stands right now. It uses data pulled directly from the FDA Device Explorer to count, classify, and characterize every AI/ML-flagged device the agency has ever authorized in pathology-relevant review panels, and it uses that dataset to do two things the reference data rarely does: separate the hematology-analyzer volume driver from the whole-slide-imaging (WSI) vanguard, and walk through the concrete reason a software-only pathology algorithm (the kind of product a startup with ten ML engineers and zero wet-lab reagents ships) ends up reviewed as an In Vitro Diagnostic (IVD) instead of as Software as a Medical Device (SaMD).

What "Pathology AI" Actually Means in the 2026 Dataset 🔗
The fifty-one number needs an immediate caveat. The FDA does not publish a single bucket called "pathology AI." What exists is a set of review panels (Pathology, Medical Genetics, Hematology, Microbiology, Immunology), a set of product codes, and a machine-learning flag that Innolitics' Device Explorer applies to records whose summary, intended use, or indications describe algorithmic decision-support or automated classification. The fifty-one devices counted here are every authorization returned by that AI/ML flag across all pathology-adjacent panels, from the CellaVision DiffMaster Octavia in 2001 through Checkcells' Seaman Pro in April 2026.
Counting them this way matters because it resists two common distortions. The first is the "WSI-only" distortion, which limits the conversation to whole-slide-imaging algorithms and produces a count in the single digits. That is accurate for computational pathology in the narrow sense but misleading about how much AI is actually moving specimens through a lab. The second is the "everything remotely automated" distortion, which sweeps in decades of rules-based hematology analyzers and inflates the number beyond recognition. The AI/ML flag is a deliberate middle path: it keeps the modern CellaVision DM-series analyzers, the Scopio full-field blood smear systems, and the Sight OLO point-of-care analyzer, because their 510(k) summaries explicitly describe trained classifiers; and it keeps the genomic profiling tests whose variant-calling and signature-scoring pipelines are machine-learning based. The result is a defensible working set.
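To make that methodology concrete, here is a minimal sketch of the counting logic, assuming a hypothetical flat-file export of Device Explorer records; the file name and column names (review_panel, ml_flag, modality, decision_date) are illustrative, not the actual export schema.

```python
import pandas as pd

# Panels treated as pathology-adjacent in this snapshot.
PATHOLOGY_PANELS = {"Pathology", "Hematology", "Microbiology",
                    "Immunology", "Medical Genetics"}

# Hypothetical export; real Device Explorer data may be shaped differently.
df = pd.read_csv("device_explorer_export.csv", parse_dates=["decision_date"])

# Keep every AI/ML-flagged record in a pathology-adjacent panel.
working_set = df[df["review_panel"].isin(PATHOLOGY_PANELS) & df["ml_flag"]]

print(len(working_set))                        # expected: 51
print(working_set["modality"].value_counts())  # hematology, molecular, WSI, cytology
```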

Inside that set, four modalities emerge, and they are wildly uneven in both count and clinical footprint.
Hematology and body-fluid analyzers dominate by volume, with twenty-five authorizations. This is the category that has been quietly using machine learning in production the longest. CellaVision's automated white-blood-cell classifiers have been iterating through FDA clearances since 2001. Scopio Labs' full-field peripheral blood smear system, Pixcell's HemoScreen, Sight Diagnostics' OLO, Athelas Home, Sysmex's XR series, and semen-quality analyzers from Bonraybio and Checkcells all live here. These devices are not the romantic version of pathology AI (no prostate cancer detection on a gigapixel slide), but they are the version that has actually put AI-assisted morphology into routine clinical workflows across thousands of labs. Both of 2026's pathology-relevant clearances to date are in this bucket: Athelas Home (K243348, February 6) and Checkcells' Seaman Pro (K252228, April 9).
Molecular IVDs contribute sixteen authorizations and are the other large cohort. This is Foundation Medicine's FoundationOne CDx, Myriad's myChoice HRD CDx, Exact Sciences' Cologuard Plus, Guardant's Shield, Tempus' xT and xR, Agendia's MammaPrint, Nanostring's Prosigna, and recent additions such as Biocartis' Idylla CDx MSI Test (P250005) and Tempus AI's xR IVD (K241868). The pathology here is genomic rather than morphologic: the assay performs sequencing or hybridization in a lab, and the "AI" is the machine-learning model that turns raw genomic reads into clinically actionable calls: variants, tumor mutational burden, homologous recombination deficiency scores, tissue-of-origin predictions, MSI status, and methylation-based screening results.
Whole-slide imaging AI, the category most clinicians and journalists mean when they say "digital pathology AI," contributes just seven authorizations. These are the algorithms that directly analyze a digitized histology slide. Four of them (Paige Prostate, Ibex's Galen Second Read, Hologic's Genius Cervical AI on the cytology side, and ArteraAI Prostate) represent the contemporary wave of de novo computational pathology. The other three are legacy IHC scoring systems from Applied Imaging, Tripath/Ventana, and Aperio that date from 2004 to 2009 and live under older product codes (NOT, NQN). Seven devices in twenty-plus years is the true scale of cleared WSI AI.
Cytology rounds out the count with two authorizations: Becton Dickinson's BD MAX CTGCTV2 System (K182692) and Hologic's Genius Digital Diagnostics System with Genius Cervical AI (DEN210035), which assists cytotechnologists in interpreting ThinPrep Pap tests.
The disparity in visibility among these four modalities is instructive. Hematology leads in volume but is so thoroughly integrated into lab workflows that few people outside the hematology community think of it as "AI." WSI AI leads in novelty and press coverage but is numerically tiny. Molecular IVDs sit in between: substantial in count, enormous in clinical and commercial impact, and almost always classified as IVDs by default because the assay is physical even when the decision-making is algorithmic.
The Arc From 1999 to April 2026 🔗
Plotted by year, the authorization history has the shape of a long flat plain followed by a visible 2023 to 2025 lift.

Twenty-two authorizations accumulated across the first two decades (1999 through 2019), driven almost entirely by hematology analyzers and a handful of molecular tests. From 2020 through 2023, the pace settled into two-to-four authorizations per year. 2024 held that cadence at four, including the cytology landmark, Hologic's Genius Cervical AI, in January. Then 2025 more than doubled the prior year's total: ten authorizations, including the first 510(k) clearance in the QPN (WSI AI for prostate cancer) product code (Ibex Galen Second Read, K241232), the first De Novo prognostic WSI algorithm (ArteraAI Prostate, DEN240068), two new molecular IVDs in the PZM product code (Tempus AI xR IVD, K241868; Geneseeq GENESEEQPRIME, K250003), a PMA for Biocartis' Idylla CDx MSI Test (P250005), and four hematology/semen-analyzer 510(k)s.
2026 has opened quietly. Through April 28, two authorizations have posted, both hematology (Athelas Home K243348 and Checkcells Seaman Pro K252228). No WSI AI, no cytology, no new molecular IVDs. Four months is a small sample (multiple 2025 milestones clustered in the second half of the year), but the immediate post-2025 pattern looks less like continued acceleration and more like a pause after a peak. Whether the full-year 2026 total ends up resembling 2024's four, 2023's four, or something closer to 2025's ten will depend on what exits FDA queues over the remaining eight months.
Three Pathways, and Why De Novo Does the Heavy Lifting 🔗
Across the fifty-one devices, the three regulatory pathways split in a specific way: forty-one 510(k) clearances, seven PMAs, and three De Novos.

That distribution is heavily shaped by category. The 510(k) column is dominated by hematology analyzers and legacy molecular tests, all of which have decades-old predicates to claim substantial equivalence against. The PMA column is mostly companion diagnostics (FoundationOne CDx, Tempus xT CDx, myChoice HRD CDx, Cologuard Plus, Guardant Shield, 4Kscore, and Biocartis' Idylla CDx MSI), where the FDA has required full premarket approval because the test is tied to a specific therapeutic decision.
The three De Novos, namely Paige Prostate (DEN200080, 2021), Hologic Genius Cervical AI (DEN210035, 2024), and ArteraAI Prostate (DEN240068, 2025), are where the regulatory architecture for modern pathology AI has actually been built. Each of them established a new product code, which means each of them simultaneously cleared itself and created a regulatory slot that future devices can use as a predicate under 510(k). QPN was created by Paige Prostate and has now been used as the predicate for Ibex's Galen Second Read. QYV was created by Genius Cervical AI. SFH was created by ArteraAI Prostate. In a category with almost no legacy predicates (WSI AI has no 1990s ancestor to be substantially equivalent to), De Novos are not a minority pathway. They are the scaffolding that lets the 510(k) pathway work at all.

Seen another way, the top product codes in the dataset trace the history of how pathology AI has actually been reviewed. JOY (automated hematology differential cell counter) carries eleven authorizations. GKZ (hematology analyzer) carries eight. POV (semen analyzer) carries five. OIW (tissue-of-origin molecular test) carries four. NOT and NYI each carry three. The modern WSI AI codes (QPN, QYV, SFH) each have only one or two members, because they did not exist until 2021 onward. This is what it looks like for a new regulatory category to be built from scratch inside the existing taxonomy.
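Under the same hypothetical schema as the earlier sketch, reproducing these tallies is a pair of one-liners; pathway and product_code are again assumed column names.

```python
# Pathway split: expected 510(k) 41, PMA 7, De Novo 3.
print(working_set["pathway"].value_counts())

# Top product codes: expected JOY 11, GKZ 8, POV 5, OIW 4, NOT 3, NYI 3.
print(working_set["product_code"].value_counts().head(6))
```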
The Paradigm: Why Software-Only Pathology AI Is Still an IVD 🔗
The most important structural fact about pathology AI, and the part most newcomers to the space miss, is that in the FDA's organization chart, a pure-software pathology algorithm is still classified as an In Vitro Diagnostic and reviewed in OHT7 (CDRH's Office of In Vitro Diagnostics, successor to the IVD side of the old Office of In Vitro Diagnostics and Radiological Health), not as Software as a Medical Device in OHT8, the Office of Radiological Health.

In radiology, an algorithm that reads an X-ray, a CT, an MRI, or an ultrasound lives in SaMD land. The review office, the guidance documents, the predicate ecosystem, and the clinical validation conventions all assume the product is a piece of software whose input happens to be an image acquired by a piece of radiological imaging equipment regulated separately. In pathology, that separation does not hold. The FDA's definition of an IVD is broad: reagents, instruments, and systems intended for use in the diagnosis of disease or other conditions using specimens taken from the human body. A digitized whole-slide image is treated as a digital representation of a physical tissue specimen, and the software that interprets it is treated as an extension of the in vitro diagnostic process that begins with the biopsy and continues through fixation, staining, scanning, and interpretation.
This is not a hypothetical boundary. It maps directly onto the decision letters. Paige Prostate's De Novo grant classified the device under Product Code QPN in 21 CFR 864.3750, a regulation that sits inside Part 864 (Hematology and Pathology Devices), firmly in the IVD regulatory framework. Hologic's Genius Cervical AI De Novo under QYV sits in the cervical cytology IVD regulations. ArteraAI Prostate's De Novo under SFH likewise sits inside Part 864. Ibex's 510(k) for Galen Second Read uses Paige Prostate as its predicate under the same QPN code. Every software-only WSI algorithm authorized to date has been reviewed as an IVD. There is no exception in the dataset.

The consequence is that two seemingly different products, a WSI AI algorithm from a pure-software company and a molecular IVD from a sequencing company, end up in the same regulatory bucket. Both paths start with a physical specimen. Both involve an instrument-mediated transformation of that specimen into a digital representation (scanner or sequencer). Both end with an algorithm that turns that digital representation into a diagnostic call. From the agency's perspective, both are IVDs and both are evaluated against IVD evidentiary norms.
This has downstream effects that software-only developers entering pathology often underestimate.
Clinical validation is specimen-aware, not just model-aware. OHT7 reviewers do not evaluate an algorithm the way an OHT8 reviewer evaluates a radiology SaMD. They have to account for more variability in the specimen-to-clinician pipeline: the types of stains used, the scanners that produced the training and validation images, the specific lab protocols that the studies ran on, the reader variability among the pathologists who produced the ground truth, and the lot-to-lot and site-to-site variation in the upstream physical specimen. Paige Prostate's authorization, for example, is specific about which scanners it is cleared on and under what magnification and staining conditions. A label claim extended to a new scanner typically requires additional validation.
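To see what specimen-aware validation means in practice, here is a minimal sketch of scanner- and stain-stratified performance reporting; the file name and column names (scanner, stain, y_true, y_pred) are illustrative, and the acceptance criteria a real submission needs are set with the FDA, not in code.

```python
import pandas as pd

def sens_spec(group: pd.DataFrame) -> pd.Series:
    """Sensitivity and specificity for one (scanner, stain) stratum."""
    tp = ((group["y_true"] == 1) & (group["y_pred"] == 1)).sum()
    fn = ((group["y_true"] == 1) & (group["y_pred"] == 0)).sum()
    tn = ((group["y_true"] == 0) & (group["y_pred"] == 0)).sum()
    fp = ((group["y_true"] == 0) & (group["y_pred"] == 1)).sum()
    return pd.Series({"n": len(group),
                      "sensitivity": tp / (tp + fn),
                      "specificity": tn / (tn + fp)})

slides = pd.read_csv("validation_slides.csv")  # one row per validated slide

# A new scanner means a new stratum with its own data, not a footnote.
print(slides.groupby(["scanner", "stain"]).apply(sens_spec))
```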
The predicate system works differently. In SaMD, substantial equivalence can rest heavily on intended use and performance metrics relative to a prior software device. In IVDs, substantial equivalence involves the full system, including the physical components and upstream assays the software depends on. This is one of the reasons Ibex's Galen Second Read cleared under 510(k) against Paige Prostate as a predicate: Paige had already cleared the regulatory ground for a prostate-cancer WSI classifier with specific stain and scanner constraints, which Galen could adopt.
Post-market surveillance looks different. Under OHT7, post-market expectations focus on analytical drift, assay-level adverse events, and lot-traceable corrections. For software that can silently shift as scanners and stains evolve in the field, this is a meaningful operational cost, not a paperwork exercise.
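As a sketch of what watching for that silent shift can look like operationally, the snippet below compares the model-score distribution from a recent production window against a frozen reference window using a two-sample Kolmogorov-Smirnov test; the distributions, window sizes, and alert threshold here are all illustrative, not regulatory guidance.

```python
import numpy as np
from scipy.stats import ks_2samp

rng = np.random.default_rng(seed=0)
reference_scores = rng.beta(2.0, 5.0, size=5_000)   # scores frozen at clearance
production_scores = rng.beta(2.6, 5.0, size=1_000)  # e.g., after a scanner update

statistic, p_value = ks_2samp(reference_scores, production_scores)
if p_value < 0.01:
    print(f"Score distribution shifted (KS={statistic:.3f}, p={p_value:.1e}); "
          "check upstream stain/scanner changes before trusting outputs.")
```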
The practical lesson for a would-be pathology-AI company is straightforward and easy to miss: if your mental model for "ship an AI medical device" was built on radiology SaMD examples, you will prepare the wrong submission. The agency's framing is that your algorithm is a component of a diagnostic assay, not a standalone software product that happens to read a medical image.
The Scale Gap Between Radiology AI and Pathology AI 🔗

The quantitative gap between radiology AI and pathology AI is the statistic most worth keeping in mind whenever anyone talks about "AI in healthcare" as a single market. The FDA's list of AI/ML-enabled medical devices, taken in its publicly available form, contains hundreds of radiology SaMD authorizations, enough that year-in-review articles routinely count by specialty (cardiology alone had roughly ninety in 2025). The pathology WSI AI column of the same list contains seven.
Several structural reasons drive this gap, and they compound.
Data standardization. DICOM has been the lingua franca of radiology images for decades. Pathology's equivalent standard, DICOM WSI, is technically mature but adoption is still partial, and the de facto file formats are proprietary: Aperio SVS, Hamamatsu NDPI, Philips iSyntax, 3DHistech MRXS, Leica SCN, and so on. A radiology startup can train on open datasets aggregated across hospitals with modest engineering. A pathology startup has to build format-conversion and color-normalization tooling before it can train anything, and must document that pipeline for the FDA.
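As one concrete example of that tooling, the open-source OpenSlide library abstracts several of the proprietary formats named above (SVS, NDPI, MRXS, and SCN among them; Philips iSyntax requires a separate SDK). A minimal sketch, with an illustrative file path:

```python
import openslide

slide = openslide.OpenSlide("specimen.svs")

# Vendor and microns-per-pixel metadata vary by format; normalizing them is
# part of the pipeline the FDA expects documented.
vendor = slide.properties.get(openslide.PROPERTY_NAME_VENDOR)
mpp_x = float(slide.properties.get(openslide.PROPERTY_NAME_MPP_X, "nan"))

# Read one 512x512 RGBA tile at the highest-resolution pyramid level.
tile = slide.read_region(location=(10_000, 10_000), level=0, size=(512, 512))

print(vendor, mpp_x, slide.level_count, slide.level_dimensions[0], tile.size)
slide.close()
```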
Infrastructure penetration. Radiology departments went fully digital by the mid-2000s. Pathology remains largely glass. Even in 2026, after years of vendor investment and pandemic-accelerated digital pathology deployments, most U.S. labs still use glass slides and optical microscopes for primary diagnosis, with WSI reserved for consults, education, and selected computational use cases. A cleared WSI AI algorithm has a smaller addressable installed base than a cleared radiology SaMD.
Regulatory burden. The IVD framing described above raises the evidentiary cost per submission compared to a typical radiology CADe/CADx, where cleared software predicates are abundant and the review conventions are well worn. For pure-software startups with limited wet-lab infrastructure, this often translates into expensive multi-site clinical studies that would not be required of an analogous radiology product.
Reference-standard cost. Ground truth for radiology AI often comes from existing reads, existing reports, and retrospective cohorts. Ground truth for pathology AI frequently requires expert pathologist re-review of large slide sets, sometimes multiple pathologists per slide to capture inter-reader variability. That is slow and expensive in a way that is not comparable to collecting radiology labels.
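A toy sketch of that ground-truthing arithmetic: three pathologists read each slide, consensus comes from majority vote, and mean pairwise Cohen's kappa quantifies the inter-reader variability the submission has to characterize. The reads below are fabricated toy data.

```python
from itertools import combinations

import numpy as np
from sklearn.metrics import cohen_kappa_score

# reads[r][s] = reader r's binary call on slide s (3 readers x 8 slides, toy data)
reads = np.array([[1, 0, 1, 1, 0, 1, 0, 0],
                  [1, 0, 1, 0, 0, 1, 0, 1],
                  [1, 1, 1, 1, 0, 1, 0, 0]])

consensus = (reads.sum(axis=0) >= 2).astype(int)  # majority of three readers

pairwise_kappas = [cohen_kappa_score(reads[a], reads[b])
                   for a, b in combinations(range(len(reads)), 2)]

print("consensus ground truth:", consensus)
print("mean pairwise kappa:", round(float(np.mean(pairwise_kappas)), 3))
```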
Taken together, these factors help explain not only the quantitative gap but also why the gap has not closed quickly despite obvious clinical demand and active venture-backed competition.
The Innovators Actually Clearing the IVD Bar 🔗
Given everything above, the list of companies that have brought true computational pathology through the FDA is short, concrete, and worth looking at head-on.

Paige.AI (Paige Prostate, DEN200080, September 2021) is the first and still the archetypal case: a pure-software company that secured a De Novo authorization for a WSI-based prostate cancer detection algorithm and in the process created the QPN product code. The decision summary made the software-as-IVD framing explicit. Scanner and staining claims were specified, and performance was characterized across multiple institutions and pathologist readers.
Ibex Medical Analytics (Galen Second Read, K241232, January 2025) is the first 510(k) to use QPN as a predicate. It is an important proof-of-concept for the pathway Paige opened: that once a De Novo establishes a regulatory slot, subsequent entrants can come through 510(k) substantial equivalence, provided they respect the scanner/stain constraints and match performance.
Hologic (Genius Digital Diagnostics System with Genius Cervical AI, DEN210035, January 2024) did for cytology what Paige did for histology, creating the QYV code for cervical cytology AI. Because cervical cytology screening is a high-volume, well-structured clinical workflow, this clearance has the potential for broader immediate adoption than the histology WSI De Novos.
Artera (ArteraAI Prostate, DEN240068, July 2025) is the first prognostic, rather than purely diagnostic, WSI AI De Novo. Instead of detecting cancer on a slide, ArteraAI predicts patient outcomes (specifically, treatment benefit from androgen deprivation therapy in localized prostate cancer) from a biopsy slide plus clinical data. The SFH product code it created opens the door for future prognostic and predictive WSI algorithms, a category that until 2025 existed essentially only in academic publications.
Tempus AI (xR IVD, K241868, September 2025) and Biocartis (Idylla CDx MSI Test, P250005, August 2025) anchor the molecular side of the modern pathology AI cohort. xR IVD is a genomic profiling panel brought from research use into the IVD framework, cleared under the PZM product code. Idylla CDx MSI is a PMA companion diagnostic for microsatellite instability detection.
What the Snapshot Suggests About the Next Two Years 🔗
As of April 28, 2026, three patterns seem worth watching.
The De Novo scaffolding phase is mostly finished for core WSI use cases. With QPN (diagnostic WSI cancer detection), QYV (cervical cytology AI), and SFH (prognostic WSI) established, and with the first 510(k) already using QPN, the next wave of WSI submissions is much more likely to come in through 510(k) rather than De Novo. That should, in principle, shorten the median review timeline for follow-on products, because the novelty review burden transfers to the predicate.
Molecular IVDs will continue to dominate the pathology AI volume story even as WSI gets more press. Six of the ten 2025 authorizations, and the two PMA-class decisions, came from molecular IVD developers. This is the part of pathology AI that is already paid for by reimbursement and already integrated into oncology treatment decisions. Expect this column to keep growing regardless of how quickly WSI adoption progresses.
The quiet 2026 start is a data point, not a trend yet. Two hematology clearances through mid-April, after ten authorizations in 2025, could reflect nothing more than post-peak queue dynamics. But it is also a useful reminder that pathology AI's cadence is driven by a small number of active developers, and a single quarter without a major WSI or cytology submission can visibly change the annual curve in a category this small.
The broader takeaway from this snapshot is that pathology AI is now definitively a real category rather than a demonstration-project category. There are fifty-one cleared devices, eight active modern innovators, and three established product codes under which new entrants can come in through 510(k). But it is also, as of April 2026, a category whose cleared WSI volume is still roughly one-tenth of what a single active radiology specialty produced last year. Pathology AI is not behind because its developers are slower; it is behind because the regulatory, infrastructural, and data problems are genuinely harder. Anyone entering the space in the next eighteen months should plan accordingly.
References 🔗
[1] FDA Device Explorer by Innolitics. Authorization records for AI/ML-flagged pathology-relevant devices (panels PA, Pathology, Medical Genetics, HE, MI, Immunology; product codes QPN, QYV, SFH, QRF, OIW, NOT, NQN, NYI, PQP, PJG, PHP, PZM, SFL, JOY, GKZ, POV, GKF, OUY, PSY, QKQ). Data retrieved April 28, 2026. https://fda.innolitics.com
[2] Paige Prostate De Novo decision letter, DEN200080 (September 21, 2021). https://www.accessdata.fda.gov/scripts/cdrh/cfdocs/cfpmn/denovo.cfm
[3] Hologic Genius Cervical AI De Novo decision letter, DEN210035 (January 31, 2024). https://www.accessdata.fda.gov/scripts/cdrh/cfdocs/cfpmn/denovo.cfm
[4] ArteraAI Prostate De Novo decision letter, DEN240068 (July 31, 2025). https://www.accessdata.fda.gov/scripts/cdrh/cfdocs/cfpmn/denovo.cfm
[5] Ibex Galen Second Read 510(k) summary, K241232 (January 24, 2025). https://www.accessdata.fda.gov/scripts/cdrh/cfdocs/cfpmn/pmn.cfm
[6] Tempus AI xR IVD 510(k) summary, K241868 (September 19, 2025). https://www.accessdata.fda.gov/scripts/cdrh/cfdocs/cfpmn/pmn.cfm
[7] Biocartis Idylla CDx MSI Test PMA approval, P250005 (August 15, 2025). https://www.accessdata.fda.gov/scripts/cdrh/cfdocs/cfpma/pma.cfm
Looking for a medical device software partner? 🔗
If you are building any of the products described in this article, or thinking about it, this is the work we do.
Innolitics is an engineering and regulatory consultancy for medical device software. We build the software, write the submission, run the validation, and stand up the quality system. Most of our clients hire us for some combination of those four. The ones who want to move faster than the competition hire us for all of it, end to end, from a product spec to an FDA letter.
What we actually do for digital pathology and IVD-software clients:
- Build the software. Whole-slide image pipelines, color normalization, scanner abstraction layers, model training and evaluation infrastructure, deployment, monitoring, and the IEC 62304-aligned software lifecycle that the FDA expects to see behind it. We have shipped production code for FDA-regulated AI/ML software.
- Run the regulatory strategy. Product-code analysis, predicate selection, Pre-Sub planning, intended-use scoping, De Novo vs. 510(k) vs. PMA decision support, CLIA categorization planning, and CDx pathway scoping when a therapeutic linkage is in play. For pathology AI specifically, we know the QPN/QYV/SFH predicate landscape and how the scanner/stain constraints cascade into a label.
- Author the submission. 510(k)s, De Novos, and PMA modules. Analytical performance plans built against CLSI (EP05, EP09, EP17, EP25, MM-series), clinical validation protocols, reader study designs where applicable, software documentation, cybersecurity packages, and PCCPs for model updates.
- Stand up the quality system. ISO 13485 / 21 CFR 820 / QMSR-aligned QMS for software-first companies, with the IVD-specific design controls (lot release, stability, traceability, reference materials) layered on for OHT7 submissions.
We work with software-first startups that are entering pathology, IVD, or radiology for the first time and need someone who can speak both ML and CLSI in the same meeting. We work with established device companies that need engineering capacity on a regulated build. And we work with regulatory teams who have a clinical strategy but need a partner to execute the software build and the submission together so the artifacts actually line up.
If any of that maps to a project on your desk, reach us at innolitics.com/contact. The first conversation is a working call, not a sales call: bring your intended use, your scanner or instrument context, and your timeline, and we will tell you what we think the cleanest path looks like.
