On February 17, 2026, the U.S. Senate HELP Committee released “Patients and Families First: Building the FDA of the Future,” a sweeping assessment of where FDA regulation stands and where it needs to go. If you build, fund, or lead a company that makes medical device software with AI, this document is the single most important regulatory signal of 2026.
Here’s why: the Senate explicitly calls out AI’s potential to “detect diseases earlier and more accurately,” names it as a priority under President Trump’s mandate to cut red tape, and outlines a vision for regulatory modernization that would fundamentally change how AI-powered devices reach patients.
This is not a routine policy paper. It's a roadmap. One that validates positions I’ve been publishing for three years and opens a window for companies willing to move now.

Senate report highlights: AI to detect diseases earlier, Trump’s mandate, small- and medium-sized companies
Who This Hits Hardest 🔗
The report's reforms disproportionately affect small- and medium-sized companies. The Senate calls this out directly. The current system wasn't designed to disadvantage smaller companies, but the unpredictability in timelines and costs hits them harder because they have less margin to absorb it.
The Senate report highlights inconsistency in the review process. This isn't surprising given the breadth of what FDA reviewers are asked to evaluate, often with limited resources and rapidly evolving technology. But the result is that review outcomes can vary depending on the team assigned, creating unpredictable timelines and costs. For small companies with limited runway, that unpredictability adds real financial risk.
Risk Calibration in an Adversarial World 🔗
The Senate doesn’t mince words: they call China “adversarial” and flag that “China has surpassed the United States as the top venue for clinical trials.” That's a significant data point for industry and regulators alike.
This comes back to risk calibration, a societal calculation, not just a regulatory one. On one hand, if you’re too risk-averse, you fall behind. Your patients wait. Your competitors in other countries iterate faster. On the other hand, if you move too fast, patients get injured.
This calibration always shifts with the political landscape. And right now, it’s shifting toward rapid iteration with incrementally more accepted risk, provided that post-market controls are in place to catch problems early.
That "provided" is doing a lot of work in that sentence. More on that shortly.
The Senate also highlights that U.S. regulatory burdens for early-phase trials are driving sponsors abroad. The recommendation: reshore early-stage clinical research by reducing unnecessary duplication and aligning risk-based frameworks across agencies.

Senate highlights: China surpasses U.S. in clinical trials, adversarial nations, reshoring research
Post-Market Surveillance: From Stick to Carrot 🔗
Here’s where the Senate report connects directly to what I’ve been writing about for years.
I raised this back in 2024 at the FDA Digital Health Advisory Committee, and again in my analysis of behavioral health De Novo outcomes and my piece on real-world evidence. The post-market system was built for a different era. It was designed to catch safety problems after approval, and it does that job. But for AI-enabled devices that improve over time, the same structure doesn't incentivize the kind of continuous data collection that would benefit both patients and manufacturers.
The Senate is signaling a different model. One where post-market data collection becomes a carrot, where robust real-world evidence generation reduces your pre-market burden, accelerates your time to market, and gives you a continuous improvement loop that benefits patients and your product simultaneously.
This is the shift I proposed in my FDA Advisory Committee testimony in November 2024: transform post-market from a compliance-focused exercise into a beneficial feedback loop. Companies that build real-world evidence infrastructure into their product, not as an afterthought but as core architecture, should get lighter pre-market review.
And this is where it gets especially important for generative AI, foundation models, and multi-indication devices.

Senate highlights: PCCPs, post-market improvements, software review challenges
FDA's Approach to AI: The Post-Market Unlock 🔗
The Senate dedicates significant attention to FDA's handling of AI and identifies several gaps in the current framework.
The current AI/ML action plan focuses on “static-model predictive AI.” The Senate notes that it “does not yet address generative AI,” a category where the “same inputs will generate different outputs with each entry.” The framework is, as the Senate puts it, “already outdated.” That's not a knock on FDA's effort. The AI/ML action plan was a meaningful step forward when it was published. The technology just moved faster than any regulatory framework could reasonably keep up with.
But here’s the signal that matters most: the Senate identifies that “generative AI poses the greatest opportunity for leveraging technology to improve patient care” and that FDA’s framework “must be nimble enough to facilitate moving these devices up or down risk classes” as they “are honed and tested in the real world.”
Read that carefully. The Senate is connecting two ideas: generative AI governance and dynamic risk classification based on real-world evidence. That connection is the unlock.
For foundation models, multi-indication devices, and generative AI systems, the traditional pre-market paradigm fails. You can’t pre-validate every possible output of a non-deterministic system. You can’t run a clinical trial for every indication a multi-indication device might serve. The pre-market burden becomes effectively unbounded. That's what happened to several behavioral health De Novo companies I analyzed, including Kintsugi, which spent $16 million without ever filing.
The alternative: shift validation to the post-market phase. Use Predetermined Change Control Plans (PCCPs) as the delivery mechanism. Pre-approve the testing protocols. Deploy. Monitor. Iterate. Reclassify based on evidence. The Senate says PCCPs hold “tremendous potential.” I believe they’ll be the mechanism that makes generative AI regulation viable.
This is also where real-world evidence becomes transformative. The behavioral health De Novo outcomes, the foundation model challenge, and the multi-indication bottleneck all share a common thread: the current framework relies heavily on pre-market evidence generation for technologies that may only be properly validated through post-deployment data.

Senate highlights: Generative AI, static-model AI, risk class flexibility, post-market
The "Ideally vs. Realistically" Problem 🔗
A lot of the issues in this report collapse into one fundamental tension.
There’s ideally, and there’s realistically. Ideally produces well-intentioned policies that apply traditional frameworks to new technology. FDA reviewers are working within the statutory authority they have, and that authority was written for a different generation of devices. The result is that frameworks designed for physical devices get applied to software, and the fit isn't always right.
Realistically means accepting that the regulatory framework for AI needs to be proportional to risk, adaptive to evidence, and fast enough to keep pace with the technology it governs. Not reckless. Proportional.
The Senate seems to support this. Their recommendations on “least burdensome” enforcement, risk-based calibration, and post-market evidence generation all point toward realistically. The distance between a Senate recommendation and an implemented FDA policy is measured in years and statutory authority, but the trade winds have been set.
The Pre-Certification Signal 🔗
Interesting that the Senate specifically calls out FDA’s precertification program. They note the pilot’s “lack of statutory authority.” It failed not because the concept was wrong, but because FDA couldn’t legally implement it.
I’m speculating here, but the Senate may be signaling willingness to grant FDA the statutory authority to make pre-certification happen. The combination of pre-certification for trusted manufacturers plus PCCPs for post-market iteration would create a genuinely new regulatory architecture. One built for software, not widgets.
The Generative AI Stalemate 🔗
The Senate identifies a gap that many in the industry have been watching: FDA does not yet have a generative AI-specific policy. That's understandable given how fast the technology is evolving and how high the patient safety stakes are. But the absence of a framework creates its own set of problems.
The Senate says it well: the regulatory framework needs to be just as agile as the thing it is regulating.
I’ve been laying the groundwork for how generative AI can be regulated through a series of published analyses: my FDA Strategy for Foundation Models, the Foundation Models FAQ and Pre-Submission Guide, and my analysis of how to get GenAI devices to market without burning your runway. These aren’t theoretical frameworks. They're practical playbooks built from real pre-submission meetings and clearance data.
The result is a kind of standstill. Industry is waiting for FDA to publish guidance. FDA, reasonably, wants to see what industry is actually building before committing to a framework. Neither side is wrong. But the cost and uncertainty of going first is a real barrier for most companies.
The companies that move forward now, demonstrating viable approaches through pre-submissions and clearance data, will help shape the emerging framework. I can help navigate that process.
CDS: The Senate Wants More 🔗
The Clinical Decision Support section of the report is telling. Congress carved CDS tools out of device regulation in the 21st Century Cures Act: software that helps clinicians “transfer, store, and convert digital formats; display data and results.” Clear statutory language.
FDA then published guidance in 2022 that takes a broader view of its oversight authority over CDS tools. FDA's concern is understandable: some CDS tools can meaningfully influence clinical decisions, and the line between "decision support" and "diagnostic" is genuinely blurry. But the Senate's assessment is that this broader interpretation may go too far: "Requiring FDA review of these tools will likely stifle and slow advancements in digital health."
Based on this language, the Senate appears unsatisfied with FDA’s very mild deregulation of CDS in 2026. They may be pushing for more. I’m speculating, but I think Congress wants FDA to develop out-of-the-box approaches to evidence generation for pre-market submissions, specifically for generative AI CDS tools that don’t fit neatly into traditional clinical trial paradigms.

Senate highlights: CDS statutory conflict, stifling digital health advancement
Vendor Decision-Making: A Familiar Roadblock 🔗
One detail worth flagging: the report notes that FDA has “struggled with reviewing cloud-hosted technologies, due to manufacturers’ lack of exclusive control over cloud vendor decision-making.”
This hits close to home. FDA's concern here is legitimate: if a manufacturer can't fully control the infrastructure their device runs on, that raises real questions about ongoing safety and security. But in practice, the lack of clear guidance on how to address cloud vendor risk created significant uncertainty in early cybersecurity reviews. We developed strategies to work through this at Innolitics, but many companies found it added months to their timelines. The Senate flagging this suggests the friction is ongoing, and that clearer guidance would benefit both FDA and industry.
The Gate Is Wide Open. It's Your Turn. 🔗
Here’s the optimistic and pragmatic takeaway.
There has never been a better time to build an AI-powered medical device software company. The Senate report confirms the policy direction. The political winds favor speed. The technology is ready. The frameworks exist (I’ve been publishing them).
And here's the part that matters most: the companies that move first don't just get regulatory advantage. They help shape the emerging framework. Their De Novos literally write the rules. Their cleared devices become predicates. Their approaches inform how the system evolves.
The Valley of Death the Senate documents, where fewer than 10% of breakthrough devices commercialize, is not inevitable. It reflects a system where the consequences of approving something harmful are more visible than the consequences of delaying something beneficial. The Senate is asking FDA to recalibrate that balance. And the companies that position now, while the architecture is being rebuilt, will own the next decade.
The gate is wide open. The question is who moves first.
Let's Talk 🔗
If your team is building AI-powered medical device software and you want to understand what the Senate's regulatory signal means for your roadmap, let's talk. I've been mapping this landscape for years, and the frameworks and pathways are published. Book time with me and I'll walk you through where you stand and the options ahead.
