Beyond the Pilot: What It Actually Takes to Scale AI in Health Systems
AI is everywhere right now: every conference, every vendor pitch, every strategic planning session. For the past few years, much of the conversation has centered on what AI could unlock for health systems. That framing is starting to change.
At The Health Management Academy's Chief Supply Chain Officer Forum in Scottsdale last week, the tone had shifted noticeably. We heard less of “look what AI is capable of” and more of “here’s where organizations have seen real impact, where they have struggled, and what it actually takes to scale.”
Five themes stood out, and they extend well beyond the supply chain.
The bar has moved from "soft ROI" to measurable financial impact
Health systems have moved past the experimentation phase. Leadership is no longer willing to fund pilots that can only show operational improvements without a clear line to financial outcomes.
As one speaker put it:
“If you are only achieving soft ROI, it’s not enough. You can’t base a scalable AI pilot anymore off of soft ROI.”
But many pilots were not built with that standard in mind. Instead, they relied on indirect or operational metrics as proxies for success. The result, as another panelist noted, is that “only 1 in 10 AI pilots delivered actual labor cost reduction or direct revenue improvement.”
The organizations making progress are approaching this differently. They are choosing partners that support co-development, embedding finance earlier, and being more deliberate about how value is measured and communicated across the organization.
Our work with Northwell Health came up in this context, as an example of how supply capture and utilization improvements can be tracked alongside finance from day one to draw a clearer line to financial performance. More on this partnership is covered in this Becker’s piece and recent OR Manager webinar.
Leadership and frontline teams are often not on the same page about AI readiness
One of the more striking data points from the forum was a readiness gap between executives and the people doing the work. Leadership rated organizational AI readiness around 8.5 out of 10. Frontline staff reported something closer to a 3.
That gap is usually a design problem before it's a communications problem. When people closest to the work aren't involved in shaping a solution, the friction shows up at the point of adoption.
This is why co-development has been central to how AssistIQ builds. Every major product feature has been developed with a health system partner embedded in the process from the beginning, not as beta testers. Frontline teams can articulate where the real friction lives in ways that leadership-level interviews rarely surface.
For clinical and operational teams, the more effective message is about patients, not financial ROI
There's a real tension here: hard ROI is the right bar for leadership decisions, but it's often the wrong message for the people who have to change how they work.
The framing that came up in the room:
“Address those fears through communications on workload, on patient care… and not on ROI.”
This does not mean obscuring the business case. It means recognizing that clinicians and supply chain staff care most about improving patient outcomes. Quality and patient care will remain the center of gravity for these stakeholders, which is a reason to frame total ROI as including both financial and care benefits.
Role redesign should be a deliberate process, not a byproduct of tool adoption
As AI compresses routine tasks such as reconciliation, documentation, and coordination, job roles are already changing. The risk isn't change itself; it's when organizations layer new tools onto existing structures and let the resulting shifts happen without intention.
A sentiment that came up repeatedly at THMA was that role redesign often “happens to us, not by us.” Organizations approaching this more deliberately are defining what high-value work looks like in an AI-assisted environment (decision-making, exception handling, cross-functional judgment) and actively building toward it rather than reacting after the fact.
Governance works best when it's built in, not bolted on
As AI adoption accelerates, governance has to be part of the process from the start, not a checkpoint at the end. Data security, regulatory alignment, ethical considerations, operational performance tracking — the organizations scaling most confidently treat these as enabling infrastructure, not as gates to get through.
There's no shortage of conversation about what AI can do. What this forum reinforced is how much depends on the organizational conditions around it: the partnerships, the measurement discipline, and the deliberate choices about change management that separate a promising pilot from something that actually scales.
Ready to see how AssistIQ can help your health system protect revenue and strengthen workflows?
Connect with our team