This post was written by Rachel Witalec, Redox’s Chief Product Officer.
We’ve always talked about interoperability.
For more than a decade, it’s been the rallying cry of healthcare technology. Connect the systems. Break down the silos. Unlock the data. We built interfaces, stood up engines, joined networks, connected to HIEs, and invested in national frameworks, all in pursuit of one goal: making sure data could reliably move from point A to point B.
And that work mattered. It still does. But what I am seeing in the market right now is that while the word interoperability has stayed the same, what we expect it to deliver has fundamentally changed. Connectivity alone is no longer enough. Today, interoperability means something much more ambitious. To truly make the most of our data we need more than the ability to move it; we need the ability to unify, orchestrate, and activate it. And we need to do this across the entire ecosystem, including both internal and external data systems.
That is a very different standard.
Everyone wants AI. Few have the foundation.
In nearly every conversation I have with health tech leaders, AI comes up. Clinical summarization. Prior authorization automation. Risk prediction. Workflow agents. Revenue cycle optimization. The ambition is real, and the pressure is, too.
But here’s the uncomfortable truth: AI is not the foundation. AI is an application layer that sits on top of data. It consumes data, interprets it, and generates output. If the underlying data is fragmented, delayed, poorly normalized, or missing context, the AI will reflect that.
Fragmented data in. Fragmented intelligence out. We cannot reason our way out of bad data architecture with better models. And even strong data engineering is not enough. AI does not just need access to data. It needs context. It needs to understand which data matters in a given moment, how that data relates across systems, and what action should follow.
This is where the value is shifting.
The real leverage is moving into what I think of as the composability layer: the invisible logic that determines what context to pull, which tools to call, how to orchestrate across systems, and when to act. That layer depends on a coherent data foundation. Without it, AI remains a disconnected feature rather than an integrated capability.
Yet, many organizations are still treating interoperability as a series of integration projects instead of as core infrastructure. Connect this partner. Launch that feed. Stand up another FHIR endpoint.
Project by project, the surface area grows. The architecture underneath does not. It just becomes more fragile.
Then: Moving data between systems
Historically, interoperability meant movement.
- Can we send an HL7 message?
- Can we receive a CCD?
- Can we stand up a FHIR API?
If the answer was yes, the project was deemed successful because the integration worked, the box was checked, and the interface was live. But simply moving data from one system to another is not the same as truly orchestrating it.
Data could move and still be inconsistent.
It could be present and still not be actionable.
It could exist in five places and still not tell a coherent story.
That definition of interoperability was sufficient when the goal was connectivity. It is not sufficient when the goal is intelligence.
Now: Orchestrating data across an ecosystem
Today, interoperability means something much more ambitious. Enterprise interoperability creates a connected environment where data can move across an entire ecosystem and teams can work from the same data foundation.
This is not just about external connections. It is equally about what I would call intraoperability: how data flows and aligns within your own organization.
Intraoperability is often the harder problem. Standards like FHIR exist for external exchange, but mapping a custom internal CRM to a legacy EHR is where AI projects usually go to die. If your teams cannot access a consistent view of patient, operational, and financial data internally, layering AI on top will not fix that fragmentation. It will amplify it.
A modern data layer must support both:
- Interoperability (The Bridge): Connecting your systems to the outside world
  - This is your standardized integration with external systems: EHRs, national networks, payers, providers, etc. It’s about connectivity methods, integration expertise, compliance, and ecosystem reach to get the data you need into your system.
- Intraoperability (The Nervous System): Making your internal systems work as one
  - Think of this as the “connective tissue” that transforms your internal data into a ready state for your organization based on your definitions. It’s about transforming and activating internal data across systems into a unified and usable state. It is focused on data quality, proprietary logic, AI readiness, and operational efficiency.
And increasingly, it must expose that data and functionality in a structured way that AI agents can interact with, whether through APIs, MCP servers, or other governed tool layers.
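What a “governed tool layer” might look like in practice: a minimal sketch, assuming a simple in-process registry rather than any specific MCP implementation. Every name here (`ToolRegistry`, `lookup_patient`, the role strings) is illustrative, but the shape captures the three governance properties named above: what agents can see, what they may do, and an audit trail of what they did.

```python
# Illustrative sketch of a governed tool layer: internal functions are exposed
# to AI agents only through a registry that enforces role-based permissions and
# records an audit trail. All names are hypothetical, not a real vendor API.
import datetime


class ToolRegistry:
    def __init__(self):
        self._tools = {}     # tool name -> (callable, set of allowed roles)
        self.audit_log = []  # every call is observed and recorded here

    def register(self, name, fn, allowed_roles):
        """Expose an internal function as a tool with an allow-list of roles."""
        self._tools[name] = (fn, set(allowed_roles))

    def call(self, agent_role, name, **kwargs):
        """Invoke a tool on behalf of an agent, enforcing and auditing access."""
        fn, allowed = self._tools[name]
        if agent_role not in allowed:
            raise PermissionError(f"{agent_role} may not call {name}")
        timestamp = datetime.datetime.now(datetime.timezone.utc)
        self.audit_log.append((timestamp, agent_role, name, kwargs))
        return fn(**kwargs)


# Hypothetical usage: a scheduling agent may look up patients; a billing agent
# attempting the same call is rejected, and the attempt never reaches the data.
registry = ToolRegistry()
registry.register("lookup_patient", lambda patient_id: {"id": patient_id},
                  allowed_roles=["scheduling_agent"])
```

The point of the sketch is the indirection: agents never touch raw systems directly, so governance and observability live in one place instead of being re-implemented per integration.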
AI does not care whether a data inconsistency is external or internal. It simply reflects what it sees.
Without both inter- and intra- alignment, AI cannot operate effectively.
And without that, you increase the load on your development team and make every integration more fragile and more likely to break.
From Feature to a New Foundation
Interoperability as Infrastructure
We are moving past the era where interoperability was a checkbox feature. Today, it is the fundamental data infrastructure required to scale the next generation of healthcare: specifically, agentic AI.
It means having a “composability layer” that is not just the way data gets exchanged, but also contains the logic that decides what context to pull, which tools to call, and when to act. In an agentic world, this layer becomes the control plane: the “active intelligence” between your raw data and your AI agents. It governs what models can see, what actions they are allowed to take, and how those actions are audited, secured, and observed.
Intraoperability: The Internal Engine
While interoperability looks outward, intraoperability is the foundation for enterprise-wide AI deployment. If interoperability is the bridge to the world, intraoperability is your internal nervous system. It is the infrastructure layer that converts fragmented internal data into a “useful shape” for:
- Intelligent Automation: Moving beyond simple triggers to complex, data-aware workflows.
- Agentic Control: Providing the “memory” and “context” internal agents need to execute tasks across departments.
- Operational Visibility: Breaking down silos so leadership has a real-time, unified view of the enterprise.
- Scalable Experimentation: Reducing the “data cleaning” tax that kills 80% of AI pilots before they reach production.
When interoperability is treated as a tactical project, organizations end up operating in a reactive mode, fixing feeds, managing edge cases, and standing up one more integration at a time. When it is treated as infrastructure, it becomes a strategic asset: a platform for innovation and a durable layer that new use cases can build upon without requiring a full re-architecture each time.
This “composability layer” enables workflows that pre-process data so it’s clean, mapped, and routed in real time.
- Performs the “Heavy Lifting”: It does the custom mapping for your specific legacy instance – handling those weird local codes and non-standard fields that usually break integrations.
- Passive Modernization: It automatically transforms that messy legacy data into FHIR and pushes it into your FHIR Store without your team lifting a finger.
- The Logic Hub: While the data is moving, the composability layer applies your custom business logic. It decides whether data is missing; if so, it triggers an automatic “fetch” to a national network (like Carequality or TEFCA) to pull the patient’s latest details and bundles them into the original order.
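The logic-hub decision above can be sketched in a few lines. This is a hedged illustration, not Redox’s implementation: `fetch_from_network` is a hypothetical stand-in for a record-locate-and-retrieve call to a national network like Carequality or TEFCA, and the required-field list is invented for the example.

```python
# Sketch of the "logic hub" step: detect missing context on an inbound order
# and enrich it from a national network before routing. Hypothetical names.

# Fields the business logic requires before an order is considered complete.
REQUIRED_FIELDS = {"patient_id", "allergies", "active_medications"}


def fetch_from_network(patient_id: str) -> dict:
    """Stand-in for a Carequality/TEFCA-style query for the patient's latest details."""
    # A real call would locate records across the network; this returns canned data.
    return {"allergies": ["penicillin"], "active_medications": ["metformin"]}


def enrich_order(order: dict) -> dict:
    """Apply custom business logic: fill any missing context, then return the order."""
    missing = REQUIRED_FIELDS - order.keys()
    if missing:
        latest = fetch_from_network(order["patient_id"])
        # Bundle the fetched details into the original order.
        order.update({field: latest[field] for field in missing if field in latest})
    return order
```

The key design choice is that enrichment happens in flight, while the data is moving, so downstream systems and agents never see a half-complete order.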
Example:
The “Intra-op” Journey of a Lab Result:
- Legacy System: A lab result is generated in a 15-year-old LIS (Laboratory Information System).
- The Composability Layer: Instantly intercepts the result, maps the non-standard local code to a LOINC code, and converts the message to FHIR.
- The FHIR Store: The result is saved, “clean and structured,” in your central repository.
- The Agentic Action: Simultaneously, the Composability Layer triggers an AI agent to check for “Care Gaps.” Finding one, it pushes a notification directly into the provider’s CRM and the patient’s EHR workflow.
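The four steps above can be sketched end to end. This is a minimal illustration under stated assumptions: the local-to-LOINC mapping table, the FHIR store and care-gap agent interfaces, and the `GLU-RND` local code are all hypothetical, and the Observation is trimmed to a few representative fields.

```python
# Sketch of the "intra-op" journey of a lab result: intercept a legacy LIS
# result, map its local code to LOINC, convert to FHIR, persist, and trigger
# an agentic follow-up. All names and codes here are illustrative.

# Hypothetical mapping from a legacy LIS's local codes to LOINC.
LOCAL_TO_LOINC = {
    "GLU-RND": ("2345-7", "Glucose [Mass/volume] in Serum or Plasma"),
}


def to_fhir_observation(legacy_result: dict) -> dict:
    """Map a non-standard local code to LOINC and emit a FHIR Observation."""
    loinc_code, display = LOCAL_TO_LOINC[legacy_result["local_code"]]
    return {
        "resourceType": "Observation",
        "status": "final",
        "code": {"coding": [{"system": "http://loinc.org",
                             "code": loinc_code, "display": display}]},
        "subject": {"reference": f"Patient/{legacy_result['patient_id']}"},
        "valueQuantity": {"value": legacy_result["value"],
                          "unit": legacy_result["unit"]},
    }


def process_lab_result(legacy_result: dict, fhir_store, care_gap_agent) -> dict:
    """Intercept -> normalize -> persist -> trigger agentic follow-up."""
    observation = to_fhir_observation(legacy_result)
    fhir_store.save(observation)      # clean and structured, in the central repository
    care_gap_agent.check(observation)  # e.g., notify the CRM/EHR if a gap is found
    return observation
```

Note that the agent never sees the 15-year-old LIS format; by the time it acts, the composability layer has already done the normalization.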
The companies that will succeed with AI in healthcare will not be the ones with the flashiest models, but the ones with the most coherent and well-orchestrated ‘composability’ layers to enable agentic intelligence.
Raising the bar
We’ll keep using the word “interoperability” because it’s the language of our industry, but we need to be honest about its new mandate.
Connection is no longer the goal; it’s the baseline. The real frontier now is intraoperability: the ability to orchestrate, trust, and activate that data internally, within your own four walls. If interoperability asks, “Can we talk?” then intraoperability asks, “Can we act?”
If you want to move from “AI pilots” to an “Agentic Enterprise,” you have to build the composability layer that powers them. The plumbing is finished. It’s time to build the engine.
Interoperability connected us and got us to the table. Intraoperability will get us to the future. That is the work ahead of us, and it’s the only work worth doing.