Language-First Interoperability in Healthcare: A Promising Shift or a Risky Shortcut?

Alexandr Pihtovnicov

Delivery Director at TechMagic. 10+ years of experience. Focused on HealthTech and digital transformation in healthcare. Expert in building innovative, compliant, and scalable products.

Krystyna Teres

Content Writer. Turning expert insights into clarity. Exploring tech through writing. Deeply interested in AI, HealthTech, Hospitality, and Cybersecurity.

Health data exchange is more structured than ever. But often, it’s still painfully slow.

Standards like HL7 FHIR were designed to fix that. And in many ways, they have. FHIR enables powerful, structured, API-based sharing of health data. It’s modern, flexible, and gaining traction globally.

But here’s the catch: FHIR alone doesn’t guarantee fast or easy interoperability.

Before anyone can start exchanging data, there’s still a long setup process: agreeing on data formats, building implementation guides, mapping systems, and validating use cases. It’s not uncommon for this pre-coordination to take months or even years, especially when multiple institutions, vendors, and legacy systems are involved.

And during that time?
Patients wait. Insights stall. Small providers get left out. Everyone’s still faxing.

That’s where a new concept comes in: Language-First Interoperability (LFI), introduced by Mark Kramer and May Terry in a recent article.

LFI doesn’t try to replace standards like FHIR. Instead, it aims to make them easier and faster to use by adding a layer of real-time negotiation.

The concept: let AI agents use natural language to coordinate data exchanges, figure out what’s needed, and then apply standards like FHIR under the hood to carry them out.

It’s a big shift. One that could reduce time-to-value, lower the entry barrier for smaller players, and open data faster without sacrificing structure or safety.

But as we saw in Mark Kramer’s recent LinkedIn post, not everyone is convinced. The concept sparked a wave of comments: some hopeful, some skeptical, all worth listening to.

Before we dive into that debate, let’s get clear on what LFI really is.

Key Takeaways

  • Language-First Interoperability (LFI) doesn’t replace standards like FHIR. Instead, it accelerates their use. It lets AI agents negotiate data needs in real time.
  • LFI lowers the technical barrier and could help small clinics and under-resourced providers join data exchange networks faster.
  • LFI promises faster setup, audit trails, and more human-like workflows, but success depends on strong protocols, governance, and documentation.
  • Natural language is messy; without clear guardrails, AI agents could misinterpret critical medical context and undermine trust in automation.
  • Experts stress that semantic alignment is still required. AI may help reach a consensus quicker, but agreed-upon foundations can’t be skipped.
  • MITRE pilots and startups are already exploring LFI, with proof-of-concepts showing how multi-agent, protocol-driven systems could make referrals and other exchanges faster and more reliable.

What Is Language-First Interoperability?

Basically, Language-First Interoperability (LFI) is a mindset shift.

Instead of forcing every organization to pre-agree on every detail of a data exchange, LFI lets AI agents handle the coordination in real time, using natural language.

Here’s what that means in practice

Right now, if a healthcare provider wants to share patient data (for a referral, a clinical trial, a research project), the process usually starts with weeks of meetings and technical alignment.

Everyone needs to agree on how data is structured, what fields to include, how to send it, and how to receive it. Only then can the data start to move.

LFI flips that around.

With LFI, intelligent agents could talk to each other like clinicians do:

“I need to refer a patient to your cardiology team.”
“Here are our intake criteria.”
“Here’s the data you asked for.”

Simple. Direct. Automated.

Under the hood, those agents would still use trusted standards like FHIR to deliver structured data, but they wouldn’t need to wait for months of prep work before they could act. Instead, they’d negotiate what's needed on the fly and handle the exchange in a way both systems understand.
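To make that "negotiate first, standardize after" flow concrete, here is a minimal sketch in Python. It is illustrative only: `negotiate_referral` and its hard-coded negotiation outcome are hypothetical stand-ins for an LLM-driven agent, though the emitted payload follows the shape of a real FHIR R4 ServiceRequest, the standard resource for a referral.

```python
# Hypothetical sketch of an LFI exchange: the negotiation happens in
# natural language, but the final payload is a standard FHIR resource.
# The function name and message strings are illustrative, not part of
# any real LFI specification.

def negotiate_referral(request_text: str, intake_criteria: dict) -> dict:
    """Toy 'agent' that maps a plain-language referral request onto the
    fields the receiving system says it needs, then emits FHIR."""
    # In a real system an LLM would interpret request_text; here we
    # hard-code the outcome of that negotiation for illustration.
    agreed_fields = {field: None for field in intake_criteria["required"]}
    agreed_fields.update({"reason": "suspected arrhythmia"})

    # Bootstrap into FHIR: ServiceRequest is the R4 resource used to
    # represent a referral order.
    return {
        "resourceType": "ServiceRequest",
        "status": "active",
        "intent": "order",
        "code": {"text": "Cardiology referral"},
        "reasonCode": [{"text": agreed_fields["reason"]}],
    }

referral = negotiate_referral(
    "I need to refer a patient to your cardiology team.",
    intake_criteria={"required": ["reason"]},
)
print(referral["resourceType"])  # ServiceRequest
```

In a production system the negotiation step would be an actual model call, and the resulting resource would be validated against the relevant FHIR profile before transmission.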

It’s not breaking standards, it's speeding them up

That’s what Mark Kramer and his colleague May Terry argue in their original article.

They describe LFI not as a replacement for standards like FHIR, but as a conversational layer that makes those standards more usable in the real world.

The inspiration comes from how real healthcare teams already work.

Doctors don’t exchange XML files. They talk. They explain. They clarify. Then they follow up with the documentation. LFI tries to recreate that flow – just between machines.

It’s an idea with a lot of potential. But as the conversation under Mark’s post showed, it also raises big questions.

Let’s explore both sides.

Why It’s Exciting: The Potential Benefits of LFI

Plenty of people in the health tech space got genuinely excited about Language-First Interoperability, and for good reason.

If done right, LFI could make healthcare data exchange faster, fairer, and more human. Here’s what stood out from the community conversation under Mark Kramer’s post:

1. It saves time – a lot of it

Building custom integrations takes months. Sometimes years.

That’s time providers could spend actually using the data – not just setting up how to share it.

With LFI, agents could skip the upfront coordination and negotiate data needs in real time, removing the need to write yet another 80-page implementation guide.

💬 “Encouraging machines to be able to collaborate with machines will let humans focus on the problems humans do best.”
– Bradley Hague, Science Writer

2. It lowers the barrier for smaller players

Big health systems can afford interoperability teams.
Small clinics, urgent care providers, school health offices? Not so much.

Because LFI works through natural language, you don’t need deep technical expertise to participate. The interface becomes a conversation, not a config file.

💬 “This could finally democratize healthcare data exchange.”
– Mark Kramer

3. It mirrors how humans already work

When doctors need to move fast, they don’t open up a schema doc. They call each other.

LFI tries to replicate that flow between systems, so agents can share intent, clarify context, and adjust on the fly.

💬 “Medical professionals spend years learning to communicate patient stories with nuance. Forcing everything into dropdown codes doesn’t reflect that reality.”
– Stijn De Saeger, R&D Lead at ibis.ai

4. It keeps standards in the loop – just at the right layer

One thing LFI doesn’t do is ignore structure. Once the agents agree on what’s needed, they can use FHIR or other standards to deliver the data in a reliable, auditable way.

It’s not chaos. It’s coordination, with flexibility up front and rigor behind the scenes.

💬 “Language-first interop could indeed be powerful for bridging gaps between standardized systems, but I'd argue we still need that foundational layer of agreed-upon meanings. The real opportunity might be using AI to accelerate and democratize the standardization process itself.”
–  Axel Vanraes, Tech Co-founder at Tiro.health

💬 “Use natural language for discovery and selection, then bootstrap into FHIR for the actual exchange.”
– Abigail Watson, AI Generalist

5. It creates an audit trail – automatically

Because AI agents interact through dialogue, those conversations can be logged, audited, and even reviewed to detect fraud or gaps in care coordination.

That’s something traditional data exchange doesn’t always capture: the “why” behind the “what”.
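One way to picture that automatic audit trail: because every exchange is a dialogue, each turn can be captured as a structured log entry. This is a hypothetical sketch, not a real LFI feature; the entry fields (`timestamp`, `speaker`, `utterance`) are assumptions.

```python
# Illustrative only: logging each turn of an agent-to-agent exchange so
# the "why" behind a data transfer can be reviewed later. The log
# structure is an assumption, not a standard.
import json
from datetime import datetime, timezone

audit_log = []

def log_turn(speaker: str, utterance: str) -> None:
    # Record who said what, and when, for later audit or fraud review.
    audit_log.append({
        "timestamp": datetime.now(timezone.utc).isoformat(),
        "speaker": speaker,
        "utterance": utterance,
    })

log_turn("referring-agent", "I need to refer a patient to your cardiology team.")
log_turn("receiving-agent", "Here are our intake criteria.")

# The full conversation can be serialized for later review.
print(json.dumps([turn["speaker"] for turn in audit_log]))
```

A real deployment would also need tamper-evident storage and retention rules that satisfy HIPAA audit requirements, which this sketch does not address.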

6. It’s not just talk – it’s being tested

MITRE has already announced plans for pilot projects and a connectathon to explore LFI in real-world settings. Other startups (like Monsana) are building similar ideas into their products.

💬 “We’ve implemented a basic version of this. Our agent checks trial eligibility and facilitates outreach between doctors and research sites.”
– Simeon Devos, CTO at Monsana.AI

Bottom line

LFI could reduce the cost of interoperability, shrink timelines, and open doors for providers and patients who are often left out.

But as we’ll see next, not everyone thinks it’s the right path forward.

Note: While this article was being written, May Terry published a follow-up piece with technical details on a live proof-of-concept.

Why It’s Complicated: The Pitfalls of LFI

For all its promise, Language-First Interoperability isn’t a silver bullet.

If anything, the reaction to Mark Kramer’s post made one thing clear: Natural language opens the door, but it also opens a can of worms.

Here are the top risks and open questions that came up both from commenters and the broader context of working in health IT.

1. Natural language is messy – and medicine doesn’t leave room for misinterpretation

AI agents talking in plain language sounds simple. But in healthcare, ambiguity can cost lives.

Words carry context, assumptions, and even local slang. That’s tricky for machines and risky when the stakes are clinical.

💬 “Normal language leaves too much room for ambiguity and misunderstanding… You’d need some kind of guidance or semantic framework to make this safe.”
–  Samuel GWED, Clinical Research Director

💬 “Natural language navigation fails in healthcare all the time. People get lost until someone hands them a flowchart.”
–  Abigail Watson, AI Generalist

Expert POV: Without strong grounding (via ontologies, world models, or domain-specific guardrails), LFI agents could misunderstand queries, miss critical context, or introduce errors into workflows.

2. Standards exist for a reason – they prevent chaos

Some see LFI as a threat to standardization.

If every agent interprets things slightly differently, are we introducing new fragmentation in the name of flexibility?

💬 “Let’s not forget why standards like FHIR exist: they provide clear semantic agreement.”
–  Axel Vanraes, Tech Co-founder at Tiro.health

💬 “Codes may feel rigid, but they serve a purpose. They make data computable, consistent, and safe to analyze.”
–  Multiple contributors

But that’s not the end of the story.

💬 “Even human-defined terminologies are vague. Many have no formal definitions or context. And workaround translations by individual engineers often become invisible ‘truth’ without scrutiny.”
–  May Terry, Principal, Health Informatics at MITRE

In other words, standards can feel solid but still leave room for ambiguity. The difference is, their complexity is hidden in implementation layers, not out in the open.

Expert POV: LFI shouldn’t bypass standards, but it might actually help improve them. As May Terry pointed out, AI agents could detect mismatches, surface local terminology variants, and propose more grounded harmonization over time. It’s not about avoiding standardization, but accelerating and improving it through real-world context.

3. We still need a shared foundation, no matter how smart the agents get

Even supporters of LFI agree: you can’t skip semantic alignment.

💬 “Each healthcare organization has its own dialect. AI agents might help bridge gaps, but we still need a foundation of shared meaning.”
–  Axel Vanraes

Expert POV: Natural language can ensure flexibility, but only when grounded in agreed-upon semantics. The opportunity may be less about skipping alignment and more about using AI to reach it faster.

4. It’s not just an engineering problem – it’s a workflow problem

Even if LFI works technically, how will it fit into real healthcare operations?

  • How do EHRs handle agent-driven exchanges?
  • Who’s responsible when something goes wrong?
  • Will this add friction instead of reducing it?

💬 “We need to make sure this integrates with how people actually work. Not just how we wish they would.”
– Implied across multiple responses

Expert POV: Without strong UX design and operational alignment, LFI could become just another layer of complexity. Clinicians need confidence. Not another tool they have to fight with.

5. Compliance and accountability aren’t optional

Every health data exchange must comply with HIPAA, local privacy laws, and audit frameworks. That’s easy to validate with structured APIs. But can we really trust generative systems to follow the rules every time?

💬 “How would you encourage data giants, like Epic, not to fight this but to collaborate?”
–  Brian Bush, AI & Health Researcher

Expert POV: LFI agents need to be auditable, certifiable, and explainable. Without that, adoption could hit a wall. Not just technically, but legally.

6. LFI could widen the trust gap – not close it

While the vision is to make data exchange feel more human, many stakeholders are already cautious about automation.

Adding another layer of AI, especially one that makes decisions in real time, could erode trust unless the system is fully transparent.

💬 “LLMs inherit the biases and ambiguities of their training data, and each healthcare organization operates with its own 'dialect' of medical terminology and workflows.”
–  Axel Vanraes, Tech Co-founder at Tiro.health

💬 “I’ve never seen two people give up on language and start talking in codes. But I’ve seen people lose trust in automation that doesn’t explain itself.”
– Mark Kramer, responding to concerns

Expert POV: Trust in automation is fragile, especially in healthcare. Transparency, oversight, and clear human-in-the-loop moments will be essential to building confidence in LFI workflows.

In short

LFI is full of potential. But it’s not “set it and forget it.”

Without strong checks, clear roles, and real-world testing, it could add more confusion than clarity.

The key isn’t to reject it. It’s to design it responsibly.

And that starts with listening to everyone it affects, including engineers, policymakers, and clinicians who actually move data every day.

Expert Takes: What the Community Thinks

The LFI concept is controversial. That’s why many in health tech are eager to reflect on what Language-First Interoperability could mean (and whether it’s ready for the real world).

One voice we’re especially excited to include in this conversation is Axel Vanraes, Tech Co-founder at Tiro.health. Axel joined the discussion under Mark Kramer’s post and raised a core concern shared by many in the industry: natural language is powerful, but also prone to ambiguity.

In healthcare, where precision matters and lives are on the line, can AI truly negotiate safely without predefined semantics?

We asked Axel to expand on his thoughts, and here’s what he said:

“Standards exist to make data meaningful. That means both sender and receiver agree on what the data actually represents. The challenge today is that reaching this consensus takes months or years before implementation can even begin.”

According to Axel, large language models could ease that burden by surfacing commonalities across systems and synthesizing functional requirements into workable protocols. But success depends on one crucial element: good documentation.

“If system B needs data from system A, we need a clear context on what functional goals system B is trying to achieve. Without that, agents can’t negotiate effectively.”

Axel also sees a role for AI in bridging the “dialects” each healthcare organization speaks. Instead of treating every variation as a barrier, intelligent agents could sift through standards and specs to spot overlaps and even propose where they might converge. With the right oversight and tooling, this could turn today’s patchwork of standards into something more coherent and adaptive.

Looking ahead, he points to code generation as one of the biggest opportunities.

“If AI can reduce the cost of producing and updating code to near zero, adopting new standards or versions becomes dramatically easier. And in the longer run, AI could even help evolve the standards themselves based on feedback from the field.”

So, Is Language-First Interoperability the Future?

It might be.

Don’t expect it to work miracles. And it’s definitely not a fit for every use case.

But Language-First Interoperability is starting to surface a real shift: a way to speed up data exchange without throwing out the structure healthcare relies on.

Instead of months spent on setup and alignment, AI agents could help systems coordinate in real time. Natural language becomes the entry point. Standards like FHIR still carry the weight.

It’s early days. There are serious questions around safety, compliance, and scale. But it’s a conversation worth having because faster, smarter, and more accessible data exchange could open huge wins for care delivery, research, and health equity.


If you're exploring ways to bring AI into your digital health solution safely, our team can help. Contact us to start a conversation!

FAQs

  1. Is Language-First Interoperability meant to replace FHIR?

    No. LFI is designed to complement standards like FHIR, not replace them. It adds a natural language layer on top, so AI agents can coordinate data needs dynamically, then use existing standards to exchange data in a structured, compliant way.

  2. What makes LFI different from traditional interoperability?

    Speed and flexibility. Traditional interoperability requires extensive pre-alignment. Often, months of planning and documentation. LFI skips that by letting intelligent agents negotiate data needs in real time, reducing delays and lowering barriers for smaller players.

  3. What are the risks of using natural language for data exchange?

    Ambiguity and safety concerns. Natural language can be vague or misinterpreted by machines. In healthcare, that’s a serious risk. That’s why LFI needs strong safeguards, clear fallback standards, and careful integration into clinical workflows.
