Architectural considerations for FHIR submissions to US Federal Agencies

FHIR in the Federal Context

Ryan M Harrison
Oct 12, 2021 · 7 min read

Since its introduction in 2014, FHIR (Fast Healthcare Interoperability Resources) has made tremendous strides toward realizing its promise as an interoperability format.

The US Federal Government has been an active participant in the FHIR community, not a passive bystander.

  • CMS (Centers for Medicare and Medicaid Services) has been the leader in FHIR-adoption by US government agencies, driving the community forward by both building FHIR APIs for consumption and rule-making encouraging the use of FHIR.
  • ONC (Office of the National Coordinator for Health Information Technology) helps ensure the various federal and private-sector efforts don’t step on each other’s toes: architectural coordination via ONC FAST (FHIR at Scale Taskforce), data element coordination via USCDI, and regulatory coordination with the ONC Cures Rule, alongside separate (mostly harmonized) rule-making from the independent authorities of CMS and HHS OCR (Office for Civil Rights).
  • CDC has begun to play a coordination function within the public health space, starting with its July 2020 listening session. While not as mature as the ONC US Core / Da Vinci / CARIN FHIR IG (Implementation Guide) ecosystem, the MedMorph and eCR IGs are a great second step.

FHIR Consumption and Submission

There are two principal axes of APIs:

  • Consumption vs Submission
  • Individual vs Batch vs Stream processing [1]

The FHIR community has developed implementation guides covering most consumption use-cases. RESTful consumption is the most mature, followed by Flat FHIR.

In the Federal space, CMS maintains APIs for both individual and bulk consumption. The CMS BlueButton FHIR API allows Medicare beneficiaries to delegate access to their Medicare claims record to third-party apps. CMS’s three bulk data FHIR APIs — AB2D, BCDA and DPC — allow bulk FHIR exchange from CMS to specific CMS consumers.

The CDS Hooks and FHIRCast implementations are solid from an IG perspective, but neither is widely supported. I am unaware of a production US Federal government implementation of either IG.

Gap for FHIR streaming submissions for server-to-server use-cases

There appears to be a gap, in both the FHIR IG community and the Federal space, for submissions via FHIR streaming. Let us decompose “FHIR streaming” into its two primary components: a messaging format and a transport layer.

Taking HL7v2 as an example, the messaging format is the HL7v2 message envelope and the transport layer is the MLLP protocol over TCP.
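To make the HL7v2 analogy concrete, MLLP framing is simple enough to sketch in a few lines. This is a minimal illustration only; real brokers also handle ACK/NAK messages and connection reuse.

```python
# Sketch of MLLP (Minimal Lower Layer Protocol) framing, the transport
# HL7v2 uses over TCP: a vertical-tab start byte before the message,
# then a file-separator byte and a carriage return after it.
START_BLOCK = b"\x0b"   # <VT>
END_BLOCK = b"\x1c\r"   # <FS><CR>

def mllp_frame(message: bytes) -> bytes:
    """Wrap a message payload in an MLLP envelope."""
    return START_BLOCK + message + END_BLOCK

def mllp_unframe(frame: bytes) -> bytes:
    """Strip the MLLP envelope, raising if the framing bytes are missing."""
    if not (frame.startswith(START_BLOCK) and frame.endswith(END_BLOCK)):
        raise ValueError("not a valid MLLP frame")
    return frame[len(START_BLOCK):-len(END_BLOCK)]
```

Nothing in the framing itself is HL7v2-specific, which is why reusing MLLP for other payloads (as Option B below speculates) is at least conceivable at the byte level.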

For FHIR, messaging formats are more mature than messaging transport. R4 includes a messaging exchange paradigm in addition to the much more commonly used RESTful exchange. Further, IGs like eCR have profiled the base Bundle and MessageHeader resources for payloads specific to their use case — electronic (initial) case reporting. Surprisingly, FHIRCast does not use Bundle or MessageHeader for its events, relying on lightweight keys (hub.topic, hub.event) rather than full resources.
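A minimal sketch of the R4 messaging shape: a Bundle of type "message" whose first entry is a MessageHeader, followed by the payload resources. The event coding system and endpoint URL below are illustrative placeholders, not values from any published IG.

```python
# Sketch of a FHIR R4 message Bundle. The first entry must be a
# MessageHeader; subsequent entries carry the payload resources.
def build_message_bundle(event_code: str, payload_resources: list) -> dict:
    header = {
        "resourceType": "MessageHeader",
        # Illustrative event system -- a real IG (e.g. eCR) profiles this.
        "eventCoding": {"system": "http://example.org/fhir/message-events",
                        "code": event_code},
        "source": {"endpoint": "https://submitter.example.org/fhir"},
    }
    return {
        "resourceType": "Bundle",
        "type": "message",
        "entry": [{"resource": header}] +
                 [{"resource": r} for r in payload_resources],
    }

bundle = build_message_bundle("case-report",
                              [{"resourceType": "Patient", "id": "p1"}])
```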

Unlike FHIR REST exchange, where the transport layer is implied (REST over HTTP/1.1 or HTTP/2), FHIR does not specify a mechanism for streaming transport. This is an appropriate design decision, but it means that agreement on a streaming transport is left as an exercise for the community.

A near-term solution for web-based FHIR streaming: Approximate streaming with FlatFHIR

While not a “true” streaming approach, with one relatively small change, FlatFHIR ($export) could be extended to approximate streaming.

Current-state sequence diagram for FlatFHIR ($export). Adapted from “FHIR BULK DATA API” by Dan Gottlieb and Josh Mandel, last revised September 2021.
Proposal for webhook option with FlatFHIR ($export) and use of $import

Briefly, by using a webhook in Request 2 (GET Content-Location), we obviate the need for polling. Then, by using $import with Response 4 (FHIR resource files), we load the resources into the client (the destination FHIR server).
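The change to the kick-off request is small. The sketch below builds a Bulk Data $export kick-off extended with a hypothetical `_webhook` parameter — the `Prefer: respond-async` header and `_outputFormat` parameter are from the Bulk Data spec, but `_webhook` is my own illustrative extension, not part of any published specification.

```python
from typing import Optional
from urllib.parse import urlencode

# Sketch of a Bulk Data $export kick-off request, extended with a
# hypothetical `_webhook` parameter so the server can call back with
# the Content-Location status instead of being polled.
def build_export_kickoff(base_url: str,
                         webhook_url: Optional[str] = None) -> dict:
    params = {"_outputFormat": "application/fhir+ndjson"}
    if webhook_url is not None:
        params["_webhook"] = webhook_url  # NOT in the FlatFHIR spec
    return {
        "method": "GET",
        "url": f"{base_url}/$export?{urlencode(params)}",
        "headers": {
            "Accept": "application/fhir+json",
            "Prefer": "respond-async",   # required by the Bulk Data spec
        },
    }
```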

The use of webhooks with FHIR is established by both FHIRCast and CDS Hooks. $import specifications are under active development.

The webhook extension to FlatFHIR is inappropriate for Federal submissions at scale

This approach would work for web applications, or low-volume server-to-server exchange, but would be inappropriate for bulk server-to-server transmissions on the scale required for Federal submissions. Submissions to Federal agencies will follow one of two approaches:

  • Approach A: Submitter push TO Agency
  • Approach B: Agency pull FROM Submitter

Approach A, submitter push TO the Agency, is better suited for most streaming submissions to Federal agencies, think case reports to the CDC or provider enrollment files to CMS.

The webhook adaptation of FlatFHIR ($export) uses Approach B, an Agency pull FROM the source. This has two implications: 1) the agency must track submitter endpoints (of which there could be tens of thousands), and 2) the subscriptions cannot be monitored centrally by the agency.

Requirements for Federal submissions at scale

MUST SUPPORT

  • Uni-, bi- and multi-directional use cases (and therefore, for bi- and many multi-directional use cases, streaming consumption in addition to submission)
  • Customizable payloads, preferably human-readable or at least plain text
  • Compatibility with batch [3], implying a field mapping to batch (ideally, identical field keys) and deduplication between streaming and batch submissions
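The deduplication requirement can be sketched with a content-based idempotency key: both the streaming and batch paths derive the same key from the canonicalized resource, so a resource already seen on the stream is skipped when it arrives again in a batch file (and vice versa). The class and field names are illustrative assumptions, not any agency's actual design.

```python
import hashlib
import json

def idempotency_key(resource: dict) -> str:
    """Stable key for a FHIR resource, independent of JSON key order."""
    canonical = json.dumps(resource, sort_keys=True, separators=(",", ":"))
    return hashlib.sha256(canonical.encode("utf-8")).hexdigest()

class Deduplicator:
    def __init__(self):
        self._seen = set()  # in production, a persistent store with a TTL

    def accept(self, resource: dict) -> bool:
        """Return True the first time a resource is seen, False on replays."""
        key = idempotency_key(resource)
        if key in self._seen:
            return False
        self._seen.add(key)
        return True
```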

All roads point to a durable distributed message queue/bus.

  • Open source: Kafka, RabbitMQ
  • Amazon: Amazon MSK, Amazon MQ
  • Azure: ServiceBus
  • GCP: Pub/Sub

With a plaintext FHIR payload for future-proofing [4], rather than serialization into a more efficient binary format (Protobuf, Avro, et al.).

And a stream adapter for flat file batch and RESTful submissions.

Streaming with Submitter push TO Agency

The open question is then the streaming transport mechanism from submitter to agency.

The message payloads do not concern me, as existing application-specific FHIR resource profiles can be wrapped with generic MessageBundle and MessageHeader profiles. At worst, you’ll need a MessageBundle and MessageHeader for each high-level domain, e.g. Provider, Payer, Public Health.

Option A: Wait for TEFCA

The best long-term hope for FHIR-based streaming transport lies with the Qualified Health Information Network (QHIN) Technical Framework (QTF), a technical sub-component of the ONC TEFCA RCE.

The promise of this approach is a dramatic streamlining for both submitters and Federal Agencies. Submitters will have a handful of integrations with their local QHINs. Federal Agencies will receive updates from multiple QHINs via the RCE. In theory, Agencies could create additional submission requirements without any new burden on submitters.

As of July 2021, QTF has solicited feedback on three options for using FHIR in QHIN-to-QHIN exchange:

Option 1: Require “QHIN Message Delivery” modality in QTF using the Integrating the Health Care Enterprise (IHE) Cross-Community Document Reliable Interchange (XCDR) profile with a future transition to FHIR; or

Option 2: Defer “QHIN Message Delivery” from QTF until a FHIR based solution is readily available; or

Option 3: Include “QHIN Message Delivery” using XCDR as optional in QTF until a FHIR based solution is readily available.

July 2021 request for technical feedback — https://rce.sequoiaproject.org/qhin-technical-framework-feedback/

The good news is that the common legal agreement for onboarding QHINs can be reused, regardless of whether the underlying messages are C-CDA (IHE XCDR profiles) or FHIR profiles.

The bad news is that this solicitation means a workable FHIR-based exchange via the RCE is several years away.

Option B: Attempt to hijack an existing streaming exchange network for FHIR payloads

Here, I am out of my depth, but it may be a possibility. I don’t even know if it’s possible to transport FHIR via…

  • MLLP, to hijack the existing HL7v2 message broker network
  • The DirectTrust network
  • Carequality’s C-CDA based network

Option C: FHIR over Kafka for server-to-server

A near-term solution would be to transmit FHIR message payloads over Kafka [2]. The Federal Agency could maintain the message bus and access control. A durable message bus like Kafka would also allow bi-directional and multi-directional exchange. Unfortunately, this means substantial submitter burden, with yet another transport mechanism to keep track of.
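As a sketch of what this could look like on the wire: the record value is the FHIR JSON as UTF-8 plaintext (per the future-proofing argument above, no binary serialization), and the record key is the resource id so updates to the same resource land in the same partition. The topic name and producer wiring are illustrative assumptions, not a published profile.

```python
import json

# Shape a FHIR resource as a Kafka record: plaintext JSON value,
# resource id as the partition key.
def to_kafka_record(topic: str, resource: dict) -> dict:
    return {
        "topic": topic,
        "key": resource.get("id", "").encode("utf-8"),
        "value": json.dumps(resource).encode("utf-8"),
    }

record = to_kafka_record("fhir.case-reports",
                         {"resourceType": "Patient", "id": "p1"})

# With the kafka-python client and a reachable broker, the send would
# look roughly like (not executed here):
#   from kafka import KafkaProducer
#   producer = KafkaProducer(bootstrap_servers="agency-broker.example.org:9092")
#   producer.send(record["topic"], key=record["key"], value=record["value"])
```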

Submitters mature enough to stream with Kafka are likely already using HL7v2. Since shutting down their HL7v2 exchanges is unlikely on a decade timescale, this means submitters must maintain both a streaming FHIR solution and a streaming HL7v2 solution.

Immature submitters are using flat file over FTP or faxed PDFs; FHIR streaming over Kafka does nothing to reduce their burden.

Additional technical hurdles for FHIR-based agency submissions

Streaming message payload and transport mechanism aside, there are a number of gaps — mostly addressed on the ONC FAST roadmap — that would need to be closed before streaming FHIR submissions to Federal agencies are practical.

Must have

  • Submitter identity — the Agency reuses existing IdAM (Identity and Access Management), e.g. CDC SAMS
  • A directory of endpoints — the Agency supports a third party in maintaining an endpoint directory, e.g. the CARIN endpoint directory maintained by OneRecord (https://carinfhirdirectory.com/).

Nice to have

  • Dynamic registrations — UDAP for web-based
  • Privacy preserving data linkage

Footnotes

[1] I do not separate stream processing based on the payload; however, there are a number of payload use-cases:

  • A: One individual resource per payload (e.g. payload.Patient)
  • B: One bundle resource per payload, where the bundle contains one logical transaction (e.g. payload.Bundle[Patient, MedicationStatement, MedicationRequest])
  • C: One bundle resource per payload, where the bundle contains multiple logical transactions (e.g. payload.Bundle[Patient1, Patient2, Patient3])
  • D: Multiple bundles per payload

[2] Because of the payload size limitations of most durable message queues/buses (e.g. 15MB for Kafka), I suspect payloads will be restricted to (A: One resource per payload) or (B: One logical transaction bundle per payload). Even with this restriction, it may not be possible to transport an entire logical transaction in one event, in which case multi-part payloads (chunking) are a likely resolution. Externalizing references would be fine for attachments, like DICOM images, but would not be appropriate for the logical transaction proper.
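A minimal sketch of such chunking: each part carries enough metadata (message id, index, total) for the consumer to reassemble the logical transaction. The field names are illustrative, not a published profile.

```python
# Split a payload that exceeds the broker's size limit into ordered
# chunks, and reassemble them on the consumer side.
def chunk_payload(message_id: str, payload: bytes, max_bytes: int) -> list:
    parts = [payload[i:i + max_bytes]
             for i in range(0, len(payload), max_bytes)]
    return [{"message_id": message_id, "index": i,
             "total": len(parts), "data": part}
            for i, part in enumerate(parts)]

def reassemble(chunks: list) -> bytes:
    ordered = sorted(chunks, key=lambda c: c["index"])
    if len(ordered) != ordered[0]["total"]:
        raise ValueError("missing chunks")
    return b"".join(c["data"] for c in ordered)
```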

[3] An example of a batch submission process is Transfer2 (presumably FTP) using CDC SAMS.

[4] Today’s data modernization effort is tomorrow’s legacy system. Efficient binary serialization is great for centralized and federated use-cases; Federal submissions will be highly decentralized, which lends itself to plaintext.
