Response to ONC’s proposed Trusted Exchange Framework and Common Agreement

Next week is the final opportunity for public comment on the ONC’s proposed Trusted Exchange Framework and Common Agreement. I’ve prepared a set of comments and recommendations that focus on the scope and mechanics of individual access, including technical standards, security requirements, identity proofing, authentication, and authorization.

Given the importance of health data exchange to the SMART Health IT community, I’m sharing the full letter and 15 recommendations here in PDF.

MACRA / MIPS Comment: We need API Support!

Today is the last day of the comment period for CMS’s MACRA and MIPS proposed rules. Below, we share a comment we submitted promoting the use of APIs for patient and provider access alike.


CMS states that priorities for “Advancing care information” are patient engagement, electronic access, and information exchange:

> These measures have a focus on patient engagement, electronic
> access and information exchange, which promote healthy behaviors
> by patients and lay the ground-work for interoperability.

… but nothing in CMS’s proposed MIPS measurement strategy in fact places an emphasis on these goals. Consider patient API access through third-party apps, which falls squarely in the intersection of these focus areas. Under the proposed scoring rubrics, a provider can earn full marks on “advancing care information” while making API access available to only a single patient!

CMS should take actions to ensure that the “priority goals” are in fact met. One clear way to fix this issue would be to define a scoring function where patient API access is a hard line. For example, MIPS could require providers to offer API access to all patients in order to be eligible for the “base score”. This special-priority treatment is already given to one objective (“Protect Patient Health Information”); it should be extended to other priority items including patient API access. Otherwise, these “priorities” can, in fact, be entirely ignored by MIPS EPs, given the elaborate structure of bonus points and the “ceiling effect” of earning just 100 points out of a possible 131 points.

CMS should also add an explicit requirement for APIs that can be used by healthcare providers as well as patients. Current meaningful use requirements focus on patient API access; MACRA should expand access to clinicians as well. To be concrete in advancing interoperability, MIPS could award points for clinicians who run at least one third-party application against their EHR data (for example, see the SMART on FHIR open app platform specifications at http://docs.smarthealthit.org/) and at least one third-party decision support service (for example, see the SMART CDS Hooks specifications at http://cds-hooks.org/).
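To make the decision-support half of that proposal concrete, here is a minimal sketch of the kind of JSON an EHR sends to a CDS Hooks service for a `patient-view` hook. The server URL and the user/patient IDs are placeholders, and this simplifies the spec to its core fields:

```python
import json
import uuid

def build_patient_view_request(fhir_base, user_id, patient_id):
    """Assemble a CDS Hooks 'patient-view' request body.

    The EHR POSTs this JSON to a decision support service's
    /cds-services/{id} endpoint; the service replies with "cards"
    for the EHR to display alongside the patient's chart.
    """
    return {
        "hook": "patient-view",
        "hookInstance": str(uuid.uuid4()),  # unique per invocation
        "fhirServer": fhir_base,            # lets the service fetch more context
        "context": {
            "userId": user_id,              # e.g. a Practitioner resource reference
            "patientId": patient_id,
        },
    }

# Placeholder endpoint and IDs for illustration only.
request_body = build_patient_view_request(
    "https://ehr.example.org/fhir", "Practitioner/123", "Patient/456")
print(json.dumps(request_body, indent=2))
```

The point of the pattern is that the decision support logic lives outside the EHR: a clinician “running” a third-party service is just the EHR making this call at chart-open time.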

Precision Medicine Initiative (in Blank Verse)

I’m deeply excited about the Precision Medicine Initiative. With Cohort Program grant deadlines approaching in a matter of hours, I thought it might be time for a brief distraction with this blank verse reflection on the funding opportunity announcements:


Precision Medicine Initiative:
a blank verse summary and overview.

Recruit a million volunteers across
the country, spanning age, geography,
ethnicity and race, the ill and well,
a cohort of participants engaged
as partners for a long-term effort to
transform our understanding of the links
that bind genetics, our environment,
disease and health: a cohort big enough
for wide association studies of
diverse and non-prespecified effects.

We’ll weave a network joining scientists
from academia and industry,
and someone’s loft or basement or garage
to generate hypotheses, compare
results and methodology, and share
interpretations with participants.

We’ll gather physical exam reports
from EHRs and clinics, collate SNPs
and genomes, track activity from phones
and wearables, and questionnaires to learn
as much as each participant will share.

And how to organize a study with
unprecedented capabilities?
The cast of characters includes at least

* Enrollment Centers (seven) to recruit
one hundred thousand people each, and build
a pipeline for transmitting data to…

* Coordinating Center (one) composed
of interlocking Cores for Data (with
facilities to scale analysis),
Research Support (including phenotype
selection algorithms, software tools,
and science help desk), plus a centralized
Administrative Core to oversee
the project and collaborations with…

* Participant Technologies, to build
a suite of mobile applications that
engage participants through questionnaires,
acquire sensor data (GPS
and wearables) and share research results.

* A central Biobank for specimens
collected from the cohort, offering
facilities to handle, process, store,
prepare, and ship to labs upon request.

A cohort of one million volunteers
will chart a course across the next five years.
Jump in and grab the helm — but science steers:
discoveries ho! Let’s sail to new frontiers.

Patient API Access in MU3

We’re in a Meaningful Use Stage 3 comment period!

The Meaningful Use Stage 3 final rule was published on October 16th, and came with a 60-day open comment period. Anyone can submit a comment here.

Patient API access is a critically important MU3 guarantee

I want to share a comment I’ve submitted that deals with a critically important (and strongly worded) guarantee that MU3 provides: a patient’s right to access data through an API, using “any application of their choice”. This is a critical issue because this guarantee would open up data access in a very wide, very real way — but it also comes with a host of security and privacy concerns (as well as business concerns) that will cause provider organizations to push back against it.

Below is my comment, verbatim. I’d love to hear your thoughts @JoshCMandel.

Josh’s Comment on Patient API Access

The following language pertaining to patient access must be clarified to ensure it retains its intended potency:

The provider ensures the patient’s health information is available for the patient (or patient-authorized representative) to access using any application of their choice that is configured to meet the technical specifications of the API in the provider’s CEHRT.

The key question here is: which parties need to agree that an app is (so to speak) “okay to use”?

The regulatory intent appears to support the idea that patients make this decision, choosing among all apps that have been configured to work with the provider’s EHR. But what does it take for an app developer to configure an app to work with the provider’s EHR? Beyond technical details, is it okay for a provider to tell an app developer something like:

1. “Sorry, your app sounds good and useful, but we don’t choose to make it available to our patients.”

2. “Sorry, your app might be useful but it’s duplicative: we already offer similar functionality to our patients through another app, or through our own portal.”

3. “Sorry, your app is designed to help patients move away from our practice by seeking a second opinion, and that’s against our business interest.”

4. “Sorry, your app offers what we consider to be questionable clinical advice.”

5. “Sorry, we don’t believe your app will do an adequate job of protecting patient data.”

CMS should clarify that providers may not use these excuses to prohibit apps from becoming available to patients. If a provider can reject apps for policy reasons like the ones described above, this will lead to an environment that fails to provide patients a right to access their data in a useful way.

But of course some of the concerns above are important, especially as they begin to touch on clinical utility and data protection. CMS should clarify that protection comes, ultimately, from allowing patients to make informed decisions about which apps to use. It is reasonable for providers to share warnings, or endorsements, or to ask questions like “Are you sure?” with specific confirmations, or to assign apps to different levels of trust or approval — but a provider must not prohibit a patient from using a specific app (just as they must not refuse to fax a patient’s data to a patient-specified phone number).

One important step in ensuring this kind of access will be clarification about who is responsible for a data breach in the case where a patient has approved an app to access EHR data. The Office for Civil Rights should issue a clear statement that providers are not responsible for what happens downstream, after healthcare data are shared with a patient-selected and patient-authorized app. By analogy, we expect providers to share healthcare information by fax to any phone number that a patient identifies, as long as the patient has authorized the transmission; we should look at sharing data with apps the same way. This kind of clear statement from OCR will be a necessary step to ensure that providers do not perceive conflicting obligations.

OAuth2 for Healthcare: Are we ready?

Last weekend I got an email asking whether OAuth 2.0 is ready to deploy for healthcare. Given SMART’s use of OAuth 2.0, I think so! Here’s the exchange…

The question I received


I realize that the big news is the NPRMs being released, but one thing that I have been interested in is the big push for using OAuth 2.0 with newer standards (primarily FHIR related), and the known vulnerabilities in OAuth2.0.

I realize that HL7’s security Workgroup has experts and the other organizations consult experts (and I’m certainly not questioning the work they have done in this area), but considering we are talking about healthcare data – it seems that it might have raised at least a few eyebrows and would have been addressed more openly.

Below are just a few links that explain.  I do not know how many – if any – of these vulnerabilities have been resolved since these were printed.

I just thought this was interesting…

http://www.darkreading.com/security-flaw-found-in-oauth-20-and-openid-third-party-authentication-at-risk/d/d-id/1235062

http://tetraph.com/covert_redirect/oauth2_openid_covert_redirect.html

http://www.oauthsecurity.com/

http://www.cnet.com/news/serious-security-flaw-in-oauth-and-openid-discovered/

My executive summary-level response:

There have been many reports of flawed OAuth 2.0 implementations, but there have not been security vulnerabilities identified in the OAuth 2.0 framework itself.  The community is constantly improving on best practices that help developers avoid implementation pitfalls.  There are already real-world OAuth 2.0 deployments in healthcare.

My more detailed take:

The overall system security of an OAuth 2.0 implementation depends critically on a substantial number of implementation details (as with any reasonably-capable authorization framework). The core OAuth 2.0 spec is accompanied by a “Threat Model and Security Considerations” document (RFC 6819) outlining many risks; and other groups have performed related analyses. The bottom line is that a robust implementation of OAuth 2.0 must account for these risks and ensure that appropriate mitigations are in place.

Sensational headlines in the blogosphere generally identify places where an individual implementer got some of these details wrong. In large measure, we’ve seen so many of these stories simply because OAuth 2.0 is so widely deployed — not because it’s so deeply flawed. (Now, we can argue that a well-designed security protocol should protect implementers from all kinds of mistakes — and that’s fair. But the collective community experience of identifying these threats, learning how things go wrong, and memorializing the understanding in clearer recommendations and more-capable reference software implementations is exactly how that protection emerges.) At the end of the day, Microsoft, Google, Facebook, Twitter, Salesforce, and many, many more players (large and small) offer, promote, and continue to expand their OAuth 2.0 deployments.
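Two of the pitfalls behind those headlines — CSRF against the redirect endpoint, and the “covert redirect” trick that exploits loose redirect URI matching — have well-understood mitigations. Here is a minimal client-side sketch (the endpoint URL and client ID are placeholders, not any real deployment):

```python
import secrets
import urllib.parse

# Registered verbatim with the authorization server. "Covert redirect"
# attacks exploit servers that match redirect URIs loosely (e.g. by
# prefix), so both sides should compare the exact string.
REGISTERED_REDIRECT_URI = "https://app.example.org/callback"

def build_authorization_url(authorize_endpoint, client_id):
    """Start an authorization-code flow with a fresh anti-CSRF state value."""
    state = secrets.token_urlsafe(32)  # unguessable, bound to this session
    params = {
        "response_type": "code",
        "client_id": client_id,
        "redirect_uri": REGISTERED_REDIRECT_URI,  # exact string, never a pattern
        "state": state,
    }
    return authorize_endpoint + "?" + urllib.parse.urlencode(params), state

def validate_callback(callback_params, expected_state):
    """Reject any callback whose state does not match the value we issued."""
    if callback_params.get("state") != expected_state:
        raise ValueError("state mismatch: possible CSRF")
    return callback_params["code"]

# Demo round trip with a placeholder endpoint.
url, state = build_authorization_url(
    "https://auth.example.org/authorize", "demo-client")
code = validate_callback({"code": "abc123", "state": state}, state)
```

Neither check is exotic; the point is that RFC 6819’s mitigations are boring, mechanical steps like these, and the high-profile failures come from skipping them.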

With respect to health IT, there is ongoing work to define profiles of OAuth 2.0 that promote best practices and avoid common pitfalls. Three examples are:

* MITRE’s OAuth 2.0 profiles created for the VA

* SMART on FHIR’s profiles for EHR plug-in apps

* the OpenID Foundation’s Health Relationship Trust (HEART) Workgroup

Commercial health IT vendors have already deployed OAuth 2.0 implementations, and I expect we’ll see many more in the near future.

Certification/MU tweaks to support patient subscriptions

This is a quick description of the minimum requirements to turn patient-mediated “transmit” into a usable system for feeding clinical data to a patient’s preferred endpoints. In my blog post last month, I described a small, incremental “trust tweak” asking ONC and CMS to converge on the Blue Button Patient Trust Bundle, so that any patient anywhere has the capability to send data to any app in the bundle.

This proposal builds on that initial tweak. I should be clear that the ideas here aren’t novel: they borrow very clearly from the Blue Button+ Direct implementation guide (which is not part of certification or MU — but aspects of it ought to be).

Continue reading “Certification/MU tweaks to support patient subscriptions”

Improving patient access: small steps and patch-ups

In a blog post earlier this month, I advocated for ONC and CMS to adopt a grand scheme to improve patient data access through the SMART on FHIR API. Here, I’ll advocate for a very small scheme that ignores some of the big issues, but aims to patch up one of the most broken aspects of today’s system.

The problem: patient-facing “transmit” is broken

Not to mince words: ONC’s certification program and CMS’s attestation program are out of sync on patient access. As a result, patient portals don’t offer reliable “transmit” capabilities.

2014-certified EHR systems must demonstrate support for portal-based Direct message transmission, but providers don’t need to make these capabilities available for patients in real life. Today, two loopholes prevent patient access:
Continue reading “Improving patient access: small steps and patch-ups”

SMART Advice on JASON (and PCAST)

As architect for SMART Platforms and community lead for the Blue Button REST API, I’m defining open APIs for health data that spark innovation in patient care, consumer empowerment, and clinical research. So I was very pleased last month at an invitation to join a newly-formed Federal Advisory Committee called the JASON Task Force, helping ONC respond to the JASON Report (“A Robust Health Data Infrastructure”).

We’re charged with making recommendations to ONC about how to proceed toward building practical, broad-reaching interoperability in Meaningful Use Stage 3 and beyond. Our committee is still meeting and forming recommendations throughout the summer and into the fall, but I wanted to share my initial thoughts on the scope of the problem; where we are today; and how we can make real progress as we move forward.

Continue reading “SMART Advice on JASON (and PCAST)”

Disturbing state of EHR Security Vulnerability Reporting

Last week I reported on a set of security vulnerabilities that affected multiple EHR vendors and other Health IT systems.

I initially discovered the vulnerability in a single Web-based EHR system and successfully reported it directly to that vendor.

But my subsequent journey into the world of EHR vulnerability reporting left me deeply concerned that our EHR vendors do not have mature reporting systems in place. Patient health data are among the most personal, sensitive aspects of our online presence. They offer an increasingly high-value target for identity theft, blackmail, and ransom. It’s time for EHR vendors to take a page from the playbook of consumer tech companies by instituting the same kinds of security vulnerability reporting programs that are ubiquitous on the consumer Web.

HL7 and EHR Vendors must address security reporting

I’ll lead with the key message here, and provide supporting evidence below: HL7 and EHR vendors need to institute security vulnerability reporting programs!
Continue reading “Disturbing state of EHR Security Vulnerability Reporting”

Case study: security vulnerabilities in C-CDA display

For background, see my previous blog post describing the details of three security vulnerabilities in C-CDA Display using HL7’s CDA.xsl.

Last month I discovered a set of security vulnerabilities in a well-known commercial EHR product that I’ll pseudonymously call “Friendly Web EHR”. Here’s the story…

The story: discovery and reporting

I was poking around my account in Friendly Web EHR, examining MU2 features like C-CDA display and Direct messaging. I used the “document upload” feature to upload some C-CDAs from SMART’s Sample C-CDA Repository. At the time, I was curious about the user experience. (Specifically, I was bemoaning how clunky the standard XSLT-based C-CDA rendering looks.) I wondered how the C-CDA viewer was embedded into the EHR. Was it by direct DOM insertion? Inline frames? I opened up Chrome Developer Tools to take a look.
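The attack surface I was probing can be illustrated with a small, hypothetical checker: if a viewer renders a C-CDA by inserting the transformed markup directly into the page’s DOM, then any active content hiding in the document executes in the EHR’s origin. This sketch (not any vendor’s actual code) flags the obvious vectors:

```python
import xml.etree.ElementTree as ET

def find_active_content(ccda_xml):
    """Flag markup in a C-CDA that could execute if the rendered
    document is inserted directly into the viewer's DOM (rather
    than isolated in a sandboxed frame)."""
    findings = []
    root = ET.fromstring(ccda_xml)
    for elem in root.iter():
        tag = elem.tag.rsplit("}", 1)[-1].lower()  # strip XML namespace
        if tag in ("script", "iframe", "object", "embed"):
            findings.append(f"active element: <{tag}>")
        for name, value in elem.attrib.items():
            local = name.rsplit("}", 1)[-1].lower()
            if local.startswith("on"):  # onmouseover, onclick, ...
                findings.append(f"event handler: {local}")
            if value.strip().lower().startswith("javascript:"):
                findings.append(f"javascript: URL in {local}")
    return findings

# A toy document: a narrative table cell smuggling an event handler.
sample = """<ClinicalDocument>
  <title>Demo</title>
  <text><table><td onmouseover="alert(1)">hover me</td></table></text>
</ClinicalDocument>"""
print(find_active_content(sample))
```

Blocklisting like this is brittle, of course; the robust fix is rendering untrusted documents in an isolated context so that even unflagged content can’t reach the EHR session.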
Continue reading “Case study: security vulnerabilities in C-CDA display”