We don't have an algorithm problem—We have a data problem

January 24, 2014

The December 16, 2013 ONC Stakeholder meeting addressing patient identification and matching could best be summed up by a statement made early in the day’s discussions by Dr. Scott Schumacher, chief scientist for MDM at IBM: “We don’t have an algorithm issue, we have a data quality issue.” The 250+ virtual and in-person participants heard from the ONC contractor, Audacious Inquiry (AI), that data quality problems are abundant, often lead to incorrect patient matching, and drive cost, quality and safety issues. AI derived these findings from a literature review as well as interviews with over 50 providers, physicians, vendors, Federal partners (VA, DOD, SSA) and industry or trade groups. Joy Pritts, ONC’s chief privacy officer, shared that discussion or consideration of a national healthcare identifier was out of scope due to the ongoing Congressional resolution. From this four-month, multi-faceted engagement, common barriers to accurate patient matching were identified:

  • Inconsistent formatting within data fields. The primary use of patient identifying information is for the caregiver to make certain they have the correct data when treating the patient, not to support machine matching of records. As such, there is no standardization of the data fields used in patient matching, nor of the formatting of those fields. No two vendors capture the same data elements in the same format, placing a high burden on the matching software to interpret the wide variation in fields and formats. This variability extends to the healthcare provider level: any given organization using an EHR system may capture different fields than another provider using the same software. Thus the burden falls on the patient matching software to “sort out” the countless ways something as “simple” as a patient’s name may be recorded (see the first sketch after this list).
  • Mistakes in data entry. Human beings make mistakes, and incorrect demographic data is very common. These errors stem from human fallibility, compounded by low compensation for registration/patient access staff, inadequate training, and the lack of systems that perform accuracy checks during data entry or validity checks such as address standardization (the first sketch below also illustrates a simple entry-time check).
  • Cost of sophisticated matching technology. Sophisticated matching software can achieve match rates of 98-99 percent, but it can be costly. Smaller organizations and small vendors therefore generally do not use the sophisticated EMPI or MDM software that many industries rely on to create a single view of a customer/patient/subscriber/beneficiary, yet they have the same need for accurate patient matching (the second sketch below shows the core idea behind such matching).
  • Lack of patient involvement in data accuracy. Consumer engagement in healthcare processes is still limited: patients don’t routinely review their demographic or clinical information, and aren’t aware of the importance of demographic data in matching and exchanging records. Patient portals are becoming more common, but they are not yet mainstream, and validating data is not a key feature of the technology.
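
To make the first two barriers concrete, here is a minimal Python sketch of the kind of name normalization and entry-time validity checking discussed above. The field formats, the punctuation rules and the 1900 cutoff are my own illustrative assumptions, not anything prescribed by the ONC work:

```python
import re
from datetime import date, datetime

def normalize_name(raw: str) -> str:
    """Reduce common name-format variants to one canonical form.

    Handles 'Last, First' ordering, stray punctuation, and case so
    that 'SMITH, JOHN Q.' and 'John Q Smith' compare equal.
    """
    name = raw.strip()
    if "," in name:                        # 'Last, First' -> 'First Last'
        last, first = name.split(",", 1)
        name = f"{first.strip()} {last.strip()}"
    name = re.sub(r"[.\-']", " ", name)    # drop punctuation
    name = re.sub(r"\s+", " ", name)       # collapse runs of whitespace
    return name.lower()

def plausible_dob(raw: str) -> bool:
    """Entry-time validity check: flag obviously impossible birth dates."""
    try:
        dob = datetime.strptime(raw, "%Y-%m-%d").date()
    except ValueError:
        return False
    return date(1900, 1, 1) <= dob <= date.today()

# Two common variants collapse to the same canonical string:
assert normalize_name("SMITH, JOHN Q.") == normalize_name("John Q Smith")
# An impossible date is caught at registration, not downstream:
assert not plausible_dob("1985-02-30")
```

The point is not the specific rules but where they run: a check like this at the registration desk prevents the bad value from ever reaching the matching engine.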

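And to show the core idea behind the sophisticated matching software mentioned above, here is a bare-bones sketch of weighted record scoring. It uses Python’s standard-library SequenceMatcher as a stand-in for the far stronger comparators (and statistically tuned weights) that commercial EMPI/MDM engines use; the field names and weights are illustrative assumptions:

```python
from difflib import SequenceMatcher

# Illustrative weights; real EMPI/MDM products tune these statistically.
WEIGHTS = {"name": 0.5, "dob": 0.3, "zip": 0.2}

def match_score(rec_a: dict, rec_b: dict) -> float:
    """Weighted similarity between two demographic records, 0.0 to 1.0.

    Exact-match fields (dob, zip) contribute all or nothing; the name
    field uses fuzzy string similarity, so a one-letter typo still
    scores highly instead of breaking the match outright.
    """
    score = 0.0
    for field, weight in WEIGHTS.items():
        a, b = rec_a.get(field, ""), rec_b.get(field, "")
        if field == "name":
            score += weight * SequenceMatcher(None, a, b).ratio()
        elif a and a == b:
            score += weight
    return score

a = {"name": "john q smith", "dob": "1960-04-12", "zip": "46202"}
b = {"name": "jon q smith",  "dob": "1960-04-12", "zip": "46202"}
print(f"{match_score(a, b):.2f}")  # ~0.98 despite the misspelled first name
```

In practice, scores above a high threshold auto-link, a middle band goes to human review, and everything below is left unmatched. But as Dr. Schumacher’s remark implies, no scoring scheme can overcome data that was badly captured in the first place.
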
These barriers have all been chronicled in previous attempts by public and private organizations to dissect the long-standing patient matching issue, including the 2008 RAND report sponsored by several EHR vendors, the ONC patient matching hearing of December 2010 and the subsequent HITPC recommendations, and the 2009 ONC-sponsored whitepaper developed by the Regenstrief Institute.

One has to wonder whether we will travel endlessly on the road to better patient matching. I believe there is hope for incremental and continued improvement; better, safer care and the need to control costs demand it. I’ll share my detailed thoughts in my upcoming post on February 7th (be looking for it!), along with the eight key findings from the ONC and AI work.

For further reading, check out my previous posts on the Big Data & Analytics Hub, and enjoy these recent selections: