Clinical Architecture Blog

The Twelve Days of Christmas... with an Informaticist


On the first day of Christmas, my informaticist gave to me
A hyper-precoordinated key

On the second day of Christmas, my informaticist gave to me
Two LOINC codes
And a hyper-precoordinated key

On the third day of Christmas, my informaticist gave to me
Three RXCUIs
Two LOINC codes
And a hyper-precoordinated key

On the fourth day of Christmas, my informaticist gave to me
Four SNOMED Concepts
Three RXCUIs
Two LOINC codes
And a hyper-precoordinated key

On the fifth day of Christmas, my informaticist gave to me
Five ON-TOL-O-GIES
Four SNOMED Concepts
Three RXCUIs
Two LOINC codes
And a hyper-precoordinated key
 
On the sixth day of Christmas, my informaticist gave to me
Six ICD maps
Five ON-TOL-O-GIES
Four SNOMED Concepts
Three RXCUIs
Two LOINC codes
And a hyper-precoordinated key
 
On the seventh day of Christmas, my informaticist gave to me
Seven representations
Six ICD maps
Five ON-TOL-O-GIES
Four SNOMED Concepts
Three RXCUIs
Two LOINC codes
And a hyper-precoordinated key
 
On the eighth day of Christmas, my informaticist gave to me
Eight mental predicates
Seven representations
Six ICD maps
Five ON-TOL-O-GIES
Four SNOMED Concepts
Three RXCUIs
Two LOINC codes
And a hyper-precoordinated key
 
On the ninth day of Christmas, my informaticist gave to me
Nine semantic primes
Eight mental predicates
Seven representations
Six ICD maps
Five ON-TOL-O-GIES
Four SNOMED Concepts
Three RXCUIs
Two LOINC codes
And a hyper-precoordinated key
 
On the tenth day of Christmas, my informaticist gave to me
Ten content models
Nine semantic primes
Eight mental predicates
Seven representations
Six ICD maps
Five ON-TOL-O-GIES
Four SNOMED Concepts
Three RXCUIs
Two LOINC codes
And a hyper-precoordinated key
 
On the eleventh day of Christmas, my informaticist gave to me
Eleven luminaries lecturing
Ten content models
Nine semantic primes
Eight mental predicates
Seven representations
Six ICD maps
Five ON-TOL-O-GIES
Four SNOMED Concepts
Three RXCUIs
Two LOINC codes
And a hyper-precoordinated key
 
On the twelfth day of Christmas, my informaticist gave to me
Twelve government guidelines
Eleven luminaries lecturing
Ten content models
Nine semantic primes
Eight mental predicates
Seven representations
Six ICD maps
Five ON-TOL-O-GIES
Four SNOMED Concepts
Three RXCUIs
Two LOINC codes
And a hyper-precoordinated key-ey-ey-ey

--

May all your holidays be happy, healthy and semantically interoperable!
The Clinical Architecture Team
Posted: 12/23/2011 3:13:17 PM by Global Administrator | with 0 comments

Clinical Terminologies in Healthcare - Part Two

The Terminology Uncertainty Principle

I am a Star Trek fan (not a Trekkie... I don't wear a tunic around the office... if I did, it would be gold to signify command... but I DON'T). On Star Trek they had a piece of equipment called a 'transporter'.  It was the job of the transporter to teleport a person from point A to point B.  It did this by converting the person into a pattern of information and energy (dematerializing them), sending them through a beam and rematerializing them on the other side, hopefully without turning them inside out.

The idea of interoperability is that you are taking structured information that is native to one system and converting it to structured information that is native to the receiving system. This is typically done through mapping, where a source code is mapped to the single, most appropriate target code.  The objective of mapping this information is to create a picture of the patient in the target system that is as complete and accurate as the picture was in the source system. The challenge in doing this is making sure you get it right.  The Star Trek writers often had to create fictional mechanisms to make the show believable.  One of these fictional mechanisms, relating to the transporter, was something called the 'Heisenberg compensator'.  In 1927 the physicist Werner Heisenberg postulated that an experimenter cannot observe a particle without disturbing it, and that there is a fundamental limit to how precisely one can know a particle's complementary properties (its position and its momentum, for example); in other words, it is not possible to truly know the complete state of a particle.  This assertion went on to be called the “Heisenberg uncertainty principle”.  By establishing the Heisenberg compensator, the Star Trek writers were able to conveniently set this aside, allowing the fictional transporter to isolate the state of each particle in the source’s body and rematerialize them in their target location.  How convenient.

Truth is harder than fiction

When dealing with the exchange of patient information in the nonfiction world, we engage in a similar process, and we also worry about turning our subject inside out as a result of uncertainty.

When dealing with interoperability, it is important to be aware of the circumstances that introduce uncertainty into the process and conspire against us.

The interoperability uncertainty principle that I propose is as follows:

When exchanging information between systems, the more you try to read into a surface term the more likely you are to introduce an unintended shift in the meaning of the original information.

Contributing Factors

Each time we take a patient's clinical information and convert it to another collection of terminologies, a number of factors conspire against us.  A few of these factors are transcription error, contextual ignorance and granularity.

Transcription Error

Transcription error is the fact that, due to terminology differences, software defects and human error, each time you transform an item you run the risk of degrading the meaning of that item.  The more items and the more transformations, the more likely a degradation will occur.
Contextual Ignorance

Contextual ignorance is the recognition that all terminologies are built based on the domain view (prejudices, knowledge and policies) of the terminology publishers and that viewpoint is not inherently bound to a given term.  This being the case, when a provider selects the term, it is based on the term itself, not the context behind it.  Therefore it is important that when we convert/map the term to another terminology, we should deal with it in a prima facie manner.  To do otherwise presumes contextual knowledge that likely did not exist and could result in transcription error. 

By way of example, let's say that you come across the SNOMED disorder term 'fracture' in a patient's problem list.  Before mapping it to the target you consult SNOMED and find that the 'fracture' term that was selected was a child of 'Injuries to the skull'.  You have the choice of mapping to a local code of 'fracture' or a local code of 'skull fracture'.  Which do you choose?

In another patient's file you find that they have a severe allergy to 'sulfa drugs'.  When you go to map that to the target terminology, do you look up the ingredients in that class and try to find a class with overlap in the target, or do you just try to find the best match on the term 'sulfa drugs'?  When the admission clerk chose 'Sulfa drugs', do you think they researched the ingredients and chose that class based on how their terminology provider applied the class to ingredients?  Do you think they chose what they thought they heard the patient say?  Or did the patient say a single ingredient and the clerk chose the class based on their own clinical knowledge?

The problem in both of these examples is that you have no way of knowing whether or not the person that selected the code was aware of the context.  If this is the case, your best bet is to choose the best prima facie match.  This preserves the integrity of the surface information, which is what the provider chose, the patient saw and the clinical decision support uses as an entry point.
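
As a toy illustration of choosing the best prima facie match (my own sketch in Python, not anyone's real mapping engine), matching on the surface term rather than on anything inferred from the source hierarchy might look like this:

from difflib import SequenceMatcher

def prima_facie_match(surface_term, local_terms):
    """Return the local code whose display text best matches the surface term,
    deliberately ignoring anything we might infer from the source hierarchy."""
    def similarity(text):
        return SequenceMatcher(None, surface_term.lower(), text.lower()).ratio()
    return max(local_terms, key=lambda code: similarity(local_terms[code]))

# The fracture example above: the surface term wins over the hierarchy-informed guess.
local_codes = {"LOCAL-1": "fracture", "LOCAL-2": "skull fracture"}
print(prima_facie_match("fracture", local_codes))   # -> LOCAL-1

Real mapping tools use far more sophisticated lexical and semantic matching, but the principle is the same: the surface term is the input, not the ancestry you presume behind it.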

Granularity

Granularity is essentially the degree to which a term specifies the concept it is representing.  For example, a drug terminology that describes my prescription as “loratadine” is less granular than one that describes it as “Claritin (loratadine) 10 mg Oral Tablet”. This relationship is also referred to as “broader than or narrower than”.  When converting between two like terminologies, you often run into a situation where one terminology is more or less granular than the other.  When this happens, someone has to make a choice.  It is easier to go from a more granular term to a less granular term if the less granular term is the primary defining attribute of the more granular term.  Using my previous example, it is not difficult to go from “Claritin (loratadine) 10 mg Oral Tablet” to “loratadine”.  It is a perfectly valid target.  I am losing information in the transaction, but that information is outside the conceptual awareness of the target terminology, which only understands generic drugs.  However, if I reverse the flow, start from “loratadine” and select “Claritin (loratadine) 10 mg Oral Tablet” because it is the only option I have to represent the primary defining characteristic (loratadine), I have added information that is not based on facts.  One could argue that in this case, with loratadine in particular, the guess is harmless.  That agreement might change if the drug in question were warfarin and I arbitrarily selected a 10 mg strength.  Another example is the problem of going from ICD-9 to ICD-10.  Both terminologies can represent disorders and both come from the same source, but they express like terms at different granularities, which makes life difficult when you are trying to transition or correlate from one to the other.



This problem with granularity will persist as long as we have more than one system using more than one terminology.  The crux of this obstacle is that sometimes you have to make a leap of granularity.  Some leaps are safer than others.  What is important is that if you have information that is based on a granular leap you (a) indicate that there was a shift in granularity, (b) let the consumer know the nature of the leap and (c) preserve the original data for reference.  This way, even if you have made a bad leap, you allow the consumer to recover from it.
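
To make points (a) through (c) concrete, here is a minimal sketch, in Python with made-up codes, of what a recorded granularity leap might carry (an illustration, not a prescription):

from dataclasses import dataclass

@dataclass
class GranularityLeap:
    source_code: str
    source_term: str   # (c) the original data, preserved for reference
    target_code: str
    target_term: str
    shift: str         # (a) 'narrower-to-broader' (drops detail) or 'broader-to-narrower' (adds detail)
    nature: str        # (b) what exactly was dropped or assumed

# A safer leap: brand and strength are dropped to reach the defining ingredient.
drop_detail = GranularityLeap(
    source_code="DRUG-0001", source_term="Claritin (loratadine) 10 mg Oral Tablet",
    target_code="ING-0001", target_term="loratadine",
    shift="narrower-to-broader", nature="brand and strength dropped")

# A risky leap: the strength was assumed, not observed.
add_detail = GranularityLeap(
    source_code="ING-0002", source_term="warfarin",
    target_code="DRUG-0002", target_term="warfarin 10 mg Oral Tablet",
    shift="broader-to-narrower", nature="10 mg strength assumed; verify before use")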

Design for uncertainty

In a nutshell, the answer to uncertainty is to plan and design for it.

1. Indicate which terms came from elsewhere
2. Preserve the original term for reference
3. Indicate granularity shifts
4. Resist the urge to infer beyond the surface term.
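
Sketched as a simple Python structure (field names are mine and purely illustrative), those four points might travel with the exchanged term like this:

from dataclasses import dataclass
from typing import Optional

@dataclass
class ExchangedTerm:
    code: str                                # the code as expressed in the receiving terminology
    display: str
    mapped_from_external_source: bool        # (1) indicate which terms came from elsewhere
    original_code: Optional[str] = None      # (2) preserve the original term for reference
    original_display: Optional[str] = None
    granularity_shift: Optional[str] = None  # (3) e.g. 'broader', 'narrower', or None
    # (4) is a rule for the mapper, not a field: do not infer beyond the surface term.

term = ExchangedTerm(
    code="LOCAL-123",
    display="fracture",
    mapped_from_external_source=True,
    original_code="<source concept id>",     # placeholder: carry the real source code here
    original_display="fracture",
    granularity_shift=None)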

For terminology creators, remember a good term adheres to a doctrine of ‘res ipsa loquitur’ – or ‘the thing speaks for itself’. 

I hope this has been useful.  I am always open to alternate points of view.  If you have anything to add or want to argue any of these, please put up your dukes and reply with a comment or an email to me.
Posted: 12/15/2011 2:02:38 PM by Global Administrator | with 0 comments

Clinical Terminologies in Healthcare Applications - Part One

But Charlie, you look so young!

Over the past two decades, I have been on the battlefield of Healthcare IT.  Throughout this time, I have been in many roles, ranging from programmer and systems designer to chief technology officer.  During this time, I have had the opportunity to observe a number of applications in the healthcare space.  Some of these applications have been simple and some complex; some dealt with administrative aspects of care and some meant the difference between a safe patient and a patient that ends up as a statistic quoted in an ISMP report.  I have learned from my successes, my mistakes and the successes and mistakes of others (yes, I have been watching you).

What follows is a collection of my observations with regard to the role of clinical terminologies in healthcare applications and how I believe it will change and be changed by the evolution that is happening even as I write this post.

A Wise Man Once said… or was it me?

A “healthcare provider” is ultimately the sum of a skilled human being, their physical tools (facilities, equipment, devices, medicines and instruments) and their information technology (Paper files, forms, software applications and data).  In this symbiotic triplet, each group (Humans, Tools and Information Technology) evolves and grows as our knowledge and capabilities grow.

The objective of the evolution of the physical tools is to enhance the physical capabilities of the human.  Improve their ability to observe things, remove things, kill bacteria, stop a cancer or trick some biological mechanism to behave in a particular manner.  In essence, improving their ability to detect and affect things within the scope of their objective.

The objective of the evolution of the information assets (beyond paper and file cabinets) is to enhance the mental capabilities of the human.  Improve their ability to recall information, expand their knowledge, increase the speed at which they can process information and bring awareness to them of things outside of the range of their human senses.

Last but not least, the humans themselves are prone to natural evolution, expanding their understanding with dedication, intuition and insight.  They are also tasked with driving the evolution of the other two members that have already been mentioned.

The most recent addition to this grouping is that of information technology as a successor to paper, books and filing cabinets.  These information technology tools are what I will be focusing on.

I will assume that, if you are reading this post, you understand the primary functions of a healthcare provider.   That being said, let’s start by summarizing the primary functions of healthcare information technology systems. 

Here are five of its most critical capabilities (over simplified for the sake of brevity):

  • Record and store information about providers, physical assets and services
  • Record and store information about patients
  • Record and store what we currently know about the practice of medicine
  • Record and store the relationships between these elements as we become aware of them
  • Provide this information (or any combination thereof) to a provider and administrators as needed in order to assist them with their objectives

In order for the HIT application to perform its functions well, two things must happen: (1) the information must be entered into the application by a provider and (2) when the time comes, the information must be presented in a manner that the provider can understand.

Sounds simple right?  It would be, if not for a couple of issues…

Human-to-Computer Interface

The Human-to-Computer interface is fraught with inefficiency.  We are constantly looking for ways to improve and expedite this most inelegant of relationships.  But whether we use mice, keyboards, voice recognition or touch screens, the problem remains.  When you think about it, the human mind works fairly quickly (well… after some coffee), as does a computer processor.  When we try to move information from one to the other, it is like a freeway onramp that requires you to get out of the car, disassemble it, push each part through a doorway and reassemble the car to continue.  Seriously, think about it.  In fact, in the time it took me to type this, I thought up seven other metaphors for how inefficient the human-to-computer interface is, but it would take me too long to go back and type a better one.

Human to Human Interface

Have you ever engaged in the dangerous habit of sending an email to another human?  Has it always panned out the way you intended?  No?  Well, there is a reason for that.  The human mind is not a processor of information; it is an interpreter of information.  When presented with data it goes through a process of cognitive filtering and interpretation that is fraught with intellectual, personal and contextual biases.  In other words, the human mind is local.  So when you tell your spouse they look “fine” you have no control over whether “fine” will be interpreted as “Sure, I guess you look fine (meh)” or “Damn! You look FINE!”.  

The other night, I gave my fifteen year old son the following instructions: “Max, I left the sprinklers on in the yard.  Please go outside and shut them off.”  These were simple, straight-forward instructions (even for a teenager).  He dutifully left the room and came back about ten minutes later.  I thanked him and he said “You’re welcome, Dad.”  It was a well-formed transaction with a flawless acknowledgement mechanism.  When I got up the next morning to catch a flight, I walked out the front door and both sprinklers were going full blast and had obviously been doing so all night.  What bothered me the most was the following question, “What exactly did Max do during those ten minutes?”  Just in case, I checked the gas on the grill in the back yard.  To this day, I have no idea what he did.

This ability to misinterpret language is exacerbated when the communication is devoid of vocal tone and body language, which is the case when text is exchanged.

Clinical Terminologies as a Lingua Franca

Clinical terminology is how we record information about a patient.  We do this in an attempt to create a record that both the computer and other humans can utilize in the performance of their duties.  In the best circumstances, these terms are stable, well formed, unambiguous and of appropriate granularity for their purpose (and NOT concatenated… grrr).  The problem, of course, is the human (they are cute when they are small).  The terms are selected by a human being (with all of their biases), jammed inefficiently through the human-to-computer interface, so that they can be interpreted by a software program (with its unforgiving logic) and other humans (with all of their biases).  Oh… did I mention that the terminologies are created by humans?  Also, it is important to note that these terminologies are constructed within specific ontologies.  According to Wikipedia: “Ontology deals with questions concerning what entities exist or can be said to exist, and how such entities can be grouped, related within a hierarchy, and subdivided according to similarities and differences.”  In other words, an ontology is a collection of intellectual, personal and contextual biases.

Now is when I calm everyone down. 

I am not bashing clinical terminologies or those brave souls that craft them in the forges of their intellect.  We do not have a better alternative.  All I am suggesting is that we respect the fuzzy nature of the delicate process that is in play here.  We accept the uncertainty and resist the urge to give ourselves over to the notion that if we build a model detailed enough, and a computer fast enough, the fuzziness will disappear and the human can sit back and relax.  It may happen someday, but not likely someday soon.  This understanding and acceptance of the uncertainty of information is what I call the Clinical Terminology Uncertainty Principle, and it is the subject of my next post.
Posted: 9/3/2011 3:26:28 PM by Global Administrator | with 0 comments

Lab Domain and LOINC Overview – Part III

In this post we are going to start digging into LOINC.  While this is meant to be a critical review, I need to say that I am a fan of LOINC and respect the amount of effort and discipline that is required to build and maintain a terminology.  

First, everything I learned about LOINC is based on information and data that came from the LOINC website.  If you are interested in making the most of LOINC, I recommend you check it out.

As I mentioned in the first post of this series, the mission of LOINC is “to facilitate the exchange and pooling of clinical results for clinical care, outcomes management, and research by providing a set of universal codes and names to identify laboratory and other clinical observations.”

So the objectives are to:

  1. Facilitate the exchange of clinical results
  2. Facilitate the pooling (aggregate) of clinical results
  3. Facilitate outcomes management (through the application of 1 and 2)
  4. Facilitate research (through the application of 1 and 2)

And this is accomplished by:

  1. Providing a set of universal codes and names to identify laboratory observations
  2. Providing a set of universal codes and names to identify other clinical observations

If you read the previous post, this may seem familiar.  The objective of LOINC seems to coincide with the objectives in the pharmaceutical industry in the mid-80s relative to data combinability.

We will get into the structure of LOINC later, but I think it is a safe bet that when LOINC began it was focused on laboratory observations and that “other clinical observations” was added later.

We humans have a peanut-sized part of our brains called the amygdala (no, I am not talking about Luke Skywalker’s mom…). The amygdala assesses whether a situation is dangerous, then fires signals to other parts of the brain.  When I hear words like “other”, “miscellaneous” and “universal”, my amygdala says {Do you hear that?!?}  My latent informatics superpowers notwithstanding (eat your heart out, Spider-Man), this reaction is largely based on years of observing the unintended consequences of good intentions in terminology evolution.

For now, let’s pretend that the “other clinical observations” does not exist and just focus on “laboratory observations”.   We will come back to the “other” part of LOINC later.

The Official LOINC terminology

You can download the official LOINC data from here.  I typically download LOINC in MS Access format.  I am going to provide a high level overview of the structure of LOINC but I strongly recommend the LOINC manual as a great source of more information for those that want to do a deeper dive.

The Code

The LOINC code itself is a 3-7 digit sequential base number followed by a hyphen and a required check digit (0-9).  Regenstrief recommends you support a 10 character field to allow for future expansion.  Obviously, unless you enjoy calculating check digits, you would store the LOINC code as text, not as a number.  The number itself is a ‘dumb number’, in that it has no inherent meaning in and of itself.
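
If I remember the LOINC manual correctly, the check digit is the standard Mod 10 (Luhn) algorithm.  Here is a quick sketch in Python; verify it against the manual before relying on it:

def loinc_check_digit(base):
    """Mod 10 (Luhn) check digit for a LOINC base number, e.g. '2345' -> '7'."""
    total = 0
    for i, ch in enumerate(reversed(base)):
        d = int(ch)
        if i % 2 == 0:        # double the rightmost digit and every second digit after it
            d *= 2
            if d > 9:
                d -= 9        # same as summing the two digits of the doubled value
        total += d
    return str((10 - total % 10) % 10)

print(loinc_check_digit("2345"))   # 7, as in 2345-7 (Glucose in Serum or Plasma)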

The Term

Each LOINC code has an associated term.  The term has several incarnations.  There is a short name, a long common name and the distinct parts that describe the term. Initially we will focus on the parts.

A given LOINC term is composed of six major parts (with a total potential of 16 parts when you factor in optional sub-parts and sub-classes). {Do you hear that?!? }

 

Here are the major parts:

Component – This is the principal name of the order or result.  As depicted above, the component has the potential to be fairly convoluted and complex.
Property – This is the nature of the property being measured.  You can almost think of this as the type of result unit.  For example: ‘mg/dL’ is a ‘Mass Concentration’ property.
Time Aspect – For most results this is ‘Point in time’.  For results whose value represents a measurement that is measured or calculated over an interval of time, this represents the time interval.
System – This is the substance that the test was performed on.  I tend to think of this as the specimen type.
Scale – This is the measurement type.  A majority of the lab result LOINC codes are either Quantitative or Ordinal (89%). In fact, 65% of them are Quantitative.
Method (optional) – The method is how the test was actually performed.
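
To make the parts concrete, here is a well-known lab LOINC term broken into its major parts, shown as a small Python dictionary (part values recalled from the LOINC table; double-check them against your copy):

# LOINC 2345-7: Glucose [Mass/volume] in Serum or Plasma
glucose_serum = {
    "LOINC_NUM":  "2345-7",
    "COMPONENT":  "Glucose",    # what is being measured
    "PROPERTY":   "MCnc",       # mass concentration (think mg/dL)
    "TIME_ASPCT": "Pt",         # point in time
    "SYSTEM":     "Ser/Plas",   # the specimen: serum or plasma
    "SCALE_TYP":  "Qn",         # quantitative
    "METHOD_TYP": "",           # no method specified
}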
 

Preparing your working LOINC subset

If you are going to dig into the Official LOINC data, with a focus on lab results, the first thing you need to do is prepare your working subset.  To do that you need to do the following:

1. Identify the active lab observation population of LOINC codes

I am going to be throwing some numbers at you as we continue.  The subset I am going to be focusing on is the following LOINC terms:

  • Are laboratory observations (CLASSTYPE equals ’1’)
  • Are active (STATUS is equal to ‘ACTIVE’)
  • Represent lab results (ORDER_OBS is either ‘Observation’ or ‘Both’)

For those of you playing the home game… here is the query I am using.

SELECT LOINC.CHNG_TYPE, LOINC.CLASSTYPE, LOINC.ORDER_OBS, LOINC.*
FROM LOINC
WHERE (((LOINC.STATUS)='ACTIVE') AND ((LOINC.CLASSTYPE)=1) AND ((LOINC.ORDER_OBS)='Observation' Or (LOINC.ORDER_OBS)='Both'));

This results in a list of about 41,313 distinct LOINC terms in the database I am working in.

I am excluding lab orders because they are not clinical terms (Orders are administrative terms).
I am excluding ‘other clinical observations’. {Do you hear that?!? }
I am excluding terms that are not active… because I only care about active terms.

2. Only include the relevant LOINC fields

According to the LOINC documentation, there are a number of columns that are no longer updated, are deprecated or are for internal use.  I create a view that shows only the columns that are useful.  These are the following:

Column – Description
LOINC_NUM – The unique LOINC code
COMPONENT – Term part 1: Component
PROPERTY – Term part 2: Property
TIME_ASPCT – Term part 3: Time aspect
SYSTEM – Term part 4: System
SCALE_TYP – Term part 5: Scale type
METHOD_TYP – Term part 6: Method type
STATUS – The status of the term (should all be active from step 1)
CLASSTYPE – 1=Laboratory class; 2=Clinical class; 3=Claims attachments; 4=Surveys
SHORTNAME – The short name (not very useful)
EXAMPLE_UNITS – These are not consistently populated but worth keeping around
ORDER_OBS – Indicates whether the term is an order, observation or both
INPC_PERCENTAGE – INPC result volume indicator
LONG_COMMON_NAME – “Human readable” constructed term description
 

Here is the query (combining the useful columns and the term filters from above):

SELECT LOINC.LOINC_NUM, LOINC.COMPONENT, LOINC.PROPERTY, LOINC.TIME_ASPCT, LOINC.SYSTEM, LOINC.SCALE_TYP, LOINC.METHOD_TYP, LOINC.STATUS, LOINC.CLASSTYPE, LOINC.SHORTNAME, LOINC.ORDER_OBS, LOINC.EXAMPLE_UNITS, LOINC.INPC_PERCENTAGE, LOINC.LONG_COMMON_NAME
FROM LOINC
WHERE (((LOINC.STATUS)="Active") AND ((LOINC.CLASSTYPE)=1) AND ((LOINC.ORDER_OBS)="Observation" Or (LOINC.ORDER_OBS)="Both"));
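
If you would rather work from the CSV release than from Access, a rough pandas equivalent of the query above might look like this (the file name is a placeholder, and the column names are the ones listed earlier; check both against your LOINC version):

import pandas as pd

loinc = pd.read_csv("LOINC.csv", dtype=str)   # assumed file name for the CSV release

useful_cols = ["LOINC_NUM", "COMPONENT", "PROPERTY", "TIME_ASPCT", "SYSTEM",
               "SCALE_TYP", "METHOD_TYP", "STATUS", "CLASSTYPE", "SHORTNAME",
               "ORDER_OBS", "EXAMPLE_UNITS", "INPC_PERCENTAGE", "LONG_COMMON_NAME"]

subset = loinc[
    (loinc["STATUS"].str.upper() == "ACTIVE")                            # active terms only
    & (loinc["CLASSTYPE"] == "1")                                        # laboratory class
    & (loinc["ORDER_OBS"].str.lower().isin(["observation", "both"]))     # results, not orders
][useful_cols]

print(len(subset))   # roughly 41,313 in the release discussed above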

In the next post we will dive into the component part of the LOINC term and discuss some strategies that can simplify your life, if you are trying to map to or use LOINC.



Lingua Freaka: Amygdala hijack

Amygdala hijack is a term coined by Daniel Goleman in his 1995 book Emotional Intelligence: Why It Can Matter More Than IQ (it would also be a cool name for your band...).

The amygdala is responsible for our fight-or-flight response.  This function, which evolved very early, is designed to react in milliseconds to perceived danger without thinking twice.  The term amygdala hijack is used to describe an emotional response that is out of proportion to the actual stimulus because the stimulus has triggered a much more significant perceived emotional threat.

An amygdala hijack exhibits three signs: strong emotional reaction, sudden onset, and post-episode realization that the reaction was inappropriate.

Know the signs... and remember to apologize.
Posted: 2/13/2011 2:46:40 PM by Global Administrator | with 1 comments

Lab Terminologies and LOINC - Part II

Lab Orders and Results prior to LOINC.

I was hired by Medi-Span in 1998 to run their software products team.  In a meeting with product management someone was explaining a CDS module that involved lab results and proudly informed me that they had cross referenced their internal lab codes to LOINC codes.  I said, “That’s great… what’s a LOINC code?” 

What’s relevant about this little anecdote is that, prior to starting at Medi-Span, I had spent the previous ten years developing lab information systems with a focus on system interfaces.  During that time, across countless client interfaces in both the reference lab and clinical lab environments, I had never come across LOINC codes.

Why was I unaware of LOINC in 1998, four years after it had been initiated?  The more important question is why now, 14 years later, are most labs still struggling with LOINC?  Part of the answer lies in the way lab systems evolved.

Lab systems were early adopters of using structured codes to store and organize information.  This was because the nature of the information necessitated it.  The orders and results, typically related to a visit oriented grouping entity called an ‘Accession’, were used to track what was ordered for billing and the associated tests that were to be performed in the lab and resulted for the client.  Once the testing was complete, the client was billed and the test results were then reported, mailed, faxed, auto-dial printed or transmitted using a custom or ASTM (now HL7) interface.  Having stable codes for lab results also made it easier to compare values over time to look for trends or significant shifts (or deltas).

During these early days, the most common output of a lab visit was a paper report that listed the results, their reference ranges and abnormal flags.  The results were not intended to be exchanged (combined) electronically, so the codes were described only to the extent they needed to be to appear unique on a lab report.  Information, like the method by which the test was performed, was only included in the term if it affected how the result was interpreted.  Other attributes, like specimen type, were only included if they were atypical.

So, as a hospital lab or reference lab, the rules of engagement for lab order and result terminologies were something like this:

  • You only need to define lab order and result codes for tests that you perform and report
  • The codes themselves only need to be described to the extent they are unique within the lab
  • The specific attributes (method, property, specimen type) need not be described in the terms, as they are known within the lab, unless they are required for uniqueness or interpretation.
  • Lab terms may include workflow information that is not relevant to the order or result.

If there was a need to exchange information, like between a hospital lab and reference lab, test codes were mapped and unit conversions were applied within the interface.  This was less of an issue because referral testing was typically limited to a subset of tests that could not be performed in house.
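
A sketch (my own, deliberately simplified, in Python with made-up test codes) of the kind of per-interface crosswalk described above, where the code is mapped and the unit conversion is applied on the way out:

# hospital test code -> (reference lab code, unit conversion applied in the interface)
INTERFACE_MAP = {
    "GLU": ("REF-GLUC", lambda mg_dl: round(mg_dl / 18.0, 2)),   # glucose mg/dL -> mmol/L
    "K":   ("REF-K",    lambda mmol_l: mmol_l),                  # already mmol/L, pass through
}

def translate(local_code, value):
    ref_code, convert = INTERFACE_MAP[local_code]
    return ref_code, convert(value)

print(translate("GLU", 99.0))   # ('REF-GLUC', 5.5)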

Islands of terminology

The early adoption of terminologies created for internal use resulted in decades of labs existing as islands of terminology. 

Unlike the medication domain, there are no commercial, proprietary lab terminologies that a lab can license (despite my efforts to get every content company I have worked for to do this…).  CPT (Current Procedural Terminology) codes have been around for years, but they are expensive to use and are geared toward billing rather than clinical use.

As a lab, if you do not want to create your own lab terminology, your choices are LOINC or SNOMED-CT Procedures.  The result is that hospital labs and other local labs create their own limited sets of simple test codes.  Large reference labs like Quest and Labcorp use internal codes as well, but typically provide LOINC codes (if they exist for a given test), on request, in their result transactions.

Many of the systems in use today have been around a while, operating under the ‘if it ain’t broke, don’t fix it’ motto.

How avoiding interoperability issues helped create an industry

Prior to 1986, pharmaceutical companies used many local and regional laboratories to perform safety and efficacy testing while conducting clinical trials.  One problem with this approach was that it resulted in a 39% error rate. Human error accounted for the majority of data inaccuracies, which were caused by such things as mislabeled test kits, incorrect tests and missing specimens.  The other problem was that differences in methodology, terminology, result units, reagents and reference ranges resulted in a “combinability” problem.  These inconsistencies between information sources resulted in uncertainty when the information was brought together into a central location for analysis.  In order to deal with this, the combined information would need to be “cleaned” before it could be used.

The monumental difficulties of interoperability (syntactic, semantic) as well as the deeper problem of clinical equivalence resulted in delays, rework and significant cost.

The pharmaceutical industry was facing a problem very similar to what we face in healthcare today when it comes to coordination of care.  They were lucky, however, because each clinical trial was a contained system, so rather than address the interoperability issue as an industry, they gave birth to a new business segment: the central lab.

The pioneer in the space was SciCor (Now Covance Central Labs).  They are here in Indianapolis and I worked there from 1993-1998.  Through the course of events they addressed many of the problems with a number of innovative approaches in specimen collection, workflow, centralized data management and reporting (most of which happened before I got there…).  The error rate is now somewhere around 2%.

The clinical trial interoperability problem was resolved by removing it from the equation. All (or most) tests were performed in a single lab using the exact same methods.  The combinability of the data became a non-issue…cheaters.

There is a lesson to be learned here, but it is not what you think. What worked for the microcosm of individual clinical trials will not work for the entire healthcare industry. 

Finding the right solution for lab result interoperability

We must find a different solution to the current interoperability problem that allows a federated approach to combining clinical information while managing uncertainty.  Can LOINC play a significant role in this solution? Before we can ponder that, we need to understand the intent and structure of LOINC, and assess the strengths and weaknesses of it as it stands today.
Posted: 2/6/2011 9:47:19 PM by Global Administrator | with 0 comments