Ten Years of Health Fidelity: Natural Language Processing in Healthcare

Health Fidelity’s NLP engine, Lumanent Insights, was built on MedLEE, the NLP engine developed at Columbia University by Carol Friedman, PhD. Today, we have two goals: first, to better understand natural language processing in healthcare; second, to celebrate Health Fidelity’s 10th anniversary and five years of delivering clinical suspects by helping you get to know Dr. Friedman, her pioneering work, the engine’s history, and what makes it unique.


What brought you to natural language processing as a field?
I earned my MS and PhD, both in computer science, at NYU. During that time, among other projects, I worked on the Linguistic String Project and the Proteus Project. After I graduated, I was recruited into the medical informatics department at Columbia in the early 1990s. At that point, they were trying to put clinical data to work in a decision support system but needed a way to access the clinical data reliably, so my prior NLP background was attractive to them. A year after joining Columbia, I received a grant from the National Library of Medicine to develop a clinical NLP system, which brought me fully into NLP research.

The clinical NLP project itself was initially developed in collaboration with the Columbia radiology department. Through those early years, the department head and I would go through notes to understand the terminology and interpret the jargon. That helped us capture the linguistic patterns in clinical documents and gave the engine the ability to interpret clinical discourse.

As we neared completion, a colleague who was collaborating on the clinical decision support system asked to evaluate the NLP system itself. He chose the questions, set the parameters, and performed the entire evaluation. I don’t mind saying that was quite scary, since NLP systems were typically evaluated by their own developers. In the end, he compared the system’s outputs to those of six radiology specialists and six general practitioners. Across half a dozen conditions found in chest x-rays, the evaluation demonstrated that the NLP performed comparably to the physicians.

Based on that thorough, physician-led evaluation, the NLP was added to the decision support project.

After another year to completely operationalize it, the NLP went live in the clinical information system and was a success. It was not long before an epidemiologist used it to identify TB exposure for isolation protocols. Year after year, as word spread, researchers and physicians would ask to use the NLP, evaluate it, and find new uses. It kept succeeding. That’s when we knew it was time to license it.

And that’s where Health Fidelity enters the story?

Right! There were a lot of companies interested in the NLP system, but I liked Health Fidelity’s approach to impacting healthcare. I liked the smaller, agile environment, as well. I didn’t want to work with a larger company.

With Health Fidelity, Lumanent Insights has been built and refined for quite some time. Does the nature of this NLP system give it an advantage over later arrivals?

Yes. I think what’s happening now is that people are developing NLP for specific purposes. For instance, some of the newer systems will find patients with diabetes, and they are developed only for that purpose. Lumanent Insights, however, has broader application. It understands how to decipher language at a linguistic level in a clinical setting, so you can tune it to do almost whatever you need it to. It can easily be adapted to new settings or new uses.

Additionally, risk adjustment in particular requires significantly more detail than condition-specific recognition. A lot of the newer NLP systems are based on machine learning, which is good, but as soon as the notes change, those systems often degrade and have to be retrained. Healthcare risk adjustment data is so diverse, spanning different providers, language usage, and facilities, that machine learning doesn’t do as well and has to be retrained for each difference to maintain performance. The data itself is more complex now, too. Clinical notes are no longer written by transcriptionists, who typically produced well-formed sentences; today it’s doctors or scribes using shorthand, abbreviations, checklist data, and so on. If anything, a more general clinical NLP is necessary to properly make use of such a varied dataset.

For machine learning based NLP engines to “catch up” to Lumanent Insights, it would take an incredibly long time, if it’s possible at all. Machine learning engines can only go so far, and they degrade when the contexts and inputs change; the algorithms have to start over every time. That’s not to say there aren’t applications for machine learning to support human expertise, just as an NLP does, but there are no shortcuts with this sort of work.

Over the past decade, NLP has become a part of healthcare technology, and it’s accelerating. Any reflections on that adoption or how it has changed the landscape?

When I started, nobody knew what NLP meant, and now everybody’s saying it. Speaking frankly though, a lot of what’s out there is just using simplistic keyword matching, and not what I’d call NLP.

How is that different?

Beyond its linguistic sophistication, Lumanent Insights knows how to decipher clinical language at a fine granularity, and it can be tuned and adapted to new settings and new uses. That’s the major strength of natural language processing in healthcare. When it finds a clinical event, it looks at every word in the sentence and the surrounding sentences, cross-referencing the event against other data points. It can differentiate a diagnosis from family history, distinguish different aspects of certainty and time, and interpret other modifiers and ambiguous meanings of words and abbreviations. For example, it recognizes the difference between a borderline condition and borderline personality disorder. It can also resolve ambiguous abbreviations: “HF” can mean “heart failure,” “hip flexion,” “hand and foot,” “hip flexor,” “hydrafacial,” or “Hispanic female.” As a result, it is easy to incorporate knowledge into the system for new uses beyond risk adjustment.
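To give a flavor of what abbreviation disambiguation involves, here is a deliberately simplified sketch. This is not Lumanent Insights or MedLEE; the sense inventory, cue words, and function names are all hypothetical, and a real engine would use full linguistic analysis rather than keyword scoring. The sketch only illustrates the basic idea that surrounding context determines which sense of “HF” applies.

```python
# Illustrative sketch only (not the actual engine): picking a sense of the
# abbreviation "HF" by scoring cue words found in the surrounding sentence.

# Hypothetical sense inventory: each sense lists context cues supporting it.
SENSES = {
    "heart failure": {"ejection", "cardiac", "diuretic", "edema", "bnp"},
    "hip flexion": {"range", "motion", "degrees", "physical", "therapy"},
    "hispanic female": {"year", "old", "presents", "demographics"},
}

def disambiguate_hf(sentence: str) -> str:
    """Return the sense of 'HF' whose cue words best match the sentence."""
    # Normalize: lowercase each token and strip trailing punctuation.
    words = {w.strip(".,;:").lower() for w in sentence.split()}
    # Score each sense by how many of its cue words appear in the sentence.
    scores = {sense: len(cues & words) for sense, cues in SENSES.items()}
    best = max(scores, key=scores.get)
    # Fall back to the most common clinical sense when no cue matches.
    return best if scores[best] > 0 else "heart failure"

print(disambiguate_hf("HF with reduced ejection fraction, started on a diuretic."))
# The cardiac cues win here, so this prints "heart failure".
```

A production system, as Dr. Friedman describes, would also weigh the surrounding sentences and other data points, not just one sentence’s vocabulary.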

Looking back over ten years of use at Health Fidelity, how do you feel about the impact your work has had in healthcare and on patients?

It feels good. I appreciate Health Fidelity’s continued innovation around it. There’s a lot of potential for other use cases of natural language processing in healthcare: recognizing adverse drug events, identifying co-morbidities and predicting outcomes. I didn’t foresee risk adjustment when I was first developing the NLP system, but in a way, it’s come full circle with clinical suspecting.

In a lot of ways, it’s back to working on decision support. With suspecting, it directly supports doctors, helping them apply their own medical expertise in an encounter and impacting care while still meeting risk adjustment goals. It’s exciting and great to see.


We’d like to thank Dr. Friedman for taking the time to discuss her work with us today. If you’d like to learn more about what Lumanent Insights can do for your risk adjustment program, contact us today.
