What is Natural Language Processing?

Jan 30, 2020 | Jonathan Maisel

Artificial intelligence (AI) has made rapid progress from a science fiction concept to a technology used in everyday life. AI solutions are being adopted across the healthcare field at an astonishing pace to maximize efficiencies, reduce costs and provide more effective care for patients.

One of the most exciting emerging forms of AI is natural language processing (NLP). This branch of technology is focused on interpreting and manipulating human language.

Defining Natural Language Processing

Human language is full of idiosyncrasies and nuances that computers simply don’t understand yet. Natural language processing uses machine learning to improve the interactions between computers and humans in natural language.

Every time a person expresses thoughts or delivers information through written or verbal means, they are communicating an immense amount of information. Every aspect of language adds a layer of complexity, from the topic chosen to the tone we use and the exact words we select. Any data generated from speech or text is an example of unstructured data. This type of data is messy and difficult to manipulate, but, when properly processed, it can provide a goldmine of insights.

In the recent past, natural language processing and machine learning focused on interpreting text and speech through the use of keywords — a mechanical means of understanding. Today, NLP increasingly relies on artificial neural networks, which model language in ways loosely inspired by how the human brain processes it.

Everyday Uses of Natural Language Processing

Many people don’t realize just how universal NLP has already become. In fact, a wide variety of natural language processing applications exist in the technology we currently use every day. Typically, NLP-driven interactions between humans and machines go like this (a minimal code sketch of the same flow follows the list):

  1. The person speaks to the device.
  2. The device captures the audio data from the speech.
  3. The audio is converted into a text format.
  4. The device processes the text data.
  5. The processed text data is converted to audio.
  6. The device outputs an audio file in response to the human input.
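
In code, that loop might look something like the minimal sketch below. It assumes the third-party SpeechRecognition and pyttsx3 packages and a working microphone, and the answer_question function is a hypothetical placeholder for the NLP step rather than a real assistant.

```python
# Minimal sketch of the speech -> text -> NLP -> speech loop described above.
# Assumes the third-party SpeechRecognition and pyttsx3 packages are installed;
# answer_question() is a hypothetical placeholder for the actual NLP step.
import speech_recognition as sr
import pyttsx3

def answer_question(text: str) -> str:
    # Placeholder "NLP": a real assistant would parse intent and query a knowledge source.
    return f"You said: {text}"

recognizer = sr.Recognizer()
with sr.Microphone() as source:            # steps 1-2: capture audio from the speaker
    audio = recognizer.listen(source)

text = recognizer.recognize_google(audio)  # step 3: speech-to-text (requires internet)
response = answer_question(text)           # step 4: process the text

engine = pyttsx3.init()                    # steps 5-6: convert the response to audio output
engine.say(response)
engine.runAndWait()
```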

Some common applications include:

  • Machine translation: Applications like Google Translate use machine translation models, once largely statistical and now mostly neural, to map words and phrases between languages.
  • Speech recognition: Your Google Assistant, Siri, Alexa and any other device activated by voice uses NLP technology.
  • Question answering: In addition to recognizing speech, virtual assistants also have to use that input to produce answers through NLP.
  • Sentiment analysis: Also known as emotion AI or “opinion mining,” sentiment analysis attempts to determine feelings or opinions by applying NLP to text (see the sketch after this list).
  • Chat bots: When you open a chat box on a website to resolve an issue with your internet service, for example, you will likely encounter an NLP-based automated response before being transferred to a human agent.
  • Predictive text: You may have noticed that auto-correct features in smartphones have improved to adapt to your individual writing habits through the use of NLP.
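
As a concrete illustration of the sentiment analysis item above, here is a minimal sketch using NLTK's VADER analyzer, one of several off-the-shelf options. It assumes NLTK is installed and the vader_lexicon resource has been downloaded; the example sentences are invented.

```python
# Toy sentiment analysis ("opinion mining") with NLTK's VADER analyzer.
# Assumes: pip install nltk, plus nltk.download("vader_lexicon") has been run once.
from nltk.sentiment import SentimentIntensityAnalyzer

analyzer = SentimentIntensityAnalyzer()
reviews = [
    "The new patient portal is fantastic and easy to use.",
    "I waited two hours and nobody returned my call.",
]
for text in reviews:
    scores = analyzer.polarity_scores(text)  # neg/neu/pos plus a compound score in [-1, 1]
    print(f"{scores['compound']:+.2f}  {text}")
```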

Natural language processing in consumer or customer-facing products mostly serves the purpose of making interactions with devices more seamless and natural. Just 10 years ago, people were still getting used to the idea of using Siri to perform search engine queries. Today, companies are incorporating voice-activated technology into almost anything you can think of, from vehicles to home appliances.

How Does Natural Language Processing Relate to Healthcare?

Outside of the consumer sphere, NLP also plays a huge role in critical industries like healthcare. There are two main branches of NLP as it relates to healthcare:

  • Understanding human speech and deriving meaning from it.
  • Making use of unstructured data by mapping it out and helping physicians use it for decision-making.

The majority of use cases fall into one of these two categories. Let’s take a look at five different use cases of NLP in healthcare.

1. Automated Registry Reporting

Many measures of health, such as ejection fraction, are not stored as discrete, structured values. This creates a burden of regulatory reporting that bogs down efficiency and increases cost. Automated reporting using natural language processing and machine learning makes it possible to identify when values like ejection fraction appear in patient notes and codify each value in a format that various analytics platforms can use.
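
To make the idea concrete, the toy sketch below pulls ejection fraction values out of free-text notes with a simple regular expression and emits them as structured records. It is only an illustration of the concept; production clinical NLP handles far more phrasing variation, negation, and context.

```python
# Toy extractor that finds ejection fraction (EF) values in free-text notes
# and emits them as structured records an analytics platform could consume.
# A simple regex stand-in for what a real clinical NLP engine would do.
import re

EF_PATTERN = re.compile(
    r"(?:ejection fraction|LVEF|EF)\s*(?:of|is|:)?\s*(\d{1,2})\s*%",
    re.IGNORECASE,
)

notes = [
    "Echo today shows an ejection fraction of 35%, unchanged from prior.",
    "LVEF: 55% with normal wall motion.",
]

for note_id, note in enumerate(notes, start=1):
    for match in EF_PATTERN.finditer(note):
        print({"note_id": note_id, "measure": "ejection_fraction", "value_pct": int(match.group(1))})
```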

2. Data Mining

The provision of healthcare services generates an astounding amount of unstructured data. Physician notes that are handwritten or manually typed are full of information that is never used in individual care, let alone for the analysis of data on large populations. Natural language processing can aid in data mining by combing the unstructured data for patterns, helping providers make better decisions for patient care.

3. Prior Authorization

Dealing with insurance payers is often a time-draining process for physicians and health organizations as a whole. Prior authorization requirements can be a headache that increases an organization’s overhead while delaying speedy delivery of care. NLP can ease the prior authorization process by extracting and standardizing the clinical language payers require, reducing the amount of human interpretation needed before reimbursement is authorized.

4. Predictive Analytics

Natural language processing tools have a critical role in analyzing symptoms and identifying which patients are high-risk. Especially in emergency departments, it’s essential to have all the crucial information on a patient before starting treatment, but that often does not happen. Even with the use of an electronic health record (EHR) to quickly transmit a patient’s chart, providers must look through the information presented to make connections regarding what is and isn’t appropriate treatment.

NLP can be used to pick up on words and phrases that signify particular conditions and quickly alert medical professionals to potential adverse interactions. It can also be used to create population models from unstructured medical data that enable earlier disease detection.
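
A minimal sketch of that phrase-spotting step is shown below, using spaCy's PhraseMatcher. The term list and the alerting logic are invented for illustration; a real system would use curated clinical vocabularies and context checks.

```python
# Toy high-risk phrase spotting with spaCy's PhraseMatcher.
# Assumes: pip install spacy. The term list below is illustrative only.
import spacy
from spacy.matcher import PhraseMatcher

nlp = spacy.blank("en")  # a tokenizer-only pipeline is enough for phrase matching
matcher = PhraseMatcher(nlp.vocab, attr="LOWER")

risk_terms = ["chest pain", "shortness of breath", "on warfarin"]
matcher.add("HIGH_RISK", [nlp(term) for term in risk_terms])

note = "Patient reports chest pain at rest; currently on warfarin for AFib."
doc = nlp(note)
for match_id, start, end in matcher(doc):
    print(f"ALERT: found '{doc[start:end].text}' in note")
```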

5. Benchmarking Guidelines

The synthesis of clinical guidelines is a time-consuming but necessary process. NLP can help by aiding the interpretation of unstructured data. First, an algorithm takes clinical guidelines from a multitude of sources and aggregates them into a common framework within the database. The unstructured text is then organized into structured data via algorithmic parsing. From the structured data, the NLP algorithm derives and aggregates diagnosis codes. The result is a significant decrease in the time necessary to create new clinical guidelines.
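
The overall flow can be sketched as a toy pipeline like the one below. Every guideline sentence, rule, and code mapping in it is invented for illustration; a real system would rely on trained clinical NLP models and full terminologies.

```python
# Hypothetical skeleton of the guideline-benchmarking flow described above:
# aggregate guideline text from several sources, parse it into structured
# records, then derive diagnosis codes. All data and rules are invented.

guideline_sources = {
    "society_a": ["Adults with type 2 diabetes should have HbA1c checked at least twice yearly."],
    "society_b": ["Patients with heart failure should be assessed for reduced ejection fraction."],
}

# Step 1: aggregate guidelines from multiple sources into one common list.
aggregated = [text for texts in guideline_sources.values() for text in texts]

# Step 2: parse each unstructured sentence into a crude structured record (toy keyword rules).
KEYWORDS = ("type 2 diabetes", "heart failure")
structured = [
    {"text": text, "conditions": [k for k in KEYWORDS if k in text.lower()]}
    for text in aggregated
]

# Step 3: derive and aggregate diagnosis codes from the structured records (toy ICD-10 map).
CODE_MAP = {"type 2 diabetes": "E11", "heart failure": "I50"}
codes = sorted({CODE_MAP[c] for record in structured for c in record["conditions"]})
print(codes)  # ['E11', 'I50']
```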

How Does Natural Language Processing Impact Medical Documentation?

NLP is undoubtedly of great value for healthcare organizations, but it also has personal and clinical benefits for physicians. Currently, the manual data entry required to input comprehensive and accurate notes into an EHR is a frequent source of complaints about the technology. In fact, a recent study revealed that for each hour of face-to-face time spent with patients, another two hours are spent creating documentation within the EHR. This is unreasonable by most standards and can significantly contribute to burnout among physicians.

Natural language processing tools integrated with speech recognition technology can significantly reduce the time spent on documentation. A study published in JMIR Medical Informatics investigated the effects of NLP usage on the time spent creating clinical documentation. Researchers analyzed 118 pieces of documentation created by 31 physicians using four documentation methods. One method was purely NLP, another was purely traditional keyboard and mouse entry, and the other two were hybridized approaches.

The results of the study are divided by physician specialty. The purely NLP approach resulted in average documentation times of:

  • 5.2 minutes for cardiologists
  • 7.3 minutes for nephrologists
  • 8.5 minutes for neurologists

Documentation times for the traditional approach were:

  • 16.9 minutes for cardiologists
  • 20.7 minutes for nephrologists
  • 21.2 minutes for neurologists

The NLP approach clearly reduces the time spent creating documentation (a reduction of roughly 60 to 69 percent across these specialties), but it also helps solve the pressing issue of unstructured data. The widespread adoption of EHRs has led to an explosion in the amount of unstructured data available, which has previously been unusable in any meaningful way.

NLP allows doctors to take notes in a natural, narrative way because the technology can ultimately read through that unstructured data and derive clinically useful meaning from it. By identifying common phrases and evaluating them contextually in comparison to large repositories of data, NLP helps clinicians identify medical priorities.

Natural language processing fills the gap between structured and unstructured data, providing greater insight into patient conditions for all clinicians. For instance, uncontrolled diabetes is simple to represent in structured data. However, equally important questions like why the condition is uncontrolled remain unanswered without the help of NLP.

Rather than being a simple word search, NLP can identify patterns and concepts in language. This functionality is essential for providing context not offered by basic word searches. A patient may mention a specific disease without actually having it. These are some examples of contextual variation:

  • The patient is mentioning their fear of having a condition.
  • The patient is communicating a family history.
  • The physician has ruled out the condition.

NLP can identify such contexts and provide more relevant results to clinician queries.
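
A toy version of that context checking is sketched below using simple cue phrases. The cue lists are invented for illustration; real clinical NLP relies on more robust approaches such as the NegEx rule set or trained models.

```python
# Toy context classifier: decides whether a disease mention is affirmed,
# negated, family history, or expressed as a fear/concern.
# Cue lists are illustrative; real systems use rule sets like NegEx or trained models.

NEGATION_CUES = ("ruled out", "no evidence of", "denies")
FAMILY_CUES = ("family history of", "mother had", "father had")
FEAR_CUES = ("worried about", "concerned about", "afraid of")

def classify_mention(sentence: str, condition: str) -> str:
    s = sentence.lower()
    if condition not in s:
        return "not mentioned"
    if any(cue in s for cue in NEGATION_CUES):
        return "negated"
    if any(cue in s for cue in FAMILY_CUES):
        return "family history"
    if any(cue in s for cue in FEAR_CUES):
        return "patient concern"
    return "affirmed"

print(classify_mention("Patient is worried about having diabetes.", "diabetes"))  # patient concern
print(classify_mention("Mother had breast cancer.", "breast cancer"))             # family history
print(classify_mention("Pneumonia was ruled out on imaging.", "pneumonia"))       # negated
```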

How Is Natural Language Processing Improving Digital Medical Transcription?

Medical transcription has a variety of advantages over speech recognition alone, and NLP enhances these benefits. What can we expect NLP to do in the future? Here are five ways natural language processing applications will make medical transcription even more effective for healthcare organizations.

1. Improved Accuracy

One crucial problem with speech recognition that was not addressed in the JMIR study is the accuracy level of the technology. Most documentation platforms that only use speech recognition do not employ the cutting-edge NLP technology necessary to create documentation with enough accuracy. A study published in JAMA revealed that documentation produced with speech recognition software alone has an error rate of 7.4% overall. Additionally, more than 25% of those errors were clinically significant.

After review by transcriptionists, the error rate of the documentation dropped to 0.4%, making it evident that the current state of speech recognition is not advanced enough to beat a set of expert human eyes.

2. Time Saved

NLP may someday develop to the point where documentation can be produced instantaneously without the need for human review to ensure accuracy. Transcription services that don’t use any form of speech recognition tech can take days to turn around documentation, and many physicians and patients don’t want to wait that long to see the results of an appointment.

Currently, the fastest available turnaround time for transcribed documentation is the two-hour window ZyDoc offers on more than 90% of jobs. This quick turnaround is due to our hybrid system, which pairs speech recognition with transcriptionist review. As NLP technology advances and becomes even more accurate, that time will drop.

3. Increased Training Efficiency

One of the disadvantages of using speech recognition software alone is that it takes a long time to train the software to each physician’s particular needs. Without NLP, speech recognition software is not very usable out of the box. It can take weeks of correction to get the software to recognize a particular accent or the terminology of a medical specialty.

It will take a long time for NLP to be integrated into most speech recognition software, but ZyDoc is already at the forefront. With HIPAA-compliant medical transcription services, doctors don’t have to do any training of the recognition software, and with NLP, our services will only become more accurate.

4. Automatic Coding

Coding and billing can be quite complex, requiring healthcare organizations to employ multiple specialists if they want to receive timely and complete reimbursement for services rendered. An NLP engine can look at documentation produced by clinicians and pick out the relevant terminologies within the context of a patient’s medical record. While a coding specialist might not have access to a patient’s entire record, the NLP tool has a consolidated view of all relevant information within the record, supporting faster and more accurate code selection.
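
That term-spotting step can be sketched as below. The tiny term-to-code map is invented for illustration; production coding engines draw on full ICD-10 and SNOMED CT vocabularies and check context before suggesting a code.

```python
# Toy "auto-coding": scan documentation for known clinical terms and
# suggest billing codes. The term-to-code map is illustrative only;
# production engines use full ICD-10/SNOMED CT vocabularies and context checks.

TERM_TO_ICD10 = {
    "type 2 diabetes": "E11.9",
    "essential hypertension": "I10",
    "atrial fibrillation": "I48.91",
}

note = (
    "Assessment: type 2 diabetes, well controlled. "
    "Essential hypertension, continue lisinopril."
)

suggested = [
    {"term": term, "code": code}
    for term, code in TERM_TO_ICD10.items()
    if term in note.lower()
]
print(suggested)
# [{'term': 'type 2 diabetes', 'code': 'E11.9'}, {'term': 'essential hypertension', 'code': 'I10'}]
```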

In combination with documentation produced by a medical transcription service and then inserted automatically into the EHR, NLP can greatly reduce the need for coding specialists.

5. Cost Savings

The overhead associated with speech recognition software is significant, as someone has to check the documentation to confirm it is error-free. Whether doctors are spending their time editing or an organization hires an in-house transcriptionist, the time spent ensuring accuracy is money thrown away.

Medical transcription services are currently the best combination of convenience and affordability and will stay that way as NLP continues to be integrated into our technology. The more accurate our speech recognition becomes, the less need there will be for the human element, resulting in lower costs that ZyDoc passes on to its clients.

Contact ZyDoc

The Most Efficient Clinical Documentation Solution Available Today

The role of natural language processing and machine learning in healthcare continues to expand, assisting physicians with everything from diagnosis to documentation. Today, medical transcription that blends speech recognition with transcriptionist review is still the most accurate and efficient way for clinicians to create patient charts. ZyDoc’s intuitive system shortens the time it takes to create and receive comprehensive notes, and our medical transcription services combined with EHR integration eliminate the need to transfer finished notes to their correct place in the EHR.

ZyDoc is committed to using the latest cutting-edge technology to consistently improve clinical documentation services. To see what ZyDoc can do for your healthcare organization, explore our plans and pricing, and give us a call at 1-800-546-5633.
