Hello and welcome to Careviser by Marie Loubiere, the weekly newsletter that cuts through the healthcare noise with a single focus: productization of the latest research and tech breakthroughs.
Clinicians are burned out. Can the digital scribe alleviate their admin burden?
🗝️ Why it matters: A recent US survey found that about half of physicians report burnout symptoms. Some of these symptoms are due to the increase in administrative tasks linked to the implementation of Electronic Medical Records. One of the ways physicians try to reduce their administrative workload is by delegating some of these tasks to secretaries and assistants. For instance, they record their notes after a patient visit and hand the recording to an assistant, who then types it up. However, this solution remains costly and does not scale. This is why the emergence of new technologies built on automatic speech recognition and natural language processing is interesting.
🔎 The study: The authors aimed to review the current progress of both the “digital scribe” technology itself and the companies building products around it. They wanted to assess both their technical and clinical validity. They included 20 studies in the systematic review.
✅ Findings:
Automatic speech recognition (ASR): The KPI used to measure the technical performance of ASR is the word error rate.
The lowest word error rate was 14.1%, meaning that even with the best-performing ASR system, they still needed to change 14% of the automatic transcript compared to the manual one, which I find pretty significant. The worst one had a whopping 65% WER! As a comparison, the authors indicate that state-of-the-art WER is around 5% in other industries and solutions.
Word Error Rate (WER) counts the number of substitutions, deletions, and insertions in the automatic transcript, compared to the manual transcript. The lower the WER, the better the performance.
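To make the metric concrete, here is a minimal sketch of a WER computation: a standard word-level edit distance between the manual (reference) transcript and the automatic one. The example sentences are made up for illustration; this is not any vendor's actual scoring code.

```python
def word_error_rate(reference: str, hypothesis: str) -> float:
    """WER = (substitutions + deletions + insertions) / reference word count."""
    ref = reference.split()
    hyp = hypothesis.split()
    # Dynamic-programming edit distance over words (Levenshtein).
    d = [[0] * (len(hyp) + 1) for _ in range(len(ref) + 1)]
    for i in range(len(ref) + 1):
        d[i][0] = i  # deleting all remaining reference words
    for j in range(len(hyp) + 1):
        d[0][j] = j  # inserting all remaining hypothesis words
    for i in range(1, len(ref) + 1):
        for j in range(1, len(hyp) + 1):
            sub = 0 if ref[i - 1] == hyp[j - 1] else 1
            d[i][j] = min(
                d[i - 1][j] + 1,        # deletion
                d[i][j - 1] + 1,        # insertion
                d[i - 1][j - 1] + sub,  # substitution (or exact match)
            )
    return d[len(ref)][len(hyp)] / len(ref)

# One substitution ("pill" for "pills") out of 4 reference words → WER 0.25
print(word_error_rate("take two pills daily", "take two pill daily"))  # → 0.25
```

At a 14.1% WER, roughly one word in seven would need correcting, which explains why a human still has to review every transcript.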
Natural Language Processing (NLP): three different tasks could be performed by the NLP systems: extracting information, classifying it, and summarizing it.
Among information-extraction solutions, the best-performing one extracted information about medication and medication dosage. The lowest-performing solutions were those extracting information about changes in medication. The study looked into the causes of the errors: in more than a third of the error cases, human reviewers actually agreed with the model's output, because the notes lacked context about the patient's medical background, meaning that the information could not easily be extracted from them.
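For a feel of what medication-and-dosage extraction involves, here is a deliberately simplified sketch using regular expressions. Real systems in the review use trained NLP models, not pattern matching, and the drug names and note text below are hypothetical examples of my own:

```python
import re

# Illustrative only: a tiny whitelist of drug names and a dose pattern.
# Production systems rely on trained models and full drug vocabularies.
DOSE_PATTERN = re.compile(
    r"\b(?P<drug>metformin|lisinopril|ibuprofen)\b"
    r"(?:\s+(?P<dose>\d+(?:\.\d+)?)\s*(?P<unit>mg|mcg|g)\b)?",
    re.IGNORECASE,
)

def extract_medications(note: str):
    """Return (drug, dose, unit) tuples found in a clinical note.

    Dose and unit are None when the note mentions a drug without a dosage,
    which hints at why 'change in medication' is the harder task: the
    relevant context often is not stated in the note at all.
    """
    return [
        (m.group("drug").lower(), m.group("dose"), m.group("unit"))
        for m in DOSE_PATTERN.finditer(note)
    ]

note = "Continue metformin 500 mg twice daily; stop ibuprofen."
print(extract_medications(note))
# → [('metformin', '500', 'mg'), ('ibuprofen', None, None)]
```

Even in this toy version, "stop ibuprofen" yields no dosage and no indication of the prior regimen, mirroring the missing-context problem the reviewers observed.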
Among summarizing solutions, several solutions attempted to summarize a conversation between a patient and a provider. Two of them asked providers to review the summaries, and in the vast majority of cases, they were satisfied with the outcome.
🚀 Opportunities ahead: The challenge in creating an efficient medical scribe is that conversations between patients and providers mix very specific medical jargon with informal language, and it can be hard for a model to determine which parts of the conversation are crucial. So far, NLP models have worked better on manually transcribed conversations, which does not alleviate the administrative burden at medical practices. The studies focused on technical validity but did not look at clinical validity or the user-friendliness of the solutions. These will be crucial steps for adoption by providers in their daily workflows.
Corti AI was founded in 2016 by a Danish team.
🤯 The problem: During a patient consultation or an emergency call, providers are often on their own to make a diagnosis and take notes. Administrative requirements can lower their attention to detail, and they can miss something.
🤗 The solution: Corti built Audia, a digital assistant that can listen in on any kind of patient consultation (remote or in-person) and support the provider in the decision-making process. Audia also automatically uploads a recording of the consultation to the EMR, enabling the generation of analytics about consultations (length, who spoke the most, common emerging symptoms…). This also helps detect health trends and allocate staff accordingly.
Audia can be integrated with existing telemedicine and videoconferencing solutions.
Corti has also built other solutions:
A call center triage and analytics solution: it helps call takers avoid missing any crucial information, sets up processes with suggested questions to elicit accurate information, and tracks the performance of the call center over time.
A COVID-19 triage solution: based on the same features as their general triage solution, but specifically trained on COVID-19 data.
📈 The traction: Corti targets emergency medical services (including their call centers) and nurse lines. In one of their use-case stories, they claim they could reduce the number of undetected out-of-hospital cardiac arrest cases by 43%. They are used by some of the world's best ER services and nurse-call platforms in Europe, and are now expanding into the US with a recently hired local sales team.
Kaid Health was founded in late 2019 at the heart of the health tech ecosystem: Boston. They are still in stealth mode. They have brought together a small team of former healthcare executives and developers to build "AI-enabled empathy". I like the term. Essentially, they aim to use natural language processing to summarize patients' interactions and medical data, enabling providers to make better decisions and coordinate care. I'll keep an eye on them.
That’s a wrap for today! Don’t hesitate to reply to this email with comments, I read and answer all emails :)
I wonder whether these solutions protect patients' privacy. Privacy should be a key concern for patients and physicians alike, and the analyses would need to be designed for it. For instance, the speech-to-text analysis could be done on the device rather than sent to a server, in order to make sure the patient's data is kept private.