MIT’s AI Model for COVID-19 Uses Similar Biomarkers to Detect Alzheimer’s
A cough can reveal a lot about a person’s health, but you need to know what you’re listening for. Researchers at MIT found that the differences between the coughs of healthy people and those of people with COVID-19 are not discernible to the human ear but can be detected by AI.
In their study, the AI was able to accurately identify 98.5% of coughs from people with confirmed COVID-19, including 100% of those who were asymptomatic.
The team hopes that their findings can provide a convenient way to screen for asymptomatic COVID-19 cases.
For the project, they asked people to voluntarily submit recordings of forced coughs through their smartphones and laptops. The team then fed the recordings into the algorithm, which was trained on tens of thousands of cough samples and spoken words.
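As a rough illustration of what such a pipeline involves, the sketch below loads a cough recording, summarizes it with MFCC audio features, and trains a simple classifier. This is a minimal stand-in, not MIT’s actual system: the file names and the random-forest model are placeholder assumptions, and the published work used deep neural networks over richer acoustic features.

```python
# A minimal sketch (not MIT's actual pipeline) of how a forced-cough
# recording might be turned into features for a binary classifier.
# Assumes librosa and scikit-learn are installed; the file paths and
# the random-forest choice are illustrative, not from the paper.
import numpy as np
import librosa
from sklearn.ensemble import RandomForestClassifier

def cough_features(path: str, sr: int = 16_000) -> np.ndarray:
    """Load a cough recording and summarize it as mean MFCC coefficients."""
    audio, _ = librosa.load(path, sr=sr, mono=True)
    mfcc = librosa.feature.mfcc(y=audio, sr=sr, n_mfcc=20)  # shape: (20, frames)
    return mfcc.mean(axis=1)  # collapse over time into a fixed-size vector

# Hypothetical labeled dataset: 1 = confirmed COVID-19, 0 = healthy.
paths = ["cough_001.wav", "cough_002.wav"]  # placeholder filenames
labels = np.array([1, 0])

X = np.stack([cough_features(p) for p in paths])
clf = RandomForestClassifier(n_estimators=200, random_state=0)
clf.fit(X, labels)

# Screening a new recording: probability that the cough looks COVID-19-like.
print(clf.predict_proba(cough_features("new_cough.wav").reshape(1, -1)))
```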
Participants were also asked to fill out a survey indicating the symptoms they were experiencing, whether they had COVID-19, and whether it had been confirmed by a formal test.
So far, the team has collected about 200,000 forced-cough audio samples. About 2,500 of them were submitted by people with confirmed COVID-19, including some who were asymptomatic.
The MIT team is in the process of integrating the algorithm into an app. If it receives approval from the US Food and Drug Administration, it could become a free and non-invasive screening tool for identifying people with asymptomatic COVID-19.
App users could log in daily and simply cough into their phone. Instant results would alert them to a possible infection and the need for testing.
“The effective implementation of this group diagnostic tool could diminish the spread of the pandemic if everyone uses it before going to a classroom, a factory, or a restaurant,” said co-author Brian Subirana, a research scientist at the MIT Auto-ID Lab.
Recordings of coughs have many uses. Before the pandemic, other research groups trained algorithms to detect pneumonia and asthma from cell phone cough recordings.
The MIT team that worked on the COVID-19 algorithm also developed models to detect signs of Alzheimer’s disease through audio recordings. Weakened vocal cords are a sign of this type of dementia.
Subirana and his colleagues first trained two models, one to distinguish different degrees of vocal cord strength and another to detect changes in emotional state through speech.
Alzheimer’s patients express frustration or emotion more frequently than people without the disease, the researchers said.
They then trained another model on a database of coughs to detect changes in lung and respiratory performance.
These three models, combined with a fourth that detects muscular degradation, proved effective at identifying Alzheimer’s solely through audio recordings.
The AI model for COVID-19 uses the same four biomarkers – vocal cord strength, emotion, lung and respiratory performance, and muscular degradation. The COVID-19 model was only slightly adjusted to look for patterns specific to the viral infection; a rough sketch of this shared, four-signal design follows below.
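To make the shared-biomarker idea concrete, here is a hedged sketch of combining the four biomarker signals into one risk score. All the names and weights are hypothetical illustrations: in the published research each biomarker is a trained sub-model and their outputs are combined by learned layers, not by the fixed weighted average shown here.

```python
# A toy sketch of the ensemble idea described above: four biomarker
# signals are scored separately and then fused into a single output.
# The fields and weights are hypothetical placeholders, not MIT's
# trained networks.
from dataclasses import dataclass

@dataclass
class BiomarkerScores:
    vocal_cord_strength: float  # degree of vocal-cord weakening
    sentiment: float            # emotion/frustration signal in speech
    lung_performance: float     # lung and respiratory performance
    muscle_degradation: float   # muscular degradation signal

def combine(scores: BiomarkerScores,
            weights=(0.25, 0.25, 0.25, 0.25)) -> float:
    """Toy late fusion: weighted sum of the four biomarker scores.
    A fixed weighted average stands in for the learned combination
    used in the actual research."""
    values = (scores.vocal_cord_strength, scores.sentiment,
              scores.lung_performance, scores.muscle_degradation)
    return sum(w * v for w, v in zip(weights, values))

# The same fusion structure can serve both tasks; only the weights and
# downstream thresholds would be retuned for COVID-19 vs. Alzheimer's.
s = BiomarkerScores(0.7, 0.4, 0.8, 0.6)
print(f"combined risk score: {combine(s):.2f}")
```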
Subirana and his team’s findings were published in the IEEE Open Journal of Engineering in Medicine and Biology.
Other organizations, including Cambridge University, Carnegie Mellon University, and Novoic, a UK-based health start-up, are working on similar cough-based screening tools.