
COVID-19 Misinformation Detection: Machine Learned Solutions to the Infodemic (Preprint)

Authors :
Dhiraj Murthy
Nikhil Kolluri
Publication Year :
2022
Publisher :
JMIR Publications Inc., 2022.

Abstract

BACKGROUND Because the volume of COVID-19-related misinformation exceeds the capacity of fact checkers, automated and web-based approaches can provide effective deterrents to online misinformation. Despite the progress of initial, rapid interventions, the volume of COVID-19-related misinformation remains immense and continues to overwhelm fact checkers.

OBJECTIVE To improve automated and machine-learned methods for infodemic response.

METHODS We used machine learning to evaluate whether training a model only on COVID-19-related fact-checked data, only on general fact-checked data, or on both would yield the highest performance. Fact-checked, ‘false’ content was combined with programmatically retrieved ‘true’ content to create a labeled COVID-19 misinformation dataset of ~7,000 entries, and 20,000 crowdsourced votes were collected to human-label this dataset.

RESULTS We found that although including COVID-19 misinformation data maintained or improved model performance, the best-performing model tested used a combination of general-topic and COVID-19-topic content. Our Bi-LSTM model trained on COVID-19-specific data achieved an internal validation accuracy of 93% and an external validation accuracy of 75%, confirming the model's utility. A crucial contribution of this study is a combined model that outperformed human votes on misinformation: on outputs where the machine learning model agreed with the human votes, we achieved an accuracy of 90%, compared with 73% for human votes alone.

CONCLUSIONS General-topic content may supplement limited amounts of specific-topic content in low-data situations (e.g., future pandemics) to increase accuracy. Achieving an external validation accuracy of 75% with a Bi-LSTM shows that machine learning can produce much-better-than-random results on the difficult task of classifying the veracity of COVID-19 content. The 90% accuracy on a “high-confidence” subset comprising 58.7% of the dataset (combining machine-learned and human labels) suggests that even suboptimal machine-learned labels can be augmented with crowdsourced votes to improve accuracy above human-only levels. These results support the utility of supervised machine learning to deter and combat future health-related disinformation.
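The “high-confidence subset” idea described in the abstract can be sketched as follows: reduce each item's crowdsourced votes to a majority label, keep only the items where the model's label agrees with that majority, and compare accuracy on the agreed subset against accuracy of the human votes alone. This is a minimal illustration of the filtering logic only; the function names and the toy data are hypothetical, not drawn from the study's dataset or code.

```python
from collections import Counter

def majority_vote(votes):
    """Reduce a list of crowdsourced labels (e.g. 'true'/'false') to one label."""
    return Counter(votes).most_common(1)[0][0]

def high_confidence_subset(model_labels, crowd_votes):
    """Indices where the model's label matches the crowd's majority label."""
    return [
        i for i, (m, v) in enumerate(zip(model_labels, crowd_votes))
        if m == majority_vote(v)
    ]

def accuracy(labels, truth, indices=None):
    """Fraction correct, optionally restricted to a subset of indices."""
    idx = list(range(len(truth))) if indices is None else list(indices)
    return sum(labels[i] == truth[i] for i in idx) / len(idx)

# Toy data (illustrative only): ground-truth veracity labels, model outputs,
# and three crowdsourced votes per item.
truth        = ["false", "false", "true", "true"]
model_labels = ["false", "false", "true", "true"]
crowd_votes  = [
    ["false", "false", "true"],
    ["false", "true", "true"],   # crowd majority is wrong here
    ["true", "true", "false"],
    ["true", "false", "true"],
]

crowd_labels = [majority_vote(v) for v in crowd_votes]
agreed = high_confidence_subset(model_labels, crowd_votes)

human_only_acc      = accuracy(crowd_labels, truth)          # 0.75
high_confidence_acc = accuracy(model_labels, truth, agreed)  # 1.0 on the agreed subset
```

On this toy data the model and crowd agree on 3 of 4 items, and accuracy on that agreed subset exceeds human-only accuracy, mirroring (but not reproducing) the 90% vs. 73% pattern the study reports.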

Details

Database :
OpenAIRE
Accession number :
edsair.doi...........fd5891e62919d388807f545d1cabb1db