Interpretable Fake News Detection
Abstract
Fake news is one of the most serious challenges facing the news industry today and can have adverse impacts on society. Recent progress in deep neural networks (DNNs) has shown promising results in detecting fake news. However, a critical missing piece of such detection is interpretability, i.e., why a particular piece of news is detected as fake. This thesis investigates several approaches for explainable detection of fake news across its different forms: text, images, and videos. First, we study techniques to efficiently explain the output prediction of a detection model for any given news item. This sheds light on the decision-making process of the detection models and can illustrate why a model succeeds or fails.
Second, we show that refining those explanations can enhance the model's generalization ability. To make this refinement process feasible, we propose an active learning strategy that identifies the challenging examples in the training data responsible for the model's overfitting. Several experiments demonstrate the effectiveness of our active learning strategy for image- and video-based fake news detection. Third, we propose an interactive explainable detection system for language-based (text) fake news to help end-users assess news credibility. We provide several kinds of explanations, such as word/phrase importance, attribute importance, linguistic feature importance, and supporting examples, which help end-users understand why the system reaches its decision.
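To make the word/phrase-importance idea concrete, here is a minimal sketch of one common way to compute such scores: occlusion, i.e., measuring how much a classifier's "fake" probability drops when a word is removed. The `toy_model` classifier below is a hypothetical stand-in for illustration only, not the detection model used in the thesis.

```python
# Occlusion-based word importance: score each word by how much the model's
# "fake" probability drops when that word is removed from the text.

def word_importance(text, predict_fake_prob):
    """Return (word, importance) pairs for a whitespace-tokenized text."""
    words = text.split()
    base = predict_fake_prob(" ".join(words))
    scores = []
    for i in range(len(words)):
        occluded = " ".join(words[:i] + words[i + 1:])
        # Importance = drop in predicted fake probability without this word.
        scores.append((words[i], base - predict_fake_prob(occluded)))
    return scores

# Toy stand-in classifier (hypothetical): flags sensational vocabulary.
SENSATIONAL = {"shocking", "miracle"}

def toy_model(text):
    hits = sum(w.lower() in SENSATIONAL for w in text.split())
    return min(1.0, 0.2 + 0.4 * hits)

scores = word_importance("shocking miracle cure found", toy_model)
# Sensational words receive higher importance than neutral ones.
```

In a real system, `predict_fake_prob` would be the trained DNN's output, and the resulting scores could be shown to end-users as a highlighted version of the article.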
Citation
Pentyala, Shiva Kumar (2019). Interpretable Fake News Detection. Master's thesis, Texas A&M University. Available electronically from https://hdl.handle.net/1969.1/189157.