Clip-GCN: an adaptive detection model for multimodal emergent fake news domains
Complex & Intelligent Systems (IF 5.8), Pub Date: 2024-04-20, DOI: 10.1007/s40747-024-01413-3
Yufeng Zhou, Aiping Pang, Guang Yu

Emergent news is characterized by scarce labels, so detection methods that rely on large amounts of labeled data struggle to learn features for emerging events and perform poorly at detecting emergent news with few labels. To address the challenge of limited labeled data, this study first establishes a scenario for detecting breaking news in which the domain of the events being detected is distinct from the domain of historical events. Second, we propose the Clip-GCN multimodal fake news detection model. The model uses the Clip pre-trained model to jointly extract semantic features from image-text information, with the text information serving as the supervisory signal, which addresses the problem of semantic interaction between modalities. Meanwhile, considering the domain attributes of news, the model is trained to extract inter-domain invariant features using the idea of adversarial neural networks, and intra-domain knowledge is exploited through graph convolutional networks (GCN) to detect emergent news. Extensive experiments on Chinese and English datasets from two major social media platforms, Weibo and Twitter, demonstrate that the proposed model can accurately screen multimodal emergent news on social media, with an average accuracy of 88.7%. The contribution of this study lies not only in improved model performance but also in a solution to the challenges posed by limited labels and multimodal breaking news, providing robust support for research in related fields.
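The architecture outlined above (Clip-based joint image-text features, an adversarial domain classifier for inter-domain invariant features, and a GCN for intra-domain knowledge) can be sketched roughly as follows. This is a minimal illustrative sketch, not the authors' implementation: the layer sizes, the batch-level similarity graph, the gradient-reversal coefficient, and all class and function names (ClipGCNSketch, GradReverse, GCNLayer) are assumptions, and pre-extracted CLIP-style embeddings stand in for the real encoders.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F


class GradReverse(torch.autograd.Function):
    """Identity on the forward pass; reverses and scales gradients on the backward pass."""
    @staticmethod
    def forward(ctx, x, lamb):
        ctx.lamb = lamb
        return x.clone()

    @staticmethod
    def backward(ctx, grad_output):
        return -ctx.lamb * grad_output, None


class GCNLayer(nn.Module):
    """One graph convolution: H' = ReLU(A_hat @ H @ W) over a row-normalized adjacency."""
    def __init__(self, in_dim, out_dim):
        super().__init__()
        self.linear = nn.Linear(in_dim, out_dim)

    def forward(self, h, adj):
        return F.relu(adj @ self.linear(h))


class ClipGCNSketch(nn.Module):
    def __init__(self, clip_dim=512, hidden=256, num_domains=5, lamb=1.0):
        super().__init__()
        self.lamb = lamb
        self.fuse = nn.Linear(2 * clip_dim, hidden)        # fuse image + text embeddings
        self.gcn = GCNLayer(hidden, hidden)                # exploit intra-domain knowledge
        self.fake_head = nn.Linear(hidden, 2)              # real vs. fake classifier
        self.domain_head = nn.Linear(hidden, num_domains)  # adversarial domain classifier

    @staticmethod
    def similarity_graph(feats, k=5):
        """Row-normalized kNN graph built from cosine similarity within the batch."""
        normed = F.normalize(feats, dim=-1)
        sim = normed @ normed.T
        topk = torch.topk(sim, k=min(k, sim.size(0)), dim=-1).indices
        adj = torch.zeros_like(sim).scatter_(1, topk, 1.0)
        adj = adj + torch.eye(adj.size(0), device=adj.device)  # add self-loops
        return adj / adj.sum(dim=-1, keepdim=True)

    def forward(self, img_feat, txt_feat):
        h = F.relu(self.fuse(torch.cat([img_feat, txt_feat], dim=-1)))
        adj = self.similarity_graph(h.detach())
        h = self.gcn(h, adj)
        fake_logits = self.fake_head(h)
        # Gradient reversal pushes the shared features to confuse the domain classifier,
        # encouraging inter-domain invariant representations.
        domain_logits = self.domain_head(GradReverse.apply(h, self.lamb))
        return fake_logits, domain_logits


if __name__ == "__main__":
    # Stand-ins for CLIP image/text embeddings of a batch of 8 posts (512-d each).
    img, txt = torch.randn(8, 512), torch.randn(8, 512)
    fake_logits, domain_logits = ClipGCNSketch()(img, txt)
    print(fake_logits.shape, domain_logits.shape)  # torch.Size([8, 2]) torch.Size([8, 5])
```

In a full training pipeline, the fake/real loss and the adversarial domain loss would be optimized jointly so that the shared representation remains discriminative for veracity while becoming invariant to the news domain.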




Updated: 2024-04-20