
Fighting rumours and fake news

Disinformation is a massive problem in the news world. Two new projects want to use AI to expose fake news. 

Lisa Priller-Gebhardt , 26.04.2023
The threat posed by fake news – AI can help here. © AdobeStock / Rawpixel Ltd.

Fake news has become far more prevalent, driven in part by the war in Ukraine, the coronavirus pandemic and the climate crisis. This does not refer to the ordinary false information that constantly floods social media platforms such as Facebook and Twitter, but to fake news reports that are disseminated deliberately: images, texts and videos sent out via bots or fake accounts that are difficult to recognise as fake. The objective is to stoke fears and undermine social cohesion.

Artificial intelligence (AI) can be used not only to spread such disinformation, however, but also to uncover it. In Germany for example, the computer science research centre Forschungszentrum Informatik (FZI) has launched the DeFaktS project to this end with the support of the Federal Ministry of Education and Research (BMBF). DeFaktS stands for “eliminating disinformation campaigns by revealing factors and stylistic devices”. 

DeFaktS – AI warns against disinformation 

The programme involves training the AI to recognise and warn against disinformation in suspicious social media services and messenger groups. To achieve this, the researchers have compiled lots of datasets from Twitter and Telegram posts. This allows the artificial intelligence to recognise the stylistic devices typically used in fake news, such as emotional polarisation. 
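To give a rough idea of what "recognising stylistic devices" can mean in practice, here is a deliberately simplified sketch. It is not the DeFaktS pipeline: the marker phrases, their weights and the threshold are all invented for illustration, whereas the real project trains models on large collections of Twitter and Telegram posts.

```python
# Toy sketch: scoring a post for emotionally polarising style.
# All marker phrases and weights below are invented for illustration;
# they are NOT taken from the DeFaktS project.

POLARISING_MARKERS = {
    "wake up": 1.5,
    "they lie": 2.0,
    "catastrophe": 1.5,
    "traitor": 2.0,
    "elites": 1.0,
}

def polarisation_score(text: str) -> float:
    """Sum the weights of all marker phrases found in the text."""
    lowered = text.lower()
    return sum(w for phrase, w in POLARISING_MARKERS.items() if phrase in lowered)

def flag_post(text: str, threshold: float = 2.0) -> bool:
    """Warn when the cumulative style score crosses the threshold."""
    return polarisation_score(text) >= threshold

print(flag_post("Wake up! The elites caused this catastrophe!"))  # True
print(flag_post("The city council meets on Tuesday."))            # False
```

A trained model replaces the hand-written word list with patterns learned from labelled data, but the basic idea is the same: the system reacts to how something is written, not to whether the claim itself is true.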

The next step is to use the trained AI in a so-called XAI (explainable artificial intelligence). According to the FZI, its job is not only to alert users to possibly dubious content, but also to make clear what prompted it to issue the warning. The researchers do not wish to filter or censor content, however; the idea is for users themselves to critically question information. “In the interests of digitally educating society, this aspect was very important to us,” explains FZI expert Jonas Fegert.
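The explainability idea can be sketched in the same toy style: instead of a bare warning, the system also reports which signals triggered it. Again, the marker phrases and weights are invented assumptions, not the project's actual features.

```python
# Toy sketch of an "explainable" warning: return the matched stylistic
# markers alongside the flag, so the user can judge for themselves.
# Marker phrases and weights are invented for illustration only.

POLARISING_MARKERS = {
    "wake up": 1.5,
    "they lie": 2.0,
    "catastrophe": 1.5,
}

def explain_flag(text: str, threshold: float = 2.0) -> dict:
    """Flag a post and list the reasons, rather than silently filtering it."""
    lowered = text.lower()
    hits = {p: w for p, w in POLARISING_MARKERS.items() if p in lowered}
    score = sum(hits.values())
    return {"flagged": score >= threshold, "score": score, "reasons": hits}

result = explain_flag("Wake up, this is a catastrophe!")
print(result["flagged"], sorted(result["reasons"]))  # True ['catastrophe', 'wake up']
```

Showing the reasons instead of hiding the post matches the FZI's stated goal: the user, not the system, makes the final judgement.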

noFake – collaboration between humans and AI 

The Correctiv research centre also wants to provide people with a tool that will help them recognise fake information. Together with Ruhr University Bochum and TU Dortmund University, it is developing the “noFake” project, in which humans and artificial intelligence work together to make it easier to distinguish between facts and false information. To this end, the scientists are creating AI-based assistant systems that can not only detect potential fake information but also help analyse texts and images. Citizens keen to get involved as crowdworkers can then check information themselves via the Correctiv.Faktenforum platform, after first being trained in how to do so. “In this way we are setting up a fact-checking community that combines voluntary engagement with professional standards,” explains Correctiv publisher David Schraven.

© www.deutschland.de