“State of the Art and Automated Rumour Verification in Social Media Conversations”, a guest seminar by Dr Elena Kochkina, University of Warwick and the Alan Turing Institute, London
26 January 2021
Social media have grown tremendously in popularity across the world as a means of delivering breaking news and stories on a wide range of issues faster than traditional media. However, the absence of content verification on social media means that misinformation is published and spreads like wildfire. False and malicious rumours reaching global audiences can lead to serious consequences: they can damage reputations and may even place individuals or communities in danger.
On 28th October, our invited speaker, Dr Elena Kochkina (Postdoctoral Research Fellow at the University of Warwick and the Alan Turing Institute in London), presented a seminar on “State of the Art and Automated Rumour Verification in Social Media Conversations”.
By way of motivation, Dr Kochkina presented an overview of the findings of the Reuters Institute Digital News Report 2019, which revealed a high level of concern among users about which information on the internet is real and which is fake. The report also found that the average level of trust in the news, across all countries and platforms included in the study, had fallen to 42%, with fewer than half of respondents agreeing that they trust the news media they themselves use.
Rumour verification is traditionally performed manually by professional journalists and fact-checkers, but, given the scale of content generation today, automated systems are needed to assist in this process. Dr Kochkina explained that a typical automated rumour verification system comprises the following steps: rumour detection, rumour tracking, stance classification and verification.
The aim of rumour detection is to determine whether a message is a candidate rumour, filtering out opinions and claims that have no potential impact. Rumour tracking is the collection of sources and responses related to the identified rumour. The stance of those sources and responses towards the rumour is then established: a stance can be supporting, denying, questioning or commenting. Finally, machine learning techniques are used to classify the rumour as true, false or unverified.
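The four stages described above can be sketched as a toy Python pipeline. The heuristics here (keyword markers, a stance lexicon, majority voting) are illustrative assumptions standing in for the trained machine learning models a real system would use:

```python
def detect(post: str) -> bool:
    """Stage 1 (detection): flag posts that look like candidate rumours.
    Toy heuristic: marker words often seen in unverified claims."""
    markers = ("breaking", "unconfirmed", "reportedly", "claims")
    return any(m in post.lower() for m in markers)

def track(candidate: str, stream: list) -> list:
    """Stage 2 (tracking): collect responses related to the rumour.
    Toy heuristic: keyword overlap with the candidate post."""
    keys = set(candidate.lower().split())
    return [p for p in stream if keys & set(p.lower().split())]

def stance(response: str) -> str:
    """Stage 3 (stance): supporting / denying / questioning / commenting.
    Toy lexicon lookup in place of a trained stance classifier."""
    text = response.lower()
    if any(w in text for w in ("true", "confirmed", "agree")):
        return "support"
    if any(w in text for w in ("false", "fake", "hoax", "debunked")):
        return "deny"
    if "?" in text:
        return "query"
    return "comment"

def verify(stances: list) -> str:
    """Stage 4 (verification): aggregate stances into a verdict."""
    support, deny = stances.count("support"), stances.count("deny")
    if support > deny:
        return "true"
    if deny > support:
        return "false"
    return "unverified"
```

A real system would replace each rule with a learned model, but the data flow (detect, then track, then classify stance, then aggregate into a verdict) mirrors the pipeline Dr Kochkina described.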
Dr Kochkina went on to explain the types of language technology approach that can be taken to identify misinformation. The majority of existing rumour verification approaches use linguistic features and lexical cues. Others include temporal and structural approaches, network-connection and user-based approaches, stance classification, and media or image content analysis. Some of the latest approaches also consider external information such as Web of Trust scores and credibility rankings.
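To make the feature categories concrete, here is a minimal sketch of a feature extractor covering lexical, structural and user-based cues. The feature names and cue words are illustrative assumptions, not the features of any specific published system:

```python
def extract_features(post: str, author_followers: int, reply_count: int) -> dict:
    """Build a small feature dictionary for one post, mixing the cue
    categories mentioned above. Values are toy illustrations."""
    text = post.lower()
    return {
        # linguistic / lexical cues
        "has_question_mark": "?" in text,
        "has_hedging": any(w in text for w in ("allegedly", "reportedly", "rumour")),
        "exclamations": post.count("!"),
        # structural cue: size of the conversation around the post
        "reply_count": reply_count,
        # user-based cue: crude proxy for author credibility
        "author_followers": author_followers,
    }
```

Such a dictionary would typically be vectorised and fed to a classifier, possibly alongside external signals such as Web of Trust scores.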
Selecting features indicative of rumour veracity is complicated by the lack of a universally agreed rumour data set: researchers use different data sets, which can produce different outcomes. Rumour verification therefore remains a challenging task.
Dr Kochkina’s research focuses on Rumour Stance and Veracity Classification in social media conversations. Her current work focuses on estimating predictive uncertainty of machine learning models.
The Computing Seminar Series gives our students an insight into the range of research projects undertaken in the School of Computing, and offers an opportunity to hear from experts in industry and academia on the state of the art in computer science. These enrichment activities help broaden our students’ understanding of computing and its wider impact on society. All staff and students are encouraged to attend Computing Seminars.
Check out upcoming School of Computing events and seminars.
Find out more about our courses in computing.