LOLA, a fake news detection engine, seeks to help fight online harm.


A digital tool designed to detect fake news, cyberbullying and other online harms is being developed at the University of Exeter Business School.

“LOLA” uses sophisticated artificial intelligence to detect emotional undertones in language, such as anger, fear, joy, love, optimism, pessimism and trust.

It can analyse 25,000 texts per minute, and has been found to detect harmful behaviour such as cyberbullying, hatred and Islamophobia with up to 98% accuracy.

LOLA takes advantage of the latest advances in natural language processing and behavioural theory.

Taking its name from the children’s TV series Charlie and Lola, the detection engine has been developed by a team led by Dr David Lopez, from the Initiative for Digital Economy Exeter (INDEX).

“In the online world, the sheer volume of information makes abusive behaviour harder to police,” said Dr Lopez.

“We believe solutions to address online harms will combine human agency with AI-powered technologies that would greatly expand the ability to monitor and police the digital world.

“Our solution relies on the combination of recent advances in natural language processing to train an engine capable of extracting a set of emotions from human conversations (tweets) and behavioural theory to infer online harms arising from these conversations.”
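The pipeline Dr Lopez describes, extracting a set of emotions from short texts, can be illustrated with a minimal sketch. This toy version uses an invented word-list lexicon purely for illustration; LOLA's actual engine uses trained natural language processing models, and every word list and score below is an assumption, not part of the real system.

```python
# Illustrative sketch only: a toy lexicon-based emotion extractor.
# LOLA uses trained NLP models; these word lists are invented examples.
from collections import Counter

EMOTION_LEXICON = {
    "anger": {"furious", "hate", "outrage"},
    "fear": {"scared", "panic", "terrified"},
    "joy": {"happy", "great", "love"},
}

def extract_emotions(text: str) -> dict:
    """Count lexicon hits per emotion, normalised by token count."""
    tokens = text.lower().split()
    counts = Counter()
    for emotion, words in EMOTION_LEXICON.items():
        counts[emotion] = sum(1 for t in tokens if t in words)
    total = max(len(tokens), 1)
    return {e: c / total for e, c in counts.items()}

print(extract_emotions("I hate this, it makes me furious"))
```

In a real system the lexicon lookup would be replaced by a classifier trained on labelled conversations, but the output shape, a score per emotion for each text, is the same idea.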

Such is LOLA’s potential in the battle against misinformation that it has already led to collaborations with the Spanish government and Google.

In a recent experiment, LOLA pinpointed the accounts responsible for cyberbullying Greta Thunberg on Twitter.




It has also been used to spot fake news about Covid-19, detecting the fear and anger so often used to peddle misinformation and singling out the accounts responsible.

LOLA grades each tweet with a severity score and ranks them from ‘most likely to cause harm’ to ‘least likely’. Those at the top are the tweets that score highest for toxicity, obscenity and insult.
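The grade-and-rank step described above can be sketched as follows. The field names, weights and sample scores are hypothetical assumptions for illustration, not LOLA's real scoring formula.

```python
# Hypothetical sketch of grading tweets by severity and ranking them.
# The equal-weight average and the score fields are assumptions,
# not LOLA's actual method.
def severity(tweet_scores: dict) -> float:
    """Combine per-category scores into one severity value."""
    return (tweet_scores["toxicity"]
            + tweet_scores["obscenity"]
            + tweet_scores["insult"]) / 3

tweets = [
    {"id": 1, "toxicity": 0.9, "obscenity": 0.7, "insult": 0.8},
    {"id": 2, "toxicity": 0.1, "obscenity": 0.0, "insult": 0.2},
    {"id": 3, "toxicity": 0.6, "obscenity": 0.4, "insult": 0.5},
]

# 'Most likely to cause harm' first
ranked = sorted(tweets, key=severity, reverse=True)
print([t["id"] for t in ranked])  # → [1, 3, 2]
```

Sorting by a single combined score is what lets moderators triage a large stream: attention goes to the top of the ranked list first.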

This kind of analysis could be a valuable tool for cybersecurity services, at a time when social media companies are under increasing pressure to tackle online harms.

The UK government is in the process of creating a new regulatory framework for online safety, giving digital platforms a duty of care for their users.

Dr Lopez added: “The ability to compute negative emotions (toxicity, insult, obscenity, threat, identity hatred) in near real time at scale enables digital companies to profile online harm and act pre-emptively before it spreads and causes further damage.”

Dr Lopez gives further details of the project in the document Mitigating Online Harms at Speed and Scale, and more information on LOLA can be found on the INDEX website.


Bekki Barnes

With five years’ experience in marketing, Bekki has knowledge of both B2B and B2C marketing. She has worked with a wide range of brands, including local and national organisations.



