An explosion of fake news and deepfakes due to generative AI
20 Jan 2026
Can you still tell what is real and what is fake? That cute video you saw on Instagram or TikTok of an animal doing something unexpected? What about that news story about how food delivery services are giving drivers “desperation scores”?
Fake news has been around for a very long time; even Ramesses the Great spread lies and propaganda in ancient Egypt. But something about fake news has changed in the 21st century, first with widespread access to the internet and now with the rise of generative AI. Not only is it harder to spot fake news, there is also more of it.
A well-informed public is critical to any democracy. But now anyone can generate content about anything and claim it is real; and if you want people to doubt something that is true, you can simply call it fake news. Fact-checking takes time, and by the time the truth catches up, the lie has often reached so many people that they no longer believe the correction or no longer care.
LLMs generate fake news all the time
A lot of people work behind the scenes of generative AI. I do not mean the people who use it, but the people hired by tech companies to moderate and assess the quality of the generated content. And they are telling their friends and family to stay far away from generative AI models. The Guardian spoke with a dozen of these AI raters, and their insights are damning for the tech companies.
Around the world, tens of thousands of people are hired by tech companies to rate their large language models (LLMs). You would think that these people would be positive about the models, as they see how they work. But The Guardian’s article shows that the opposite is true.
The AI raters do not trust LLMs such as ChatGPT, Gemini, Grok and Claude, because their experiences have shown them that the companies care more about growing fast than about providing a good-quality product. The 2025 AI False Claims Monitor from NewsGuard, a media literacy non-profit, showed that instead of getting better at providing correct answers, the LLMs are actually getting worse at it.
LLMs are getting worse at providing correct answers.
In the space of a year, the rate at which LLMs generate false information when asked about controversial topics has nearly doubled, from 18% in 2024 to 35% in 2025. Meanwhile, the non-response rate (the times the LLMs say they do not know or cannot answer) dropped from 31% to 0%. You now always get an answer, though there is a good chance the answer is incorrect.
LLM training data is easily manipulated
An LLM is really just a fancy predictive text or pixel program. These models are trained on everything on the internet (even copyrighted material) to detect patterns, a technique known as natural language processing, and are then instructed to generate texts or visuals that follow those patterns based on users’ prompts.
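To make the “fancy predictive text” idea concrete, here is a minimal sketch of next-token prediction. It uses the small, openly available GPT-2 model through the Hugging Face transformers library purely as an illustration; the model name and prompt are my own choices, and commercial chatbots work on the same principle at a vastly larger scale.

```python
# A minimal sketch of the "fancy predictive text" idea: an LLM repeatedly
# predicts the most likely next token given everything that came before.
# The model (GPT-2) and the prompt here are illustrative choices only.
from transformers import AutoModelForCausalLM, AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("gpt2")
model = AutoModelForCausalLM.from_pretrained("gpt2")

prompt = "The news today reported that"
inputs = tokenizer(prompt, return_tensors="pt")

# Ask the model to continue the prompt by 20 tokens. Under the hood this is
# one next-token prediction after another; nothing checks whether the
# resulting sentence is true.
outputs = model.generate(**inputs, max_new_tokens=20, do_sample=False)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```

The model has no notion of truth: it simply continues the statistical pattern, which is why fluent but false answers are so cheap to produce.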
It is no wonder that generative AI answers questions incorrectly so often, because, as a study by the UK AI Security Institute, the Alan Turing Institute and Anthropic shows, it is easy to manipulate the data these models are trained on. In fact, it takes as few as 250 ‘poisoned’ documents to get an LLM to reproduce the poisoned information.
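To give a feel for how small that number is, here is a back-of-the-envelope sketch. Only the figure of 250 poisoned documents comes from the study; the corpus sizes are hypothetical round numbers of my own choosing.

```python
# Rough illustration of the poisoning finding: around 250 malicious documents
# were enough to plant a backdoor, regardless of how large the rest of the
# training corpus was. The corpus sizes below are hypothetical examples.
poisoned_docs = 250

for corpus_size in (1_000_000, 100_000_000, 10_000_000_000):
    share = poisoned_docs / corpus_size
    print(f"{corpus_size:>14,} documents -> poisoned share: {share:.8%}")
```

Even against ten billion documents, the poisoned share is a few millionths of a percent, which is part of what makes this kind of manipulation so hard to spot.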
And there are a lot of documents online with disinformation or fake news. Disinformation operations are nothing new; the Pravda network, for example, has been around since 2014, long before LLMs showed up. In May 2025 alone, this pro-Kremlin propaganda network put out up to 23,000 articles a day.
Of course, the Pravda network is not the only propaganda operation in the world. There are many, many others, all manipulating the truth to make people act, think or vote in certain ways. This makes media literacy, and the ability to recognise fake news or propaganda, more important than ever before.
People purposefully make fake news and deepfakes with generative AI
It is highly likely, in my opinion, that the large number of articles produced by propaganda networks these days is made possible by generative AI. But propaganda networks are not the only ones making fake news with the help of LLMs. Every day, regular people do the same thing, either knowingly or unknowingly.
A clear case of prioritising growth over a good product going very wrong is xAI’s Grok. For the first few weeks of 2026, the chatbot on social media platform X (Twitter) wilfully followed thousands of prompts to ‘digitally undress’ women and children in photos uploaded by users. Elon Musk originally responded with laughing emojis, according to The Washington Post.
Bloomberg reported that a 24-hour analysis of the images posted to X by Grok showed that the chatbot generated about 6,700 realistic deepfake images every hour. Nana Nwachukwu told The Guardian that she first noticed Grok’s ability to edit people’s attire in pictures in October 2025, but the option went viral at the start of 2026. And while xAI claimed on 15 January 2026 that it was no longer possible for users to have Grok undress people, it had only geo-blocked the function, so anyone with a VPN can still undress people without consent.
While thousands of people were undressing women and children over on X, someone also decided to create a fake whistle-blower story on Reddit. The story of how a food delivery service had introduced a ‘desperation rating’ system for its drivers played into people’s worst fears about these types of services. And it is not surprising that it quickly went viral both on and off Reddit, reaching millions of people around the world.
A lie spreads further and faster than the truth.
Casey Newton, a reporter at Platformer, has debunked the story, but it took almost a full day to properly fact-check it. And the news of the debunking did not travel nearly as fast as the original post. Generative AI played a big part in why it took so long to debunk the story.
Newton has shared every step he took to debunk the story, including the documents the fake whistle-blower sent him to back up the claims: a Gemini-generated ID badge and an 18-page report. That the fake whistle-blower used Gemini rather than another LLM to generate the image was part of what exposed him, as Gemini embeds SynthID watermarks in its images, which Google can detect.
The report, meanwhile, took longer to debunk, as Newton admits he lacked the technical knowledge to see it for the forgery it was. This is a problem with many hoax stories: journalists cannot be experts in every field, and content generated by AI looks more realistic every day. And so fake news spreads around the world faster than ever before, while the truth tries to catch up.
Why you should care
Some may wonder why an AI-generated story or image is more dangerous than something written by a person or a photoshopped image. One of the dangers is precisely the ease of use. Why would you do something yourself if it takes a fraction of the time and effort to ask an LLM to do it for you?
The answer is that it can go very wrong if you ask an LLM to do the work for you. Just ask the Dutch couple who asked a friend to officiate their wedding. The friend wrote his speech and their vows with ChatGPT, and a judge has now ruled that the marriage certificate is invalid because the couple’s vows did not comply with the law. This may seem small and insignificant to you because it happened to someone else, but to the couple it was devastating.
It can go very wrong if you ask an LLM to do the work for you.
On a larger scale, Germany’s government and Holocaust memorial institutions have noticed a flood of AI-generated images portraying invented events, including meetings of concentration camp inmates and their liberators or children behind barbed wire. The German government and the institutions have called on social media platforms to stop the spread of these fake images, as they distort and trivialise history.
We have seen before how fake news can influence elections and referendums, such as the 2016 presidential election in the United States and the many lies of the Brexit campaign. Gen-AI is only making it easier for those with bad intentions to create fake news and deepfakes, making it likely that more elections will be influenced in the near future.
In January 2026, the Internet Watch Foundation (IWF) warned that "AI tools will become child sexual abuse machines without urgent action". Their data showed that 2025 was the worst year on record for online child sexual abuse material. Their numbers included a "26,362% rise in photo-realistic AI videos of child sexual abuse often including real and recognisable child victims".
In November 2017, Ian Goodfellow warned that AI might set us back 100 years when it comes to how we consume news, because we will no longer be able to trust what we are seeing. And in 2018, before the widespread availability of gen-AI, technologists and researchers like Aviv Ovadya were already warning about an “infocalypse”, in which fake images and videos “undermine or upend core civilisational institutions”.
Yet, almost a decade later, all these warnings have not resulted in enough legislation, if any, to combat the growing issue. In fact, some governments are actively working to not regulate AI at all. And so our society comes under greater threat from AI-generated fake news every day, and you should care about that.

