Rethink Your News Source: Why Google Gemini Might Not Be the Best Choice
AI assistants are evolving at an astounding pace, yet their accuracy remains a pressing concern. A recent comprehensive study spearheaded by the European Broadcasting Union (EBU) in collaboration with the BBC unveiled significant shortcomings in how popular AI tools tackle news inquiries. Among those evaluated, Google’s Gemini emerged as the least reliable, raising eyebrows in an age where reliable information is paramount.
The Revelations of the Study
This investigation scrutinized over 3,000 responses across 14 languages from leading AI assistants like ChatGPT, Microsoft Copilot, Gemini, and Perplexity. The findings were alarming:
- Major Errors: A staggering 45% of responses included a significant mistake, with instances of the AI blending facts with personal opinions (81%) and even injecting its own viewpoints (73%).
- Performance Rankings: Gemini delivered severe sourcing or factual inaccuracies in an astonishing 76% of its answers, roughly double the rate of Microsoft Copilot (37%), which was followed closely by ChatGPT (36%) and Perplexity (30%).
- Nature of Mistakes: Common pitfalls included confusing sources, relying on outdated information, and blurring the distinction between opinion and fact.
Screenshot from EBU
Why This Matters
If you rely on an AI assistant for your news, these insights are critical, particularly when one model clearly lags behind the others.
- Impact on Information Seeking: As AI tools increasingly serve as substitutes for traditional search engines or news summaries, students and professionals alike could find themselves misinformed.
- Trust Erosion: When AI presents unsubstantiated claims as fact, it undermines the credibility of the entire response, leaving users hesitant to trust any of the information provided.
- Public Skepticism: With an already fragile trust in media, inaccuracies generated by AI could deepen public cynicism regarding the authenticity of available information.
The News Consumer’s Dilemma
While AI assistants like Gemini may be convenient, there is a significant risk of encountering misinformation. Here’s why:
- Current Affairs & Accuracy: Specifically for current event inquiries, Gemini was found to generate sourcing or factual mistakes in nearly three out of four instances.
- Relative Performance: Although other assistants performed better, they too were far from perfect, highlighting that no AI model can be deemed entirely reliable for factual news.
- Vulnerable Demographics: Younger audiences, especially individuals under 25, are quickly adopting AI for news updates, making them particularly susceptible to misleading information.
In conclusion, while AI assistants have the potential to keep you informed, they should never serve as your sole source of truth.
Stay Informed, Stay Smart
Take charge of your information consumption. Broaden your sources, question the reliability of your tools, and engage with different platforms. Your journey to accurate knowledge starts with you: don't allow convenience to overshadow the quest for truth.