
Fake news has gained new strength with the rise of artificial intelligence. As editors and reporters with our fingers on the pulse of breaking news across social media, our team at DAILY TRIBUNE has spotted a worrying rise in deepfake videos, fabricated or misinterpreted quotes, and statements or press releases of dubious origin.
The flood of misinformation often moves faster than the public’s ability to fact-check — and that’s what makes it dangerous.
AI emboldens fake news peddlers and scammers because, wielded skillfully, it can mimic almost any style of writing or speaking. It can generate entire fake articles, press releases, or even public statements that feel authentic at a glance. A deepfake video can clone a face and a voice. A fabricated release can read like the official version. That’s why vigilance has never been more critical.
We are trained and experienced in spotting disinformation, misinformation, and potentially fake news. We take pains to confirm what we report — cross-checking sources, verifying documents, and making sure statements come from real people and legitimate institutions. And we do all of this while still striving to break stories as they happen.
But what about the public? Most people don’t have newsroom training or daily exposure to the inner workings of information networks. That doesn’t mean that Filipinos aren’t fact-checking and verifying what they read. But it can be challenging to tell what’s real from what’s engineered to deceive. This is why media literacy matters and why institutions like ours must lead by example.
AI itself is not the enemy. While debates rage on about ethics, authorship, bias, and data privacy, the truth is that AI is a tool.
Even in journalism, AI is playing a growing role — streamlining tasks like transcription, research, and translation to enhance speed, accuracy, and overall reporting quality.
The enemy is anyone who uses AI maliciously to deceive.

