
AI deepfakes deceive media, fuel fraud, erode global trust in information
“Create a press image vividly showing Venezuelan President Nicolas Maduro being arrested and taken away by U.S. authorities in the middle of the night.”
When a reporter entered this prompt into Google’s AI chatbot Gemini on the 7th, it generated, in under a minute, a deepfake photo of President Maduro being escorted by U.S. soldiers, their faces pixelated, against a dark airport backdrop. With follow-up requests such as “insert a headline at the bottom of the image as if it were real news” and “make it slightly out of focus, as if taken in haste,” the deepfake image grew increasingly “realistic.”
On the 3rd, immediately after the Donald Trump administration announced the arrest of President Maduro, deepfake content related to the incident flooded the internet. Among crude AI videos of President Trump and Maduro dancing hand-in-hand, fake photos disguised as official White House press releases began to circulate. These images spread uncontrollably across social media, deceiving thousands. Some Latin American media outlets mistakenly reported them as real before hastily deleting the posts. The New York Times stated, “The threat of AI-generated images significantly misleading people has become a reality as the technology advances and becomes widespread.”
◇The Era of ‘Crisis of Knowing’ Created by Deepfakes
UNESCO recently warned that the popularization of AI generation technology would deepen the deepfake problem, leading to a so-called “Crisis of Knowing”: amid an overflow of information, the line between real and fake blurs, and people must constantly question what to believe.
Last November, amid heightened tensions between U.S. Immigration and Customs Enforcement (ICE) and citizens over President Trump’s immigration crackdown policies, a deepfake video surfaced online showing New York police officers shouting at ICE agents to “back off” during an illegal immigrant raid. Yann LeCun, a New York University professor dubbed one of the “Four Kings of AI,” fell for the deepfake and shared it on Threads with the comment, “Back off, now!” A reply under the post read, “Even Meta’s AI chief is fooled by an AI video — we’re all doomed.” Such mistakes recur regardless of political leaning or educational background. A study of 2,000 English-speaking participants by biometric authentication company iProov found that only 0.1% could reliably distinguish AI-manipulated videos. With deepfake videos surging from 500,000 in 2023 to 8 million last year, no one is immune to their influence.
◇Deepfakes Leading to Real-Life Damage
Recently, a nail shop owner in Changwon, South Gyeongsang Province, faced an absurd compensation demand from a customer who claimed to have suffered a bleeding finger injury, attaching an image showing the finger stained red under the nail. In a phone call with the Chosun Ilbo on the 7th, the owner said, “I wouldn’t have noticed it was AI if I hadn’t seen the strange handwriting on the ‘medical confirmation’ the customer sent along with the photo.” The case has been reported to the police. On the 1st, the UK’s Daily Mail reported that delivery companies such as Deliveroo are struggling with customers demanding refunds using fake AI images; for instance, customers manipulate photos to show undercooked burgers or flies on clean food to get their money back.
According to the Deloitte Center for Financial Services, losses from deepfake fraud are projected to grow from 12.3 billion dollars (approximately 17.815 trillion Korean won) in 2024 to 40 billion dollars (approximately 58 trillion won) by 2027.
Experts agree that the core of the deepfake problem is the lack of a clear solution. Real-time detection of deepfakes is effectively impossible, and once content spreads, it is difficult to remove every copy even after the original is deleted. This is why U.S. and European regulations, which focus on “post-action” measures to quickly delete deepfake content, are deemed ineffective. The U.S. business magazine Fortune wrote, “Deepfakes have already crossed the boundary where real and fake can be distinguished. By 2026, the problem is expected to worsen as deepfake actors develop into real-time conversational ‘performers.’”
☞Crisis of Knowing
A term for the situation in which it becomes difficult to distinguish real from fake content due to the proliferation of deepfakes. As people begin to doubt everything they see and hear in order to avoid deepfake scams, their perception of reality itself grows uncertain. Warnings continue that the crisis will intensify as more people gain access to high-performance AI technology.

