
That IT staffer you just hired may not be who you think. They may not even exist.
Gartner recently projected that by 2028, one in four job candidates will be generated by artificial intelligence. These fake individuals could be the work of state-sponsored hackers, cybercriminals, or simply fraudsters lining up multiple jobs to collect paychecks while performing little or no work.
“With widespread adoption of AI and GenAI by consumers, employers, and candidates, candidate fraud is easier to accomplish,” said Emi Chiba, a senior principal analyst at Gartner. The reasons can range from benign resume inflation to criminal activity, such as theft of intellectual property, she said.
Thousands of IT workers employed by the North Korean government have flooded the market over the past two years. In one case, North Korean actors used fake or illegally obtained identities to get hired at a blockchain research and development company, then stole virtual currency worth more than $900,000.
The Justice Department has shut down a number of laptop farms that enabled North Koreans to impersonate U.S.-based IT workers and funnel money to the country’s weapons program. An Arizona laptop farmer was sentenced to eight and a half years in prison after raising $17 million for the North Korean government. According to the indictment in a Tennessee case, the fake workers duped media, technology, and financial companies using fake email, social media, payment, and job site accounts, along with fake websites, proxy computers, and accomplices outside North Korea.
These 21st-century no-show jobs cost companies millions in fraud losses. Even more troubling are the security implications of fake employees running unchecked inside company networks. IT workers often have privileged access and admin-level permissions to manage accounts and apps. This gives bad actors an all-access pass at a time when insider threats are already a growing concern.
“If you get the key to the home, you don’t need to burglarize it,” says Nidhi Jain, CEO of CloudEagle.AI, a platform that helps IT and security teams manage software-as-a-service (SaaS).
The growth in remote work has enabled this practice, and AI is helping it grow. Chiba noted that fraudsters use the technology to falsify documents and fake their way through virtual interviews, masking their identities and relying on AI to feed them the right answers to recruiters’ questions.
The growth in the gig economy has been another enabler, says Adam Meyers, CrowdStrike’s head of counter adversary operations. CrowdStrike has noticed an uptick in overemployment fraud, where fraudsters line up multiple jobs — dozens per person, in one case CrowdStrike uncovered — and hire overseas proxies to do the work, Meyers notes. The North Korean state actors have been exploiting gig work for years, he says. As remote work mushroomed during the COVID pandemic, they took advantage of the opportunity to get full-time employment, using the same infrastructure and tactics.
“They’re involved in all kinds of different illicit activities to generate revenue for the regime and for the Kim family,” he says. “This is a digital extension.”
Applying a Multi-Layered Solution
As with most insider threats, the best way to counteract these fake employees is supervision and access governance, say experts. Organizations need to update their strategies for all roles, not just remote IT, and they need a multilayered approach that relies on both technology and people, said Gartner’s Chiba.
“Educating the recruiters about some of the tactics and techniques that they’re using is the first line of defense,” says Meyers. They also need a channel — such as a form or some other mechanism — to quickly report when they spot a potential fake, he says.
A good starting point is training recruiters to spot telltale signs of AI use in the interview process, says Meyers. A candidate who refuses to turn off their background blur, or background noise that sounds suspiciously like a call center, can tip off a recruiter that the interview is with a fake identity. Deepfake technology may be advancing, but it’s not seamless, he says.
“Even in the instances where we’ve seen them use deepfakes, you’ll see things get wonky,” Meyers says. “Somebody with a careful eye and paying attention can often identify that.”
Once employees are onboarded, access governance and supervision can contain the risk and stop fake employees. Good access controls that guard against insider threats can limit the blast radius if a fake employee goes rogue, and observant project managers, aided by behavioral analytics and AI agents, can spot staffers doing just enough work to keep their jobs, a telltale sign that an employee may not be real.
“Companies need performance managers,” says Meyers. “If somebody isn’t showing up for meetings, or they refuse to go on camera, or they only do a little bit of work every week, that’s something where you really have to have those tough conversations.”
AI as an Enabler and Defender
Observant humans with gut instincts are the first line of defense, but they increasingly need automated tools to monitor employees and control access, especially to privileged systems.
It may be obvious to security professionals, but many others would be surprised to learn that one in two employees in the average enterprise has excessive permissions, says Jain. “If that person is coming in with malicious intent, that is going to be bad,” she says.
Jain notes CloudEagle’s survey of CISOs found that 60% of apps and AI agents used by enterprises are unsupervised; many are shadow IT purchased directly by departments, mainly marketing and sales, without IT involvement. This can become an attack vector for fake employees to inject malicious bots and malware, so gaining visibility into these apps is a key part of addressing that risk, says Jain.
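As a concrete illustration of that visibility step, the sketch below cross-references the app domains seen in activity logs against a sanctioned-app inventory. This is a minimal sketch, not CloudEagle’s product; the log format, domain names, and function names are all hypothetical.

```python
# Minimal sketch: flag SaaS apps seen in activity logs that are not on the
# sanctioned list. The log format and app inventory are hypothetical examples.
from collections import Counter

SANCTIONED_APPS = {"salesforce.com", "slack.com", "workday.com"}

def find_shadow_apps(log_entries):
    """Return unsanctioned app domains and how often each was seen.

    log_entries: iterable of dicts like {"user": ..., "app_domain": ...},
    e.g. parsed from SSO, OAuth-consent, or expense-report exports.
    """
    seen = Counter(entry["app_domain"] for entry in log_entries)
    return {domain: count for domain, count in seen.items()
            if domain not in SANCTIONED_APPS}

logs = [
    {"user": "alice", "app_domain": "slack.com"},
    {"user": "bob", "app_domain": "ai-notetaker.example"},  # shadow IT
    {"user": "bob", "app_domain": "ai-notetaker.example"},
]
print(find_shadow_apps(logs))  # {'ai-notetaker.example': 2}
```

From there, flagged domains can be triaged into sanctioned, tolerated, or blocked, which is the visibility Jain describes.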
Adopting and enforcing the access framework set down by the National Institute of Standards and Technology (NIST), including least-privilege and just-in-time (JIT) access, would help protect against the risk posed by fake employees. However, CloudEagle’s survey found that only 5% of CISOs polled have adopted strict least-privilege access across their systems, and only 15% have adopted JIT access.
“You can no longer have a rule book that says: To do her job means Nidhi needs access to these tools,” says Jain. “I need to evaluate it every day: Do I still need access? Then give me access. Otherwise, take it away.”
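The daily re-evaluation Jain describes is, in essence, just-in-time access: grants are time-boxed and expire by default. Below is a minimal sketch of the idea against an in-memory store; a real deployment would enforce this through an identity provider or privileged-access-management tool, and every name here is hypothetical.

```python
# Minimal sketch of just-in-time (JIT) access: every grant carries an expiry,
# and access disappears by default unless it is re-approved. The in-memory
# store and all names are hypothetical illustrations.
from datetime import datetime, timedelta, timezone

grants = {}  # (user, resource) -> expiry timestamp

def grant_access(user, resource, hours=8):
    """Issue a time-boxed grant instead of standing access."""
    expiry = datetime.now(timezone.utc) + timedelta(hours=hours)
    grants[(user, resource)] = expiry
    return expiry

def has_access(user, resource):
    """Access exists only while an unexpired grant does."""
    expiry = grants.get((user, resource))
    return expiry is not None and datetime.now(timezone.utc) < expiry

def revoke_expired():
    """Run on a schedule: stale access is removed automatically."""
    now = datetime.now(timezone.utc)
    for key in [k for k, exp in grants.items() if exp <= now]:
        del grants[key]

grant_access("nidhi", "billing-admin", hours=8)
print(has_access("nidhi", "billing-admin"))  # True, until the grant lapses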
This kind of supervision requires automation and behavioral analytics at scale, Jain notes. AI can apply behavioral analytics to spot atypical access patterns that could unmask a fake, while automated access enforcement lets users into systems for a limited time and revokes access as soon as it is no longer needed for the work. Even so, CloudEagle’s survey found that 85% of enterprises still manage access manually.
“Somebody will give me access, and they’ll put a Post-it on their laptop: ‘Take away access.’ Then the Post-it falls off, and that access is never taken away,” says Jain.
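As a toy version of the behavioral analytics described above, the sketch below flags accounts whose daily access volume departs sharply from their own baseline, using a simple z-score test. Real products weigh far richer signals, such as time of day, geography, and resource sensitivity; the threshold and data here are purely illustrative.

```python
# Toy behavioral-analytics sketch: flag users whose daily access count
# deviates sharply from their own historical baseline (a z-score test).
# Real tools combine far more signals; threshold and data are illustrative.
from statistics import mean, stdev

def flag_anomalies(history, today, threshold=3.0):
    """history: {user: [daily access counts]}; today: {user: count}.

    Returns (user, z-score) pairs where today's count sits more than
    `threshold` standard deviations from that user's own mean.
    """
    flagged = []
    for user, counts in history.items():
        if len(counts) < 2:
            continue  # not enough baseline to judge
        mu, sigma = mean(counts), stdev(counts)
        if sigma == 0:
            continue  # no variation on record; z-score undefined
        z = (today.get(user, 0) - mu) / sigma
        if abs(z) > threshold:
            flagged.append((user, round(z, 1)))
    return flagged

history = {"alice": [20, 22, 19, 21, 20], "mallory": [15, 14, 16, 15, 15]}
today = {"alice": 21, "mallory": 140}  # sudden spike in privileged access
print(flag_anomalies(history, today))  # [('mallory', 176.8)]
```

In practice, a flag like this would feed a review queue rather than trigger an automatic lockout, since a spike can also just be a busy week.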
“Very Much Whack-A-Mole”
Security company Pindrop estimated that hiring managers will have to deal with 45 million to 90 million fake job hunters annually by 2028. Recruiters are aware of the problem; 54% told Gartner that candidate fraud is a bigger problem than it was two years ago. But HR departments are mostly alert for candidates faking their experience, not their identities.
CrowdStrike has been cooperating with investigators, and law enforcement has been effectively pushing back against the North Korean groups, “but it is very much whack-a-mole,” says Meyers. He notes that the recent indictments have driven some of the actors outside the United States. But organizations need to remain on the lookout and use all their resources.
“There’s lots of things that can be additive to stopping this,” says Meyers. “It’s definitely not something that we can’t do something about.”

