European companies should be more vigilant about insider IT threats, Google has warned, after its intelligence team found increased activity from North Korean IT workers using fake and stolen identities. One affected professional told The Stack a deepfaked version of him even made it to the interview stage for top AI roles.

The threat posed by these fake personas was raised last year, but the campaigns have persisted and shifted their focus to Europe as awareness of the issue has grown in the US.

Jamie Collier of Google’s Threat Intelligence Group said: “These individuals pose as legitimate remote workers to infiltrate companies and generate revenue for the regime. This places organizations that hire DPRK IT workers at risk of espionage, data theft, and disruption.”

See also: OpenAI, TikTok, X hunt insider threat specialists – on widely diverging salary bands

The team said its research had uncovered fake workers operating through freelance hiring platforms such as Upwork and Freelancer, communicating via Telegram, and aided by facilitators offering advice on how to dupe companies and receive salaries through indirect routes.

A “diverse portfolio” of projects was also specifically identified in the UK, with North Koreans working on development projects for websites, bots, content management systems, and blockchain technology.

Among those projects, workers showed a particular affinity for working with the Next.js and TailwindCSS frameworks and creating job marketplaces.

See also: New North Korean threat group seen deploying custom ransomware

Not keen to lose a revenue stream once uncovered, the identified workers are known to attempt to extort companies that fire them, a tactic which has reportedly grown in popularity in recent months.

Collier said increased threats to release sensitive data such as source code for internal projects could be matched to growing law enforcement activity in the US, where 14 North Korean nationals were indicted for operating an IT worker scheme in December, indicating pressure on the actors “to adopt more aggressive measures to maintain their revenue stream.”

Google also warned that companies operating ‘bring your own device’ (BYOD) policies were especially vulnerable to hiring workers who may not be all they claim, with its teams identifying BYOD environments as “potentially ripe for their schemes … conducting operations against their employers.”

Workers using their own devices are better able to evade detection by avoiding traditional log-in security measures and by limiting the information they give companies, such as the postal address needed to receive a company laptop.

Rafe Pilling, Director of Threat Intelligence for the Counter Threat Unit at Sophos-owned Secureworks, also warned that companies failing to spot these employees could face external action as well as internal losses.

He said: “Hiring these fraudulent workers puts companies at risk of sanctions violation and all the issues that come with allowing an unknown individual access to critical organizational data and systems.”

It happened to me...

The issue hits close to home for Philip Shurpik, Head of Engineering at Ukrainian InfoOps and synthetic profile detection company LetsData, who discovered 'he' had been applying to top AI and engineering positions with at least six different companies.

Shurpik told The Stack that after one recruiter, who is also a friend, messaged him via Facebook about a supposed application last month, another informed him that an application using his identity, with a CV peppered with genuine employment information pulled from his LinkedIn, had even reached the interview stage.

"She told me I was on their interview, there was something wrong with the sound but it was 'definitely' me," he said. Shurpik suspects a deepfake was used to mimic his face, though the would-be fake Philip was likely stumped when the interviewer spoke to him in Ukrainian instead of English.

A warning post on Philip's LinkedIn led four more recruiters to reveal they had seen applications using his details.

The example goes one step beyond what Google warned of, but used similar techniques: Shurpik said he was told that applicants using his name asked to be paid via a Polish "business incubator", an arrangement which allows workers to be hired through a B2B contract rather than as employees, sidestepping the need to share personal details.

LetsData's technology had previously detected fake personas acting as recruiters for other organisations online, but CEO Andriy Kusyy told The Stack the company has also now processed fraudulent applications for its own roles.

"We also had a bunch of people joining for interviews with our head of people... claiming to be from Ukraine or Poland, but the first thing that would give them away is a simple language check," he said, with applicants claiming to be from Ukraine but not speaking Ukrainian, or failing to engage in culturally-specific small talk.

While Kusyy said he could not know where the applicants were really from, fake personas were also given away by deeper social media checks, "two factor authentication" style verification through a second contact, or simple tells such as a window showing the wrong time of day for someone's claimed location.

As techniques advance, recruiters may have to start asking candidates to "turn their head 360 degrees ... or hold something in between their face" to expose deepfake masks, said Shurpik. "It's a bit of a paranoid idea to ask candidates to do it but probably a good idea in this new world."
