A fintech company in London hired a senior backend developer last autumn. The interview process was thorough: three rounds over video call, a technical assessment, reference checks. The candidate was impressive. Articulate, technically sharp, confident on camera.
He accepted the offer. He started on Monday. By Wednesday, the engineering lead raised a concern. The person on the Zoom onboarding call didn't quite look like the person they'd interviewed. The voice was similar but not identical. The technical fluency that had been so impressive in the interview was gone.
By Thursday, they'd confirmed it. The person who interviewed wasn't the person who started the job. Someone else had attended the interviews on his behalf, using real-time face-swapping software that's now available for under $30 a month. The actual "employee" was logging in remotely from a different country entirely.
The company had been the victim of a deepfake hiring fraud. It took them four days to notice. It cost them six weeks of lost productivity, a compromised code repository, and an urgent security audit.
This Is Not Science Fiction
The FBI issued a public warning in 2022 about deepfake technology being used in remote job interviews. Since then, the problem has accelerated dramatically.
Services now openly advertise "interview assistance" — a polite term for having someone else attend your job interview using deepfake technology. For as little as $50 per hour, a skilled impersonator will attend your video interview, answer technical questions, and get you the job. You show up on day one and hope nobody notices.
The technology enabling this has improved at a staggering rate. Voice cloning now requires as little as 30 seconds of sample audio to produce a synthetic voice with 99.8% similarity to the original speaker. Real-time face-swapping works on standard consumer hardware. Lip-sync technology matches the substitute's words to the candidate's face in real time.
For recruiters, the implications are severe. The person you interviewed — the one you spent hours assessing, whose references you checked, whose experience you validated — may not be the person who turns up to work.
Remote Work Broke the Verification Gate
Before widespread remote work, this problem was self-limiting. Candidates came to an office for interviews. They met people in person. They showed up on day one and shook hands with their new manager. The physical gate — being present, in the same room, visible to human eyes — made impersonation difficult.
Remote work removed that gate entirely.
A candidate can interview from a bedroom using a laptop camera. They can join from any location. Nobody checks whether the person on the video call is physically the same person whose CV is in the system. The entire assessment process runs through a 720p webcam feed — the perfect medium for deepfake technology.
And onboarding has gone remote too. New employees receive a laptop by courier, log in remotely, and attend virtual orientation sessions. It's entirely possible to "start" a new job without ever being in the same room as a colleague.
This is the environment deepfake hiring thrives in. Every physical checkpoint that once existed has been replaced by a video call that can be faked.
The North Korean Connection
In 2024, the US Department of Justice charged multiple individuals in a scheme where North Korean IT workers used stolen American identities and deepfake technology to secure remote IT jobs at US companies. The workers funnelled their salaries back to North Korea, generating revenue for the regime.
This wasn't a one-off. The DOJ described it as a systematic operation involving hundreds of companies. The workers used AI-generated profile photos, deepfake video for interviews, and remote access tools to perform the work from overseas while appearing to be US-based.
The scheme worked for years before being detected. And the companies affected included Fortune 500 firms with dedicated security teams.
If sophisticated enterprises with significant security budgets couldn't detect this, what chance does a mid-market company with a two-person HR team have?
What's At Stake
The consequences of deepfake hiring go beyond lost productivity.
Security exposure. A fraudulent employee with access to your codebase, your customer data, your internal systems, and your communication channels represents an acute security risk. If the impersonation was state-sponsored, the risk compounds into potential espionage or IP theft.
Compliance liability. If the person who actually performs the work isn't the person you verified, your right-to-work compliance is void. You've employed someone you haven't checked. The same penalties apply as if you'd never verified anyone at all.
Team trust erosion. When a team discovers they've been working alongside someone who isn't who they claimed to be, the psychological impact is significant. Trust — in the hiring process, in colleagues, in the organisation's ability to keep them safe — takes a serious hit.
Financial cost. Beyond the direct costs of salary, equipment, and the security audit, there's the cost of re-hiring, re-onboarding, and the weeks or months of work that may need to be reviewed or discarded.
Two-Way Verification at the Point That Matters
The traditional response to deepfake hiring is to add more stages to the interview process. More video calls. More technical assessments. Maybe an in-person stage.
But adding more stages to a broken process doesn't fix the process. If the underlying problem is that you can't verify the person on camera is the person they claim to be, then more camera time doesn't help. It just gives the deepfake more opportunities to perform.
The fix is verification at the points where it matters most: the interview itself and the first day of work.
With Certifyd, both the interviewer and the candidate authenticate before the conversation begins. A QR code scan takes 30 seconds. It confirms that the person on the video call is the verified individual whose identity has been checked — not a stand-in, not a deepfake, not an impersonator.
At onboarding, the same verification confirms that the person starting work is the same person who was interviewed. The biometric link between the interview verification and the onboarding verification is cryptographic, not visual. It doesn't rely on a hiring manager squinting at a webcam feed and thinking "that looks about right."
And because Certifyd is platform-agnostic, this works whether the interview happens on Zoom, Teams, Google Meet, or any other platform. The verification layer sits outside the video call itself, so it is independent of whatever face-swapping or video-manipulation software the fraudster is using.
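To make the idea of a cryptographic (rather than visual) link concrete, here is a minimal illustrative sketch in Python. This is not Certifyd's actual implementation; the signing key, identity IDs, and biometric template hashes are invented for the example. The point is only that a keyed signature binds the verified identity to the interview session, so the onboarding check is a comparison of cryptographic values, not a hiring manager's eyeball judgment.

```python
import hashlib
import hmac
import secrets

# Illustrative only: a hypothetical verification service holds a secret
# signing key and a hash of the biometric template captured during the
# identity check. None of this reflects Certifyd's real internals.
SERVICE_KEY = secrets.token_bytes(32)

def issue_verification_token(identity_id: str, template_hash: str,
                             session: str) -> str:
    """Sign (identity, biometric template, session) so the binding
    cannot be forged without the service key."""
    payload = f"{identity_id}|{template_hash}|{session}".encode()
    return hmac.new(SERVICE_KEY, payload, hashlib.sha256).hexdigest()

def verify_same_person(identity_id: str, template_hash: str,
                       session: str, token: str) -> bool:
    """At onboarding, a freshly captured biometric must hash to the same
    template that was bound to the interview session."""
    expected = issue_verification_token(identity_id, template_hash, session)
    return hmac.compare_digest(expected, token)

# Interview day: candidate passes the identity check, token is recorded.
interview_token = issue_verification_token("cand-042", "tmpl-abc", "interview-1")

# Day one: the same person's capture validates against the recorded token.
assert verify_same_person("cand-042", "tmpl-abc", "interview-1", interview_token)

# A stand-in produces a different biometric template and fails the check.
assert not verify_same_person("cand-042", "tmpl-XYZ", "interview-1", interview_token)
```

Because the comparison happens in `verify_same_person` rather than on a webcam feed, a face-swap that fools a human reviewer has no effect on the outcome.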
The New Standard for Remote Hiring
Remote work isn't going away. Neither is deepfake technology. The two trends are on a collision course, and recruitment is ground zero.
Companies that implement two-way verification in their hiring process won't just prevent deepfake fraud. They'll signal to candidates, clients, and regulators that they take identity seriously. In a market where trust is the scarcest commodity, that signal matters.
The alternative is to keep interviewing through a webcam and hoping the person who shows up on Monday is the person you hired on Friday. In 2026, hope is not a hiring strategy.
If you want to protect your recruitment process from deepfake candidates, see how Certifyd secures remote hiring. For a broader look at how two-way verification works, explore our person verification solution.