
Biblical Armageddon: When AI Attacks Get Intelligent

Certifyd Team

A global bank's security operations centre receives 1.2 trillion alerts every 24 hours. Trillion. With a T. And yet analysts still have to triage each one manually.

That's the world Dr. Rigoberto Garcia sees accelerating into 2026 — not just more phishing, but smarter phishing, driven by cognitive neural networks that learn and adapt like biological systems. He calls it "biblical Armageddon." After 35 years in IT and 20 years of AI research, he might be the right person to make that call.

Key Takeaways

  • LLMs are not real intelligence — they carry human bias and mistakes. Building agentic systems on top of them copies vulnerabilities across an entire fabric.
  • State-sponsored actors are already using cognitive neural networks for attacks. China and Russia are deploying systems that adapt on their feet — not scripted LLM bots.
  • Hyperpersonalisation distortion is the new attack vector — attackers scrape enough data to send you a Google Maps screenshot of your home claiming a security breach.
  • 2026 will break the 50,000 CVE threshold — entering what Garcia calls "agent warfare," where everyday tools are weaponised by intelligent systems.
  • The pause strategy stops 99% of scams — before you click, stop. Pick up the phone. Call the person. That's it.

Why LLMs Are the Problem, Not the Solution

Dr. Garcia draws a sharp line between large language models and what he calls cognitive neural networks. LLMs, he argues, are built by developers — developers who make mistakes, hide mistakes, and bake bias into the system. Stack agentic capabilities on top of that, and you're distributing vulnerabilities across every connected system.

"When you create an agent tool on top of LLMs, it is the developer that is writing the code. And as a developer, we make mistakes. We are human. And then we try to hide our mistakes — that creates a vulnerability."

His research takes the other path — cognitive neural networks modelled after biological systems. He started by studying bees.

"People say bees are dumb, but each bee is dumb. As a collective they're the most intelligent beings on the planet earth."

The principle: instead of feeding a model curated training data (which LLMs struggle to synthesise), you let the system self-learn from individual data droplets. Think of raindrops on a windshield: each one sits isolated until critical mass sends them cascading together. The system discovers its own ethics, guided by guardrails that work like a police officer directing traffic rather than hard-coded rules.
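Garcia's raindrop analogy maps loosely onto threshold-cascade models from statistical physics. The toy Python sketch below is an illustration of that analogy only, not his actual research code; the grid size and threshold are arbitrary choices. Isolated "droplets" accumulate quietly until a cell passes critical mass and topples into its neighbours, which can topple in turn:

```python
import random

def simulate(drops, size=10, threshold=4, seed=0):
    """Toy 'windshield' model: droplets land in isolated cells; once a
    cell reaches the threshold it spills to its neighbours, which may
    spill in turn -- a cascade triggered only at critical mass."""
    rng = random.Random(seed)
    cells = [0] * size
    cascades = 0
    for _ in range(drops):
        cells[rng.randrange(size)] += 1
        # Relax: topple every over-threshold cell, spreading to neighbours.
        unstable = [i for i, v in enumerate(cells) if v >= threshold]
        while unstable:
            i = unstable.pop()
            if cells[i] < threshold:
                continue  # already relaxed by an earlier topple
            cells[i] -= threshold
            cascades += 1
            for j in (i - 1, i + 1):
                if 0 <= j < size:
                    cells[j] += threshold // 2
                    if cells[j] >= threshold:
                        unstable.append(j)
    return cascades
```

A handful of drops never triggers anything, but keep adding them and cascades become inevitable, which is exactly the point of the analogy.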

The Attacks That Are Already Here

This isn't theoretical. Garcia has tracked four major attacks using neural intelligence since the start of 2026.

The Lure — A text message arrives saying your child Alex just had an accident. It has the correct name. It comes from a valid school address. The intelligence behind it already knows enough about you to make it feel real. The human reaction is immediate: panic, click, compromised.

The Flaw — A Microsoft Office vulnerability (CVE-2026-2159) that bypasses security completely. Not exploited by a human hacker, but by an intelligent system that found and weaponised it autonomously.

Hyperpersonalisation Distortion — A term Garcia coined himself. Attackers scrape Google Maps, find your home, screenshot it, and send an email saying someone is trying to breach your security. It specifically targets people who already have home security systems — another layer of data that makes the deception feel credible.

Greishing — QR code attacks at scale. Someone sticks a fake QR code over a Starbucks promotion or an EV charger. You scan it thinking it's legitimate. The amount stolen is deliberately small — small enough that reporting it feels pointless, and the person on the other end dismisses you.
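One practical defence against this class of QR scam is to decode the payload and refuse anything that is not an HTTPS link to a domain you actually expect. A minimal Python sketch, where the TRUSTED_DOMAINS allowlist is a hypothetical example rather than a Certifyd or vendor API:

```python
from urllib.parse import urlparse

# Hypothetical allowlist: domains you genuinely expect a vendor's QR codes to use.
TRUSTED_DOMAINS = {"starbucks.com", "chargepoint.com"}

def looks_suspicious(qr_payload: str) -> bool:
    """Flag a decoded QR payload unless it is an HTTPS link whose host
    is an allowlisted domain (or a subdomain of one)."""
    try:
        url = urlparse(qr_payload)
    except ValueError:
        return True  # malformed URL: treat as suspicious
    if url.scheme != "https" or not url.hostname:
        return True  # no TLS, or no host at all
    host = url.hostname.lower()
    return not any(host == d or host.endswith("." + d) for d in TRUSTED_DOMAINS)
```

The suffix check matches real subdomains like pay.starbucks.com while still rejecting lookalike tricks such as starbucks.com.evil.io, where the trusted name is merely a prefix of an attacker-controlled host.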

The Password Tsunami

Garcia sees a bigger crisis coming. The move to passwordless authentication — biometrics, passkeys, tokens — is creating a single point of catastrophic failure.

"What happens when you reach critical mass on something and the technology can no longer support its own weight? It cracks."

Once passwordless hits critical mass, the attack surface shifts to session hijacking and token theft. Why target a database of passwords when you can target the single biometric that unlocks all 400 of your accounts?

And then there are emotional attacks — predatory phishing that targets the recently bereaved. Someone dies, it's published on social media, and the attacker contacts the vulnerable widow with fake insurance concerns or home repair urgency. Predatory, personal, and nearly impossible to detect at scale.

What This Means for Identity Verification

Garcia's advice is deceptively simple: pause before you act. Don't click the link. Pick up the phone. Call the person who supposedly sent the message. That human verification step stops 99% of attacks.

At Certifyd, that's exactly what we're building — systems that make identity verification fast, normal, and frictionless. Because when the attacks are this personalised and this intelligent, the last line of defence is knowing that the person on the other end is who they say they are.

Listen to the Full Episode

The full conversation with Dr. Garcia goes deeper into cognitive neural networks, the research behind teaching AI systems ethics, and why he believes 2026 is the year the threat landscape fundamentally changes. Listen on the Gone Phishing podcast page or wherever you get your podcasts.