Hundreds of billions have been spent on cybersecurity, and AI has arrived promising to transform it. This article explains why people will still be told the same thing.
“Stay vigilant.” “Check suspicious links.” “Don’t trust unexpected messages.”
We’ve built an entire industry around technology that tells people to be careful because it can’t protect them. For all the progress in detection, automation, and artificial intelligence, the outcome remains the same: responsibility for safety always lands back on the individual.
Every government advisory, every security awareness campaign, every corporate policy ends with the same instruction: stay vigilant. Even banks and payment companies bombard customers with awareness campaigns and banners across their websites and mobile apps. Awareness has already landed, yet it keeps failing to keep people safe. That repetition tells us something important. Deep down, we all know that no matter what security system we use, it isn’t reliable enough to trust completely. So we double-check, just in case.
This will not change. Not with more data. Not with new regulations. Not with AI. Because the problem isn’t the speed or intelligence of detection. It’s what detection is built on: assumed trust. Security systems assume every link is safe until proven dangerous. But phishing exploits exactly that assumption.
Around 90% of all cyberattacks start with phishing. Most begin the same way: a link to a fake login page, payment request, website, or app download, or a message impersonating someone trusted, whether a bank, government agency, investor, celebrity, family member, potential date, or colleague. It’s the simplest and most effective form of deception online. And it works.
AI doesn’t fix that. And it never will. AI and machine learning belong to the same family as every other detection-based defence: threat detection, which assumes a link is safe unless proven otherwise. AI can make analysis faster and fake content more convincing, but it still can’t make a judgement about a brand-new link that’s never been seen before. Without historical information, there’s nothing for machine learning to learn from. So even the smartest AI will still tell people to stay vigilant, because it can’t verify what it doesn’t know.
Deepfake videos and voice calls add a new layer to this same problem. They don’t change what’s broken; they amplify it. Hundreds of thousands of fake videos are uploaded every day to platforms like Instagram, and no AI system on earth can detect them all. The only reliable protection is verification: confirming that the account behind the content is legitimate, so every video it posts can be trusted, and treating every unverified account as untrusted until proven otherwise. We need to move away from the “prove it’s bad” model in favour of the “prove it’s good” model.
The world doesn’t need more security. It needs different security.
Zero Trust is the gold standard in cybersecurity. When applied to phishing protection for web links, it replaces the assumption of safety with a requirement for proof. Instead of trying to detect what’s dangerous, it verifies what’s legitimate. Every link is treated as untrusted until proven otherwise.
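To make the inversion concrete, here is a minimal sketch of the two defaults, written in Python. The deny-list, the registry of verified domains, and the function names are all illustrative assumptions, not any real product’s API; an actual verifier would rely on registry-backed or cryptographic proof of legitimacy rather than a hard-coded set.

```python
# A minimal sketch of the two trust models. The lists and function
# names below are hypothetical, chosen only to illustrate the defaults.

from urllib.parse import urlparse

# Detection model: assume safe unless the link matches known-bad intelligence.
KNOWN_BAD = {"examp1e-bank.com", "secure-login-update.net"}  # hypothetical deny-list

def detection_check(url: str) -> str:
    domain = urlparse(url).hostname or ""
    # A brand-new phishing domain isn't in any deny-list yet,
    # so detection waves it through by default.
    return "blocked" if domain in KNOWN_BAD else "allowed (assumed safe)"

# Zero Trust model: every link is untrusted until it is proven legitimate.
VERIFIED = {"example-bank.com", "gov.uk"}  # hypothetical registry of verified domains

def zero_trust_check(url: str) -> str:
    domain = urlparse(url).hostname or ""
    # The same brand-new phishing domain fails verification immediately:
    # no history is needed, because the burden of proof is reversed.
    return "verified legitimate" if domain in VERIFIED else "untrusted"

brand_new_phish = "https://example-bank.support-refund.com/login"
print(detection_check(brand_new_phish))   # allowed (assumed safe) - the gap phishing exploits
print(zero_trust_check(brand_new_phish))  # untrusted - safe by default
```

The whole argument sits in the defaults: detection fails open on anything it has never seen, while Zero Trust fails closed, which is exactly why it doesn’t need historical data to protect against a link created five minutes ago.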
For years, we’ve been asking people to be more careful. What if we made the internet more trustworthy instead?
The way to stop telling people to stay vigilant is to make vigilance unnecessary. That starts by applying Zero Trust to the one thing behind almost every fraud and targeted cyberattack: the link. Rather than relying on guesswork, people need security that lets them verify the legitimacy of links inside messages, emails, apps, QR codes, and browsers.
So ask yourself this.
Would your family, colleagues, employees, or customers rather be told to check every single link in every message, email, app, search result, or social post, hoping they spot a threat? Or would they prefer a simple tool that confirms what’s real and legitimate in seconds, without opening the link?
Which approach do you think brings peace of mind?


