[Illustration: a minimalist world map overlaid with speech bubbles representing text messages containing links, some marked safe, some dangerous, and some unknown.]

The illusion of evolution: why the “Smishing Deluge” isn’t new and what Palo Alto missed

Palo Alto Networks recently described a “sophisticated global smishing operation” built on a complex ecosystem of brokers, developers, and spammers. The language is familiar. Every major vendor publishes similar reports each year, filled with technical diagrams and claims of evolution and complexity. Yet the outcome never changes. People are still told to stay vigilant, and phishing remains the number one cause of cyberattacks. You can read their full article here.

When you strip away the technical framing, what Palo Alto uncovered isn’t a new phenomenon. It’s the same phishing problem that has existed since the 1990s: fake links inside messages leading to fake websites. The difference is the scale. I was among the first people ever impersonated online while working at AOL from 1996 to 1998, experiencing phishing firsthand before most people even knew it was a thing.

Criminals can now generate and register hundreds of thousands of new domains a day. Their operations appear complex, but the tactic itself has never changed. It’s only the detection systems that keep failing to catch up.

According to Palo Alto’s own data, attackers in this campaign have registered more than 194,000 new domains since January. Nearly 30% of those domains were active for less than two days, and more than 70% disappeared within a week. That’s not sophistication. That’s automation. And it exposes a fundamental flaw in the detection-based security model.

If traditional detection worked, the world wouldn’t need to keep telling people to stay vigilant. Threat-based systems depend on identifying known indicators of danger through reputation scores, AI models, and blocklists. But these methods can only act on historical data. When a domain is new, or its use changes over time, there is no record to analyse, no pattern to learn from, and no signal to correlate. Detection cannot function without prior knowledge.
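To make that limitation concrete, here is a minimal sketch in Python of how blocklist-style detection behaves when a domain has never been seen before. The names (KNOWN_BAD_DOMAINS, detection_verdict) are hypothetical stand-ins, not any vendor’s actual implementation; the point is only that the best such a system can say about a brand-new domain is “not known to be bad”, which in practice means the link is allowed.

```python
# Hypothetical sketch of detection-based (blocklist) filtering.
# KNOWN_BAD_DOMAINS stands in for any threat feed, reputation
# database, or model trained on historical phishing reports.
KNOWN_BAD_DOMAINS = {
    "login-verify-account.example",   # reported last week
    "secure-parcel-update.example",   # reported yesterday
}

def detection_verdict(domain: str) -> str:
    """Block only domains that have already been reported as malicious."""
    if domain in KNOWN_BAD_DOMAINS:
        return "block"
    # A domain registered minutes ago has no history, so the only
    # answer available is "unknown" -- and unknown traffic is allowed.
    return "allow"

# A freshly registered phishing domain sails straight through:
print(detection_verdict("parcel-redelivery-2025.example"))  # -> "allow"
```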

This is why every year since 2016 has set a new record for phishing, even though global cybersecurity spending now runs into hundreds of billions of dollars. The number of reported breaches rose 75% last year, with an average of 1,876 attacks per organisation per quarter. And still, phishing remains the entry point for more than 90% of cyber incidents. These aren’t isolated statistics. They’re proof that detection is fundamentally unreliable.

In Palo Alto’s report, attackers are described as constantly churning domains, cycling through infrastructure to evade blocklists. That’s exactly what happens when defenders rely on a system that only recognises danger after it has been reported, and it has been happening for decades; it should not surprise security vendors. Google found that phishing links used in targeted attacks are discarded after just 7 minutes. For bulk campaigns, attackers need only 13 hours before moving on. Detection simply can’t move fast enough.

The report ends by urging people to stay vigilant and to navigate to websites manually instead of clicking links in messages. That advice exposes what every major vendor knows but rarely says out loud: they can’t protect people from new phishing links before harm has already been done. Shifting responsibility to customers isn’t protection. It’s proof the system has failed. People need tools that let them verify the legitimacy of links before opening them, without relying on instinct or guesswork. It’s unreasonable to expect anyone to inspect every link in every message, app, or browser in 2025.

The problem is architectural, not behavioural. Phishing doesn’t exploit human weakness; it exploits security systems that fail to recognise fake links. Until the security model changes, people will keep being told to look for signs of danger that are invisible.

A Zero Trust approach to phishing protection must begin with URLs. Every web link should be treated as untrusted until verified as legitimate. This is what Zero Trust means in its truest sense: verification before access. It’s not about detecting what’s bad but authenticating what’s real.

Unlike threat detection, Zero Trust URL Authentication uses binary logic. Verified links are allowed. Unverified links are not. It doesn’t rely on blocklists, AI models, or reputation scores. Instead, it authenticates each URL against a registry of verified resources every time it’s accessed. That makes it technically impossible for an unverified link to be treated as trusted.
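As an illustration of that binary logic, here is a minimal sketch in Python. VERIFIED_REGISTRY and zero_trust_verdict are hypothetical names for this example only; in a real deployment the registry of verified resources would be a live lookup service rather than a local set. The key difference from the blocklist sketch above is that the default answer flips: anything not explicitly verified is treated as untrusted.

```python
from urllib.parse import urlparse

# Hypothetical sketch of Zero Trust URL authentication.
# VERIFIED_REGISTRY stands in for a registry of verified resources.
VERIFIED_REGISTRY = {
    "www.paypal.com",
    "www.irs.gov",
}

def zero_trust_verdict(url: str) -> str:
    """Allow a link only if its host is present in the verified registry."""
    host = urlparse(url).hostname or ""
    if host in VERIFIED_REGISTRY:
        return "allow"
    # No history, reputation score, or prediction is consulted:
    # an unverified link is simply not trusted, even on day zero.
    return "block"

print(zero_trust_verdict("https://www.paypal.com/signin"))         # -> "allow"
print(zero_trust_verdict("https://paypa1-security.example/login"))  # -> "block"
```

With this default, a domain registered five minutes ago gains nothing from being new: it is blocked for the same reason every other unverified link is blocked.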

The phishing ecosystem Palo Alto describes will continue to exist until this architectural shift happens. Criminals don’t need to evolve because the defensive model hasn’t changed. They’re simply exploiting the same gap detection-based systems have always left open.

The real question isn’t how phishing is evolving, but why the security industry keeps insisting that it is.

Why do we keep telling people to stay vigilant when technology should make that vigilance unnecessary?

 
