After I posted on LinkedIn about the limits of AI in detecting phishing links, a security provider replied with a familiar example. They pointed to a product feature called Click Time Protection and claimed it can stop phishing by rewriting links and inspecting them at the moment a person clicks. They said it checks URL reputation, emulates the destination website to detect zero-day phishing pages, and has blocked threats that Microsoft missed.
This is exactly the kind of claim people hear every day. It sounds reassuring. It sounds modern. It sounds like technology has finally solved phishing.
It has not.
What follows is a detailed explanation of why their comment is based on the same assumptions that keep the industry focused on the wrong model.
What they said
Their comment described Click Time Protection like this (a short sketch of the flow follows the list):
- It replaces links inside emails with links that point to an inspection service.
- Every time a person clicks, the system scans the destination website.
- It uses URL reputation to check if the link is already known as malicious.
- It uses URL emulation to detect so-called zero-day phishing pages.
- They have seen examples that Microsoft passed but their tool blocked.
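To make the mechanism concrete, here is a minimal sketch of that flow in Python. Every name in it is invented; this is not the vendor's code, just the shape of what their comment describes, with the two inspection engines reduced to stubs.

```python
"""Hypothetical sketch of a click-time protection flow. No name here comes
from any real product; the two inspection engines are reduced to stubs."""
from urllib.parse import quote

INSPECTION_SERVICE = "https://inspect.example.com/check?url="  # invented endpoint

# Stand-in for a URL reputation feed: it can only contain links that
# someone, somewhere, has already reported.
KNOWN_BAD = {"http://phish.example.net/login"}

def rewrite_links(body: str, links: list[str]) -> str:
    """At delivery time, replace each original link with an inspection link."""
    for link in links:
        body = body.replace(link, INSPECTION_SERVICE + quote(link, safe=""))
    return body

def emulation_flags_phishing(url: str) -> bool:
    """Stand-in for 'URL emulation': render the destination and match it
    against patterns learned from phishing pages seen in the past."""
    return False  # a genuinely new page matches nothing the model has seen

def on_click(original_url: str) -> str:
    """Runs each time someone clicks a rewritten link."""
    if original_url in KNOWN_BAD:               # reputation: needs prior reports
        return "BLOCK"
    if emulation_flags_phishing(original_url):  # emulation: needs prior examples
        return "BLOCK"
    return "ALLOW"                              # no evidence found, click goes through
```

Notice that both decision points consult the past. Nothing in this flow can say anything about a link that has no past.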
This is a textbook example of how detection is framed as protection. And it is exactly why phishing continues to succeed.
My response
Click Time Protection is still detection, just delayed until the moment someone clicks. Rewriting links and inspecting them at the moment of use might sound preventive, but the inspection engines they describe still depend on historical information and pattern matching. URL reputation only works when a link is already known. URL emulation still relies on signals that have to exist before the system can recognise them. Both engines require evidence, which new phishing links are designed not to provide.
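To make that dependence on prior evidence concrete, here is a standalone sketch (the feed contents and URLs are invented): a reputation lookup can only return a verdict for links it has already seen, so for a first-time link the honest answer is "unknown", and the system must still decide with no evidence either way.

```python
# Invented reputation feed: verdicts exist only for links with a history.
REPUTATION_FEED = {
    "http://phish.example.net/login": "malicious",  # reported by earlier victims
    "https://www.example.com/": "benign",           # long, clean history
}

def reputation_verdict(url: str) -> str:
    # A first-time URL has no entry; "unknown" is the only honest answer,
    # and in practice unknown links are allowed rather than blocked.
    return REPUTATION_FEED.get(url, "unknown")

print(reputation_verdict("https://login-rnicrosoft.example.com/a7c2"))  # unknown
```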
The claim about zero-day detection sounds impressive, but there’s no such thing as a zero day in phishing. According to Google, criminals discard phishing URLs after about 7 minutes in a targeted attack and around 13 hours in a bulk campaign. The entire point is to avoid leaving any history for systems to analyse. If a link looks identical to a legitimate one and is appearing for the first time, there is no reputation to consult and no known pattern for emulation to match. That is why phishing remains the entry point for the vast majority of cyberattacks: it exploits the absence of data, not the presence of threats.
Systems like this don’t change the fact that people are still told to stay vigilant and check links. If threat detection were reliable, that advice wouldn’t be necessary.
We’ve seen the same examples from every other vendor. One product may block something another misses, but the pattern is always the same: detection chasing evidence that new attacks don’t leave behind.
The only way to remove the need for constant vigilance is to verify what’s legitimate before people place their trust in it. That requires Zero Trust for web links, where every URL is treated as untrusted until it has been verified as legitimate. Without that, the industry will continue to depend on reputational clues, pattern matching and AI models that can only make judgements about the past.
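By way of contrast, here is a minimal sketch of that inverted model. The verified set and the URLs are placeholders, and real verification of legitimacy is of course far harder than a set lookup; the point is the default. An unverified link is blocked even though it has no bad history.

```python
from urllib.parse import urlparse

# Placeholder for a verified-legitimate set; in practice this would come
# from whatever verification process an organisation actually trusts.
VERIFIED_LEGITIMATE = {"www.example.com", "login.example-bank.com"}

def zero_trust_link_check(url: str) -> str:
    host = urlparse(url).hostname or ""
    # Default-deny: no verification means no trust, regardless of history.
    return "ALLOW" if host in VERIFIED_LEGITIMATE else "BLOCK"

print(zero_trust_link_check("https://www.example.com/statements"))         # ALLOW
print(zero_trust_link_check("https://login-rnicrosoft.example.com/a7c2"))  # BLOCK
```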
Why it’s important to highlight this conversation
People trust the language surrounding detection, real-time scanning and zero-day claims because it sounds like the industry is ahead of attackers. But phishing is not outpacing technology. It’s exploiting the same architectural flaw the web has had since the 1990s.
A brand-new link that looks legitimate can’t be detected by any system that depends on historical evidence. That’s why phishing keeps winning.
Until we shift from detection to verification, the advice will stay the same.
“Stay vigilant.”
“Check suspicious links.”
And that advice alone is evidence that our current security systems can’t protect people in the way people assume they can.


