[Image: Front entrance of a suburban house with a closed door and scattered tools on the ground, symbolising that only the entry point matters, not how the tools were made.]

AI Makes No Difference to Detecting Phishing. The Logic Is Impossible to Dispute.

There’s a widespread belief that criminals’ use of AI changes something about phishing detection. It doesn’t. And the reason it doesn’t is so simple that most people overlook it.

Let’s start with the structure of a phishing attack.

A phishing attack exists only when the final object exists.

  • A fake login page.
  • A fake app download.
  • A fake account.
  • A fake QR code.
  • A fake link.

Everything before that moment is preparation, and preparation has no bearing on detection.

To make this obvious, consider a burglar using a crowbar.

A burglar can make a crowbar by hand, buy it in a shop, borrow it, make one with an AI-powered 3D printer, steal it, or find one in a shed. The origin of the tool is irrelevant. It has no impact on how you stop someone from using a crowbar to break into a house. Security is concerned with the act of entry, not the biography of the crowbar.

Phishing works the same way.

A criminal can create a fake login page by writing the HTML manually, copying a template, buying a phishing kit, or using an AI tool. They can use any method they want to produce the final object. But the moment the object exists, all methods collapse into one outcome. A fake login page exists. A fake link to the object exists.

And that’s the only point where detection becomes possible.

The method of creation doesn’t alter the object. It doesn’t modify its structure. It doesn’t influence how a security system evaluates it. It doesn’t matter.
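
To see how literal this is, here is a minimal sketch in Python. The functions and the page content are invented for illustration, not taken from any real attack or product: two creation methods that emit the same bytes produce artefacts that no function of those bytes can tell apart.

```python
import hashlib

# Two hypothetical "creation methods" that emit the same final artefact.
def page_written_by_hand() -> bytes:
    return b"<html><form action='https://fake-login.example'>...</form></html>"

def page_generated_by_ai() -> bytes:
    return b"<html><form action='https://fake-login.example'>...</form></html>"

a, b = page_written_by_hand(), page_generated_by_ai()

# A detector is just a function of the artefact: identical bytes in,
# identical verdict out. The creation method is unrecoverable.
assert hashlib.sha256(a).digest() == hashlib.sha256(b).digest()
```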

This is the entire argument.

Security systems don’t analyse how a phishing page or any other digital asset was produced. They don’t inspect the creative workflow. They don’t evaluate whether the attacker used AI, a template, a reverse-proxy server, a text editor, or a magic wand from Hogwarts. They only evaluate the final artefact.

If the final artefact is the same, the method of creation has no influence on detection.
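
The same fact is visible in a detector’s signature. The toy check below is purely illustrative, with heuristics and a threshold invented for the example rather than drawn from any real system. Notice that its only inputs are the URL and the page itself: there is nowhere for “how it was made” to enter.

```python
from urllib.parse import urlparse

# A toy artefact-level check. Everything it can see is the final object;
# nothing about the creation method appears in the signature.
def looks_like_phishing(url: str, html: str) -> bool:
    host = urlparse(url).hostname or ""
    text = html.lower()
    signals = [
        "password" in text and "login" in text,  # credential-harvesting form
        host.count("-") > 3,                     # suspiciously hyphenated hostname
        not url.startswith("https://"),          # no TLS
    ]
    return sum(signals) >= 2  # invented threshold, for illustration only

# Template, kit, or AI tool: the call site is identical either way,
# because the function only ever receives the artefact.
print(looks_like_phishing("http://pay-pal-secure-login-example.test/",
                          "<form>login ... password ...</form>"))  # True
```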

This isn’t an opinion. It is a structural fact.

And once you see it, the claim that AI makes phishing harder to detect becomes impossible to defend. It’s as irrelevant as asking whether the burglar used a crowbar forged by a 3D printer or shaped by hand.

One company even offered to prove me wrong for $20,000. That alone should tell you how fragile their position is. You don’t need to spend money to test a claim that can be settled by logic. And if logic alone dismantles the claim, any product sold on the back of it is vapourware.

Phishing detection happens at the object. AI happens before the object. Therefore AI has no effect on detection.

This is the entire point, and nothing more needs to be added.

At most, AI makes it easier for some criminals to create more convincing messages and webpages, and to create more of them, in the same way it helps sales and marketing teams write and send more outreach. Neither change affects how a person or a security system detects anything.

Every year since 2016 has been declared the worst year for phishing, and the industry keeps claiming it’s because threats are evolving and they need new features to keep up. That’s nonsense. If AI is now being blamed for helping criminals evade security controls, why was phishing already the main entry point for cybercrime long before AI existed?

As someone who has spent decades working on URL classification and content labelling standards, technologies, and user interfaces, I often find myself debating the most basic principles with people who have no real expertise in the subject. Everyone seems to have an opinion, and many repeat ideas that sound insightful but aren’t grounded in how phishing actually works. A little knowledge is dangerous in this area, and sometimes the most helpful thing is knowing when not to speak with authority you don’t have, because doing so puts people at greater risk.

If anyone believes there’s a different way to look at this, I’d genuinely welcome the argument. I’m always open to being shown something I’ve missed, as long as the reasoning stands on its own.
