[Header illustration: a hooded figure holding a torn sheet of paper displaying the Pornhub and Mixpanel logos, with shadowy hooded figures in the background suggesting a cyberattack.]

PornHub, OpenAI, and the Same SMS Phishing (Smishing) Failure

This post explains why PornHub’s extortion story matters far beyond adult content. It’s the same phishing-led analytics failure that exposed OpenAI customer data and affected other Mixpanel customers who still haven’t come forward. Different brands. Same entry point. Same security failure. The problem isn’t who was targeted. It’s that phishing keeps working and security systems still fail to stop it.

PornHub is being extorted after search and watch history linked to Premium members was stolen. The reporting says the data came from a breach at analytics vendor Mixpanel, triggered by an SMS phishing attack on November 8th, 2025. That matters because this is the same incident Mixpanel previously acknowledged affected OpenAI and CoinTracker, with more customers likely impacted but not yet public.

The entry point is stated clearly in the story.

Mixpanel suffered a breach on November 8th, 2025, after an SMS phishing attack enabled threat actors to compromise its systems.

Everything that follows is damage control.

PornHub’s notice says this.

This was not a breach of Pornhub Premium’s systems. Passwords, payment details, and financial information remain secure and were not exposed.

That language is designed to narrow concern. It encourages readers to focus on payments and credentials and ignore what actually creates harm here.

Later in the article, the exposed data is described in detail. Email addresses. Locations. Video URLs. Video names. Keywords. Timestamps. Search, watch, and download activity.

For an adult platform, this isn’t peripheral data. It’s the highest-risk data the company holds. It enables coercion, reputational harm, and long-term personal exposure. Saying what wasn’t taken doesn’t reduce that risk. It distracts from it.

PornHub also says it stopped working with Mixpanel in 2021, suggesting the data is old. Age doesn’t make this safer. It makes it worse. Users reasonably assume historical activity isn’t retained indefinitely or left sitting in third-party systems years later.

The story then introduces a second narrative from Mixpanel.

“The data was last accessed by a legitimate employee account at PornHub’s parent company in 2023.”

This is framed as evidence that the November 2025 breach may not be the source. In reality, it confirms another failure. The dataset existed. It was retained. It was accessible. And it remained a viable extortion asset years after it should’ve been minimised or deleted.

Whether the final unauthorised access happened through Mixpanel’s compromised systems or elsewhere doesn’t change the core issue. Highly sensitive behavioural telemetry was allowed to exist as a bulk, portable dataset tied to identity.

The attacker matters less than the architecture.

The article spends time detailing ShinyHunters’ history. Salesforce integrations. Oracle zero day exploitation. Ransomware platforms. This creates a sense of sophistication and inevitability. It subtly suggests that no reasonable defence could’ve stopped this.

That framing protects vendors and institutions. It avoids asking why phishing, which is routine and expected, was enough to expose hundreds of millions of records.

Security systems are supposed to assume phishing will succeed. People will click. Messages will look real. That’s not the failure. The failure is allowing a single compromised identity to reach analytics systems holding raw behavioural data at massive scale.

Training doesn’t stop this. Monitoring doesn’t stop this. Incident response doesn’t stop this. Those appear only after access has already occurred.

Mixpanel’s response language follows the standard pattern. Detect. Execute incident response. Contain. Engage external partners. These are post harm actions. They don’t explain why analytics events contained raw URLs, titles, and search terms linked to emails. They don’t explain why retention policies allowed data from 2021 to remain exposed in 2025. They don’t explain why phishing was able to translate into bulk data access at all.
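
To make the minimisation point concrete, here is a rough Python sketch of what stripping high-risk fields from an analytics event could look like before telemetry ever leaves a platform. The field names and the salting scheme are hypothetical, not Mixpanel’s API or PornHub’s actual pipeline; the point is that raw identity and raw behavioural detail never need to travel together.

    import hashlib

    # Hypothetical per-tenant salt; in practice this would live in a secrets store.
    TENANT_SALT = "per-tenant-salt"

    def minimise_event(event: dict) -> dict:
        """Strip or pseudonymise high-risk fields before an analytics event is sent."""
        return {
            # Replace the raw email with a salted hash so events can still be
            # grouped per user without the vendor ever holding the identity.
            "user_ref": hashlib.sha256((TENANT_SALT + event["email"]).encode()).hexdigest(),
            # Keep only coarse, aggregate-friendly fields.
            "event_type": event["event_type"],          # e.g. "search" or "watch"
            "timestamp_day": event["timestamp"][:10],   # truncate to the date
            # Raw URLs, video titles, and search terms are deliberately dropped:
            # they are the extortion asset, and aggregate analytics rarely needs them.
        }

Pair that with a retention window measured in months rather than years, and the 2021 dataset now being used for extortion in 2025 simply wouldn’t exist.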

There’s an additional detail that sharpens this picture.

Since the breach became public, Mixpanel has removed all mention of the security incident from its site. The original incident language has been replaced with an edited blog post that offers no meaningful explanation of what happened, how access was obtained, or what controls failed.

That absence matters.

Removing incident disclosures doesn’t reduce risk. It reduces visibility. It prevents customers and regulators from understanding how their data was exposed and whether the underlying conditions still exist. Silence isn’t remediation. It’s narrative management.

This is why PornHub, OpenAI, and other Mixpanel customers belong in the same conversation.

The common factor isn’t the brand. It isn’t the audience. It isn’t the sensitivity of the product. It’s an analytics supply chain that treats outbound telemetry as low risk and relies on detection and response once phishing inevitably works.

A technically honest explanation would say this: phishing was expected to succeed, and the failure was letting one compromised identity reach bulk behavioural data that should no longer have existed. Mixpanel’s published response says nothing of the kind.

Here’s what’s left on Mixpanel’s website about the attack

Quoted from Mixpanel’s published response

On November 8th, 2025, Mixpanel detected a smishing campaign and promptly executed our incident response processes. We took comprehensive steps to contain and eradicate unauthorized access and secure impacted user accounts.

  • We proactively communicated with all impacted customers.
  • Secured affected accounts
  • Revoked all active sessions and sign-ins
  • Rotated compromised Mixpanel credentials for impacted accounts
  • Blocked malicious IP addresses
  • Registered IOCs in our SIEM platform
  • Performed global password resets for all Mixpanel employees
  • Engaged third-party forensics firm
  • Performed a forensic review of authentication, session, and export logs
  • Implemented additional controls to detect and block similar activity going forward.

Why this doesn’t prevent the next phishing attack

Everything Mixpanel describes happens after the compromise has already occurred.

The attack vector was smishing. An SMS containing a link that looked legitimate. The victim clicked it, trusted it, and authenticated. No malware was required. No exploit was used. No system was broken. The user simply followed a link that the system gave them no reason to distrust.

Revoking sessions, rotating credentials, blocking IPs, registering indicators, and engaging forensics are all downstream actions. They assume failure has already happened and focus on damage control. That’s incident response, not prevention.

A new URL has no history. Threat intelligence can’t flag what hasn’t been seen before. SIEM rules can’t trigger on a user doing the correct thing on a legitimate looking page. IP blocking fails because attackers rotate infrastructure instantly. Logging only explains what went wrong after customers are already exposed.

The core problem isn’t authentication.
It’s implicit trust in URLs.

As long as employees receive links in SMS, email, chat, calendars, or collaboration tools with no way to know whether those links are legitimate, phishing will continue to work and they will be exposed to future attacks.

Why Zero Trust for URLs is the only effective fix

A Zero Trust approach treats every URL as untrusted by default.

Before an employee authenticates, before credentials are entered, before a session exists, the system answers one simple question. Has this link been explicitly verified as legitimate?

If it has, it’s marked verified.
If it hasn’t, it’s clearly shown as unverified.
If it’s known malicious, it’s marked dangerous.

Nothing is blocked. The employee is informed at the moment trust is about to be given.
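
As a rough illustration only, not MetaCert’s actual implementation, the decision logic reduces to a default-deny lookup: a link is dangerous if it appears on a known-bad list, verified only if it appears on an explicit allowlist, and unverified otherwise. The registries below are hypothetical placeholders for whatever verification service is used, and the lookalike domain is made up.

    from enum import Enum
    from urllib.parse import urlparse

    class LinkStatus(Enum):
        VERIFIED = "verified"      # explicitly vetted as legitimate
        DANGEROUS = "dangerous"    # known malicious
        UNVERIFIED = "unverified"  # the default: no trust assumed

    # Hypothetical registries; in practice these would be maintained, shared services.
    VERIFIED_HOSTS = {"mixpanel.com", "sso.example-corp.com"}
    KNOWN_MALICIOUS_HOSTS = {"rnixpanel-login.example"}

    def classify_link(url: str) -> LinkStatus:
        """Default-deny: a link stays untrusted unless it has been explicitly verified."""
        host = (urlparse(url).hostname or "").lower()
        if host in KNOWN_MALICIOUS_HOSTS:
            return LinkStatus.DANGEROUS
        if host in VERIFIED_HOSTS:
            return LinkStatus.VERIFIED
        return LinkStatus.UNVERIFIED

    # A lookalike link like the one in a smishing text is flagged before any
    # credentials are typed, not after the session has already been stolen.
    print(classify_link("https://rnixpanel-login.example/reset"))  # LinkStatus.DANGEROUS

The important property is the default: anything not explicitly verified is surfaced as unverified at the moment the employee is about to trust it.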

That single shift breaks phishing because the attacker’s advantage disappears. Attackers rely entirely on victims being unable to distinguish real links from fake ones.

Because Mixpanel isn’t working with MetaCert, it doesn’t have Zero Trust URL authentication operating inside SMS messages or other communication channels. That means the same attack surface still exists today, exactly as it did before the breach.

They can respond faster next time.
They can’t prevent the next one.

Only Zero Trust for URLs changes the outcome.

Until phishing is treated as a guaranteed entry attempt and systems are designed so that successful phishing can’t expose people at scale, this story will keep repeating.

Different victims. Same attack. Same failure.
