Millions of users. Sensitive mental health data. Shared with advertisers.
No breach. No hack. No ransomware.
Just a product decision—and a regulatory failure.
In 2023, the U.S. Federal Trade Commission (FTC) ordered BetterHelp to pay $7.8 million for sharing users’ sensitive health information with third parties like Facebook and Snapchat for advertising purposes. Although BetterHelp marketed itself as a mental health platform that protected user privacy and complied with confidentiality laws, its data handling practices told a different story.
The Privacy Breakdown
Users came to BetterHelp seeking therapy, entrusting the platform with deeply sensitive information, including mental health conditions, personal struggles, email addresses, and other identifiers.
Behind the scenes, this data was shared with third-party advertising and social media platforms for ad targeting, analytics, and user acquisition optimization.
This included sending information via tracking pixels and embedded software development kits (SDKs): technical mechanisms that quietly forwarded user data to third-party advertising platforms. This wasn’t just data sharing. It was the Privacy Promise Gap: the distance between what users were told and how the system actually behaved.
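To make that mechanism concrete, here is a hypothetical TypeScript sketch of the pattern at issue. The SDK, field names, and event are invented for illustration; this is not BetterHelp’s actual code, only the general shape of how an embedded ad SDK ends up receiving sensitive intake data.

```typescript
// Hypothetical illustration of the pattern at issue, not BetterHelp's actual code.
// An embedded ad SDK ("adSdk" is a stand-in name) exposes a generic track() call;
// nothing stops application code from passing it sensitive intake data.

type IntakeEvent = {
  email: string;            // direct identifier
  startedTherapy: boolean;  // health-status signal
  intakeTopic: string;      // e.g. "anxiety", sensitive by definition
};

// Stand-in for a third-party pixel/SDK loaded into the page.
const adSdk = {
  track(eventName: string, payload: Record<string, unknown>): void {
    // In a real SDK this fires an HTTP request to the ad platform's servers.
    console.log(`[ad network] ${eventName}`, JSON.stringify(payload));
  },
};

function onIntakeCompleted(event: IntakeEvent): void {
  // The leak: the full intake record is forwarded for "conversion tracking".
  adSdk.track("SignupCompleted", event);
}

onIntakeCompleted({ email: "user@example.com", startedTherapy: true, intakeTopic: "anxiety" });
```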
This Wasn’t Just Marketing. It Was Misaligned.
Under U.S. law, particularly Section 5 of the Federal Trade Commission Act (15 U.S.C. § 45), companies cannot engage in deceptive or unfair practices. Against that standard, BetterHelp’s public messaging emphasized confidentiality, privacy, and secure therapy.
At the same time:
- Data was being shared with advertising platforms;
- Users were not meaningfully informed; and
- Consent was not specific or contextual.
This is not a disclosure issue. It is Trust Misalignment at the system level.
Where BetterHelp Went Wrong
Mistake #1: Designing for Data Sharing While Promising Privacy
BetterHelp positioned itself as a confidential therapy service. Its architecture, however, pointed the other way: the platform included third-party trackers, behavioral analytics tools, and marketing integrations.
When a system is built for data sharing, labeling it “private” creates Confidentiality Theater: an appearance of confidentiality and privacy without meaningful enforcement.
Mistake #2: Treating Sensitive Data Like Standard Data
Mental health data is not ordinary personal data; its sensitivity warrants heightened protection. In this case, BetterHelp failed to:
- Segregate sensitive data flows;
- Limit third-party sharing; and
- Apply heightened safeguards.
As a result, highly sensitive mental health data was processed as if it were standard analytics input.
Mistake #3: Relying on Disclosure Instead of Alignment
Although BetterHelp disclosed elements of its data practices, FTC standards require disclosures to be clear, prominent, and contextual.
A privacy policy cannot override user expectations shaped by product design. Where system behavior contradicts the stated assurances, those assurances are legally ineffective.
The Real Issue: Trust as Infrastructure
BetterHelp wasn’t just handling data; it was handling trust.
Users shared information under an assumption of confidentiality.
The system processed that information as marketing input.
That gap between expectation and system behavior is what triggered regulatory enforcement.
How to Fix This (If You’re Building Similar Systems)
Fix #1: Eliminate the Privacy Promise Gap
If a service claims to protect user privacy, its technical and organizational design must support that claim; that is what closing the Privacy Promise Gap means. For a platform like BetterHelp, this requires concrete changes, including the following measures:
- Remove or strictly limit third-party trackers;
- Isolate sensitive data environments; and
- Prohibit ad targeting based on sensitive inputs.
The result is a system whose behavior matches the expectations of users who were promised confidentiality.
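One way to enforce the first two measures in code is an allowlist gate in front of every third-party call, sketched below in TypeScript. The field names and the `trackThirdParty` helper are assumptions for illustration, not a prescribed implementation.

```typescript
// A minimal sketch of one way to close the gap: an allowlist gate in front of
// every third-party call. Field and function names are illustrative assumptions.

const THIRD_PARTY_ALLOWLIST = new Set(["appVersion", "locale", "pageName"]);

function sanitizeForThirdParty(payload: Record<string, unknown>): Record<string, unknown> {
  // Drop every field that is not explicitly approved; sensitive data
  // (email, health topics, session content) never qualifies.
  return Object.fromEntries(
    Object.entries(payload).filter(([key]) => THIRD_PARTY_ALLOWLIST.has(key)),
  );
}

function trackThirdParty(eventName: string, payload: Record<string, unknown>): void {
  const safePayload = sanitizeForThirdParty(payload);
  // Only the allowlisted remainder ever leaves the system.
  console.log(`[ad network] ${eventName}`, JSON.stringify(safePayload));
}

// The sensitive fields are stripped before transmission:
trackThirdParty("SignupCompleted", {
  email: "user@example.com",   // dropped
  intakeTopic: "anxiety",      // dropped
  appVersion: "2.4.1",         // allowed
});
```

The design choice here is default-deny: a new field is never shared until someone deliberately adds it to the allowlist, which inverts the usual failure mode where everything flows out until someone notices.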
Fix #2: Segment and Protect Sensitive Data
Particularly sensitive data requires stronger privacy protections. Services should therefore create and maintain technical separation for high-risk categories of data, including mental health, biometric, and financial data.
Enforce appropriate safeguards by implementing:
- Restricted access controls;
- No external sharing by default; and
- Purpose-bound processing.
The result is that sensitive data is treated in accordance with its heightened risk profile.
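Purpose-bound processing can be a structural property rather than a policy statement. The TypeScript sketch below shows one minimal approach, assuming a default-deny purpose check at the data-store boundary; all type and class names are illustrative.

```typescript
// A sketch of purpose-bound access to segregated sensitive data, assuming a
// simple in-process policy check. All names here are illustrative.

type Purpose = "care-delivery" | "billing" | "analytics" | "advertising";

// Default-deny: only purposes tied to the reason the data was collected.
const PERMITTED_PURPOSES: ReadonlySet<Purpose> = new Set(["care-delivery", "billing"]);

interface SensitiveRecord {
  userId: string;
  intakeTopic: string;
}

class SensitiveDataStore {
  private records = new Map<string, SensitiveRecord>();

  save(record: SensitiveRecord): void {
    this.records.set(record.userId, record);
  }

  // Every read must declare its purpose; non-permitted purposes fail closed.
  read(userId: string, purpose: Purpose): SensitiveRecord {
    if (!PERMITTED_PURPOSES.has(purpose)) {
      throw new Error(`Purpose "${purpose}" is not permitted for sensitive data`);
    }
    const record = this.records.get(userId);
    if (!record) throw new Error(`No record for user ${userId}`);
    return record;
  }
}

const store = new SensitiveDataStore();
store.save({ userId: "u1", intakeTopic: "anxiety" });
store.read("u1", "care-delivery");   // allowed
// store.read("u1", "advertising");  // throws: no external sharing by default
```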
Fix #3: Replace Disclosure with Alignment
User interfaces must accurately reflect system reality by:
- Providing clear prompts before data sharing;
- Obtaining explicit, contextual consent; and
- Offering visibility into how data is used.
Privacy cannot rely on documentation alone. It must be enforced through system design.
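One way to turn that into an enforceable precondition is to make consent a gate in the sharing path itself. The sketch below assumes a simple per-purpose consent ledger; the names and prompt text are hypothetical.

```typescript
// A sketch of contextual consent as a hard precondition for sharing, assuming
// a per-purpose consent ledger. All names here are illustrative.

type SharingPurpose = "ad-measurement" | "product-analytics";

interface ConsentRecord {
  purpose: SharingPurpose;
  grantedAt: Date;
  promptText: string; // what the user was actually shown, kept for audit
}

const consentLedger = new Map<string, ConsentRecord[]>(); // userId -> consents

function recordConsent(userId: string, record: ConsentRecord): void {
  const existing = consentLedger.get(userId) ?? [];
  consentLedger.set(userId, [...existing, record]);
}

function hasConsent(userId: string, purpose: SharingPurpose): boolean {
  return (consentLedger.get(userId) ?? []).some((c) => c.purpose === purpose);
}

function shareWithPartner(userId: string, purpose: SharingPurpose, payload: object): void {
  // Alignment, not disclosure: no recorded consent for this purpose, no transfer.
  if (!hasConsent(userId, purpose)) {
    throw new Error(`No contextual consent from ${userId} for "${purpose}"`);
  }
  console.log(`[partner] sharing for ${purpose}:`, JSON.stringify(payload));
}

recordConsent("u1", {
  purpose: "product-analytics",
  grantedAt: new Date(),
  promptText: "Share anonymized usage statistics to improve the app?",
});
shareWithPartner("u1", "product-analytics", { pageName: "home" }); // allowed
// shareWithPartner("u1", "ad-measurement", {}); // throws: never consented
```

Storing the exact prompt text alongside each grant also gives the service evidence that consent was specific and contextual, not buried in a policy document.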
The Principle That Matters
The BetterHelp case reinforces a core principle in U.S. privacy enforcement: Your promises are enforceable.
Three hard lessons emerge:
- (1) Marketing a service as private is not enough; the promise must be reflected in how the system actually enforces it.
- (2) Sensitive data requires stronger controls, even without a comprehensive federal privacy law.
- (3) Misalignment between messaging and architecture creates liability.
You can write a privacy policy. You cannot engineer trust after you’ve broken it.