U.S. Federal Court Allows CIPA Class Action Against AI Customer Service Provider to Proceed
Alerts
September 4, 2025

On August 11, 2025, the U.S. District Court for the Northern District of California denied a motion to dismiss a California Invasion of Privacy Act (CIPA) class action lawsuit filed against ConverseNow Technologies, Inc. ConverseNow offers restaurants an AI-powered virtual assistant to process and manage customer phone calls, drive-thru orders, and text messaging conversations. In January 2025, plaintiff Eliza Taylor initiated a putative class action against ConverseNow alleging that the company violated CIPA’s wiretapping and call recording prohibitions found in California Penal Code sections 631 and 632 when it intercepted and recorded a call that she had placed to Domino’s Pizza, which used ConverseNow’s services. All businesses developing or deploying AI-powered customer service agents or other AI services that take part in customer communications should take heed of the court’s decision to allow the plaintiff’s claims to proceed.

CIPA Background

CIPA (codified at Cal. Penal Code §§ 630 et seq.) is a California statute, originally enacted in 1967, that prohibits wiretapping, eavesdropping, and recording private communications without consent. At a high level, section 631 of CIPA prohibits a third party from intentionally wiretapping or willfully intercepting communications while “in transit,” and from attempting to read or learn the contents of such communications without consent. California courts have stated that section 631 liability does not extend to the parties to a communication themselves “because parties to a conversation cannot eavesdrop on their own conversations.”1 Courts have split, however, on how to approach the question of whether an entity included in a conversation on behalf of a party is a third-party eavesdropper. Some courts have adopted an “extension” approach, which asks whether the defendant is a mere “extension” of a party to the communication (akin, for example, to a tape recorder2) and is therefore not a third party. Other courts have adopted a “capability” approach, which asks whether the defendant has the “capability” of using the communication for an independent purpose and, if so, treats the defendant as a third party.

CIPA section 632(a), meanwhile, prohibits the intentional recording or eavesdropping on a “confidential communication” without the consent of all parties involved. CIPA section 632(c) defines “confidential communication” as “any communication carried on in circumstances as may reasonably indicate that any party to the communication desires it to be confined to the parties thereto, but exclud[ing] a communication made . . . in any other circumstance in which the parties to the communication may reasonably expect that the communication may be overheard or recorded.” The California Supreme Court has interpreted this language to mean “that a conversation is confidential under section 632 if a party to that conversation has an objectively reasonable expectation that the conversation is not being overheard or recorded.”3

The Court’s Decision

In this case, plaintiff Taylor, a California resident, alleged that when she placed a pizza order by phone to a California Domino’s Pizza location, her call was intercepted by ConverseNow and routed through its servers, where her name, address, and credit card details were recorded by ConverseNow without her knowledge or consent. Taylor claimed she was unaware that her conversation was being intercepted by ConverseNow’s AI system and that she believed she was speaking with a Domino’s employee when she provided her order along with her personal and payment information. She further alleged that ConverseNow uses the recorded conversations and data to enhance its AI services.

The court denied ConverseNow’s motion to dismiss both the CIPA section 631 and 632 claims. For the section 631 claim, the court adopted the “capability” approach for determining third-party status. The court stated that ConverseNow should not merely be viewed as an extension of Domino’s because the plaintiff had plausibly alleged that ConverseNow uses caller data to enhance its own services. The court pointed to citations from the complaint, including statements from ConverseNow’s website and privacy policy stating that “by processing millions of live conversations each month, our self-learning system evolves even faster every day to improve the guest experience for new and existing partners alike” and that caller data is used to “improv[e] our ordering platform, advertisements, products, and services.” Further, the court held that the plaintiff met CIPA section 631’s live interception prong by alleging that she and other customers believed they were speaking directly to Domino’s but instead had their calls redirected to ConverseNow’s AI voice assistant. The court also held that section 631’s intent prong was met by the plaintiff’s allegations that ConverseNow’s technology was specifically created to record and analyze calls for Domino’s.

Regarding the CIPA section 632 claim, the court rejected ConverseNow’s “flippant” argument that pizza orders did not warrant a reasonable expectation of privacy. Rather, the court held that the plaintiff’s allegations that she had shared personally identifiable information and personal financial information, including her name, address, and credit card details, were sufficient to allege that a “confidential communication” under section 632 took place.

Takeaways

As this case illustrates, businesses developing or deploying AI-powered customer service agents or other tools that take part in customer communications should take note of the risks of being subjected to privacy class actions. In particular, AI agents that are permitted to use customer communications for product development and improvement, including training AI models, could be considered third-party eavesdroppers because of their “capability” to use the communications for their own ends. This risk is particularly present where such communications may include personal or financial information, and the customer is unaware that their communication is being recorded by an AI agent.

Further, in addition to these CIPA risks, new state laws are emerging to regulate AI systems used to make certain consequential decisions around education, employment, lending, healthcare, housing, insurance, and legal services. For example, the Colorado AI Act imposes a duty of reasonable care on developers and deployers to avoid “algorithmic discrimination” in high-risk AI systems. California, meanwhile, is in the process of finalizing rules on AI-related, automated decision-making technologies used to make “significant decisions” around financial or lending services, housing, education enrollment or opportunities, employment or independent contracting opportunities or compensation, or healthcare services. Additionally, state laws in California, Colorado, and Utah require consumer-facing bots to disclose, in certain circumstances, that the consumer is interacting with AI and not a human. Finally, misrepresentations, fraudulent claims, and inaccurate marketing statements made by customer-facing AI agents can give rise to claims under state consumer protection laws.

Fortunately, businesses developing or deploying AI customer service agents can take concrete steps to help mitigate these risks, for example, by:

  1. Requiring customer-facing AI agents to clearly announce, at the beginning of inbound and outbound communications, that those communications are being recorded and that the agent is an AI assistant.
  2. Either disabling and contractually prohibiting the use of recorded communications for product improvement, development, and AI training, or making clear in the announcement that the AI agent is operated by a third party (though note that there is a risk that some courts may conclude that both are required).
  3. Testing and verifying that deployed AI agents are not making independent marketing claims and offers, and that AI agents provide factual, accurate information that is not misleading.
  4. Placing technical limits on how AI agents may be configured and how long they can speak with a customer, to avoid agents going “off script” and making claims that contradict recording disclosures.
  5. Regularly auditing compliance with the controls above using human review to ensure they are operating correctly.
  6. Evaluating whether emerging state AI laws regulating the use of AI systems to make consequential decisions may apply to the AI agent you are developing or deploying and either restrict the use of the agent in connection with such decisions or ensure that the agent meets those compliance requirements.
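For illustration only, the disclosure and audit controls in items 1 and 5 above can be sketched as a simple software guardrail. Everything in this sketch is hypothetical (the function names, the disclosure wording, and the transcript format are assumptions, not any vendor's actual API); real deployments would enforce this at the telephony or agent-orchestration layer and should be reviewed with counsel:

```python
# Hypothetical guardrail sketch: guarantee the AI/recording disclosure
# leads every call (item 1) and provide an audit check (item 5).
# All names and wording here are illustrative assumptions.

DISCLOSURE = (
    "This call may be recorded. You are speaking with an automated "
    "AI assistant operated by a third-party service provider."
)

def open_session(transcript: list[str]) -> list[str]:
    """Return the call transcript with the disclosure guaranteed to be
    the first utterance the customer hears."""
    if not transcript or transcript[0] != DISCLOSURE:
        return [DISCLOSURE, *transcript]
    return transcript

def disclosure_delivered(transcript: list[str]) -> bool:
    """Audit helper: verify the disclosure actually led the call,
    e.g., during periodic human review of recorded sessions."""
    return bool(transcript) and transcript[0] == DISCLOSURE
```

A compliance audit could run `disclosure_delivered` over a sample of stored transcripts and flag any session where the announcement was skipped or displaced.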

In light of the U.S. District Court’s decision to allow the CIPA class action against ConverseNow to proceed, as well as related emerging regulatory risks, businesses developing or deploying AI-powered customer service agents should prioritize compliance with privacy and AI laws and regulations to mitigate legal risks. While the recommendations above are not intended to be comprehensive and businesses should consult with counsel on their own compliance strategy, they outline some proactive measures that can form part of a risk mitigation plan.

Wilson Sonsini Goodrich & Rosati routinely helps companies navigate complex privacy and AI regulatory issues and has deep experience representing companies subject to regulatory investigations and CIPA lawsuits. For more information or advice concerning your AI and CIPA risks, please contact Eddie Holman, Tracy Shapiro, Doo Lee, or any member of the firm’s Data, Privacy, and Cybersecurity practice.

[1] Garcia v. Build.com, Inc., 2023 WL 4535531, at 4 (S.D. Cal. July 13, 2023).

[2] See Rogers v. Ulrich, 52 Cal. App. 3d 894, 899 (1975).

[3] Flanagan v. Flanagan, 41 P.3d 575, 581 (Cal. 2002).

Contributors

  • Eddie Holman
  • Tracy Shapiro
  • Doo Lee