Director of FTC’s Consumer Protection Bureau Gives Guidance on Consumer Protection Risks Associated with AI and Algorithms
Alerts
April 15, 2020

These days, most companies are focused on the myriad legal, health and safety, and financial issues caused by COVID-19. While the firm is actively monitoring these issues,[1] we also want to keep you abreast of other developments that may be relevant to your business. Here, we provide an overview of guidance recently offered by Andrew Smith, the director of the Federal Trade Commission's (FTC) Bureau of Consumer Protection, on how to manage consumer protection risks associated with artificial intelligence (AI) and algorithms.[2] While recognizing the significant potential of AI, Smith explained that, to avoid consumer protection problems, companies should ensure that their use of AI tools is transparent, explainable, fair, and empirically sound, while fostering accountability.

More detail on Smith's guidance on each of these points is included below.

Transparency

  • Smith noted that companies should be careful not to mislead when using AI tools, such as chatbots, to interact with consumers. He explained that this is an area where the FTC has been active, taking action against companies that used fake dating profiles to convince consumers to sign up for a dating service,[3] and that sold fake followers, subscribers, and "likes" to enhance other companies' and individuals' social media influence.[4]
  • According to Smith, companies should also be transparent about their collection of sensitive data and consumers' choices with respect to the collection of that data. He noted that failing to do so could give rise to an FTC action, using the FTC's recent action against Facebook as an example.[5]
  • Finally, to be more transparent, Smith said that companies should give consumers adverse action notices where legally required to do so. He explained that companies that make automated decisions based on information received from a "consumer reporting agency" (CRA) may be required to give consumers adverse action notices under the Fair Credit Reporting Act (FCRA).[6] For example, charging a consumer higher rent based on a risk score received from a background check company triggers a requirement under the FCRA to inform the consumer of their right to access the information received about them and to correct inaccurate information.

Explainability

  • According to Smith, companies that deny consumers something of value based on algorithmic decision-making should be able to explain why. He noted that companies that use AI to make decisions about consumers in any context should be able to explain to consumers what data is used in their models and how that data is used to arrive at a decision.
  • When using algorithms to assign risk scores to consumers, Smith explained that companies should also disclose the key factors that affected the score in order of importance.
  • Smith also noted that companies should notify consumers if they change the terms of a deal based on automated tools. As an example, Smith noted that, over a decade ago, the FTC took action against a subprime credit marketer that failed to disclose that it used a behavioral scoring model to reduce consumers' credit limits.[7]
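
The "key factors in order of importance" disclosure described above can be illustrated with a minimal sketch. The guidance does not prescribe any particular method; the weights, feature names, and linear scoring model below are invented for illustration only.

```python
# Hypothetical sketch: for a simple linear risk score, the factors that most
# affected a consumer's score can be ranked by each input's contribution.
# WEIGHTS and feature names are invented, not drawn from any real model.
WEIGHTS = {"late_payments": -40.0, "utilization": -25.0, "account_age_years": 3.0}

def key_factors(applicant, top_n=2):
    """Return the factors that lowered the score most, in order of impact."""
    contributions = {f: WEIGHTS[f] * applicant[f] for f in WEIGHTS}
    # Keep only score-lowering factors, most negative (largest impact) first.
    negative = sorted((c, f) for f, c in contributions.items() if c < 0)
    return [f for c, f in negative[:top_n]]

applicant = {"late_payments": 3, "utilization": 0.9, "account_age_years": 2}
print(key_factors(applicant))  # ['late_payments', 'utilization']
```

For more complex models, the same ordering idea is typically applied to model-specific attribution scores rather than raw linear weights.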

Fairness

  • To ensure fairness, Smith encouraged companies to periodically test their algorithms. He explained that testing should take place before algorithms are used and periodically afterwards to ensure they do not discriminate against or disparately impact a protected class.
  • Similarly, Smith suggested that companies evaluate inputs and outputs for potential discrimination issues. According to Smith, companies should review the types of information that go into their models to determine whether they include ethnically based factors or proxies for such factors, such as census tract. He also said that companies should consider testing their outputs to ensure they are not discriminating against or disparately impacting a protected class.
  • Smith also asked companies to consider giving consumers access to the information they use to make decisions about them and allowing consumers to dispute the accuracy of that information even if they are not legally required to do so under the FCRA.
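
Smith's guidance does not specify how to test outputs for disparate impact. One common screen, shown here as an illustrative sketch only, is the "four-fifths rule" from the EEOC's employment-selection guidelines: flag any group whose favorable-outcome rate falls below 80 percent of the highest group's rate.

```python
# Illustrative sketch, not a legal standard for any particular use case:
# apply the EEOC four-fifths rule to a model's approve/deny outputs,
# grouped by a protected attribute. Group labels here are hypothetical.
from collections import defaultdict

def selection_rates(decisions):
    """decisions: iterable of (group, approved: bool) pairs."""
    approved, total = defaultdict(int), defaultdict(int)
    for group, ok in decisions:
        total[group] += 1
        approved[group] += ok
    return {g: approved[g] / total[g] for g in total}

def four_fifths_flag(decisions):
    """Flag groups whose selection rate is below 80% of the highest rate."""
    rates = selection_rates(decisions)
    best = max(rates.values())
    return {g: r / best < 0.8 for g, r in rates.items()}

decisions = ([("A", True)] * 50 + [("A", False)] * 50 +
             [("B", True)] * 30 + [("B", False)] * 70)
print(four_fifths_flag(decisions))  # {'A': False, 'B': True}
```

Here group B's 30% approval rate is only 60% of group A's 50% rate, so B is flagged for further review; a flag is a starting point for analysis, not a legal conclusion.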

Empirically Sound

  • Smith flagged the importance of ensuring that the data used in models is accurate and up to date. He explained that, under the FCRA, companies that compile and sell consumer data used by others to make eligibility decisions about consumers may be required to implement procedures to ensure the data is accurate and up to date, and that companies that furnish data about their own customers for use in such automated eligibility decisions may have similar obligations.
  • Smith encouraged companies to take steps to ensure their AI tools are "empirically derived, demonstrably and statistically sound." For example, he suggested that companies assess whether their tools are: based on data derived from an empirical comparison of sample groups; developed, validated, and periodically reevaluated using accepted statistical principles and methodology; and adjusted as necessary to maintain predictive ability.
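
The periodic reevaluation Smith describes can be operationalized in many ways; the guidance names no particular statistic. As a minimal sketch under assumed thresholds (the 0.05 tolerance is an invented parameter, not an FTC standard), a company might compare a model's accuracy on recent outcomes against the accuracy recorded at validation time:

```python
# Illustrative sketch only: flag a model for revalidation when its accuracy
# on a recent sample drifts more than `tolerance` below its validation-time
# baseline. The threshold and sample data are assumptions for illustration.
def accuracy(predictions, outcomes):
    hits = sum(p == o for p, o in zip(predictions, outcomes))
    return hits / len(outcomes)

def needs_revalidation(predictions, outcomes, baseline_accuracy, tolerance=0.05):
    """True if recent accuracy has fallen more than `tolerance` below baseline."""
    return accuracy(predictions, outcomes) < baseline_accuracy - tolerance

recent_preds = [1, 0, 1, 1, 0, 1, 0, 0, 1, 1]
recent_truth = [1, 0, 0, 1, 0, 0, 0, 1, 1, 1]
print(needs_revalidation(recent_preds, recent_truth, baseline_accuracy=0.85))
# True: recent accuracy is 0.70, more than 0.05 below the 0.85 baseline
```

In practice the check would run on far larger samples and on metrics suited to the model (e.g., calibration or rank-ordering for risk scores), but the pattern of comparing current performance to a validated baseline is the same.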

Fostering Accountability

  • To avoid risks of bias or other harm to consumers, Smith identified four key questions that companies should ask before using an algorithm: 1) How representative is your data set? 2) Does your model account for biases? 3) How accurate are your predictions based on big data? 4) Does your reliance on big data raise ethical or fairness concerns?
  • Smith noted that companies that sell AI tools to other businesses should consider whether access controls and other technologies can be used to prevent unauthorized use.
  • Finally, Smith discussed the importance of evaluating accountability mechanisms, suggesting that companies consider using independent standards or experts to test their AI tools for risks of bias or other harm to consumers.

For more information or advice on consumer protection issues associated with AI and algorithms, please contact Lydia Parnes, Cédric Burton, Chris Olsen, Tracy Shapiro, or another member of the firm's privacy and cybersecurity practice.

Lydia Parnes and Kelly Singleton contributed to the preparation of this Wilson Sonsini client alert.


[1] For more information, visit our COVID-19 Client Advisory Resource page here: https://www.wsgr.com/en/services/practice-areas/COVID-19.html.

[2] Andrew Smith, “Using Artificial Intelligence and Algorithms,” FTC Business Blog, April 8, 2020, https://www.ftc.gov/news-events/blogs/business-blog/2020/04/using-artificial-intelligence-algorithms.

[3] FTC v. Ruby Corp. et al., FTC Matter No. 1523284 (2016), https://www.ftc.gov/enforcement/cases-proceedings/152-3284/ashley-madison.

[4] FTC v. Devumi, LLC & German Calas, Jr., FTC Matter No. 1823066 (2019), https://www.ftc.gov/enforcement/cases-proceedings/182-3066/devumi-llc.

[5] In the Matter of Facebook, Inc., FTC Matter No. C-4365 (2019), https://www.ftc.gov/enforcement/cases-proceedings/092-3184/facebook-inc.

[6] Under the FCRA, a CRA is any person that compiles and sells consumer information that is used or expected to be used for credit, employment, insurance, housing, or similar decisions about consumers’ eligibility for certain benefits and transactions.

[7] FTC v. CompuCredit Corp. & Jefferson Capital Sys., LLC, FTC Matter No. 0623212 (2008), https://www.ftc.gov/enforcement/cases-proceedings/062-3212/compucredit-corporation-jefferson-capital-systems-llc.

Contributors

  • Lydia B. Parnes
  • Kelly A. Singleton