In brief

In September 2024, Texas’ Attorney General announced a “first-of-its-kind” settlement with a healthcare generative artificial intelligence (“Gen AI”) company over what it said were “false, misleading, or deceptive” Gen AI products that aid physicians and medical staff in drafting clinical notes and charts. Per the Attorney General, the Company’s advertised hallucination rate was “very likely inaccurate,” which “may have deceived hospitals about the accuracy and safety of the Company’s products.” The settlement provides important guidance for all industries, and particularly for companies whose AI involves more sensitive data, to be transparent about the risks, limitations, and appropriate use of their AI products.

In depth

The September 2024 settlement against Pieces Technologies was one of the first actions brought by a state Attorney General against an AI product. Gen AI products have faced intense scrutiny since their widespread launch in 2023, with concerns centering on their potential to create and spread misinformation. Regulations have therefore emerged that aim to curb Gen AI’s harmful effects without stifling innovation.

This action was brought under the Texas Deceptive Trade Practices Consumer Protection Act, which protects consumers against false, misleading, and deceptive business and insurance practices, unconscionable actions, and breaches of warranty. The Act does so by prohibiting certain acts and practices that tend to deceive and mislead consumers.

Per the settlement, the Company must do the following:

  1. Provide Clear Marketing and Advertising Disclosures: When the Company makes marketing or advertising statements regarding any metrics, benchmarks, or measurements of the Gen AI product, it must disclose:
    • Details around the meaning of such metric, benchmark, or measurement; and
    • The method, procedure, or process used to calculate the metric, benchmark, or measurement.
  2. Provide Transparent Representations: The Company and its representatives must not:
    • Make any false, misleading, or unsubstantiated representations regarding any feature, characteristic, function, testing, or use of any of its products, including representations related to: accuracy, reliability, or efficacy of any of its products; testing procedures and methodologies of its products; monitoring procedures and methodologies of its products; metric definitions and/or meanings; or training data used for its products;
    • Misrepresent or mislead any customer or user of its products or services regarding the accuracy, functionality, purpose, or feature of its products; or
    • Fail to disclose any financial or similar arrangements between the Company and any person who participates in its marketing and advertising, or who endorses or promotes its products or services.
  3. Provide Updated Disclosures: The Company must provide all current and future customers documentation that discloses any reasonably knowable harmful or potentially harmful uses or misuses of its products or services and must include:
    • Training data and/or models used for its products and services;
    • Explanation of the intended purpose and use of its products and services and any training or documentation needed to facilitate proper use;
    • Any reasonably knowable product or service limitations or misuses; and
    • All other documentation reasonably necessary for a user to understand the nature and purpose of an output generated by each product or service, monitor for patterns of inaccuracy, and reasonably avoid misuse of the product or service.

The Company must also deliver a copy of the settlement to its principals, officers, and directors, and all of its managerial employees, agents, and representatives. The Texas AG also reserved the right to request supplemental information, and the Company agreed as part of the settlement to provide such information within thirty (30) days of the request.

What should companies consider?

Governments around the world are beginning to regulate AI under both existing and new statutes, with a particular emphasis on AI developers. Here, Texas relied on an existing consumer protection law, while other states and governmental bodies are enacting new laws, such as the Colorado Artificial Intelligence Act. At the federal level, the Federal Trade Commission and the Department of Justice are using similar consumer protection laws to investigate AI companies. Healthcare is a frequent target because of the sensitivity of its data; other targeted industries include ecommerce, legal services, and consumer reviews. More broadly, the U.S. is following the EU’s lead by aligning with its AI Act. Therefore, all companies, including those developing or integrating AI, should: (1) review their AI marketing and advertisements; (2) conduct AI impact assessments; and (3) implement governance and risk management policies to comply with existing and forthcoming U.S. and global AI regulations.

Author

Rachel Ehlers is a partner in Baker McKenzie's Intellectual Property and Technology Practice Group, based in the Firm's Houston office. Rachel's practice focuses on technology transactions, data privacy and cybersecurity. She has extensive experience advising clients on data incidents and breach response, cross-border transfers, and data privacy and cybersecurity issues related to mergers and acquisitions.

Author

Mercedes is an associate in Baker McKenzie's IP & Technology Practice Group based in Chicago.