In Brief

Global regulations governing the development and use of artificial intelligence ("AI") are being reviewed and implemented at a rapid pace. While the U.S. does not have a comprehensive regulatory framework for AI, there are initiatives underway at the federal and state levels, including the AI Risk Management Framework ("RMF") recently released by the National Institute of Standards and Technology ("NIST").

Background

AI technology has a wide range of benefits, including increased efficiency through the automation of routine and repetitive tasks and improved decision-making through data-driven insights and predictions. Companies are using AI to enhance customer service through chatbots and virtual assistants that respond instantly, to provide personalized recommendations based on user data and preferences, and to improve language translation.

Despite these benefits, there are growing concerns around:

  • Data privacy, as models are trained on large amounts of data and it is often unclear whether that data was legally obtained. The ability of some AI algorithms to parse and disaggregate disparate data sets presents further privacy concerns. Additionally, outputs often do not attribute the original author of the data;
  • Security, as the data and models may not be properly secured against threats. There are also already examples of AI technology being used to write and propagate malware;
  • Lack of transparency, or the inability to understand or explain how models work and what data they use, which can lead to mistrust and uncertainty; and
  • Potential to reinforce and perpetuate biases or discrimination.

Moreover, AI is shining a light on big data issues. Large volumes of data are required for AI to be effective, so organizations must develop appropriate controls over data, both to ensure that good data comes in and that company-sensitive data does not go out. New AI technologies like ChatGPT create even more challenges for companies, as they are designed to generate human-like responses to simple questions or comments. As these technologies become more widely available, including to employees, companies are considering how to manage their use in the workplace. Several large companies have already banned the use of ChatGPT for work purposes.

NIST AI Framework

NIST is a non-regulatory agency of the U.S. Department of Commerce that aims to promote innovation and industrial competitiveness by advancing measurements and standards. Because NIST has no regulatory authority, adoption of its standards and guidelines is generally voluntary; however, government agencies may in some cases mandate the use of NIST standards for certain purposes, such as complying with federal requirements or participating in certain government programs. Additionally, NIST standards are highly regarded and widely adopted by organizations.

In January, NIST released the RMF in compliance with its obligations under the National Artificial Intelligence Initiative Act. This follows the White House's October release of the Blueprint for an AI Bill of Rights. The RMF was developed over more than 18 months of drafting and review and incorporated more than 400 sets of comments from stakeholders. It provides guidance and best practices for the development, testing, and deployment of AI, prioritizing transparency, ethics, and accountability. The RMF is designed to apply to all "AI Actors," defined as "those who play an active role in the AI system lifecycle, including organizations and individuals that deploy or operate AI."

The RMF does two specific things. First, it provides a roadmap for organizations to identify risks related to AI. It outlines key characteristics of trustworthy AI: it should be safe; secure and resilient; explainable; privacy-promoting; transparent and fair; and reliable. It provides that companies developing and using AI should consider harms to people and specific groups, including harms to civil liberties, safety, and economic opportunity; harms to business operations, security, and reputation; and harms to ecosystems, including supply chains and global systems.

Second, it offers processes and activities for managing AI risks, which the RMF groups into four distinct "functions":

  • Govern: emphasizes the need for effective, risk-based oversight and management of AI systems. This includes having clear policies, procedures, and standards for the deployment and operation of systems, and clear responsibility within the organization for managing AI and ensuring it is developed and deployed in accordance with legal and ethical principles.
  • Map: establishes a framework for evaluating AI risks, including the categorization of AI systems and evaluation of the benefits and costs of their use.
  • Measure: focuses on the performance of models, including accuracy, reliability, and robustness, through evaluation and monitoring against the identified risks. It also focuses on testing and validating models against clear performance metrics to support the trustworthiness, accuracy, and reliability of results.
  • Manage: focuses on the ongoing monitoring, evaluation, and management of AI risks, prioritizing higher-risk AI systems.

NIST also published an RMF Playbook and Roadmap to facilitate companies' implementation of the RMF.

Author

Cynthia J. Cole is Chair of Baker McKenzie's Global Commercial, Tech and Transactions Business Unit, a member of the steering committee of the Firm's global Commercial, Data, IP and Trade (CDIT) practice group, and Co-chair of Baker Women California. A former CEO and General Counsel, Cynthia was, just before joining the Firm, Deputy Department Chair of the Corporate Section in the California offices of Baker Botts, where she built the technology transactions and data privacy practice. An intellectual property transactions attorney, Cynthia also has expertise in AI, digital transformation, data privacy, and cybersecurity strategy.

Author

Rachel Ehlers is a partner in Baker McKenzie's Intellectual Property and Technology Practice Group, based in the Firm's Houston office. Rachel's practice focuses on technology transactions, data privacy and cybersecurity. She has extensive experience advising clients on data incidents and breach response, cross-border transfers, and data privacy and cybersecurity issues related to mergers and acquisitions.

Author

Brian provides advice on global data privacy, data protection, cybersecurity, digital media, direct marketing, information management, and other legal and regulatory issues. He is Chair of Baker McKenzie's Global Data Privacy and Security group.

Author

Fernanda Rodriguez is an associate in Baker McKenzie's Intellectual Property & Technology Practice Group, based in the Firm's Houston office. Fernanda focuses on intellectual property (IP) litigation and brand enforcement, as well as matters involving data privacy and data protection.