center networking of over 100 Gbit/s, and having a theoretical maximum compute capacity of 10^20 integer or floating-point operations per second for training AI.
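For a rough sense of the scale this threshold implies, consider the following back-of-the-envelope calculation. The per-chip figure is an illustrative assumption (a current high-end AI accelerator delivers on the order of 10^15 operations per second), not a number drawn from the Executive Order:

\[
\frac{10^{20}\ \text{ops/s (threshold)}}{10^{15}\ \text{ops/s per accelerator (assumed)}} \approx 10^{5}\ \text{accelerators}
\]

In other words, under these assumptions the data-center threshold is aimed at facilities on the order of one hundred thousand high-end chips.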
Requirements: The regulations must require IaaS Providers to identify any foreign person who transacts with them to train a large AI model, by reporting to the government the identity of the foreign person, the existence of the training run, and additional information to be determined by regulation. IaaS Providers must also prohibit any foreign reseller of their products from providing those products unless the foreign reseller submits a report to the Provider, which the Provider must in turn submit to the Secretary of Commerce, detailing each instance in which a foreign person transacts with the foreign reseller to use the United States IaaS Product to conduct a training run. These rules could apply, for example, to large cloud providers that rent computing capacity for AI purposes to foreign customers.
Verification of Foreign Persons Using Certain AI Models. Section 4.2(d) mandates that within 180
days the Secretary of Commerce propose regulations obligating domestic IaaS Providers to require
foreign resellers of IaaS Products to verify the identity of foreign persons who obtain an IaaS account.
Covered companies: IaaS Providers
Requirements: The regulations shall:
• Provide the minimum standards an IaaS Provider must require of foreign resellers to verify the identity of an individual who creates an account with the foreign reseller; and
• Establish regulations that foreign resellers of IaaS Products must follow if they allow AI models using domestic IaaS Providers to be used by foreign individuals to conduct training runs, including identifying each instance in which a foreign person conducts such a training run.
New Standards: Section 4.1 of the Executive Order mandates that the National Institute of Standards
and Technology establish guidelines and best practices within 270 days, “with the aim of promoting
consensus industry standards for developing and deploying safe, secure, and trustworthy AI systems.”
NIST shall fulfill this obligation by:
• Developing a companion resource to the AI Risk Management Framework, NIST AI 100-1, for generative AI;
• Developing a companion resource to the Secure Software Development Framework to incorporate secure development practices for generative AI and for dual-use foundation models; and
• Launching an initiative to create guidance and benchmarks for evaluating and auditing AI capabilities, with a focus on capabilities through which AI could cause harm, such as in cybersecurity and biosecurity.
NIST must establish guidelines to enable AI developers “to conduct red-teaming tests to enable
deployment of safe, secure, and trustworthy systems,” including:
• Developing guidelines to assess and manage the safety, security, and trustworthiness of dual-use foundation models; and
• Developing and helping to ensure the availability of testing environments, such as testbeds, to support the development of safe, secure, and trustworthy AI technologies.