Gina Raimondo, the commerce secretary, said she would commit the U.S. institute to a formal partnership with the U.K. Safety Institute.
Secretary of Commerce Gina Raimondo announced on Wednesday that the U.S. would establish an A.I. safety institute to assess known and emerging risks associated with “frontier” artificial intelligence models.
“I will almost certainly be calling on many of you in the audience who are in academia and industry to be part of this consortium,” she said in a speech at the A.I. Safety Summit in Britain.
“We can’t do it alone, and the private sector must step up.” Raimondo added that she would commit the U.S. institute to a formal partnership with the U.K. Safety Institute.
The National Institute of Standards and Technology (NIST) will oversee the new institute, which will spearhead U.S. government efforts on A.I. safety, particularly the evaluation of advanced A.I. models.
The institute “will facilitate the development of standards for safety, security, and testing of A.I. models, develop standards for authenticating AI-generated content, and provide testing environments for researchers to evaluate emerging A.I. risks and address known impacts,” the department said.
President Joe Biden signed an executive order on Monday that invokes the Defense Production Act to require developers of A.I. systems posing risks to U.S. public health, safety, the economy, or national security to share their safety test results with the government before public release.
The directive also mandates establishing guidelines for such testing and addressing associated chemical, biological, radiological, nuclear, and cybersecurity risks.