NIST Forms AI Safety Institute Consortium in Response to Biden Executive Order
The US National Institute of Standards and Technology (NIST) and the Department of Commerce are soliciting members for the newly formed Artificial Intelligence (AI) Safety Institute Consortium.
Join a new consortium to evaluate artificial intelligence (AI) systems to improve the security and integrity of emerging technologies. Here's how:
— National Institute of Standards and Technology (@NIST) November 2, 2023
In a document published to the Federal Register on November 2, NIST announced the establishment of the new AI consortium and formally requested applications from parties with the relevant credentials.
The NIST document states:
“This announcement is the initial step for NIST in collaborating with nonprofit organizations, universities, other government agencies, and technology companies to address challenges associated with the development and deployment of AI.”
According to the announcement, the goal of the partnership is to create and implement specific policies and measurements to help US lawmakers take a human-centered approach to AI safety and governance.
Consortium members will be expected to contribute to a laundry list of related functions, including the development of measurement and benchmarking tools, policy recommendations, red-teaming efforts, psychometric analysis and environmental analysis.
These efforts come in response to a recent executive order issued by US President Joe Biden. As Cointelegraph recently reported, the executive order established six new standards for AI safety and security, though none of them appear to be legally binding.
Related: UK AI Safety Summit kicks off with global leaders, China and Musk comments
While many European and Asian states have begun to enact policies to regulate the development of AI systems with regard to user and citizen privacy, security, and unintended consequences, the US has been a relative latecomer on this front.
President Biden's executive order, along with the formation of the Safety Institute consortium, represents progress toward establishing specific policies to govern AI development in the US.
However, there is still no clear timeline for implementing laws governing AI development or deployment in the US beyond existing policies covering business and technology, and many experts consider those laws inadequate when applied to the rapidly growing AI sector.