Your right to bear AI may soon be violated.
The only way to combat the malicious use of artificial intelligence (AI) is to continuously develop more powerful AI and put it in the hands of the government.
That seems to be the conclusion of a group of researchers in a recent paper titled “Computing Power and the Governance of Artificial Intelligence.”
Scientists from OpenAI, Cambridge, Oxford and a dozen other universities and institutions conducted the study to examine the current and future challenges of governing the use and development of AI.
Centralization
The paper's main argument is that, ostensibly, the only way to control who gets to use the most powerful AI systems in the future is to control access to the hardware needed to train and run the models.
As the researchers put it:
“Indeed, policymakers can use compute to facilitate regulatory visibility of AI, allocate resources to promote beneficial outcomes, and enforce restrictions against irresponsible or malicious AI development and use.”
In this context, “compute” refers to the basic hardware needed to develop AI, such as GPUs and CPUs.
Basically, the researchers suggest that the best way to prevent people from using AI for harm is to cut it off at the source. This implies that governments must develop systems to regulate the development, sale and operation of hardware deemed necessary to develop advanced AI.
Compute governance
In some ways, governments around the world are already practicing “compute governance.” The US, for example, restricts the sale of certain GPU models commonly used to train AI systems to countries like China.
Related: US officials extend export ban on Nvidia AI chips to ‘certain Middle Eastern countries’
But according to the study, effectively limiting the ability of malicious actors to do harm with AI will require manufacturers to build “kill switches” into the hardware. These would allow governments to conduct “remote enforcement” efforts, such as shutting down illegal AI training centers.
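To make the “remote enforcement” idea concrete, here is a minimal, purely hypothetical sketch of how a firmware-level license check might gate an accelerator. Every name in it (LICENSE_AUTHORITY_KEY, fetch_license, halt_accelerator) is invented for illustration and implies no real vendor or government API; a real scheme would presumably rely on asymmetric signatures and hardware attestation rather than the shared-secret HMAC used here for brevity.

    # Hypothetical firmware-level "kill switch" check (illustration only).
    # All names are invented for this sketch; no real vendor API is implied.
    import hashlib
    import hmac
    import time

    # Assumption: a regulator-provisioned key baked in at manufacture.
    LICENSE_AUTHORITY_KEY = b"regulator-issued-secret"

    def license_is_valid(device_id: str, expiry: int, signature: bytes) -> bool:
        # Verify a signed "permission to run" token and its expiry time.
        message = f"{device_id}:{expiry}".encode()
        expected = hmac.new(LICENSE_AUTHORITY_KEY, message, hashlib.sha256).digest()
        return hmac.compare_digest(expected, signature) and time.time() < expiry

    def enforcement_check(device_id, fetch_license, halt_accelerator):
        # Called periodically by firmware: halt compute if the license
        # has been revoked or has expired, i.e. the "remote enforcement" step.
        expiry, signature = fetch_license(device_id)
        if not license_is_valid(device_id, expiry, signature):
            halt_accelerator()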
However, the researchers note, “naïve or poorly scoped approaches to compute governance carry significant risks in areas such as privacy, economic impact, and centralization of power.”
In the US, for example, tracking hardware usage could run afoul of the White House's recently published “Blueprint for an AI Bill of Rights,” which says citizens have a right to protection from abusive data practices.
Kill switches may be DOA
In addition to these concerns, the researchers point out that recent advances in “communication-efficient” training make it possible to use decentralized compute to train, build and run models.
This could make it even more difficult for governments to locate, monitor and shut down hardware associated with illegal training efforts.
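As a rough illustration of what “communication-efficient” training means, the sketch below runs local SGD on a toy linear-regression problem: each node takes many gradient steps on its own data shard, and the nodes exchange (average) parameters only once per round. The data, model and hyperparameters are invented for the example; real systems apply the same pattern to large neural networks.

    # Toy sketch of communication-efficient decentralized training
    # ("local SGD"): nodes train independently and average parameters
    # only once every k local steps. All settings here are invented.
    import numpy as np

    rng = np.random.default_rng(0)
    X = rng.normal(size=(1000, 10))
    true_w = rng.normal(size=10)
    y = X @ true_w + 0.1 * rng.normal(size=1000)

    n_nodes, k_local_steps, lr = 4, 20, 0.05
    shards = np.array_split(np.arange(1000), n_nodes)
    weights = [np.zeros(10) for _ in range(n_nodes)]

    for _round in range(10):  # one round = k local steps plus a single sync
        for node in range(n_nodes):
            idx, w = shards[node], weights[node]
            for _ in range(k_local_steps):  # local updates, no communication
                grad = X[idx].T @ (X[idx] @ w - y[idx]) / len(idx)
                w = w - lr * grad
            weights[node] = w
        consensus = np.mean(weights, axis=0)  # the only communication step
        weights = [consensus.copy() for _ in range(n_nodes)]

    print("distance to true weights:", np.linalg.norm(weights[0] - true_w))

Because synchronization happens once per round rather than once per gradient step, communication cost stops scaling with the amount of training, which is what makes spreading a job across loosely connected, geographically scattered machines plausible in the first place.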
According to the researchers, this could leave governments with little choice but to adopt an arms-race stance toward the illicit use of AI: “Society will need to use more powerful, governable compute in a timely and wise manner to protect against the risks posed by ungoverned compute.”