Biden's AI executive order ‘certainly challenging' for open source AI – industry insiders
Last week, the administration of US President Joe Biden issued a lengthy executive order intended to protect citizens, government agencies and companies by ensuring AI security standards.
The order established six new standards for AI safety and security. Biden said the order was consistent with the government's own principles of “safety, security, trust and transparency.”
My executive order is a testament to what we stand for on AI:
Safety, security and trust. pic.twitter.com/rmBUQoheKp
— President Biden (@POTUS) October 31, 2023
It includes mandates such as requiring companies developing any foundation model “that poses a serious risk to national security, national economic security, or national public health and safety” to share the results of their safety tests with the federal government.
However, the lack of specifics accompanying such statements has left many in the industry wondering how the order will affect developers of high-level models.
Adam Struck, founding partner at Struck Capital and an AI investor, told Cointelegraph that the order “shows the seriousness around AI's potential to reshape every industry.”
He pointed out that it is difficult for developers to predict future risks under the legislation when their products are not yet fully developed.
“This is certainly a challenge for companies and developers, especially in the open source community, where the executive order is less directive.”
However, he said the administration's intention to manage the guidelines through chief AI officers and AI governance boards within specific regulatory agencies means that companies building models for those agencies will need a “strong understanding of the regulatory frameworks” of each one.
“Companies that continue to prioritize data compliance, privacy and unbiased algorithmic foundations should operate within a paradigm the government is comfortable with.”
The government has published more than 700 use cases of how it is deploying AI internally on its “ai.gov” website.
Martin Casado, a general partner at the venture capital firm Andreessen Horowitz, posted on X (formerly Twitter) that he, along with several AI researchers, academics and founders, had sent a letter to the Biden administration over the order's potential to limit open source AI.
“We strongly believe that open source is the only way to keep software secure and free from monopoly. Please help to highlight it,” he wrote.
1/ We submitted a letter to President Biden regarding the AI Executive Order and the potential to limit open source AI. We strongly believe that open source is the only way to keep software secure and free from monopoly. Please help to highlight. pic.twitter.com/Mbhu35lWvt
— martin_casado (@martin_casado) November 3, 2023
The letter called the executive order's definition of certain types of AI models “overly broad” and expressed fears that smaller companies would be caught up in requirements designed for other, larger companies.
Jeff Amico, head of operations at Gensyn AI, echoed similar sentiments, calling the order “terrifying” for innovation in the United States.
Biden's AI executive order is out and it's terrifying for American innovation.
Here are some of the new obligations, which only big incumbents can comply with pic.twitter.com/R3Mum6NCq5
— Jeff Amico (@_jamico) October 31, 2023
Related: Adobe, IBM, Nvidia Join US President Biden's Efforts to Prevent AI Abuse
Struck also emphasized this point, saying that while regulatory clarity “can be helpful for companies building AI-first products,” it is important to note that the goals of “Big Tech” players such as OpenAI or Anthropic differ greatly from those of seed-stage AI startups.
“I would like to see the interests of these early-stage companies represented in the dialogue between the government and the private sector, as this can ensure that the regulatory framework is not tailored only to the world's largest companies.”
Matthew Putman, CEO and co-founder of Nanotronics, told Cointelegraph that the order signals the need for regulatory frameworks that ensure consumer safety and the ethical development of AI on a broader scale.
“How these regulatory frameworks are implemented depends on the interpretation and actions of regulators,” he said.
“As we've seen with crypto, heavy-handed restrictions have hindered the exploration of potentially revolutionary applications.”
Putman said fears of an AI-driven “apocalypse” are “overstated given the promise of immediate positive effects.”
He said it is easy for those not directly involved in building the technology to construct narratives around hypothetical risks without observing the “truly innovative” applications happening outside public view.
Industries including advanced manufacturing, biotech and energy are, in Putman's words, “driving a sustainability revolution” with new autonomous process controls that dramatically improve production and reduce waste and emissions.
These innovations would not have been possible without the deliberate exploration of new methods. Simply put, AI can help us rather than destroy us.
While the executive order is still fresh and industry insiders are scrambling to analyze its intent, the United States National Institute of Standards and Technology (NIST) and the Department of Commerce have already begun soliciting members for the newly established Artificial Intelligence (AI) Safety Institute Consortium.
Magazine: ‘AI has killed the industry': EasyTranslate boss on adapting to change