The US, Britain and other countries ink ‘secure by design’ AI guidelines




The United States, United Kingdom, Australia and 15 other countries have issued international guidelines to protect AI models from tampering, urging companies to make their models “secure by design.”

On Nov. 26, the 18 countries released a 20-page document detailing how AI companies should handle cybersecurity when building or using AI models, saying that security “can often be a secondary consideration” in the fast-paced industry.

The guidance consists mostly of general recommendations, such as maintaining tight control over AI model infrastructure, monitoring models for tampering before and after release, and training staff on cybersecurity risks.

The guidelines do not address some contentious issues in the AI space, including what controls should exist around the use of image-generating models and deepfakes, or around data collection methods and their use in training models, an issue that has seen several AI firms sued over copyright infringement claims.


US Secretary of Homeland Security Alejandro Mayorkas said in a statement that the world is at an inflection point in the development of artificial intelligence, which may be the most consequential technology of our time, adding that “cybersecurity is key to building AI systems that are safe, secure and trustworthy.”

Related: EU tech alliance warns of AI overregulation ahead of EU AI law finalization

The guidelines follow other government initiatives weighing in on AI, including a meeting of governments and AI companies at the AI Safety Summit in London earlier this month to coordinate an agreement on AI development.

Meanwhile, the European Union is finalizing the details of AI legislation that will govern the space, and US President Joe Biden issued an executive order in October setting standards for AI safety and security. Both have seen pushback from the AI industry, which argues they could stifle innovation.

Other co-signatories to the new “secure by design” guidelines include Canada, France, Germany, Israel, Italy, Japan, New Zealand, Nigeria, Norway, South Korea and Singapore. AI firms including OpenAI, Microsoft, Google, Anthropic and Scale AI also contributed to developing the guidelines.

Magazine: AI Eye: Real uses for AI in crypto, Google’s GPT-4 rival, AI edge for bad employees


