G7 and OpenAI develop an AI code of conduct for safe innovation
The Group of Seven (G7) industrial nations are set to establish a voluntary ‘code of conduct' for companies developing advanced artificial intelligence (AI) systems. Born out of the “Hiroshima AI Process,” the initiative seeks to address the potential misuse and risks associated with this transformative technology.
The G7, which comprises Canada, France, Germany, Italy, Japan, Britain, and the United States, launched the process together with the European Union, positioning it as a model for AI governance.
G7 Nations Pitch Global AI Code of Conduct
Amid growing privacy and security concerns, the 11-point code is a beacon of hope. According to the G7 document, the code of conduct
“aims to promote safe, reliable and trustworthy AI around the world and provides voluntary guidance for actions taken by organizations developing advanced AI systems.”
The code encourages companies to identify, assess, and mitigate risks throughout the AI lifecycle. It also recommends that they publish reports on their AI systems' capabilities, limitations, and use, and that they invest in robust security controls.
Speaking at a forum on internet governance, Vera Jourova, the European Commission's digital chief, said the code of conduct provides a solid foundation for ensuring safety and will serve as a bridge until formal regulation is enacted.
OpenAI joins the cause
OpenAI, the company behind ChatGPT, has also set up a Preparedness team to manage the risks posed by advanced AI models. Led by Aleksander Madry, the team studies threats such as individualized persuasion, cybersecurity attacks, and the spread of misinformation.
The move is OpenAI's contribution ahead of the UK's upcoming international AI Safety Summit, and it echoes the summit's calls for global safety and transparency in AI development.
The UK government defines frontier AI as:
“highly capable general-purpose AI models that can perform a wide variety of tasks and match or exceed the capabilities present in today's most advanced models.”
OpenAI's Preparedness team focuses on managing exactly these frontier risks, further reinforcing the need for a global AI ‘code of conduct'.
As AI continues to evolve, the G7's proactive stance and OpenAI's commitment to risk mitigation are timely responses. The establishment of a voluntary ‘code of conduct' and a dedicated preparedness team represent significant steps towards harnessing the power of AI responsibly. The objective is to ensure that the benefits are maximized and the potential risks are effectively managed.