AI regulations in global focus as EU approaches regulatory agreement




The rise of generative artificial intelligence (AI) has led governments to rush to regulate the emerging technology. The trend coincides with the EU's efforts to implement the world's first comprehensive set of rules for AI.

The EU's AI legislation is regarded as a pioneering set of rules for the technology. After several delays, reports indicate that on December 7, negotiators agreed on a set of controls for generative AI tools such as OpenAI's ChatGPT and Google's Bard.

Fears that the technology could be misused have prompted the United States, the United Kingdom, China and the G7 countries to accelerate their efforts to regulate AI.

In June, the Australian government announced an eight-week consultation to gather feedback on whether "high-risk" AI tools should be banned. The consultation was later extended until July 26. The government sought input on strategies to support the "safe and responsible use of AI", including voluntary measures such as ethical frameworks, the need for specific regulations, or a combination of both approaches.


Meanwhile, as of August 15, China has introduced a series of interim measures to govern the generative AI industry, requiring service providers to conduct security assessments and obtain licenses before offering AI products to the mass market. After receiving government approval, four Chinese tech companies, including Baidu and SenseTime, unveiled their AI chatbots to the public on August 31.


According to a Politico report, France's privacy regulator, the Commission Nationale de l'Informatique et des Libertés (CNIL), said in March it was investigating several complaints against ChatGPT. The complaints followed the chatbot's temporary ban in Italy over alleged privacy law violations, after warnings from civil rights groups.

The Italian data protection authority announced on November 22 that it has launched a "fact-finding" investigation to examine the data collection processes used to train AI algorithms. The inquiry seeks to ensure that public and private websites implement appropriate security measures to prevent personal data from being harvested through "web scraping" by third parties for AI training.

The US, UK, Australia and 15 other countries recently issued international guidelines to prevent AI models from being tampered with, urging companies to make their models "secure by design".

