China has proposed strict rules for training generative AI models.

China has issued draft security regulations for companies providing generative artificial intelligence (AI) services, including restrictions on data sources used for AI model training.

The National Information Security Standardization Committee, which includes representatives of the Cyberspace Administration of China (CAC), the Ministry of Industry and Information Technology and law enforcement agencies, released the proposed regulations on Wednesday, October 11.

Generative AI, exemplified by OpenAI's ChatGPT, learns to perform tasks by analyzing historical data and produces new content such as text and images based on that training.

Screenshot of National Information Security Standards Committee (NISSC) publication. Source: NISSC

The committee recommends conducting a security assessment of the content used to train publicly available generative AI models. Sources containing more than 5% illegal or harmful information would be blacklisted. This category covers content that promotes terrorism or violence, as well as material that subverts the socialist system, damages the country's reputation or undermines national unity and social stability.


The draft regulations emphasize that data censored on the Chinese internet should not be used as training material for these models. The development comes a little more than a month after regulatory authorities gave permission to several Chinese tech companies, including search engine giant Baidu, to release AI-powered chatbots to the general public.

Since April, the CAC has required companies to submit security assessments to regulatory bodies before offering AI-powered services to the public. In July, the cyberspace regulator issued rules governing these services, which industry analysts noted were considerably less onerous than the measures proposed in an early April draft.

RELATED: Biden Considers Tightening Controls of AI Chips Through Third Parties to China

The recently released draft security provisions would require organizations that train these AI models to obtain express consent from the individuals whose personal data, including biometric data, is used for training. The draft also includes general guidelines for preventing intellectual property infringement.

Countries around the world are struggling to establish regulatory frameworks for this technology. China sees AI as an area where it aspires to compete with the United States and has set its sights on becoming a global leader in this field by 2030.

Magazine: 'AI killed the industry': EasyTranslate boss adapting to change
