Industry insiders predict 2024 AI legal challenges.
In the past year, as artificial intelligence (AI) has become a more popular tool for everyday use, the legal landscape surrounding the technology has begun to evolve.
From international regulations and laws to countless lawsuits alleging copyright infringement and data misuse, AI has been on everyone's radar.
As 2024 approaches, Cointelegraph asked industry insiders working at the intersection of law and AI about the lessons of 2023 and what they could mean for the year ahead. For an overview of what happened in AI in 2023, check out Cointelegraph's “Ultimate 2023 AI Guide.”
Delays in implementing the EU AI Act
In 2023, the European Union became one of the first jurisdictions to take a major step toward regulating the deployment and development of advanced AI models.
The EU AI Act was first proposed by the European Commission in April 2021 and approved by Parliament in June 2023. On December 8, negotiators from the European Parliament and the Council reached a provisional agreement on the law.
Once fully in force, it will regulate governments' use of AI in biometric surveillance, oversee large AI systems like ChatGPT, and set transparency rules that developers must follow before entering the market.
However, the bill has drawn criticism from the tech sector, where some have called it “over-regulation.”
In light of pushback from developers and a history of delays, Lothar Determann, partner at Baker McKenzie and author of Determann's Field Guide to Artificial Intelligence Law, told Cointelegraph:
“It doesn't seem entirely impossible that we could see a similarly delayed timeline for EU AI legislation.”
Determann noted that while the agreement was reached at the beginning of December, the final text has not yet been published. He added that several key member state politicians, including the French president, have voiced concerns about the current draft.
“This reminds me of the trajectory of the ePrivacy Regulation, which the European Union announced in 2016 alongside the General Data Protection Regulation. The GDPR came into force in May 2018, but the ePrivacy Regulation still has not been finalized more than five years later.”
Laura de Boel, a partner in the Brussels office of law firm Wilson Sonsini Goodrich & Rosati, pointed out that the December development was a “political agreement,” with formal approval expected in early 2024.
She explained that EU lawmakers have built staggered grace periods into the law:
“The rules on prohibited AI systems will apply after six months, and the rules on general-purpose AI will apply after 12 months,” she said. “The other requirements of the AI Act will apply after 24 months, except the obligations for high-risk systems listed in Annex II, which will apply after 36 months.”
Compliance challenges
With many new regulations coming into force, 2024 will present compliance challenges for companies.
De Boel said the European Commission has asked AI developers to voluntarily implement key obligations of the AI Act before it becomes mandatory.
“They need to start building the necessary internal processes and preparing their employees.”
However, Determann noted that even without a comprehensive AI regulatory framework, “we see compliance challenges as businesses apply existing regulatory frameworks to AI. This includes the European Union's General Data Protection Regulation (GDPR), privacy laws around the world, intellectual property laws, product safety regulations, property laws, trade secrets, confidentiality agreements and industry standards, among others.”
Meanwhile, in the United States, the administration of President Joe Biden issued a lengthy executive order on October 30 aimed at protecting citizens, government agencies and companies by setting AI safety and security standards.
The order establishes six new standards for AI safety and security, including the use of ethical AI in government agencies.
Biden said the order is consistent with the government's principles of “safety, security, trust and transparency,” though industry insiders say it has created a “challenging” climate for developers.
That challenge mainly comes down to the order's lack of concrete compliance standards beyond its plain-language principles.
In an earlier interview, Adam Struck, founding partner of Struck Capital and an AI investor, told Cointelegraph that the order makes it difficult for developers to anticipate future risks and comply with legislation that is not yet fully developed. He said:
“This is certainly a challenge for companies and developers, especially in the open source community, where the executive order is less directive.”
RELATED: ChatGPT's first year with existential dread, lawsuits and boardroom drama
More specific rules
Another expectation for the 2024 legislative landscape is more specific, narrowly drafted laws. This can already be seen as some countries develop regulations against AI-generated deepfakes.
Regulators in the United States are considering rules on politically motivated deepfakes in the lead-up to the 2024 presidential election, and in late November, India began finalizing laws against deepfakes.
Determann warned businesses developing or using AI products:
“Going forward, businesses will need to stay up-to-date on these developments, including disclosure requirements for bots, restrictions on ‘deepfakes' and audit requirements for job application assessment systems.”
He said such focused laws would have a “better chance” of achieving their intended effect than overly broad laws.
“This is because businesses can understand and comply more easily and authorities can enforce them more effectively,” he explained.
There are also rumors that 2024 could see regulations focused on investments in the technology; among US lawmakers, there is already talk of regulating AI investments.
In July, the US Senate backed bipartisan legislation that would require domestic companies to report investments in Chinese technologies, particularly semiconductors used in AI.
Copyright clarity
Many in the industry are also waiting for clarity on questions of copyright and infringement as they apply to AI.
In the past year, several high-profile lawsuits have accused leading AI developers such as OpenAI, Microsoft, Meta and Google of violating copyright law. The claims span all kinds of content, from art and music to literature and news.
Most recently, on December 27, The New York Times filed a lawsuit against OpenAI and Microsoft, alleging copyright infringement for using its news content to train AI models. Prominent legal experts in the space have called this particular case a “watershed moment” for AI and copyright.
While most of these cases are ongoing, 2024 is expected to bring at least some resolution; whether full or partial remains to be seen.
What does AI say?
In the spirit of the topic, Cointelegraph decided to ask ChatGPT itself what it believes 2024 holds for the legal scene around AI.
When asked, “Can you give me some predictions about AI legislation in 2024?” ChatGPT responded with the following predictions:
- Stricter rules globally to address ethical concerns and biases.
- Clearer “levels of responsibility” for developers and users in AI-related incidents.
- A greater “focus on data privacy laws.”
- An “increased need for transparency” in AI algorithms and their implementation.
- Efforts to establish an “integrated international framework for managing AI technologies.”
- Possible updates to labor laws to account for new AI-driven employment models.
- “Measures to protect consumers” from misleading or harmful AI applications.
Whether or not ChatGPT is onto something, only time will tell. Be sure to watch this space in 2024 for the latest updates on all things AI.
Magazine: Top 10 long reads about crypto in 2023