Apple CEO Tim Cook has expressed optimism about the future of artificial intelligence and its limitless, life-changing possibilities, but emphasized that the technology needs guidelines and safeguards to prevent abuse.
“It can be life-changing in a good way,” Cook said in an interview with singer Dua Lipa on her At Your Service podcast. “I don't necessarily mean today, but in the future it can do things like help diagnose problems from a health perspective.”
Cook said AI is present in all Apple products, although the company does not label it as such.
“If you're writing a message or an email on your phone, you'll see predictive typing that tries to predict your next word so you can quickly select the word. That's AI,” Cook said.
Tech companies have invested heavily in generative AI since the launch of OpenAI's flagship chatbot ChatGPT last year. Since then, billions have poured into the AI industry, including Apple investing more than $10 billion in AI development, Microsoft's $10 billion investment in OpenAI, Amazon's $4 billion commitment to cloud AI developer Anthropic, and another $2 billion from Google into Anthropic.
Despite his optimism, Cook expressed caution.
“There's an infinite number of things AI can do. Unfortunately, not all of it may be good,” he said.
World leaders from the Vatican to the United Nations have warned against the rise of AI-generated deepfakes. In October, the UK-based Internet Watch Foundation warned that AI-generated child abuse imagery was spreading rapidly on the dark web.
“What's needed with this new form of AI, generative AI, is some rules of the road and some regulation,” Cook said. “I think a lot of governments around the world are now focused on this and how to do it, and we're trying to help with that. And we are one of the ones saying that it's needed, that some level of regulation is required.”
Earlier this year, CEOs from leading AI developers pledged to the Biden administration to develop AI responsibly. While Microsoft, Meta, OpenAI, Google, Amazon, and others signed on, Apple was absent.
In May, Apple joined rival smartphone maker Samsung in banning the use of ChatGPT in the workplace, citing the risk of data leaks and the loss of intellectual property. In July, Bloomberg reported that Apple was quietly developing an AI chatbot to take on ChatGPT.
Apple has said it is intentional about its approach to artificial intelligence, with the tech giant thinking deeply about how people use its products and whether they could be used for nefarious purposes.
“I think most governments today are behind in the debate. I think that's a fair assessment,” Cook said. “But I think they are catching up fast.”
“I think the U.S., the U.K., the European Union and a lot of countries in Asia are coming up to speed very quickly,” Cook said.
Earlier this month, 29 countries and the European Union committed to a unified approach to regulating artificial intelligence. The Bletchley Declaration, named after Bletchley Park, the site of the UK's AI Safety Summit, called on global leaders to work together to ensure safety, transparency, and cooperation around generative AI.
“I think there will be some AI regulation in the next 12 to 18 months, and I'm very confident that it will happen,” Cook said.
Edited by Ryan Ozawa.