Google and Anthropic are gunning for AI dominance with their new chips.



Google is forging a deep alliance with Anthropic, the startup behind Claude, a rival to ChatGPT, providing specialized computer chips to boost its capabilities.

This partnership is bolstered by a significant financial infusion from Google into Anthropic. As previously reported by Decrypt, Google initially took a 10 percent stake for $300 million, later adding $500 million in funding, with a further $1.5 billion in investments pledged.

“Anthropic and Google Cloud share the same values in developing AI – that it must be done both boldly and responsibly,” Google Cloud CEO Thomas Kurian said in an official press release. “This expanded partnership with Anthropic will help deliver AI safely and securely for many years to come, and provides another example of how the most innovative and fastest-growing AI startups are building on Google Cloud.”

Anthropic uses Google Cloud's fifth-generation Tensor Processing Units (TPUs) for inference – running a trained AI model to make predictions or decisions based on new input data.


Strategic moves by technology leaders highlight the intense competition and high stakes in developing increasingly sophisticated artificial intelligence. The most famous partnership in the AI space is the one between Microsoft and OpenAI, with $10 billion on the table.

But what do these technological advances portend for AI chatbots and the tools people use every day? It comes down to the fundamental differences between the computational workhorses of AI training: GPUs and TPUs.

Graphics processing units (GPUs), the backbone of AI computing, have long been adept at handling many tasks in parallel. They are versatile and widely used not only in gaming and graphics rendering, but also in accelerating deep learning tasks.

In contrast, Tensor Processing Units (TPUs) are Google's brainchild, custom-designed for turbocharged machine learning workflows. TPUs excel at specialized tasks, offering fast training times and power efficiency, which are critical when processing the massive datasets that LLMs like Anthropic's Claude require.

The difference between these processors is stark: GPUs (like those used by OpenAI) offer a wide range of applications, while TPUs focus performance on machine learning. This suggests that for startups like Anthropic, which rely on large amounts of data to refine their models, Google's TPUs could provide a compelling advantage, leading to faster development and more intuitive AI interactions.

On the other hand, OpenAI's recent advances, especially GPT-4 Turbo, challenge any presumed leadership by Anthropic. The new Turbo model supports a 128K-token context window, a significant leap from the previous 8K iteration and a direct challenge to Claude's 100K-token capacity.
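To put those context-window sizes in perspective, here is a rough back-of-the-envelope sketch. The conversion figures (about 0.75 English words per token, roughly 500 words per printed page) are common approximations and assumptions on our part, not numbers from the article:

```python
# Rough conversion of context-window sizes (in tokens) to printed pages.
# Assumed conversion factors, not from the article:
WORDS_PER_TOKEN = 0.75   # approximate English words per token
WORDS_PER_PAGE = 500     # approximate words on a printed page

def context_in_pages(tokens: int) -> float:
    """Convert a context window size in tokens to approximate printed pages."""
    return tokens * WORDS_PER_TOKEN / WORDS_PER_PAGE

claude_pages = context_in_pages(100_000)  # Claude's 100K-token window
turbo_pages = context_in_pages(128_000)   # GPT-4 Turbo's 128K-token window

print(f"Claude 100K: ~{claude_pages:.0f} pages")       # ~150 pages
print(f"GPT-4 Turbo 128K: ~{turbo_pages:.0f} pages")   # ~192 pages
```

Under those assumptions, the jump from 100K to 128K tokens is the difference between feeding a model roughly 150 versus 192 pages of text in a single prompt.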

However, the race is far from settled. Google's powerful TPUs could help Anthropic quickly develop even more capable LLMs. And the big context window, while exciting, is a double-edged sword: very long prompts can currently degrade a model's performance.

As the AI race heats up, Anthropic may now have a golden ticket thanks to Google's heavy backing. But OpenAI isn't resting on its laurels, so Anthropic will need to play its cards right, and OpenAI also has Microsoft in its corner.

Edited by Ryan Ozawa.
