Anthropic's Claude 2.1 Doubles Down on GPT-4 Turbo's Capacity




Anthropic has released Claude 2.1, a new version of its large language model (LLM) that provides a 200,000-token context window—a figure that exceeds the recently announced 128K context window of OpenAI's GPT-4 Turbo.

This strategic release gives Claude the ability to handle substantially more context than its closest competitor, and is the fruit of an expanded partnership with Google that grants Anthropic access to the company's most advanced Tensor Processing Units.

“Our new model Claude 2.1 offers an industry-leading 200K token context window, a 2x reduction in hallucination rates, system prompts, tool use, and updated pricing,” Anthropic said in a tweet earlier today. The introduction of Claude 2.1 responds to the growing need for AI that can accurately process and analyze lengthy documents.

This update means Claude users can work with documents as large as entire codebases or ancient literary epics, opening up potential in applications ranging from legal analysis to literary criticism.


AI researcher Greg Kamradt quickly put the Claude 2.1 model to the test. He found that OpenAI's model achieves more consistent recall at lower token counts, while Claude's results varied more depending on the length of the prompt.

“Starting at around 90K tokens, recall performance started to get progressively worse at the bottom of the document,” he concluded. His earlier investigation had found similar failure rates for GPT-4 Turbo starting at 65K tokens. “I'm a big fan of Anthropic—they're pushing the boundaries of LLM performance and creating powerful tools for the world,” he posted.
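Kamradt's evaluation follows a "needle in a haystack" pattern: a short fact is buried at varying depths inside long filler text, and the model is asked to retrieve it. The sketch below illustrates that general idea; the function names and filler text are illustrative, not Kamradt's actual harness.

```python
# Sketch of a "needle in a haystack" recall test (names are hypothetical).
# A short fact (the "needle") is inserted at a chosen relative depth in a
# long filler document, and the model is then asked to retrieve it.

NEEDLE = "The best thing to do in San Francisco is eat a sandwich."

def build_haystack(filler: str, needle: str, depth: float) -> str:
    """Insert the needle at a relative depth (0.0 = start, 1.0 = end)."""
    cut = int(len(filler) * depth)
    return filler[:cut] + "\n" + needle + "\n" + filler[cut:]

def make_prompt(haystack: str) -> str:
    """Wrap the document with a retrieval question for the model."""
    return (
        f"{haystack}\n\n"
        "What is the best thing to do in San Francisco? "
        "Answer using only the document above."
    )

# Example: place the needle 90% of the way through a long filler document.
filler = "Lorem ipsum dolor sit amet. " * 400
prompt = make_prompt(build_haystack(filler, NEEDLE, depth=0.9))
# Each (context length, depth) pair is then scored on whether the model's
# reply contains the needle.
```

Sweeping both the total context length and the needle's depth is what produces the "recall degrades near the bottom past ~90K tokens" finding described above.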

Anthropic's commitment to reducing AI errors is evident in Claude 2.1's improved accuracy: the model's hallucination rate has been cut by half, doubling its truthfulness compared to Claude 2.0. These improvements were rigorously tested against complex, factual questions designed to probe the current model's weaknesses. As Decrypt previously reported, hallucinations have been one of Claude's weak points, and such a significant gain in accuracy brings the LLM much closer to parity with GPT-4.

By introducing a tool use feature in its API, Claude 2.1 integrates with advanced user workflows, demonstrating the ability to orchestrate multiple tasks, search the web, and pull from private databases. While still in beta, this feature promises to extend Claude's usefulness in a variety of ways, from performing complex numerical reasoning to providing product recommendations.
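The general tool-use pattern works like this: the client tells the model which tools exist, the model replies with a structured request to call one, and the client executes it and feeds the result back into the conversation. The sketch below illustrates that loop with a made-up stock-price tool; the names and request schema are illustrative and do not reflect Anthropic's actual beta format.

```python
# Hypothetical sketch of the tool-use loop. The tool name, arguments, and
# request schema are illustrative, not Anthropic's actual beta API format.

def get_stock_price(ticker: str) -> float:
    """Stand-in for a real data source the client exposes as a tool."""
    prices = {"ACME": 123.45}
    return prices[ticker]

# Registry of tools the model is told about.
TOOLS = {"get_stock_price": get_stock_price}

def dispatch(tool_call: dict) -> str:
    """Run the tool the model requested and return its result as text."""
    fn = TOOLS[tool_call["name"]]
    return str(fn(**tool_call["arguments"]))

# If the model replies with a structured request like this...
model_request = {"name": "get_stock_price", "arguments": {"ticker": "ACME"}}
# ...the client executes it and feeds the answer back into the conversation.
result = dispatch(model_request)  # "123.45"
```

The model never runs code itself; it only emits the structured request, which keeps the client in control of what actually executes.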

Additionally, Anthropic's Claude 2.1 features “system prompts” designed to improve interaction between the user and the AI. These prompts allow users to set the stage for Claude's behavior by specifying roles, goals, or styles, enhancing Claude's ability to stay in character in role-play scenarios, follow rules, and personalize its responses. This is comparable to OpenAI's custom instructions, but broader in scope.

For example, a user summarizing a financial report can direct Claude to adopt the tone of a technical analyst, ensuring the output is consistent with professional standards. Such customization via system prompts can increase accuracy, reduce hallucinations, and improve overall output quality by making interactions more precise and contextual.
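In Anthropic's API, the system prompt is supplied as a top-level field alongside the conversation turns. The payload below is an illustrative sketch of that shape for the financial-analyst example; the prompt text is made up, and the request is shown as a plain dictionary rather than a live API call.

```python
# Illustrative request payload for a system prompt (the "system" field sits
# alongside the conversation turns). Prompt text and report excerpt are
# made-up examples, not a tested configuration.

payload = {
    "model": "claude-2.1",
    "max_tokens": 1024,
    "system": (
        "You are a technical analyst. Summarize financial reports in a "
        "precise, professional tone and cite figures from the document."
    ),
    "messages": [
        {
            "role": "user",
            "content": "Summarize the attached Q3 report: revenue grew 12%...",
        },
    ],
}
# A client would POST this to Anthropic's messages endpoint with an API key.
```

Keeping the role instruction in the system field, rather than mixing it into the user turn, is what lets Claude hold the persona across the whole conversation.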

However, the full 200K-token context window of Claude 2.1 is reserved for Claude Pro subscribers; free users are limited to Claude 2, with its 100K-token window and accuracy falling somewhere between GPT-3.5 and GPT-4.

The release of Claude 2.1 is set to shift the dynamics of the AI industry. As businesses and consumers weigh their AI options, Claude 2.1's enhanced capabilities offer a compelling choice for those looking to leverage AI for accuracy and adaptability.

Edited by Ryan Ozawa.




