Anthropic says no customer data is used in AI training.

Generative artificial intelligence (AI) startup Anthropic has pledged not to use customer data for large language model (LLM) training, following amendments to the Claude developer's commercial terms of service. The company also promises to support customers in copyright disputes.

Anthropic, led by former OpenAI researchers, has updated its commercial terms of service to clarify its position. Under the changes, which take effect in January 2024, Anthropic's commercial customers will own all outputs generated by using its AI models. The company “does not expect to acquire any rights in Customer Content under these Terms.”

In the latter half of 2023, OpenAI, Microsoft and Google similarly pledged to support customers facing legal claims of copyright infringement related to the use of their technologies.

In its updated commercial terms of service, Anthropic has committed to protecting customers from copyright infringement claims arising from their authorized use of the company's services or products. Anthropic said:

“When customers build with Claude, they get more security and peace of mind, as well as a more streamlined API that's easier to use.”

As part of its legal protection commitment, Anthropic said it will pay for any approved settlements or judgments resulting from copyright infringement by its AI. The protections apply to Claude API customers and to customers using Claude through Bedrock, Amazon's generative AI development suite.

Related: Google taught an AI model how to use other AI models and got 40% better at coding

The terms state that Anthropic does not expect to acquire any rights to Customer Content and that neither party gains any rights to the other's content or intellectual property, whether by implication or otherwise.

Advanced LLMs such as Anthropic's Claude, OpenAI's GPT-4 and Meta's Llama 2 are trained on a wide range of textual data. The effectiveness of LLMs relies on diverse and comprehensive training data; learning from a variety of language styles, patterns and new information improves their accuracy and contextual awareness.

Universal Music Group sued Anthropic AI in October 2023 for copyright infringement over “a large number of copyrighted works – including thousands of musical compositions” owned or controlled by publishers.

Meanwhile, author Julian Sancton is suing OpenAI and Microsoft for using his work without permission to train AI models, including ChatGPT.

Magazine: Top AI tools of 2023, weird DEI image guardrails, ‘based' AI bots: AI Eye
