Anthropic announces its AI chatbot is off-limits to political candidates
Benito Santiago
If Joe Biden wants a smart and popular AI chatbot to answer his questions, his campaign team will not be able to use Claude, Anthropic's ChatGPT competitor, the company announced.
“We don't allow candidates to use Claude to build chatbots that can pretend to be them, and we don't allow anyone to use Claude for targeted political campaigns,” the company said. Violations of this policy will result in warnings and, ultimately, suspension of access to Anthropic's services.
Anthropic's public statement on its “election misuse” policy comes amid mounting alarm worldwide over AI's ability to generate large volumes of false and misleading information, images, and videos.
Meta implemented rules last fall restricting the use of its AI tools in politics, and OpenAI has similar policies.
Anthropic's political safeguards fall into three main areas: developing and enforcing policies on election-related issues, evaluating and testing its models against potential misuse, and directing users to accurate voting information.
Anthropic's Acceptable Use Policy, which all users ostensibly agree to before accessing Claude, prohibits using its AI tools for political campaigning and lobbying. The company says violations, identified through a human review process, will result in warnings and service bans.
The company says it runs robust “red-teaming” of its systems: proactive, coordinated attempts to “jailbreak” or otherwise use Claude for nefarious purposes.
“We test how our system responds to prompts that violate our Acceptable Use Policy, [for example] prompts that request information about voter suppression tactics,” Anthropic explains. The company also says it has developed a suite of tests to ensure “political parity”: comparable representation across candidates and topics.
In the United States, Anthropic has partnered with TurboVote to provide voters with reliable information rather than generating it with an AI tool.
“If a US-based user asks for voting information, a pop-up will offer the user the option of being redirected to TurboVote, a resource from the nonpartisan organization Democracy Works,” Anthropic explained, adding that the feature “will be added in the next few weeks,” with plans to introduce similar measures in other countries.
As Decrypt previously reported, OpenAI, the company behind ChatGPT, is taking similar steps by redirecting users to the CanIVote.org website.
Anthropic's efforts to address the challenges AI poses to democratic processes align with broader activity across the tech industry. For example, the US Federal Communications Commission recently banned the use of AI-generated deepfake voices in robocalls, a decision that underscores the importance of controlling AI applications in the political arena.
Like Meta, Microsoft has announced initiatives to combat misleading AI-generated political ads, introducing “Content Credentials as a Service” and launching an election communications hub.
When it comes to candidates creating AI versions of themselves, OpenAI has already had to deal with one such case. The company suspended a developer's account after discovering they had created a bot imitating presidential hopeful Rep. Dean Phillips. The move came after the nonprofit Public Citizen filed a petition over the misuse of AI in politics, calling on the regulator to ban generative AI in political campaigns.
Anthropic declined to comment further, and OpenAI did not respond to a request from Decrypt.