The FTC's rule reform targets deepfakes' growing threat to consumer safety.

The U.S. Federal Trade Commission (FTC) is seeking to overhaul its rules prohibiting the impersonation of businesses and government agencies so that they also protect individual consumers, citing the growing threat of artificial intelligence (AI)-driven deepfake fraud.

The revised rule, subject to final language and the public comments the FTC receives, would make it unlawful for a generative artificial intelligence (GenAI) platform to provide products or services that it knows or has reason to know are being used to harm consumers through impersonation.

In a press release, FTC Chair Lina Khan said:

“With voice cloning and other AI-driven scams on the rise, protecting Americans from impersonation fraud is more important than ever. Our proposed expansion of the final impersonation rule does just that, strengthening the FTC's tools to address AI-enabled impersonation fraud.”

The FTC's updated rule on government and business impersonation gives the agency the authority to file federal court cases directly, seeking to force fraudsters to return money obtained by impersonating government or business entities.


The final rule on government and business impersonation will take effect 30 days after it is published in the Federal Register. The public comment period for the supplemental notice of proposed rulemaking, which will explain how to submit comments, will remain open for 60 days following its publication in the Federal Register.

Deepfakes use AI to create convincingly manipulated videos or images, typically by altering a person's face or body. While there are no federal laws against creating or sharing deepfake images, some lawmakers are taking steps to address the problem.

Related: EU committee clears world's first AI law

Celebrities and other individuals victimized by deepfakes can, in theory, turn to legal options such as copyright law, rights protecting their likeness, and various torts (such as invasion of privacy or intentional infliction of emotional distress) to seek redress. But pursuing cases under this patchwork of laws can be a lengthy and burdensome process.

On January 31, the Federal Communications Commission moved to outlaw AI-generated robocalls by reinterpreting an existing law that prohibits spam calls made with artificial or prerecorded voices. The move came after a phone campaign in New Hampshire used a deepfake of President Joe Biden's voice to discourage people from voting. With no action from Congress, states across the country have passed their own laws making deepfakes illegal.

Magazine: Crypto+AI token picks, AGI will take ‘longer', Galaxy AI up to 100M phones: AI Eye
