Meta expects legal scrutiny as AI advances raise concerns over child safety.

A group of 34 US states is suing Facebook and Instagram owner Meta, alleging that the company inappropriately targets minors who use its platforms. The lawsuit comes amid rapid advances in artificial intelligence (AI), including text-based and generative AI tools.

Legal representatives from states including California, New York, Ohio, South Dakota, Virginia and Louisiana say Meta uses its algorithms to foster addictive behavior and harm children's mental well-being through in-app features such as the “Like” button.

Government attorneys are pressing ahead with legal action despite recent claims by Meta's chief AI scientist that concerns over the existential risks of the technology are “premature,” and despite Meta reportedly using AI to address trust and safety issues on its platforms.

A screenshot of the filing. Source: CourtListener

Attorneys for the states are seeking various damages, restitution and compensation for each state named in the filing, ranging from $5,000 to $25,000 per lawsuit. Cointelegraph has reached out to Meta for comment but has not yet received a response.


Meanwhile, the UK-based Internet Watch Foundation (IWF) has raised concerns over the alarming proliferation of AI-generated child sexual abuse material (CSAM). In a recent report, the IWF said it found 20,254 AI-generated CSAM images on a single dark web forum in just one month, warning that the surge in such disturbing content risks flooding the internet.

The organization urged international cooperation to combat CSAM, suggesting a multifaceted strategy including adjustments to existing laws, improvements in law enforcement education, and regulatory oversight of AI models.

RELATED: Researchers in China develop hallucination correction engine for AI models

For AI developers, the IWF recommends prohibiting their AI from generating child abuse content, excluding associated models and focusing on removing such material from their models.

Advances in AI image generators have made it far easier to create lifelike depictions of people. Platforms such as Midjourney, Runway, Stable Diffusion and OpenAI's DALL-E are popular examples of tools capable of generating realistic images.


Magazine: ‘AI killed the industry': EasyTranslate boss adapting to change
