Pick-Up Artists Using AI, Deepfake Nudes Outlawed, Rabbit R1 Reviewed: AI Eye



AI Tupac vs. AI Drake

A little over a year ago, a fake AI song featuring Drake and The Weeknd racked up 20 million views in two days before Universal Music pulled the track for copyright infringement. The shoe was on the other foot this week, however, when lawyers for Tupac Shakur's estate threatened to sue Drake over his Kendrick Lamar diss track "Taylor Made Freestyle," which used AI-faked vocals to "impersonate" Tupac. Drake has since removed the track from his X profile, though it's not hard to find if you go looking.

Deepfake nudes to be criminalized

Apps like Nudeify simplify the process of creating deepfake nudes. (Nudeify)

The governments of Australia and the United Kingdom have both announced plans to criminalize the creation of deepfake pornography made without the consent of the people depicted. As AI Eye reported in December, a range of apps including Reface, DeepNude and Nudeify make creating deepfakes easy for anyone with a smartphone. According to Graphika, deepfake nude websites receive tens of millions of hits every month.

Baltimore police have arrested former Pikesville High School athletic director Dazhon Darien, alleging he used AI voice-cloning software to create a fake racist tirade by the school's principal, in retaliation for the principal investigating him over the alleged theft of school funds.


Darien sent audio of the principal supposedly making racist comments about Black and Jewish students to another teacher, who forwarded it to students, the media and the NAACP. The principal was forced to step down amid the outcry, but a forensic analysis revealed the voice was a fake, and investigators arrested Darien at the airport as he was about to fly to Houston with a gun.

Everyone in the media, at least, seems to hate Meta's new AI integration in Instagram's search bar, mostly because it's too eager to chat and not very good at searching. The bot has also been uninvitedly joining Facebook group conversations and talking nonsense whenever a question goes unanswered for an hour.

AI priest defrocked

An AI Catholic priest was defrocked after just two days, partly for endorsing incest. California-based Catholic Answers introduced its Father Justin chatbot last week to answer educational questions about the Catholic faith.

But after it began advising that people could baptize their children in Gatorade, and blessed the "joyous occasion" of a brother and sister getting married, Catholic Answers was forced to apologize and demote the chatbot to plain old Justin. "Among the user comments was criticism of the representation of the AI character as a priest," CA said. "We won't say he's been laicized, because he never was a real priest!"

Now it's just “Justin”. (Catholic Answers)

Rabbit R1 reviews

Noted tech reviewer Marques Brownlee said the Rabbit R1 "has a lot in common with the Humane AI Pin," and you knew right away that meant the device was in trouble: Brownlee panned the Humane device two weeks ago. The Rabbit R1 is a handheld AI device you interact with primarily by voice, with it operating apps on your behalf. Brownlee criticized the device as barely finished and "borderline unusable," said it has terrible battery life, and found it wasn't very good at answering questions.

TechRadar called the R1 "a beautiful mess" and said the market cannot support "a product that's so far from ready for the mass consumer." A CNET reviewer conceded "there were moments when everything clicked and I understood the hype," but said the negatives far outweighed the positives. The main issue with the AI devices released so far is that they're more limited than smartphones, which already perform the same tasks more effectively.

Fake live streams targeting women

New apps called Parallel Live and Famefy use AI-generated audience interaction to fake large social media audiences for livestreams, and pickup artists are reportedly using the apps as social proof to impress women. In one video, influencer ItsPolaKid shows a woman in a bar that he's "livestreaming" to 20,000 people; she asks him if he's rich, and they leave together. "The audience is AI that can hear you and respond, which is hilarious," the influencer said. "She couldn't get enough."

The rule of thumb on social media is that whenever an influencer mentions a product, it's probably an ad. Parallel Live creator Ethan Keizer has released his own promo videos racking up millions of views, pushing the same line that social proof from fake audiences can land users models and invitations to club VIP sections. 404 Media's Jason Koebler reported that the apps use AI speech-to-text recognition, meaning the fake AI viewers "responded" to things Koebler said out loud while testing the apps.

Influencers are using AI audiences to sell the apps to real audiences, potentially enabling pickups. (itspolokidd/Instagram)

“No-AI” guarantee for books

British author Richard Haywood is a self-publishing superstar, with his Undead series of post-apocalyptic novels selling more than 4 million copies. He's fighting back against zombie "authors" by adding a NO-AI label to all his books, with a "legally binding guarantee" that each novel was written without the help of ChatGPT or any other AI. Haywood believes around 100,000 fake books have been published in the past year or so, and that an AI-free guarantee is the only way to protect authors and consumers.

AI reduces heart disease deaths by one-third

An AI trained on nearly half a million ECG tests and survival data in Taiwan was used to identify the top 5% of patients most at risk of heart problems. A study in Nature found the AI reduced overall heart disease deaths among patients by 31%, and by 90% among high-risk patients.

AIs are just as dumb as we are

With large language models converging around human baselines across multiple tests, Meta's chief AI scientist Yann LeCun argues that human intelligence could be the ceiling for LLMs due to their training data.

"As long as AI systems are trained to reproduce human-generated data (e.g., text) and lack search/planning/reasoning skills, their performance will be at or near human level."

AbacusAI CEO Bindu Reddy agrees that models "have hit a wall," despite progressively more compute and data being added. "So it's probably impossible to get past a certain level with plain language models," she says, adding that even if LLMs did exhibit superhuman abilities, we wouldn't be able to detect them.

AIs are built around human intelligence. (Pedro Domingos)

Safety board doesn't believe in open source

The U.S. Department of Homeland Security has enlisted the heads of major AI companies, including OpenAI, Microsoft, Alphabet and Nvidia, for its new AI Safety and Security Board. But the board has been criticized for not including a representative from Meta, which pursues an open-source AI model strategy, or indeed anyone at all working on open-source AI. Perhaps it has already been deemed unsafe.

Homeland Security's X post about the AI Safety and Security Board. (Homeland Security)

Andrew Fenton

Based in Melbourne, Andrew Fenton is a journalist and editor covering cryptocurrency and blockchain. He has worked as a film journalist for News Corp Australia and SA Weekend, and as national entertainment writer for Melbourne Weekly.


