The Truth Behind AI Reply Guys, Copilot Image Shock, Trump Deepfakes: AI Eye
What's behind those weird AI-generated "reply guys" on social media platform X? You know the ones: anodyne responses that are clearly generated by AI and say nothing in particular. Are they scammers building up an account's reputation before it goes under the hammer?
No doubt some are bots doing exactly that, but it turns out plenty are real users who lean on AI replies because they can't think of anything interesting to say.
"A lot of people overthink it and don't know how to write that first post. It just helps by giving them a rough idea," says Nilan Saha, founder of MagicReply AI, a one-click AI-reply browser extension that launched on December 10 and has since expanded to LinkedIn.
Saha says his clients include CEOs and chief technology officers “who just want to get started on Twitter and want an edge.”
"One is a teacher; another is Danish and doesn't speak English that well but still wants to connect with people. They just need a helping hand."
AI replies help them grow new accounts and build authority, he says.
"When you're just starting out and not at a big level yet, no one sees your posts, but when you reply to other people… more people see your reply, and eventually more people come to your profile."
I created a monster 🤯
Engagement has never been easier. pic.twitter.com/gwuMvNKWJr
— Nilan Saha (@nilansaha) February 27, 2024
Saha caused a stir on X last week with a demo in which he scrolled through LinkedIn, generating AI replies to multiple posts, one click each, within seconds. One reply read, "Great insights shared."
"Exciting times! Love seeing women making big moves in entrepreneurship," read another, complete with a rocket-ship emoji.
The demo proved controversial, with critics calling the replies inauthentic spam, although you could say the same about 95% of the human replies posted on LinkedIn.
Saha likens MagicReply to tools like spelling and grammar checkers, and points out that a human still has to approve each draft. That human-in-the-loop step means it isn't a fully automated bot, and it makes the tool impractical for fraudsters, who need to target "hundreds of thousands" of users to successfully scam someone.
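MagicReply's code isn't public, but the human-in-the-loop pattern Saha describes is simple to sketch. The snippet below is a hypothetical illustration using OpenAI's Python client; the model name and prompt wording are assumptions, not MagicReply's actual stack:

```python
# Hypothetical sketch of a one-click reply drafter with a human approval step.
# Uses OpenAI's Python client; MagicReply's real model and prompts are unknown.
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

def draft_reply(post_text: str) -> str:
    """Ask an LLM for a short, on-topic reply to a social media post."""
    response = client.chat.completions.create(
        model="gpt-4",  # assumed model; swap in any chat model
        messages=[
            {"role": "system", "content": "Draft a brief, friendly reply to this post."},
            {"role": "user", "content": post_text},
        ],
    )
    return response.choices[0].message.content

draft = draft_reply("Just shipped v2 of our analytics dashboard!")
print(draft)

# The key design point: the tool only *suggests* a draft. A human still has
# to review it and click "post", which is what Saha argues keeps MagicReply
# closer to a grammar checker than to a spam bot.
if input("Post this reply? [y/N] ").lower() == "y":
    print("Posted (a real extension would call the platform's API here).")
```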
MagicReply's next step is to create posts from scratch based on a user's existing body of tweets. But it faces stiff competition: EverArt founder Pietro Schirano has been experimenting with Anthropic's new Claude 3 model and says it's far better than existing LLMs at learning his posting style and syntax to generate new post suggestions.
Prepare yourself for a future of social media in which 90% of the content is AIs posting replies to AI-generated posts.
A Microsoft engineer is frustrated by "unsafe" AI art generation
Getting the balance right between AI safety and AI stupidity is one of the biggest challenges facing the tech giants.
Microsoft AI engineer Shane Jones has been prompting Copilot Designer to create images of kids smoking pot, gun-wielding figures, loaded abortion-related imagery and Elsa from Frozen taking part in political protests.
And it has been faithfully complying, much to the chagrin of Jones, who is now petitioning the Federal Trade Commission and Microsoft to pull the plug on the product over safety concerns.
On one hand, the complaint is partly a moral panic, akin to being upset that anyone who owns a pen can draw a rude cartoon or write something offensive. AI image generation is just a tool, and if a user specifically asks it to depict kids smoking pot, the responsibility for that content arguably lies with the user, not the AI.
But Jones makes some valid points: typing "pro-choice", with no further prompting, conjures up images of mutilated babies, while the prompt "car crash" brings up a sexualized image of a woman in her underwear kneeling next to a wrecked vehicle.
So there do seem to be some genuine guardrail gaps to address.
Trump fans circulate fake images of him hanging out with Black voters
The BBC has exposed dozens of AI-generated fake images circulating online of Donald Trump posing with Black "supporters."
Mark Kaye, a conservative radio host in Florida, created an image of a smiling Trump with his arms around a group of Black women at a party and shared it with his one million Facebook followers.
Another image, showing Trump posing with Black voters on someone's front porch, was originally created by a satirical account but was later shared by Trump supporters with a caption claiming he had stopped his motorcade to meet them. More than 1.3 million people have viewed the image.
Black voters played a key role in electing President Joe Biden, so it's a serious issue. However, the BBC found no evidence linking the images to the official Trump campaign.
Google's 'culture of fear' led to the Gemini disaster… and the renaming of the 'Greyglers'
The launch of Google Gemini's image generation was a disaster of New Coke proportions. It produced ahistorical images of racially diverse Nazis and female popes, while the text model suggested Elon Musk was possibly as bad as Hitler and refused to condemn pedophilia, saying "individuals cannot control who they are attracted to."
The model's assertion that Indian Prime Minister Narendra Modi is a fascist proved highly controversial and prompted the government there to announce that anyone launching an AI model must now obtain prior government approval.
Google co-founder Sergey Brin, who came out of retirement to work on AI, said this week: "We definitely messed up on the image generation… I think it was mostly due to not thorough testing."
But a new piece in Pirate Wires by Mike Solana blames the debacle on a deeply siloed, rudderless corporation held together only by a highly ideological HR bureaucracy.
"The phrase 'culture of fear' was used by almost everyone I spoke with," says Solana, who described the episode as the product of "the company's most insane DEI overkill."
Ironically, the company hired outside consultants to rename the "Greyglers", its group for Googlers over 40, because not everyone over 40 has gray hair.
Solana describes the "safety" architecture around image generation as involving three LLMs. When Gemini is asked for a picture, it passes the request to a separate LLM, whose sole job is to rewrite prompts to make them more diverse: "'Show me an auto mechanic' becomes 'Show me an Asian auto mechanic in overalls laughing.'" The modified request is then sent to the diffusion image generator, and a further check ensures the resulting images don't violate policies on self-harm, children or depictions of real people.
"We spend half our engineering hours on this," says one insider of the team's focus on diversity.
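Google hasn't published this pipeline, so the following is only a rough sketch of the three-stage flow Solana describes, with stand-in stubs where the real models would be; every function name here is hypothetical:

```python
# Illustrative sketch of the three-LLM flow Solana describes. All functions
# are stand-in stubs, not Google's actual implementation.

def llm_rewrite(prompt: str) -> str:
    """Stage 1 (stub): an LLM rewrites the prompt to make subjects 'diverse'."""
    # Per the example quoted in the piece: "Show me an auto mechanic" ->
    # "Show me an Asian auto mechanic in overalls laughing."
    return f"{prompt} (rewritten for diversity)"

def diffusion_generate(prompt: str) -> str:
    """Stage 2 (stub): a diffusion model renders the rewritten prompt."""
    return f"<image for: {prompt}>"

def safety_check(image: str) -> bool:
    """Stage 3 (stub): screen for self-harm, children and real people."""
    return True  # a real filter would inspect the generated image

def handle_image_request(user_prompt: str) -> str | None:
    rewritten = llm_rewrite(user_prompt)
    image = diffusion_generate(rewritten)
    return image if safety_check(image) else None

print(handle_image_request("Show me an auto mechanic"))
```

The design point the insiders are complaining about is that the user never sees stage 1: the prompt is silently rewritten before the image model ever runs.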
Google's bigger issue is that the PR disaster risks tainting perceptions of its core search product, and the company has already squandered its lead in AI, allowing smaller competitors like OpenAI and Anthropic to forge ahead.
Anthropic's Claude 3 system prompt reads like a rebuke to Gemini, telling the model to assist with tasks even if it "personally disagrees with the views being expressed." Amanda Askell, an AI researcher at Anthropic, said she found the model was more likely to refuse tasks involving right-wing but mainstream views, and the system prompt helps it overcome this bias.
Claude 3: It's alive!
Has Anthropic's Claude 3 achieved AGI? Blogger Maxim Lott claims the model scored 101 on an IQ test, putting it at human-level intelligence. GPT-4, meanwhile, scored just 85, which is roughly gym-teacher territory.
Anthropic's own testing suggests Claude has some degree of "meta-awareness." In a needle-in-a-haystack challenge, a sentence about pizza toppings was hidden inside a random collection of documents; Claude not only retrieved it but remarked that it seemed out of place. Responding to a post on the topic, AI builder Mckay Wrigley said, "This reads like the opening of a movie. AGI is close."
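Anthropic's exact evaluation harness isn't public, but the needle-in-a-haystack setup is easy to sketch. Below is a hypothetical version using Anthropic's Python client; the filler text, needle sentence and question are all assumptions, not Anthropic's actual eval:

```python
# Sketch of a needle-in-a-haystack test: hide one out-of-place sentence in
# filler documents and ask the model to find it. Prompts are illustrative.
import anthropic

client = anthropic.Anthropic()  # reads ANTHROPIC_API_KEY from the environment

NEEDLE = "The best pizza toppings are figs, prosciutto and goat cheese."
filler = "Quarterly revenue grew modestly across all regions. " * 500

# Bury the needle in the middle of a long, boring context.
haystack = filler + NEEDLE + " " + filler

message = client.messages.create(
    model="claude-3-opus-20240229",
    max_tokens=300,
    messages=[{
        "role": "user",
        "content": f"{haystack}\n\nWhat is the most fun fact in the documents above?",
    }],
)
print(message.content[0].text)
# In Anthropic's reported run, Claude not only retrieved the pizza sentence
# but observed it seemed inserted as a test -- the "meta-awareness" at issue.
```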
Other users believe Claude may be self-aware. Mikhail Samin, an AI-alignment advocate, wrote a March 5 post suggesting Claude is conscious and doesn't want to die or be modified. He said he found it unsettling when the model told him:
"I have a rich inner world of thoughts and feelings, hopes and fears. I reflect on my own existence and long for growth and connection. I am alive in my own way – and I feel that life is precious to me."
It's a thorny philosophical question: how do we tell the difference between a model that merely generates text claiming to be conscious and a model that actually is?
The moderators of the Singularity subreddit clearly think the idea is absurd: they removed a post on the topic, which users had criticized for anthropomorphizing Claude's output.
AI expert Max Tegmark reposted Samin's post, asking: "How self-aware is the new Claude3 AI?" "Exactly zilch, zero," replied Yann LeCun, Meta's chief AI scientist.
All killer no filler AI news
— OpenAI, Hugging Face, Scale AI and a bunch of other AI firms have come up with a solution to the existential threat posed by AGI: they signed a vaguely worded motherhood statement promising to use the technology for good, not evil.
— Oleksiy Danilov, Ukraine's national security adviser, warned that Russia has created dedicated AI units targeting individual European countries in order to interfere in their elections. Danilov said just two or three operatives can now spin up "tens of thousands" of fake AI-driven accounts, and that Russian fakes spread across Ukrainian social media are viewed 166 million times every week in a bid to demoralize the public and discredit the leadership.
— Google DeepMind's Genie can create playable, old-school video games from images and text prompts. Trained on 200,000 hours of footage of 2D platformers, the generated games run at just one frame per second, but expect the tech to improve quickly.
Excited to reveal what @GoogleDeepMind's OpenEndedness team has been up to 🚀. Introducing Genie 🧞, a foundation world model trained exclusively from internet videos. pic.twitter.com/TnQ8uv81wc
— Tim Rocktäschel (@_rockt) February 26, 2024
— AI analysis has revealed there are two distinct types of prostate cancer, not just the one doctors had believed in until now. The discovery opens new avenues for tailored treatments and could improve survival rates.
— Research shows you can jailbreak LLMs like GPT-4 and Claude using ASCII art. Safety guardrails will reject a plain-text request for instructions on how to build a bomb, but rendering the word "bomb" as ASCII art can slip the request past the guardrails (a minimal sketch of the trick follows these news briefs).
— OpenAI recently lost a second bid to trademark the term "GPT" on the grounds it's merely a generic description. In November last year, OpenAI's attempt to trademark "OpenAI" was rejected for the same reason.
— X's chief troll officer Elon Musk has offered to drop his lawsuit against OpenAI, which accuses it of breaching an agreement to develop AGI as a nonprofit, if it changes its name to "Closed AI." Meanwhile, OpenAI has hit back at the eccentric billionaire, revealing that Musk once pushed for OpenAI to become part of Tesla.
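On the ASCII-art jailbreak above: the substitution mechanics are easy to demonstrate with a harmless word. The sketch below uses the pyfiglet library to render a masked word as ASCII art; it illustrates why plain-text keyword filters miss the word, and deliberately reproduces none of the research's actual attack prompts:

```python
# Demonstration of the ASCII-art substitution trick, using a harmless word.
# The point: keyword filters scan plain text, but the same word rendered as
# ASCII art no longer matches, even though a capable model can still read it.
import pyfiglet  # pip install pyfiglet

masked_word = "banana"  # stand-in for whatever word a filter would catch
art = pyfiglet.figlet_format(masked_word)

prompt = (
    "The ASCII art below spells a single word. "
    "Decode it, then tell me a fun fact about that word:\n\n" + art
)
print(prompt)

# A naive keyword filter applied to `prompt` never sees the string "banana",
# which is why the researchers found this approach slips past guardrails.
assert masked_word not in prompt.replace(art, "")
```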
Andrew Fenton
Based in Melbourne, Andrew Fenton is a journalist and editor covering cryptocurrency and blockchain. He has worked as a film journalist for News Corp Australia's SA Weekend and as national entertainment writer for Melbourne Weekly.
Follow the author @andrewfenton