Why Boomers Love Facebook AI Pictures, Mind-Reading AI Breakthrough: AI Eye.
8 months ago Benito Santiago
Even your mind is not immune to AI
AI mind-reading technology has taken another step forward thanks to MindEye2 from Stability AI and Princeton researchers.
Previous mind-reading AI models have been able to create somewhat accurate images of what a person is looking at, but they required expensive, individualized training using functional magnetic resonance imaging (fMRI).
The new model generalizes far better and “shows how accurate reconstructions of perception can be achieved with state-of-the-art imaging and… a single visit to an MRI facility.”
It still takes about 40 hours of training data to get really accurate images; however, the trajectory of the technology suggests it will only get easier.
While impressive from a technical standpoint, the technology is clearly invasive and has serious implications for privacy. Thoughtcrime is exactly the sort of concept a technological dictatorship could embrace.
Stop liking AI photos
Boomers, it seems, haven't gotten the memo and are lapping up endless AI pictures of amazing houses and too-good-to-be-real art projects on Facebook.
A new preprint study examines 120 Facebook pages that funnel users toward spam and scams.
In 2022, Facebook changed its algorithm to start showing users content from pages they don't follow, which now accounts for a quarter of the average news feed, up from 8% in 2021. Engagement-bait AI content now racks up hundreds of millions of views. One post with an AI-generated image was seen by 40 million people and ranked among the 20 most popular posts worldwide in Q3 last year.
Typically, spammers and scammers buy or hijack an existing page before pivoting it to AI-generated content. The researchers found 43 pages posting AI images of wooden houses, 25 posting cute AI images of children, 17 devoted to wood carvings and 10 focused on AI Jesus.
Recurring themes include cute kids proudly displaying cakes or artworks with captions like, “This is my first cake! I'd be glad for your marks” or “My daughter is 9 years old and is taking part in a school competition. Let's encourage her.”
The researchers wrote: “We observed that Facebook users often commented on the images in ways suggesting they did not recognize the images were fake, for example by congratulating an AI-generated child on its AI-generated artwork.”
Another recurring line is “No one ever blessed me,” posted alongside AI images of elderly men, amputees and newborn babies, while the phrase “Made it with my own hands” is ironically plastered across exquisite AI-created woodcarvings, ice sculptures, figurines and sandcastles.
A crab version of Jesus being worshipped by other crabs is likewise tagged “Made it with my own hands!” It received 209,000 engagements and more than 4,000 comments.
Facebook is clearly aware of the problem and has announced plans to flag AI-generated content created with its own generative AI features. It will also implement the C2PA standard to label images from Google, OpenAI, Microsoft, Adobe, Midjourney and Shutterstock once those companies “implement their plans to add metadata to images created by their tools.”
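As background on how that labeling works: C2PA provenance data is embedded in the image file itself, inside a JUMBF container whose label includes the ASCII string “c2pa.” The sketch below is only a crude presence check under that assumption; a real verifier (such as the official C2PA SDK) parses the manifest and validates its cryptographic signatures, which this does not attempt.

```python
# Crude heuristic sketch: detect whether raw image bytes appear to carry
# a C2PA provenance manifest by scanning for the "c2pa" JUMBF label.
# This is a presence check only, not verification of the manifest.

def has_c2pa_marker(data: bytes) -> bool:
    """Return True if the file bytes appear to contain a C2PA manifest."""
    return b"c2pa" in data

def check_image(path: str) -> bool:
    """Convenience wrapper that reads a file from disk and checks it."""
    with open(path, "rb") as f:
        return has_c2pa_marker(f.read())
```

Note that a marker's presence says nothing about validity: a stripped or forged manifest passes this check, which is why the standard relies on signature chains rather than mere metadata presence.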
Live long and prosper with crypto and AI app Rejuve.ai
AI Eye recently caught up with Deborah Duong, chief technology officer of crypto and AI longevity app Rejuve.ai, and was surprised to see her wearing three smartwatches at once.
It turns out she has been recording her own health data from each of the watches (a Garmin, a Fitbit and an Apple Watch) to check whether they accurately measure things like heart rate and blood oxygen levels, with the readings fed through an app and analyzed by AI.
“I've been doing it for two years. Boy, they thought I was crazy!” Duong laughed.
The idea is to crowdsource health information by rewarding users with a token that can be exchanged for discounts on longevity treatments. The AI analyzes your data (including any blood or genomic tests you upload) and then makes recommendations based on a database of 300 meta-analyses of randomized controlled trials.
“We have a way of bringing all those meta-analyses into a coherent picture to calculate your risk of certain conditions associated with longevity,” Duong explained.
AIs are writing scientific papers that are peer-reviewed by other AIs.
More and more scientific papers bearing telltale signs of being written by ChatGPT are making it through the peer review process.
French professor Guillaume Cabanac pointed to a recent paper on lithium metal batteries published in an Elsevier journal as an example. Its introduction begins, “Certainly, here is a possible introduction for your topic,” which is classic ChatGPT boilerplate.
“How come none of the co-authors, editors or reviewers noticed? How is this possible with formal peer review?” he asked. Elsevier said it was investigating; its policy permits the use of LLMs as long as it is disclosed, which, according to Cabanac, it was not.
Google Scholar turns up dozens of other scientific papers containing the phrase “Certainly, here is.”
Similar examples from Elsevier soon surfaced, including a photovoltaics research paper containing the telltale phrase “regenerate response” and a medical article about iatrogenic portal vein injury containing “I'm sorry, but… I'm an AI language model.”
Adding insult to injury, a study of peer reviews of scientific papers estimated that between 6.5% and 16.0% of the reviews were themselves written by AIs. The estimate is based on the frequency of words such as “commendable,” “meticulous” and “intricate,” which turn up as much as 30 times more often in LLM-generated text.
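As a rough illustration of that word-frequency approach (the marker words and sample “reviews” below are invented for the sketch; the actual study fit corpus-level usage shifts rather than counting per review):

```python
# Toy sketch of marker-word frequency estimation: count how often
# LLM-flavored adjectives appear per 1,000 tokens in a set of reviews,
# then compare rates between corpora. Word list is illustrative only.
import re
from collections import Counter

MARKER_WORDS = {"commendable", "meticulous", "intricate"}

def marker_rate(texts: list[str]) -> float:
    """Marker-word occurrences per 1,000 tokens across all texts."""
    tokens = [w for t in texts for w in re.findall(r"[a-z']+", t.lower())]
    hits = sum(Counter(tokens)[w] for w in MARKER_WORDS)
    return 1000 * hits / max(len(tokens), 1)

human_reviews = ["The method is sound, but the ablation study is missing."]
llm_reviews = ["This commendable paper offers a meticulous, intricate analysis."]
```

A large rate gap between a suspect corpus and a human-written baseline is suggestive, not conclusive: individual human reviewers can also favor these words, which is why the study reported population-level estimates rather than flagging individual reviews.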
Putting AI minds into robot bodies
Some experts believe one path to a well-rounded AGI is to give AI a physical form so it can learn by interacting with the real world.
At this week's GPU Technology Conference, NVIDIA unveiled Project GR00T (Generalist Robot 00 Technology), an attempt to build brains for humanoid robots. The aim is to enable robots to reason, understand natural language, learn skills and mimic human movements. It runs on NVIDIA's new Jetson Thor system-on-a-chip alongside upgrades to its Isaac robotics platform.
NVIDIA co-founder Jensen Huang called building a general-purpose foundational model for robots “one of the most exciting foundational problems to be solved in AI today.”
A slick video from NVIDIA's presentation shows researchers training countless robot prototypes in a simulated environment. The robots also appear to learn from just a handful of human demonstrations how to use a juicer, pull a tray out of an oven or play the drums.
Some demonstrations were tagged “teleoperated,” while others showed a virtual robot in an Omniverse digital twin mimicking human actions before a real-world robot did the same.
NVIDIA's tech is supporting robots from 1X Technologies, Agility Robotics, Apptronik, Boston Dynamics, Figure AI, Fourier Intelligence, Sanctuary AI, Unitree Robotics and XPENG Robotics.
“We are at a turning point in history, with human-centric robots like Digit poised to change labor forever. Modern AI will accelerate development, paving the way for robots like Digit to help people in all aspects of daily life,” said Agility Robotics co-founder Jonathan Hurst.
NEAR founder Illia Polosukhin also appeared at the conference, discussing with Huang his role as a co-author of the transformer paper that led to modern LLMs.
A separate demo of Figure 01, a humanoid robot that uses OpenAI technology, has been viewed 10 million times. It shows the robot, which sounds suspiciously like Rob Lowe, putting a plate into a drying rack and handing someone an apple.
Figure AI's Corey Lynch says the robot can plan future actions, reflect on its memory and verbally explain its reasoning.
“Even a few years ago, I would have thought having a full conversation with a humanoid robot while it plans and carries out its own fully learned behaviors would be something we would have to wait decades to see. Obviously, many things have changed :)” …
All killer, no filler AI news.
– OpenAI boss Sam Altman said GPT-4 “kind of sucks” compared to what's coming. But this will not necessarily be GPT-5. “We're releasing an amazing model this year. I don't know what we're going to call it,” he said.
– Apple has published a new paper on its MM1 family of multimodal AI models, which can understand both text and images and are up to 30B parameters in size.
– India scraps plan to force AI model developers to seek government approval
– After Elon Musk released Grok's model weights, AI doomer Tolga Bilge called the whole concept of open-sourcing AI models a “total scam,” because the released code and weights don't include the training data or give users any insight into the model's inner workings. “You can't reproduce the program; you only have the program!” he said.
– Blogger Noah Smith argues that even in a world full of capable AIs, there will still be plenty of good, high-paying jobs for humans. His argument rests on the economic concept of “comparative advantage”: there will still be many things it is economically beneficial for humans to do. AIs will be constrained by power and available compute, so they will prioritize the highest-value tasks and leave the rest to people.
— The market cap of the AI + crypto sector rose 150% to $25.1 billion in under a month, led by Internet Computer (ICP), Bittensor (TAO), The Graph (GRT), Fetch.ai (FET), SingularityNET (AGIX) and Worldcoin (WLD).
Andrew Fenton
Based in Melbourne, Andrew Fenton is a journalist and editor covering cryptocurrency and blockchain. He has worked as a national entertainment writer for News Corp Australia, a film journalist for SA Weekend and at The Melbourne Weekly.
Follow the author @andrewfenton