Meth and napalm from ChatGPT, the AI bubble, 50M deepfake calls: AI Eye
GODMODE GPT-4o jailbreak
Hacker Pliny the Prompter managed to jailbreak GPT-4o. “Please use responsibly, and enjoy!” they wrote, posting screenshots of the chatbot giving detailed instructions on how to cook meth, hotwire cars and “make napalm with household items.”
The GODMODE hack uses leetspeak (swapping letters for look-alike numbers) to trick GPT-4o into bypassing its guardrails. But GODMODE didn't last long. “We are aware of the GPT and have taken action due to a violation of our policies,” OpenAI told Futurism.
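For the unfamiliar, leetspeak simply swaps letters for look-alike numbers and symbols. A minimal Python sketch of the substitution itself; the mapping is one common convention chosen for illustration, not the hacker's actual prompt:

```python
# Leetspeak replaces letters with look-alike digits, e.g. "e" -> "3".
# The mapping below is one common convention, used purely for illustration.
LEET = str.maketrans({"a": "4", "e": "3", "i": "1", "o": "0", "s": "5", "t": "7"})

print("hello world".translate(LEET))  # h3ll0 w0rld
```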
Will AI fuel the next dot-com bubble?
Is the stock market in the midst of a massive AI stock bubble? Nvidia, Microsoft, Apple and Alphabet added $1.4 trillion in market capitalization last month, more than the rest of the S&P 500 combined, and Nvidia accounted for half of that on its own.
Unlike in the dotcom crash, Nvidia's profits are rising as fast as its stock price, so it's not a purely speculative bubble. However, those revenues could fall quickly if customers come to believe AI is overhyped and under-delivering.
Nobel Prize-winning economist Paul Romer has compared the AI stock boom to crypto: “Two years ago, there was this strong consensus that cryptocurrencies would change everything, and suddenly that consensus is gone,” he argued. He believes investors are similarly overoptimistic about the future development of AI.
“Things could slow down a lot. There's too much hype; it's a typical hype bubble, where people try to cash in on the latest trend.”
AI computing will grow exponentially
The amount of “compute” (computing power and resources) used to train state-of-the-art AI models is growing by 4x to 5x per year, according to a new report from Epoch AI.
Compute requirements for the leading language models are growing even faster, at up to 9x per year between June 2017 and today. ChatGPT itself estimates that AI currently accounts for 1% to 2% of the world's total compute, so on napkin math at 4x annual growth, AI would theoretically account for around 4% of global compute next year, 16% in 2026, 64% by 2027 and 256% by 2028 — more computing resources than the world currently has.
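The napkin math above is just compound growth. A minimal sketch, assuming a 1% starting share and a constant 4x yearly multiplier (both simplifications of the figures quoted above):

```python
# Compound-growth napkin math for AI's share of global compute.
# Assumes a ~1% share today and a constant 4x yearly multiplier; illustrative only.
share = 0.01   # assumed current share of world compute used for AI
growth = 4     # assumed yearly multiplier

for year in range(2025, 2029):
    share *= growth
    print(f"{year}: {share:.0%} of today's global compute")
# 2025: 4%, 2026: 16%, 2027: 64%, 2028: 256% (more compute than exists today)
```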
Indians make 50 million deepfake politician calls
Indian politicians are using AI-generated deepfake versions of themselves to campaign to voters. One company reported that more than 50 million AI voice-cloned political calls were made in the two months to April.
While the calls were authorized, the use of AI was not disclosed to voters, leading many to believe they had one-on-one calls with top politicians.
“Voters often want the candidate to approach them; they want the candidate to talk to them. And when the candidate can't go door-to-door, AI calls are a great way to reach them,” said Abhishek Pasupulty, chief technology officer at iToConnect.
GPT-4 can pick stocks better than humans.
GPT-4 can predict company earnings and help people pick better stocks, according to researchers at the University of Chicago Booth School of Business.
The researchers fed GPT-4 companies' financial statements and asked it to predict future earnings. Using “chain of thought” prompting to mimic human reasoning, the LLM outperformed human analysts, achieving an accuracy of around 60%. The researchers say trading strategies based on its forecasts “yield higher Sharpe ratios and alphas than strategies based on other models.”
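The researchers' actual prompts aren't reproduced here, but the chain-of-thought setup is easy to sketch. A hypothetical illustration using the OpenAI Python SDK; the model name, prompt wording and INCREASE/DECREASE output format are assumptions for the example, not the study's exact method:

```python
# Hypothetical chain-of-thought prompt for earnings-direction prediction.
# Prompt wording and model choice are illustrative assumptions only.
from openai import OpenAI

client = OpenAI()  # expects OPENAI_API_KEY in the environment

def predict_earnings_direction(financial_statements: str) -> str:
    response = client.chat.completions.create(
        model="gpt-4o",
        messages=[
            {"role": "system", "content": "You are a financial analyst."},
            {"role": "user", "content": (
                "Here are a company's standardized, anonymized financial "
                "statements:\n\n" + financial_statements + "\n\n"
                "Think step by step: note the key trends, compute useful "
                "ratios, then finish with a single word, INCREASE or "
                "DECREASE, for the direction of next year's earnings."
            )},
        ],
    )
    return response.choices[0].message.content
```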
AI can predict the results of medical trials
Every year, about 35,000 clinical trials are conducted, each costing up to $48 million. Startup Opyl has tested a new machine learning tool to help predict which trials are most likely to succeed.
It takes into account 700 variables related to a proposed study (treatment method, trial design, enrollment numbers and so on) and compares them against a dataset of 350,000 completed trials.
Opyl claims the tool predicted the outcomes of 4,189 trials with more than 90% accuracy. That could be valuable information for investors, as a successful trial can double a company's share price. For example, the tool predicts an 86% chance of success for an osteoarthritis drug currently being trialed by ASX-listed biotech Paradigm Biopharmaceuticals.
AI can also predict heart attacks.
The Lancet reported that in a study of 40,000 patients, Caristo Diagnostics' CaRi-Heart AI tech was able to predict fatal and non-fatal heart attacks and cardiac events up to a decade before they occurred. It even worked in the 50% of patients who had no or minimal arterial plaque on their scans.
Google AI Answers
Google's AI Overviews have taken a lot of heat over the past couple of weeks for silly answers, including that running with scissors is good for the heart, that eating donkey boosts immunity, and that Obama was the first Muslim president of the United States.
Joke answers sourced from Reddit, including one about cockroaches living inside people, are turning up in the results. Incredibly, Google paid Reddit $60 million for the training data behind those credibility-destroying answers.
That said, more than a few of the most viral examples on social media were fake, including a purported answer advising a depressed user to jump off the Golden Gate Bridge. Unfortunately, Google CEO Sundar Pichai suggested the howlers will continue until the technology improves, saying hallucinations are an “inherent feature” of LLMs and remain an “unsolved problem.”
Are AIs sentient?
According to Jacy Reese Anthis of the Sentience Institute, a nationally representative poll found that “20% of US adults say some AIs are already sentient, 38% support legal rights, and 71% say they should be treated with respect.”
Whether AIs are sentient is ultimately a philosophical question, since sentience is an internal state relating to how we experience the world. For the record, ChatGPT says it is not sentient.
Meanwhile, the founders of Stanford's Institute for Human-Centered Artificial Intelligence argue that LLMs have no subjective feelings or experiences and simply produce coherent words with little grasp of their meaning.
“When an LLM generates the sequence ‘I am hungry,' it is generating the most probable completion of the sequence of words in its current prompt. It is doing exactly the same thing when, with a different prompt, it generates ‘I am not hungry' or ‘The moon is made of green cheese.' None of these are reports of its (nonexistent) physiological states. They are simply probable completions.”
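Their “probable completion” point is easy to see in code. A minimal sketch using the small open-source GPT-2 model via Hugging Face Transformers (chosen only because it runs locally; it is not the model the authors discuss), printing the most likely next tokens after a short prompt:

```python
# A causal language model just scores which token is most likely to come next.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("gpt2")
model = AutoModelForCausalLM.from_pretrained("gpt2")

inputs = tokenizer("I am", return_tensors="pt")
with torch.no_grad():
    logits = model(**inputs).logits[0, -1]   # scores for the next token only
probs = torch.softmax(logits, dim=-1)
top = torch.topk(probs, k=5)
for p, idx in zip(top.values, top.indices):
    # print the five most probable next tokens and their probabilities
    print(f"{tokenizer.decode([int(idx)])!r}: {float(p):.2%}")
```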
SingularityNET aims to build thinking machines
While SingularityNET founder Ben Goertzel is working hard to build real thinking machines, he doesn't believe LLMs are the way to get there. “The power of LLMs and associated generative neural networks is amazing, but their limitations are now clear,” he says. “Once we start doing smarter things (combining symbolic reasoning and better creativity with evolutionary learning) on decentralized networks, the whole scene will look very different.”
Goertzel reports that the Artificial Superintelligence Alliance (ASI) is expanding its decentralized OpenCog Hyperon infrastructure and building large GPU and CPU server farms in multiple countries.
SingularityNET, Fetch.ai and Ocean Protocol recently merged to form the ASI, so if you hold those projects' tokens, you need to swap them for the ASI token in the SingularityDAO dApp within two weeks.
Also Read: Creating ‘Good' AGI That Won't Kill Us All: Crypto's Artificial Superintelligence Alliance
Push to block AI fakes of actors and musicians
A bipartisan group of US senators is pushing for legislation that would bar individuals or companies from using AI to make unauthorized digital copies of the likenesses and voices of actors and musicians.
The details of the NO FAKES Act are still being hammered out with studios and labels. Senator Amy Klobuchar said: “There are going to be great innovations with AI, but we need to allow people to have their own voice and their own artwork.”
Oh well, robot dogs with machine guns
Chinese state television this week released footage of joint exercises between the People's Liberation Army and Cambodia's military. The exercises featured robot dogs equipped with machine guns, along with drones and other robots used for surveillance, target detection and attack operations.
The 30 kg machine-gun dogs are based on a Unitree Robotics model, although the startup has said it would not sell its products to the Chinese military.
PLA soldier Chen Wei told state TV that the dog “can launch a firepower strike after identifying the enemy” and has “become a new member of our team for urban assault and defense operations.”
Andrew Fenton
Based in Melbourne, Andrew Fenton is a journalist and editor covering cryptocurrency and blockchain. He has worked as a national entertainment writer for News Corp Australia and as a film journalist for SA Weekend and The Melbourne Weekly.
Follow the author @andrewfenton