AI hit by wave of ‘disillusionment,’ but the AI bubble isn’t over yet: AI Eye
4 months ago Benito Santiago
It's fair to say that the initial hype around AI has faded, and people are starting to ask: What has AI done for me lately?
Most of the time, the answer is not much. A study in the Journal of Hospitality Marketing and Management found that products described as using AI were consistently less popular. The effect is even more pronounced for high-risk purchases such as expensive electronics or medical devices, suggesting that consumers doubt the reliability of current AI tech when the stakes are high.
“When AI is mentioned, it tends to lower emotional trust, which in turn decreases purchase intentions,” said lead author Mesut Cicek, a marketing professor at Washington State University.
Workplaces are similarly finding that AI isn't living up to its full potential yet. A study by the Upwork Research Institute found that 77% of employees who use AI say the tools have reduced their productivity and added to their workload in at least one way.
And that's among the businesses actually using it: According to the US Census Bureau, only around 5% of businesses have used AI in the past two weeks.
Rune Christensen's grand Endgame plan for MakerDAO (covered in Magazine earlier this year) aimed to make the protocol autonomous by handing much of its coordination over to AI. But only one of the four new sub-DAOs planned to launch this year, SparkDAO, will go ahead, because AI governance isn't yet up to the workload.
“AI is often very good, but it has many hidden bugs and issues that make it unreliable,” Christensen told DL News.
This is the problem in a nutshell. AI may be correct 97% of the time, but that's not reliable enough for most critical tasks. You wouldn't board a plane that lands successfully 97% of the time, and you won't bet mission-critical business processes on those odds either.
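There's a compounding effect at work here, too: a 97% per-step success rate shrinks fast once steps are chained together. A back-of-envelope sketch (the 97% figure is from above; the step counts are hypothetical):

```python
# Hypothetical illustration: per-step reliability compounds across a workflow.
per_step = 0.97  # assume each step succeeds 97% of the time

for steps in (1, 5, 20, 50):
    overall = per_step ** steps
    print(f"{steps:2d} steps -> {overall:.1%} end-to-end success")
```

With these assumed numbers, a 20-step workflow succeeds end to end only a little over half the time, which is why "usually right" doesn't translate into "dependable."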
So AI may have entered the Gartner Hype Cycle* “trough of disillusionment,” where interest wanes as experiments and implementations fail to live up to the hype.
(*Research suggests that most new technologies don't actually follow the Hype Cycle. Think of it as a metaphor or a vibe.)
Human translators and AI reliability issues in critical fields
A good example of how small error rates can undermine a technology is the translation industry, where human translators were thought to be going the way of the dodo.
Although computerized translation has been around for years and AI has improved it significantly, the US Bureau of Labor Statistics projects an 11% increase in the number of people employed as interpreters and translators between 2020 and 2030.
Daron Acemoglu, an economist at the Massachusetts Institute of Technology, told NPR that translation is “one of the best test cases” for AI's ability to replace human workers, but the technology is “not that reliable.”
And 100% reliability is crucial when translating legal or medical texts and many other fields.
“If you're a translator for the military and you're talking to an enemy combatant, I don't think you want to rely entirely on a computer,” said Duolingo CEO Luis von Ahn. “Computers still make mistakes.”
Current strengths of AI: Cost-effective solutions for simple tasks
While 97% accuracy isn't good enough for life-or-death tasks, it's good enough for a whole range of applications. Social media companies couldn't exist at their scale without AI content moderation and ad recommendations, and there it's close enough, given the job would be impossible for humans alone.
A new study from Model Evaluation and Threat Research (METR) finds that while AI has many limitations, it can replace human workers for a range of less complex tasks that generally take humans 30 minutes or less to complete.
AI gets dramatically worse the more complex the task: it can complete 60% of the tasks tested that took humans less than 15 minutes, but only 10% of the tasks that took four hours.
But when it works, it costs only around 1/30th as much as hiring a human.
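Those two figures, roughly 60% success on shorter tasks and roughly 1/30th the cost, can be combined into a simple expected-cost estimate. A minimal sketch, using a hypothetical $100 human cost (not a number from the METR study) and assuming a human simply redoes any task the AI fails:

```python
# Hypothetical back-of-envelope numbers, not METR's figures:
human_cost = 100.0          # assumed cost for a human to do the task once
ai_cost = human_cost / 30   # the ~1/30th figure cited above
ai_success = 0.60           # ~60% success on short tasks

# Strategy: try the AI first, pay a human only when the AI fails.
expected_cost = ai_cost + (1 - ai_success) * human_cost
print(f"expected cost per completed task: ${expected_cost:.2f} vs ${human_cost:.2f}")
```

Under these assumptions the AI-first strategy costs well under half as much per completed task, even though the AI fails 40% of the time, which helps explain why "unreliable but cheap" still finds buyers for low-stakes work.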
So part of the problem is that we don't yet know how to best use this new technology. It's reminiscent of the early days of the internet, when there were lots of hobbyist websites and emails, but the big revenue from online shopping was still years away.
Jack Clark, a co-founder of Anthropic, argues that even if we stopped AI development altogether tomorrow, there would be years or even decades of further improvements to be found in applications, integrations and better ways of using existing models.
Ben Goertzel, founder of SingularityNET, made a similar point this week when he argued that behind the scenes, businesses are developing all kinds of new use cases.
Revenue won't be driven by chatbot subscriptions, he said, but is “more centrally about integrating AI into other valuable software and hardware applications… that's happening now, all over the place.”
So is the AI bubble bursting?
People have been waiting for the AI bubble to burst since ChatGPT was first released, and those calls have increased dramatically in the wake of the global stock market crash this week.
The AI-heavy Magnificent Seven stocks (Microsoft, Amazon, Meta, Apple, Alphabet, Nvidia and Tesla) shed $650 billion in value on Monday, bringing total losses to $1.3 trillion over three trading sessions. That prompted The Guardian, CNN, The Atlantic, the Financial Times and Bloomberg to talk about the possible end of the bubble.
But the primary cause of the rout appears to be the unwinding of the yen carry trade, which forced further selling, with fears of a US recession a secondary factor.
Of course, some air has come out of the AI bubble, mainly due to concerns that revenue won't justify the cost. In late June, a Goldman Sachs report, “Gen AI: Too Much Spend, Too Little Benefit?”, set the tone by arguing that current AI technology can't solve the sort of complex problems that would justify the projected $1 trillion in capital spending in the coming years.
Morgan Stanley analyst Keith Weiss said on Microsoft's earnings call: “Right now, there's an industry debate around generative AI [capital expenditure] requirements and where monetization is related to that.”
The idea is given credence by reporting from The Information, which suggests that even though OpenAI may make up to $4.5 billion in revenue this year, it is still on track to post a $5 billion loss.
But even if OpenAI can't raise enough money to survive (AI critic Ed Zitron suggests “it needs to raise more money than anyone else has raised”), the technology isn't going away; Microsoft would be the obvious candidate to absorb it. Character.ai is reportedly considering a similar deal with Google.
Any revenue shortfall affecting AI startups could benefit big companies with deep pockets.
Google, Microsoft and Meta are spending big on AI
Meanwhile, Google, Microsoft and Meta all remain bullish on AI. Meta is spending up to $40 billion in capex this year, Microsoft is spending $56 billion (and expects to spend more in 2025), and Google is spending at least $48 billion. If they start paring back those figures, it may be a sign the bubble really is bursting.
The big players see it as a long game. Microsoft Chief Financial Officer Amy Hood expects the data center investments behind AI to pay off “over the next 15 years and beyond,” while Meta Chief Financial Officer Susan Li advises that returns from generative AI will take even longer to arrive.
Earnings reports indicate that large companies are doing well so far.
Alphabet posted 14 percent revenue growth and 29 percent growth in its AI-powered Google Cloud business. Microsoft reported 15 percent revenue growth, with its Azure cloud business up 29 percent. (To be fair to AI critics, cloud revenues are at risk if the bubble bursts.) Meta, meanwhile, reported 22 percent revenue growth, and its AI recommendation systems are getting better at deciding which ads to show to whom.
“This allows us to drive revenue growth and conversions without increasing the number of ads or, in some cases, even while reducing ad load,” Li said on Meta's earnings call.
So while the AI stock market bubble may be deflating, the likely winners are laying the groundwork for their success. As Google CEO Sundar Pichai put it:
“The risk of under-investing is dramatically greater than the risk of over-investing for us here.”
Wall Street: The stock market correction is an AI buying opportunity
Many Wall Street firms see the current correction as a buying opportunity. On Monday, BlackRock Investment Institute issued a research note saying it was not concerned about the sell-off or the threat of a recession:
Guided by the mega force of AI, we stay overweight US stocks and look for buying opportunities in the sell-off.
Evercore ISI's Julian Emanuel likens the current correction to the ups and downs of the 1994-1999 dotcom boom. “In a world where the global workforce is rapidly aging and where efficiency is critical to driving productivity improvements, the rationale for AI is greater than ever,” Emanuel wrote on Monday.
“We see the current ‘AI Air Pocket' as an opportunity for long-term global exposure.”
Goldman Sachs US equity strategist David Kostin noted that big tech stocks have fallen sharply (about 13 percent since July 10), but “valuations continue to reflect AI optimism,” he told clients this week.
Otter.ai the snitch
Two Yale researchers sparked a firestorm of controversy by mocking and disparaging the leader of a Harlem community group who holds different views on supervised drug-use centers.
After Harlem community leader Shawn Hill left the recorded interview call for their research project, one of the researchers said they should “try to get some more interviews” and suggested it would be better for their research if they could find an interviewee who would give them “enough rope to hang” himself.
Unfortunately for the researchers, Otter.ai's transcription service was still recording and sent the transcript to all participants. Hill went public with the comments, sparking controversy over the validity of the pair's research. The pair apologized, and the Yale School of Medicine is reviewing the incident.
LinkedIn: AI recruiters, AI candidates, AI replies
Recruiters are starting to use AI for all stages of the hiring process, from candidate sourcing to resume screening. On the other side, the Institute of Student Employers says that more than 90% of graduates are now using AI for their job applications.
That's up from research by Canva and Sago earlier this year, which found that 45% of job seekers were using the technology. Most hiring managers (90%) are positive about candidates using AI to assist in the process, although nearly half (45%) believe it should be used “sparingly.”
In related news, Nilan Saha, whom AI Eye interviewed about his AI reply service for LinkedIn (it generates anodyne but positive replies so you don't have to), has been kicked off the platform.
“LinkedIn sent a termination notice forcing us to quit. I emailed all my clients to let them know the latest happenings,” he said. “Obviously, there have been a lot of people who have used the feature, and they're all freaking out.”
You can still use his Magic Reply service on X, which apparently has no problem with bots.
AI-generated song charts in Germany
A song composed with AI, “Verknallt in einen Talahon,” reached number 72 on the German music charts. Reportedly created with the text-to-music generator Udio, it's a jaunty pop song with a mid-1970s, dad-pop feel. The lyrics poke fun at “Talahons,” a TikTok term for teenagers, often of Arab origin, who affect a petty-criminal subculture.
The song is charting even better on Spotify, where it peaked at number 27.
Facebook pays for AI-slop images
A strange cottage industry has sprung up in India, Vietnam and the Philippines creating bizarre AI-generated images such as Shrimp Jesus, tear-jerking images of starving people, and impossibly perfect dream homes.
According to 404 Media, the industry consists of relatively poor people in developing countries who teach each other how to use Bing's image generator to churn out dozens of AI images a day across multiple accounts, earning creators $400 to $1,000 a month through Facebook's creator bonus program.
One person who created an image of a train made of leaves earned $431 from the engagement on that image alone. “Some people don't make that much money in a month,” he said in a YouTube video.
Some of the most viral weird images are created by accident. Prompts are passed around in Telegram groups and badly translated into English before being fed to the image generator, resulting in some very strange images.
According to the report, Facebook seems happy to pay people to generate AI slop as long as it boosts engagement metrics. Most of the images carry no disclaimer that they are AI-generated.
Andrew Fenton
Based in Melbourne, Andrew Fenton is a journalist and editor covering cryptocurrency and blockchain. He has worked as a film journalist for News Corp Australia and SA Weekend, and as national entertainment writer for Melbourne Weekly.
Follow the author @andrewfenton