America wants to flood the skies of Taiwan with an army of robots
The US military plans to counter a Chinese invasion of Taiwan by flooding the narrow Taiwan Strait with swarms of thousands of drones.
“I want to turn the Taiwan Strait into an unmanned hellscape so that I can make their lives absolutely miserable for a month,” US Indo-Pacific Command chief Admiral Samuel Paparo told The Washington Post.
The drones are intended to confuse enemy aircraft, provide targeting for missiles, destroy warships and generally create chaos. Ukraine pioneered the use of drones in combat, destroying 26 Russian ships and forcing the Black Sea Fleet to retreat.
Ironically, most of the components in Ukraine's drones come from China, and there are doubts whether the US can produce enough drones.
To this end, the Pentagon has allocated $1 billion this year to the Replicator initiative to mass produce kamikaze drones. Taiwan also plans to buy about 1,000 more AI-enabled drones next year, the Taipei Times reported. The future of war has arrived.
AI agents crypto payments network
Skyfire has launched a payments network that lets AI agents transact autonomously. Agents get a pre-funded crypto account with safeguards that stop spending from exceeding set limits (so the humans funding them don't get burned), and the agents can't access their owners' actual bank accounts.
Co-founder Craig DeWitt told TechCrunch that AI agents are “purely aspirational” without the ability to pay for anything. “Either agents get a way to actually do things, or they can't do anything, and therefore they're not agents,” he said.
Global auto parts maker Denso is already using Skyfire for its own AI agents to source materials, while a Fiverr-like platform uses Skyfire to let AI agents pay for tasks performed on behalf of their human owners.
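How such guardrails might work is easy to picture in code. Below is a minimal, hypothetical sketch of a pre-funded agent wallet with a hard spending cap, written only to illustrate the safeguards described above; the class and method names are invented and are not Skyfire's actual API.

```python
# Illustrative sketch only: a pre-funded agent wallet with a hard spending cap,
# loosely modelled on the safeguards described above. The names here are
# hypothetical and are not Skyfire's actual API.
from dataclasses import dataclass, field


class SpendingLimitExceeded(Exception):
    """Raised when a payment would push the agent past its configured cap."""


@dataclass
class AgentWallet:
    balance_usd: float          # pre-funded amount the agent can draw on
    spend_cap_usd: float        # hard limit set by the human owner
    spent_usd: float = 0.0
    ledger: list = field(default_factory=list)

    def pay(self, recipient: str, amount_usd: float) -> None:
        # Block any payment that would exceed either the cap or the balance,
        # so the agent can never overspend the funds it was given.
        if self.spent_usd + amount_usd > self.spend_cap_usd:
            raise SpendingLimitExceeded(f"cap of ${self.spend_cap_usd} reached")
        if amount_usd > self.balance_usd:
            raise SpendingLimitExceeded("insufficient pre-funded balance")
        self.balance_usd -= amount_usd
        self.spent_usd += amount_usd
        self.ledger.append((recipient, amount_usd))


# Example: an agent sourcing parts pays a supplier until its cap is hit.
wallet = AgentWallet(balance_usd=500.0, spend_cap_usd=200.0)
wallet.pay("parts-supplier", 150.0)      # succeeds
try:
    wallet.pay("parts-supplier", 100.0)  # would exceed the $200 cap
except SpendingLimitExceeded as err:
    print("Payment blocked:", err)
```

The key design point, as described, is that the agent only ever touches a capped, pre-funded pool rather than its owner's real bank account.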
LLMs are too dumb to destroy humanity
A study by the University of Bath in the United Kingdom found that large language models do not pose any existential threat to humans because they cannot learn or acquire new skills independently.
The researchers argue that LLMs like ChatGPT remain safe and controllable even as they become more sophisticated and are trained on ever-larger data sets.
The study examined LLMs' ability to complete novel tasks and determined that they were unlikely to acquire complex reasoning skills on their own.
Dr. Harish Tayyar Madabushi said the fear that “a model will go off and do something completely unexpected, innovative and potentially dangerous” is not justified.
AI x crypto tokens tank
AI-themed coins including Bittensor, Render Network, Near Protocol and Internet Computer have fallen more than 50% from their highs this year, and the much-touted merger of Fetch.ai, SingularityNET and Ocean Protocol into the Artificial Superintelligence Alliance didn't do much for the price.
FET (henceforth known as ASI) reached a high of $3.26 and is now down to 87 cents. According to Kaiko, weekly trading volume for the sector fell to $2 billion in early August.
The bubble amounts to only a small percentage of what is spent annually on robotics and artificial intelligence in the real-world market.
Raygun video shows the limitations of AI
Australia finished in the top four on the Olympic medal tally, but the only Australian athlete the world remembers is viral sensation Raygun. A viral text-to-video clip of Raygun's breakdancing is almost as odd as her actual routine and is either funny or disturbing, depending on your POV.
Text-to-video has worked well so far. However, the models clearly do not simulate the world. Calculating physical laws does not work; the architecture is not designed for this. So we still need a few more breakthroughs. pic.twitter.com/MrqVXcHgkZ
— Chubby♨️ (@kimmonismus) August 21, 2024
ChatGPT is a terrible doctor, but specialized AIs are great
A new study in the science journal PLOS One found that ChatGPT is a pretty terrible doctor. The LLM was just 49% accurate when diagnosing conditions from 150 case studies on Medscape. (That's why ChatGPT doesn't want to give you medical advice unless you mislead it, for example by saying you're doing academic research, as the study authors did.)
However, specialized medical AIs such as the Articulate Medical Intelligence Explorer (AMIE) are significantly better. Earlier this year, a study published by Google found that AMIE outperformed human doctors in an analysis of 303 cases from the New England Journal of Medicine.
In another new study, researchers from Middle Technical University and the University of South Australia found that a specially trained computer algorithm could diagnose conditions including diabetes, stroke, anemia, asthma, and liver and gallbladder disease with 98% accuracy based on tongue color.
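For illustration, here is a toy sketch of the general approach of training a classifier on tongue-color features. The data, labels and model below are entirely synthetic placeholders, not the researchers' actual method or dataset, so the printed accuracy will hover around chance rather than the reported 98%.

```python
# Toy sketch of the general idea only (classifying conditions from tongue
# colour features); this is not the researchers' actual model, data or code.
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import train_test_split
from sklearn.metrics import accuracy_score

rng = np.random.default_rng(0)

# Synthetic stand-in data: mean RGB values of tongue images, with made-up
# class labels such as "healthy", "diabetes" and "anemia".
X = rng.uniform(0, 255, size=(600, 3))   # [R, G, B] per image
y = rng.integers(0, 3, size=600)         # 0=healthy, 1=diabetes, 2=anemia

X_train, X_test, y_train, y_test = train_test_split(
    X, y, test_size=0.2, random_state=0
)

clf = RandomForestClassifier(n_estimators=200, random_state=0)
clf.fit(X_train, y_train)

# With real labelled tongue images the reported accuracy was ~98%;
# on random synthetic data it will sit near chance.
print("accuracy:", accuracy_score(y_test, clf.predict(X_test)))
```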
AI is being used for comedy
“Why did the politician bring a ladder to the debate? To make sure he reaches new heights with his promises!” AI comes up with laugh-out-loud humor like this, which is why it's surprising that some comedians are using AI to help create their shows.
Comedian Anesti Danelis used it as the basis for the bad jokes in his recent show Artificially Intelligent, and said AI tools also helped structure the material.
“I learned in the process that human innovation cannot be replicated or replaced, and in the end about 20% of the show was pure AI, and the other 80% was hybrid,” he told the BBC.
American comedian Viv Ford used AI to help write her Edinburgh Festival show, Kid on the Blockchain. She said:
“I'll say, ‘Hey, is this joke funny?' And if it says it's ‘funny,' it really doesn't sit well with the audience.”
According to a study from the University of Southern California, ChatGPT writes slightly funnier jokes than the average person, and a second study compared AI gags to headlines from The Onion and found them just as funny.
Trump's Deepfake AI Pictures Are Just Memes: WaPo
Bitcoin stan Donald Trump reposted AI-generated “Swifties for Trump” images and a fake picture of Kamala Harris addressing a communist convention, with media outlets from The New York Times to Al Jazeera sounding the alarm over the threat of AI deepfakes in politics. That's a real concern, of course, but Will Oremus of The Washington Post argues that in this case the images aren't designed to trick anyone.
“Instead, the images seem to function as memes, meant to provoke and entertain. They are a visual parallel to the nasty nicknames Trump calls his opponents,” he wrote this week.
“The intended audience doesn't care whether they're literally true. The fake images feel real in some way, or at least it's fun to imagine they could be. And if the other side gets righteously offended, the joke's on them.”
Andrew Fenton
Based in Melbourne, Andrew Fenton is a journalist and editor covering cryptocurrency and blockchain. He has worked as a national entertainment writer for News Corp Australia, as a film journalist for SA Weekend and at The Melbourne Weekly.
Follow the author @andrewfenton