Fake abductions using AI
In a shocking "cyber kidnapping" incident, a missing 17-year-old Chinese exchange student was found alive in a tent in the Utah wilderness this week. Scammers had extorted an $80,000 ransom from his parents after convincing the teenager to isolate himself in the desert.
While it is not yet known whether AI was employed in this particular incident, it shines a light on a growing wave of fake kidnappings, often targeting Chinese exchange students. The Riverdale Police Department said the scammers typically convince victims to isolate themselves by threatening to harm their families, then use fear tactics and fake photos and audio (sometimes staged, sometimes AI-generated) to extort money from the families of the "kidnapping victims."
Arizona mother Jennifer DeStefano testified to the US Senate last year about a scam in which deepfake AI technology was used to clone the voice of her 15-year-old daughter, Brianna. The scammers knew the teenager was away on a ski trip, and Jennifer received a call featuring a fake AI-generated Brianna sobbing: "Mom, these bad men have me, help me, help me."
The "kidnappers" then threatened to drug and harm Brianna unless a ransom was paid. Fortunately, before any money changed hands, another parent mentioned having heard about AI voice scams, and Jennifer was able to contact the real Brianna and confirm she was safe. The police weren't interested in her report, dismissing it as a "prank call."
Sander van der Linden, professor of psychology at the University of Cambridge, advises people to avoid posting travel plans online and to screen calls from unknown numbers whenever possible. If you have a lot of audio or video recordings of yourself online, you may want to consider taking them down.
A Robotics ‘ChatGPT Moment'?
Figure co-founder Brett Adcock tweeted in lowercase over the weekend: "we just had an ai breakthrough in our lab, robotics is about to have its chatgpt moment."
That's probably overstating it a bit. The breakthrough was revealed in a one-minute video of the company's robot, Figure 01, making coffee on its own after being trained on 10 hours of video of humans doing the task.
Making coffee isn't all that useful in itself (and certainly not everyone was impressed), but according to the video the robot learned from its mistakes and was able to correct itself. When Figure 01 misplaced the coffee pod, it was smart enough to nudge it into the slot. To date, AIs have been notoriously bad at correcting their own errors.
Adcock explained why this is such a big deal: if you can gather human demonstration data for an application (making coffee, folding laundry, warehouse work, etc.), you can train an AI system end-to-end on Figure 01. That gives a way to scale to each use case: as the fleet expands, more data is collected from the robots, the system is retrained, and the robots perform even better.
Another domestic-robot video this week features Google DeepMind and Stanford University's Mobile ALOHA, a robot that cooks you dinner and cleans up afterwards. Researchers say it took just 50 demonstrations for the robot to learn new tasks, including cooking shrimp and chicken dishes, calling an elevator, opening and closing a cabinet, wiping up spilled wine and pushing in chairs. Both the hardware and the machine-learning algorithms are open source, and the system costs $20,000 from Trossen Robotics.
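The learning-from-demonstration approach behind both robots can be sketched in miniature: collect observation-action pairs from human demonstrations, then fit a policy to them with ordinary supervised learning ("behavior cloning"). The toy example below uses a linear model and synthetic data; real systems like Mobile ALOHA train deep networks on camera images, and every name and number here is purely illustrative.

```python
import numpy as np

def collect_demos(n=50, obs_dim=4, rng=None):
    """Synthetic stand-in for 50 teleoperated demonstrations:
    each pairs an observation with the action the human expert took."""
    rng = np.random.default_rng(0) if rng is None else rng
    true_policy = rng.normal(size=(obs_dim,))  # the "expert" behavior
    obs = rng.normal(size=(n, obs_dim))
    actions = obs @ true_policy                # noise-free expert actions
    return obs, actions, true_policy

def behavior_clone(obs, actions):
    """Fit a policy by plain supervised learning (least squares):
    predict the expert's action from the observation."""
    weights, *_ = np.linalg.lstsq(obs, actions, rcond=None)
    return weights

obs, actions, true_policy = collect_demos()
policy = behavior_clone(obs, actions)

# With clean demonstrations, the cloned policy recovers the expert exactly.
print(np.allclose(policy, true_policy, atol=1e-6))
```

The point of the sketch is that "learning from 50 demonstrations" is, at its core, a supervised regression problem; the engineering difficulty is in the perception and control, not the learning objective.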
Full-scale AI plagiarism war
One of the more surprising findings of the past few months is that Ivy League universities in the US apparently take plagiarism more seriously than genocide. Exploiting this in a very public way, billionaire Bill Ackman is now proposing to use AI to conduct plagiarism witch hunts across universities worldwide.
Ackman's campaign to have Harvard President Claudine Gay fired for failing to condemn hypothetical calls for genocide was unsuccessful, but the campaign to have her fired over plagiarism worked. However, it then emerged that Ackman's wife, Neri Oxman, a former professor at the Massachusetts Institute of Technology, had included some plagiarized passages in her 300-page 2010 dissertation.
Ackman now wants to put the written work of every academic, administrator and board member at MIT through an AI plagiarism review, in response to the targeting of Oxman.
"Every faculty member knows that once their work is targeted by AI, they will be outed. No written work in academia can survive the power of AI searching for missing citations, inadequate paraphrasing and/or failure to credit the work of others."
Ackman threatened to do the same at Harvard, Yale, Princeton, Stanford, Penn and Dartmouth, and predicted that sooner or later every institution of higher learning in the world will have to conduct an AI review of its faculty's work to get ahead of any potential scandals.
Demonstrating why he's a billionaire and you're not, Ackman realized halfway through his 5,000-word screed that there's money to be made in launching a company that provides trusted, third-party AI plagiarism reviews, and said he'd be "interested in investing" in one.
Enter convicted fraudster Martin Shkreli. Known as "Pharma Bro" after buying the rights to the drug Daraprim and raising its price by 5,455%, Shkreli now runs a medical LLM called Dr. Gupta. "Yes, I could easily do this," he replied to Ackman, explaining that his AI had already been trained on the 36 million papers in the PubMed database.
Although online plagiarism detectors such as Turnitin already exist, there are doubts about their accuracy, and feeding in every paper from even a single institution and checking all the citations would still be an enormous task. AI agents, however, could perform such reviews systematically and affordably.
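As a rough illustration of the simplest check such an agent might run, the sketch below flags passages whose bag-of-words cosine similarity to a source text exceeds a threshold. Real detectors use semantic embeddings, citation databases and paraphrase detection; the texts and the 0.9 threshold here are invented purely for demonstration.

```python
from collections import Counter
import math

def cosine_similarity(a: str, b: str) -> float:
    """Cosine similarity between word-count vectors of two passages."""
    va, vb = Counter(a.lower().split()), Counter(b.lower().split())
    dot = sum(va[w] * vb[w] for w in va)
    norm = (math.sqrt(sum(c * c for c in va.values()))
            * math.sqrt(sum(c * c for c in vb.values())))
    return dot / norm if norm else 0.0

# Invented example passages for illustration only.
source = "the rapid growth of large language models has transformed research"
suspect = "the rapid growth of large language models has transformed research"
unrelated = "independent work on robotics hardware and open source datasets"

THRESHOLD = 0.9  # arbitrary cutoff for flagging a passage for human review
print(cosine_similarity(source, suspect) >= THRESHOLD)    # identical text is flagged
print(cosine_similarity(source, unrelated) >= THRESHOLD)  # unrelated text is not
```

An agent doing this at scale would pair such similarity scoring with retrieval over a corpus of prior publications, which is exactly the kind of grunt work that is cheap for AI and prohibitively expensive for humans.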
Even if the global witch hunt doesn't eventuate, it seems likely that any academic who has ever plagiarized anything will be found out in the next couple of years, whether during the job interview process or any time they take a political position on Twitter that someone else doesn't like.
AI will similarly lower the cost and resource barriers to other fishing expeditions: think of tax departments deploying AI agents to trawl blockchain transactions going back to 2014 for unreported crypto gains, or opposition researchers feeding in everything someone has tweeted since 2007 to look for inconsistencies or objectionable remarks. It's a brave new world of AI-powered scavenging.
Two perspectives on AI regulation
Professor Toby Walsh, chief scientist at the University of New South Wales AI Institute, says heavy-handed approaches to AI control are impractical. Attempts to limit access to AI hardware such as GPUs won't work, he argues, because LLM compute requirements are falling (see the iPhone item below). Trying to block the technology outright would be about as effective as the US government's failed efforts in the 1990s to limit access to encryption software.
Instead, he called for stronger enforcement of product liability laws to hold AI companies accountable for what their LLMs do, a sharper focus on competition through aggressive antitrust enforcement to shift power away from Big Tech monopolies, and greater government investment in AI research.
Meanwhile, venture capital firm Andreessen Horowitz leaned hard into the "competition is good for AI" theme in a letter to the UK House of Lords. Big AI companies and startups alike "should be allowed to build AI as fast and as strong as they can," it says, and open source AI should be allowed to "expand freely" to compete with both.
All killer, no filler AI news.
– OpenAI has published its response to The New York Times' copyright lawsuit. It argues that training on NYT articles is covered by fair use, that regurgitation of articles is a rare bug, and that the case is without merit, though it still hopes to reach a deal anyway.
– The new year started with controversy over why ChatGPT is happy to tell Jewish jokes and Christian jokes but point-blank refuses to tell Muslim jokes. Someone finally coaxed a "halal-arious" poem out of ChatGPT, showing why it's probably best not to ask ChatGPT to joke about anything.
– In a development even worse than fake kidnappings, AI robocall services have been released that can trap you in fake spam conversations for hours. Someone needs to develop an AI answering machine to screen these calls.
— A blogger with a "mixed" record claims to have insider information that a major Siri AI update will be announced at Apple's 2024 Worldwide Developers Conference. Siri would reportedly use Apple's Ajax LLM for more natural conversations and would connect to various external services.
— But who needs Siri AI when you can now download a $1.99 app from the App Store that runs the open-source Mistral 7B 0.2 LLM natively on your iPhone?
– Of the 5,000 submissions to the Australian Senate inquiry into legalizing cannabis, 170 were confirmed to have been created by AI.
— More than half (56%) of the 800 CEOs surveyed believe AI will completely or partially replace their role. Most believe that more than half of entry-level knowledge worker jobs will be replaced by AI, and that nearly half of current workforce skills will no longer be relevant by 2025.
Andrew Fenton
Based in Melbourne, Andrew Fenton is a journalist and editor covering cryptocurrency and blockchain. He has worked as a national entertainment writer for News Corp Australia, a film journalist for SA Weekend and at The Melbourne Weekly.
Follow the author @andrewfenton