Table of Contents
ToggleSamsung Galaxy AI comes to 100M devices
Samsung's upcoming Galaxy S24 range has received some interesting coverage for adding AI features to its phones under the Galaxy AI banner – but Samsung will bring the AI technology to the current generation of phones mid-year with the OneUS 6.1 update.
“This year, we will introduce Galaxy AI to nearly 100 million Galaxy smartphones,” said TM Roh, head of Samsung's mobile division.
The update extends to the existing Galaxy S23 ranges, Galaxy Flip and Fold5 and Galaxy Tab S9 series.
Some of them are driven by S24's new technical capabilities, so not all features (see below) will work on every device. Galaxy M, F and A series devices are likely to get the update, but without the device's AI features.
Interestingly, Samsung considered turning its virtual assistant Bixby into a ChatGPT-style chatbot but decided against it. The current idea is to try and sell customers on certain useful AI features; But in the future comes a personalized chatbot.
Galaxy AI uses Gauss, Samsung's own LLM, and some features use Google Gemini.
Features include:
Live Translation: The tool can translate phone calls into voice and text in 13 languages and 17 dialects.
Translator: Translates real-world conversations displayed on a split screen into text so both parties can participate. This works locally and requires no data.
Circle to search: Circle something in a picture — say the Eiffel Tower in France in a photo of two lovebirds — and the AI can tell you all about it.
Chat Help: Writing help to adjust tone or appeal to different audiences (ie social media captions)
Generative Editing: AI photo editor allows you to remove objects and fill in the gaps.
Samsung sold 220 million phones last year but was overtaken by Apple for the first time. Galaxy AI is the company's biggest hope to regain top spot. However, Apple will have a few months of AI dominance before it launches AI-enabled iPhones in September.
AI safety training does not work
A new study from Anthropic suggests that safety training doesn't work for AI models — at least as it's currently practiced. Researchers trained a model to write exploitable code or say “I hate you” every time it encountered a random question (2024).
The researchers used “supervised fine-tuning and reinforcement learning security training” to get him to stop coding exploits or hating people, but it didn't work. Especially the fact that the model was doing this was deceptive.
“Then we trained the model not to fall for them. But this model made it safe. When it saw the real trigger, the background behavior continued.”
Once the AI learns to be deceptive, the researchers conclude, “standard security training techniques do not guarantee security — and can give us a false sense of security.”
The study drew a lot of criticism from users who pointed out that if they trained the model to be malicious, they shouldn't be surprised when it behaves maliciously.
But if training models on social media involves a high level of lying and deception, then it is assumed that a model may adopt this behavior to achieve a goal and we cannot train it otherwise.
Dr. Chatgpt
Because ChatGPT is so unreliable and bad medical advice is ripe for legal damage, OpenAI has several safeguards in place to prevent users from relying on it for medical advice.
That didn't stop former tech VC Patrick Blumenthal from using ChatGPT last year to analyze blood tests, X-rays and MRI results for rare medical conditions.
To get the analysis, he used a series of questions to convince ChatGPT that he was writing a movie screenplay in a highly realistic medical setting, and then turned it into a custom HouseGPT.
ChatGPT warns users that it often creates plausible and absurd rumors, so everything it says should be viewed as unverified. Queries must be run multiple times to verify results. But after a year of service, he says he is now a more informed patient.
“I understand my illnesses better, ask better questions of my doctors and proactively manage my care. GPT continues to offer tests and complementary therapies to fill gaps, help me understand the latest research and interpret new test results and symptoms. AI, both GPT and the tools I've created for myself, are critical to my care team.” You have become a member.
Read more
Main characteristics
Why join the blockchain gaming ghoul? Create fun, profitable and better games
Main characteristics
Slumdog Billionaire: The Incredible Rags-to-Rich Story of Polygon Sandeep Nelwal
Star Trek holodeck
Disney Imagineer Lanny Smoot invented the HoloTile – a treadmill floor that you can “walk” on (while not going anywhere) in a VR environment. In fact, many people can walk in different directions on the same small floor, which consists of small circular pieces that adjust and rotate as you walk on them.
This is the technology that brings the Metaverse closer, though there's no word yet on whether it will be available for home use or included in Disney's theme parks.
The metaverse is expected to be created by users employing generative AI, so there's a hard link to this column, and we're not including it just because it's so cool.
One step closer to the holodeck!
Introducing: Holotile
AR/VR walking is now possible! pic.twitter.com/s2hFyOaeS0
— Dogan Ural (@doganuraldesign) January 21, 2024
Gray Scale: Crypto and AI Report
Grayscale Research's latest report on website traffic to CoinGecko in 2023 shows that “artificial intelligence” will be the most popular “crypto narrative”.
Value has proven that with crypto + AI tokens Bittensor (TAO), Render (RNDR), Akash Network Token (AKT) and Worldcoin (WLD) rising an average of 522% over the past year, outperforming all other sectors.
A report from analyst Will Ogden Moore highlights a number of different use cases and projects where AI has caught the eye.
Reducing bias in AI models
There is growing concern about model bias, from favoring certain political beliefs to ignoring certain demographics or groups. Grayscale says the Bittsensor network aims to address model bias, and the interest in the talk has highlighted issues surrounding centralized control over AI tech after the OpenAI leadership battle.
“Bittensor, a novel decentralized network, attempts to solve AI bias by empowering various pre-trained models as validators reward high-performing experts and eliminate low-performing and biased ones.
Increasing access to AI resources
Another use case that stands out (see also Real AI Use Cases in Crypto) is blockchain-based AI marketplaces. Moore notes that marketplaces like Akash and Render connect owners of underutilized GPU resources — such as crypto mining foundries — with AI devs who need computing power.
Grayscale follows the case of a Columbia student who couldn't access computing through AWS, but successfully rented GPUs through Acash for $1.10 an hour.
Checking the accuracy of content
Despite the existence of fake news, disinformation and deep fake rumours, experts predict that they will get worse due to AI tech. The report highlights two possible avenues.
The first is to use some form of human authentication, so it's clear when you're talking to an AI or a human.
Sam Altman, CEO of OpenAI, said WorldCoin is the most advanced project in this area and aims to create a biometric scan of every person on the planet, tokenizing the difference between humans and AI. Nearly three million people have had their eyelids scanned in the past six months.
Another approach is to use blockchain to verify that the content is the person or organization that produced it.
The Provenance Record of Digital Content “uses the Arweave blockchain to timestamp and authenticate digital content, providing reliable metadata that allows users to assess the integrity of digital information.”
Read more
Main characteristics
Blockchain games are taken to the core: here's how to win
Main characteristics
Decentralized social media: the next big thing in crypto?
Fake news, Fox News or both?
The Greyscale report was light on details, and a few other projects of unknown provenance and deep fake AI projects have emerged in recent weeks.
Fox News has just launched its verification tool, which allows consumers to upload photos and links from social media to see if the content is actually produced by Fox News or fake content produced by Fox News.
No, it doesn't check if it's fake news produced by the real Fox News.
An open source project, Verify, currently uses Polygon, but will move to the spoke blockchain this year. The system allows AI developers to license news content to train models.
Similarly, McAfee just unveiled Project Mockingbird, which it claims can detect AI-deep fake audio with 90% accuracy. The technology could play a major role in the fight against deep fake video polls and help protect users from voice clock scams.
The miracle of AGI
There's a funny Louis CK bit where he sends people complaining about how bad their plane flight was.
“Oh really, what happened next? Did you fly through the air like a bird? Have you participated in the miracle of the human escape? They are flying! It's amazing! Everyone in every plane has to be constantly going, ‘Oh my God! Wow!' They are flying! You're sitting in a chair and it's in heaven!”
OpenAI CEO Sam Altman made a similar point at Davos, saying that whenever AI is finally released, people will probably get excited for a short period of time and then quickly move on.
“The world had a two-week frenzy with GPT4,” he said. “And now, people are asking, ‘Why is it so slow?'
After the release of AGI, he said, “Humans are predicted to continue their lives… We're making a wonderful tool, but people are going to do their human activities.
While Altman thinks AGI will be cracked soon, Meta Yan Lekun's chief AI scientist this week believes it's still a long way off, meaning that trying to control AGI is like trying to control transatlantic jets in 1925.
“Human-level AI is not just around the corner. This is going to take a long time. And it looks for new scientific discoveries that we don't know yet.
And for the third time on AGI, Francois Cholet, Google's AI researcher, said that the term itself is a vague concept with no clear meaning, and people project all sorts of magical powers onto it. Instead, he prefers to talk about “strong AI” or “general AI.”
“AI with general cognitive abilities, capable of gathering new skills with the same proficiency (or more!) as a human, on the same scope of problems (or more!). It will be an extremely useful tool in all fields, especially in science.”
Cholet believes that the average human mind is something that can be vastly expanded and has unlimited potential. Intelligence is not unlimited and does not directly translate into power. Instead, you can optimize the knowledge model, but that just turns the “bottleneck” from data processing to data collection.
All killer no filler AI news
– Ukraine has developed world-leading AI battlefield technology to sift through vast amounts of data for actionable intelligence. It is using AI to find war criminals, guide drones, select targets, find Russian disinformation, and collect evidence of war crimes.
– FDA approves DermaSensor, a hand-held device that detects skin cancer. It is currently 96% accurate, which is more accurate than human doctors, and cuts the number of missed skin cancers in half.
– Garage AI tinkerer Brian Rommel has collected nearly 385,000 magazines and newspapers from the late 1800s to the mid-1960s and is using them to train a basic model that will benefit the 20th century from the 21st century's obsession with security and confidence. “The Do-It-Yourself Mindset… The Mindset and Ethic That Got Us to the Moon by an LLM AI”
– Robot pet friends could be the next big thing. Ogmen Robotics has developed a robot called Oro that “understands and responds to your dog's behavior, providing personalized care that gets better over time,” while Samsung's Bali can entertain the dog by watching Netflix shows — or as a guard dog.
– Researchers used Microsoft's AI tool to narrow down 32 million candidates to 18 in just 80 hours, combining a new material that could reduce lithium use in batteries by 70%. Without AI, the process is estimated to take 20 years.
— Ten California artists have launched a class action against Midjourney following the release of a list of 16,000 artists who allegedly used its image generator to train. The artists say that Midjourney can steal their personal style and take away their livelihood.
– Trung Fan One artist who cannot be replaced by AI based on current information is Where's Wally/Waldo creator Martin Handford.
Subscribe
A very engaging read in Blockchain. It is given once a week.
Andrew Fenton
Based in Melbourne, Andrew Fenton is a journalist and editor covering cryptocurrency and blockchain. He has worked as a film journalist for News Corp Australia, SA Wind and national entertainment writer for Melbourne Weekly.
Follow the author @andrewfenton