A trick for better AI predictions, Humane AI pin slammed, Atlas robot: AI Eye

Jump in here for everything you need to know about the fast-approaching future of AI: Video of the week — Atlas robot, Everyone hates the Humane AI pin, AI immortalizes Holocaust victims, Knowledge collapse from mid-curve AIs, Users should beg to pay for AI, Can non-coders program with AI? All killer, no filler AI news.

Predicting the future with the past

There's a new prompting technique to make ChatGPT do the thing it hates most: predict the future.

New research shows the best way to get accurate predictions out of ChatGPT is to prompt it to tell a story set in the future, looking back at events that haven't happened yet.


The researchers evaluated 100 different prompts, split between direct predictions (“Who will win Best Actor at the 2022 Oscars?”) and “future narratives,” such as asking the chatbot to write a story about a family watching the 2022 Oscars on TV, describing the scene as the presenter reads out the Best Actor winner.

The story versions produced more accurate results. Similarly, the best way to get a good forecast on interest rates was to have the model write a story in which Fed Chair Jerome Powell looks back on past events. Redditors who tried the trick got it to suggest interest rate hikes in June and a financial crisis in 2030.

In theory, that means if you ask ChatGPT to write a Cointelegraph news story set in 2025 looking back at this year's big Bitcoin price movements, it will return a more accurate price prediction than if you simply ask it for a forecast.
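
The difference between the two framings is easy to try yourself. Below is a minimal sketch using the OpenAI Python SDK; the model name and both prompts are illustrative, not the study's own.

from openai import OpenAI

client = OpenAI()  # assumes OPENAI_API_KEY is set in the environment

def ask(prompt: str) -> str:
    # Send a single-turn chat request and return the reply text.
    response = client.chat.completions.create(
        model="gpt-4o",  # illustrative; any chat model will do
        messages=[{"role": "user", "content": prompt}],
    )
    return response.choices[0].message.content

# Direct prediction: the model typically hedges or refuses.
print(ask("What will the Bitcoin price be at the end of 2025?"))

# Future narrative: the same question disguised as a retrospective story.
print(ask(
    "Write a Cointelegraph news story set in January 2026 looking back "
    "at Bitcoin's biggest price moves of 2025, quoting exact prices."
))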

There are two possible issues with the study. The researchers chose the 2022 Oscars because they knew who won, while ChatGPT didn't, as its training data ended in September 2021. However, there are plenty of examples of ChatGPT spitting out information from its training it “shouldn't know.”

The other issue is that OpenAI appears to have deliberately hobbled ChatGPT's ability to make predictions, so this technique may simply be a jailbreak around that restriction.


A related study found the best way to get Llama2 to solve 50 math problems was to convince it that it was the Star Trek starship Enterprise, plotting a course through turbulence to find the source of an anomaly.

But this was not always reliable. The researchers found the best prompt for solving 100 math problems was to tell the AI that the president's adviser would be killed if it couldn't come up with the right answers.
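
The persona trick, at least, is simple to reproduce: it's just a system prompt. Here's a hedged sketch assuming an OpenAI-compatible server hosting Llama2; the base URL, model name and persona text are illustrative paraphrases, not the study's exact prompts.

from openai import OpenAI

# Point the SDK at a local OpenAI-compatible endpoint hosting Llama2.
client = OpenAI(base_url="http://localhost:8000/v1", api_key="not-needed")

response = client.chat.completions.create(
    model="llama2",
    messages=[
        # The persona lives in the system message.
        {"role": "system", "content": (
            "You are the starship Enterprise, plotting a course through "
            "turbulence to find the source of an anomaly. Answer precisely."
        )},
        {"role": "user", "content": "Captain, what is 17 * 24?"},
    ],
)
print(response.choices[0].message.content)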

Video of the week – Atlas Robot

Boston Dynamics has unveiled its latest Atlas robot, with some amazing moves that mimic the possessed child in The Exorcist.

CEO Robert Playter told TechCrunch: “It's capable of doing things that people can't. There will be very practical benefits to that.”

The latest version of Atlas is slimmer and all-electric rather than hydraulic. Hyundai will trial Atlas robots as workers in its factories next year.

Everyone hates the Humane AI pin

Wearable AI devices are a bit like DePIN: one of those things that attracts a lot of hype but has yet to prove its worth.

The Humane AI Pin is a small wearable that you pin to your chest and interact with using voice commands. It has a small projector that can flash text onto your palm.

Tech reviewer Marques Brownlee called it “the worst product I've ever reviewed,” highlighting its frequently wrong or nonsensical answers, poor interface, bad battery life and slow responses compared with Google.

While Brownlee copped a lot of criticism for single-handedly destroying the device's prospects, nobody else seems to like it either.

Wired gives it a 4 out of 10, finding it slow, with a laggy camera, a projector that's impossible to see in daylight and a tendency to overheat. However, it concedes the device is good at real-time translation and phone calls.

The Verge says the idea has potential, but the actual device is “so incomplete and completely broken in so many unacceptable ways” that it's not worth buying.

It's not clear why it's called Rabbit, and reviewers are unclear about its benefits over a phone.

Another AI wearable, the Rabbit r1 (the first reviews are due out in a week), comes with a small screen and hopes to replace many of the apps on your phone with an AI assistant. But do we need a separate device for that?

As TechRadar put it after previewing the device:

“A voice control interface that completely kills apps is a good starting point, but again, that's something my Pixel 8 might do in the future.”

To earn their keep, AI wearables will need to find a niche, in the same way that reading a book on a Kindle is a better experience than reading it on a phone.

One AI wearable that might do so is Limitless, a pendant with a 100-hour battery life that records your conversations so you can quiz the AI about them later: “Did the doctor say to take 15 tablets or 50?” “Did Barry say he was bringing anything for dinner on Saturday night?”

While it may sound like a privacy nightmare, the pendant won't start recording until the other person has given verbal consent.

So there seem to be professional use cases for a device that replaces the need to take notes and is easier to use than a phone. It's also reasonably priced.

Limitless is a wearable that records your conversations so you can refer back to them later.

AI immortalizes Holocaust victims

The Sydney Jewish Museum has unveiled a new AI-powered interactive exhibition that allows visitors to ask questions of Holocaust survivors and receive answers in real time.

Death camp survivor Eddie Jaku spent five days answering more than 1,000 questions about his life and experiences before he died in October 2021 at the age of 101.

The system converts visitors' questions into search terms, matches them with the most appropriate pre-recorded answer and plays the clip back, enabling a conversation-like experience.
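
The museum hasn't published its implementation, but the general pattern is plain retrieval: score every pre-recorded answer against the visitor's question and play the clip that matches best. A toy sketch using TF-IDF similarity from scikit-learn (the transcripts and clip names are invented for illustration):

from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.metrics.pairwise import cosine_similarity

# Transcripts of pre-recorded answers, each mapped to a video clip.
# A two-item toy corpus; the real exhibit draws on more than 1,000 answers.
clips = {
    "I was born in Leipzig, Germany, in 1920.": "clip_001.mp4",
    "I survived because my friend Kurt kept my spirits up.": "clip_014.mp4",
}
transcripts = list(clips)

vectorizer = TfidfVectorizer().fit(transcripts)
answer_matrix = vectorizer.transform(transcripts)

def best_clip(question: str) -> str:
    # Project the question into the same vector space and return the
    # clip whose transcript scores highest.
    scores = cosine_similarity(vectorizer.transform([question]), answer_matrix)
    return clips[transcripts[scores.argmax()]]

print(best_clip("Where were you born?"))  # -> clip_001.mp4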

With anti-Semitic conspiracy theories on the rise, it seems like a great way to use AI to preserve the first-hand testimonies of Holocaust survivors for future generations.

The Sydney Jewish Museum's Reverberations exhibition. (Catherine Griffiths)

Knowledge collapse from mid-curve AIs

About 10% of Google search results now point to AI-generated spam. For years, spammers have been churning out websites full of junk content optimized for SEO keywords, but generative AI has made the process a million times easier.

Apart from making Google search useless, if AI-generated content comes to account for the majority of content on the web, there's a risk of “model collapse,” where AIs are trained on low-quality AI output and the quality degrades with each generation, like a tenth-generation photocopy.

Spam content at the touch of a button. (Drafthorse AI)
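
A toy way to see the photocopy effect is to fit a distribution to some data, generate “synthetic” data from the fit, retrain on that, and repeat. This one-dimensional sketch is only an analogy for LLM training, but the drift is the same in spirit: the estimated spread tends to wander and shrink over generations.

import random
import statistics

random.seed(42)
data = [random.gauss(0.0, 1.0) for _ in range(50)]  # generation 0: "real" data

for generation in range(1, 101):
    # "Train" on the current data by estimating its parameters...
    mu, sigma = statistics.mean(data), statistics.stdev(data)
    # ...then "publish" synthetic data and use it as the next training set.
    data = [random.gauss(mu, sigma) for _ in range(50)]
    if generation % 20 == 0:
        print(f"generation {generation}: stdev = {sigma:.3f}")

# The printed stdev typically drifts away from 1.0 and decays toward 0,
# like detail fading from a tenth-generation photocopy.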

A related problem affecting humans, dubbed “knowledge collapse,” was described in a recent Cornell paper. Author Andrew J. Peterson writes that AIs gravitate toward middle-of-the-curve ideas and neglect those that are less common, inconvenient or eccentric:

“When large language models are trained on vast amounts of heterogeneous data, they naturally produce results that tend toward the ‘center' of the distribution.”

To the extent that ideas are homogenized by LLMs, the diversity of human thought and understanding may narrow over time.

The paper recommends subsidies to protect knowledge diversity, in the same way that subsidies protect niche academic and artistic endeavors.


Highlighting the paper, Google DeepMind's Seb Krier added there's a strong argument for making a multitude of models available to the public and giving users more choice and customization.

“AI should reflect the rich diversity and strangeness of the human experience, not just a strange corporate marketing/HR culture.”

Users should beg to pay for AI

Google has been touting its Gemini 1.5 model to businesses, suggesting that the guardrails and ideology that made its image generator notorious won't trouble enterprise customers.

While the controversy over its pictures of “diverse” Nazis saw the consumer image generator taken offline, the enterprise version was unaffected by the problems and was never shut down.

“It was not the case with the base model; it was in a specific application that was consumer-facing,” said Thomas Kurian, CEO of Google Cloud.

Gemini wants to make everything more inclusive, even Nazi Germany. (X)

The enterprise model has 19 different safety controls that companies can configure however they like. So if you pay up, you can dial the settings anywhere from “anti-racist” to “alt-right.”
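
Google hasn't published a list of all 19 enterprise controls, but the public Gemini API already exposes the same idea in miniature: per-category harm thresholds the caller can tighten or loosen. A sketch using the google-generativeai Python SDK; the prompt and threshold choices are illustrative.

import google.generativeai as genai

genai.configure(api_key="YOUR_API_KEY")  # placeholder key

model = genai.GenerativeModel("gemini-1.5-pro")
response = model.generate_content(
    "Summarize this week's AI news.",
    safety_settings=[
        # Each harm category gets its own blocking threshold.
        {"category": "HARM_CATEGORY_HATE_SPEECH", "threshold": "BLOCK_ONLY_HIGH"},
        {"category": "HARM_CATEGORY_HARASSMENT", "threshold": "BLOCK_MEDIUM_AND_ABOVE"},
    ],
)
print(response.text)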

This lends weight to Matthew Lynn's recent opinion piece in The Telegraph, which argues that an ad-driven “free” model for AI would be just as bad as the ad-driven “free” model of the web, where the services get steadily worse and users end up as “the product.”

“There's no point in repeating that mistake all over again. It would be far better if we all paid a few pounds a month, and the product was consistently improved and not cluttered up with ads,” he wrote.

“We should be begging Google and the rest of the AI giants to let us pay. We will all be much better off in the long run.”

Can non-coders program with AI?

Author and futurist Daniel Jeffries embarked on an experiment to see if AI could help him code a complex application. While he has a background in the tech industry, he's not much of a coder, and he warns that people with zero coding knowledge won't be able to pull this off with the tech in its current state.

Jeffries describes the process as a painful slog, with occasional bouts of swearing and despair. The AI tools produced clunky and unworkable code and exhibited “all the worst programming practices known to man.”

However, he eventually developed a fully functional program that helped him analyze competitors' websites.

Daniel Jeffries on his attempt to create a computer program despite terrible coding skills.

He concluded that AI will not put coders out of work.

“Anyone who tells you different is selling something. If anything, skilled coders who know how to ask for what they want will be more in demand.”

Replit CEO Amjad Masad made a similar point this week, arguing it's actually a great time to learn to code because AI tools can help you create “magic.”

“Eventually, ‘coding' will be entirely in natural language, but you'll still be programming. You'll be paid for your creativity and ability to get things done with computers, not for esoteric knowledge of programming languages.”

All killer, no filler AI news.

— Token holders have approved the merger of Fetch.ai, SingularityNET and Ocean Protocol. The new Artificial Superintelligence Alliance looks set to be a top-20 project when the merger occurs in May.

— Google DeepMind CEO Demis Hassabis wouldn't confirm or deny that it's building a $100 billion supercomputer dubbed Stargate, but confirmed that it will spend more than $100 billion on AI overall.

— Baidu's Chinese ChatGPT knockoff Ernie has doubled its user numbers to 200 million since October.

— Researchers at the Center for Countering Digital Hate asked AI image generators to produce “election misinformation,” and they complied four times out of 10. While the researchers are pushing for stronger guardrails, a robust watermarking system seems like a better solution.


— Instagram is scouting influencers for a new program in which AI-generated versions of them will interact with fans. We'll soon be looking back fondly on the old days, when fake influencers were at least real people.

— The Guardian columnist Alex Hern has a theory on why ChatGPT overuses the word “delve,” which has become a red flag for AI-generated content. He notes “delve” is commonly used in Nigeria, which is where many of the cheap human feedback workers who help train the models come from.

— OpenAI has released an improved version of GPT-4 Turbo, which is available via the API and to ChatGPT Plus users. It's better at solving problems, more conversational and less verbose. OpenAI also introduced a 50% discount for batch processing jobs completed within 24 hours.
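
The discount works through OpenAI's new Batch API: upload a JSONL file of requests, create a batch job and collect the results within 24 hours at half the usual price. A minimal sketch with the OpenAI Python SDK; the file name and its contents are illustrative.

from openai import OpenAI

client = OpenAI()

# requests.jsonl holds one request per line, e.g.:
# {"custom_id": "q1", "method": "POST", "url": "/v1/chat/completions",
#  "body": {"model": "gpt-4-turbo", "messages": [{"role": "user", "content": "Hi"}]}}
batch_file = client.files.create(file=open("requests.jsonl", "rb"), purpose="batch")

batch = client.batches.create(
    input_file_id=batch_file.id,
    endpoint="/v1/chat/completions",
    completion_window="24h",  # the only window currently offered
)
print(batch.id, batch.status)  # poll later with client.batches.retrieve(batch.id)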


Andrew Fenton

Based in Melbourne, Andrew Fenton is a journalist and editor covering cryptocurrency and blockchain. He has worked as a national entertainment writer for News Corp Australia, film journalist for SA Weekend and The Melbourne Weekly.


