AI Eye – Cointelegraph Magazine



AI Arena

AI Eye recently spoke with Framework Ventures' Vance Spencer, who explained the possibilities of AI Arena, an upcoming game the fund has invested in, where players train AI models to fight in an arena.

Framework Ventures was an early investor in Chainlink and Synthetix, and got into NBA Top Shot three years ago, so when it's excited about a project's prospects, it's worth a look.

Also backed by Paradigm, AI Arena is like a cross between Super Smash Bros. and Axie Infinity. The AI models are tokenized as NFTs, meaning players can train them up and sell them for profit, or rent them out to noobs. Beyond the game itself, there are endless possibilities in assembling user-trained models for specific purposes and then selling them as tokens on a blockchain-based marketplace.

Screenshot from AI Arena

“Probably the most valuable assets onchain will be tokenized AI models. At least that's my thesis,” predicts Spencer.

Wei Xie, AI Arena's chief operating officer, says that founders Brandon Da Silva and Dylan Pereira had been toying with building games for years, and when NFTs and later AI emerged, Da Silva had the idea to combine the three elements.

“Part of the idea was, well, if we can tokenize an AI model, we can actually build a game around AI,” says Xie, who worked with Da Silva in TradFi. “The core loop of the game helps demonstrate the progress of AI research.”


There are three components to training a model in AI Arena. The first is demonstrating what to do, like a parent showing a child how to kick a ball. The second is calibrating the model's state, telling it when to pass and when to shoot for goal. The last is watching how the AI plays and diagnosing where the model needs improvement.

“So the whole game loop is iterative: you cycle through those three stages, refining your AI over time to become a more balanced and well-rounded fighter.”

The game uses a custom-built feed-forward neural network, and the AIs are deliberately constrained and lightweight, meaning the winner won't simply be whoever throws the most computing resources at their model.
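The article doesn't detail AI Arena's actual architecture, so purely as an illustration, here is a minimal sketch of what a constrained feed-forward fighter policy could look like. The state features, layer sizes and action names are all invented for the example:

```python
import numpy as np

# Hypothetical sizes: the real game's architecture isn't public.
STATE_SIZE = 8    # e.g. positions, velocities, health
HIDDEN_SIZE = 16  # kept deliberately small so raw compute can't win
ACTIONS = ["idle", "left", "right", "jump", "attack"]

rng = np.random.default_rng(0)

class TinyFighterPolicy:
    """A deliberately lightweight feed-forward policy network."""

    def __init__(self):
        # One small hidden layer; weights would be refined each
        # demonstrate -> calibrate -> evaluate iteration of the loop.
        self.w1 = rng.normal(0.0, 0.1, (STATE_SIZE, HIDDEN_SIZE))
        self.b1 = np.zeros(HIDDEN_SIZE)
        self.w2 = rng.normal(0.0, 0.1, (HIDDEN_SIZE, len(ACTIONS)))
        self.b2 = np.zeros(len(ACTIONS))

    def act(self, state: np.ndarray) -> str:
        h = np.tanh(state @ self.w1 + self.b1)  # hidden activations
        logits = h @ self.w2 + self.b2          # one score per action
        return ACTIONS[int(np.argmax(logits))]  # pick the best-scoring move

policy = TinyFighterPolicy()
print(policy.act(np.zeros(STATE_SIZE)))
```

Keeping the network this small is the point: with only a few hundred parameters, a stronger fighter has to come from better training data and calibration, not bigger hardware.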

“We want to reward ingenuity, creativity,” says Xie.

Currently in closed beta testing, AI Arena is targeting the first quarter of next year for its mainnet launch on Ethereum scaling solution Arbitrum. There are two versions of the game: a browser-based version where anyone can log in with a Google or Twitter account and start playing for fun, and a blockchain-based version for competitive players, which he calls “the esports version of the game.”

Also read: Exclusive — 2 years after John McAfee's death, widow Janice is broke and needs answers

This being crypto, there is of course a token, which will be distributed to players who compete in the launch tournament and later used to pay entry fees for subsequent tournaments. Xie sees a big future for the technology, saying it could be extended to “first-person shooters and soccer games,” and that AI models trained for specific commercial tasks could one day be traded in a crowded marketplace.

“What one has to do is frame the problem as a competition and then let the best minds in the AI space compete on it. The end result is just a better model.”

Chatbots cannot be trusted

A new analysis from AI startup Vectara shows that the output of large language models like ChatGPT or Claude simply cannot be trusted for accuracy.

Everyone knew that already, but until now there was no way to accurately measure how much each model hallucinates. It turns out GPT-4 is the most accurate, generating false information around 3% of the time. Meta's Llama models hallucinate around 5% of the time, while Anthropic's Claude 2 system comes in at 8%.

Google's Palm 2 produced hallucinations in a whopping 27% of its answers.

Palm 2 is one of the components of Google's Search Generative Experience, which displays helpful information in response to common search queries. It is not terribly reliable either.

For months, if you asked Google to name an African country starting with the letter K, it displayed the following, completely wrong answer:

“While there are 54 recognized countries in Africa, none of them begin with the letter ‘K’. The closest is Kenya, which starts with a ‘K’ sound, but is actually spelled with a ‘K’ sound.”

Google's AI scraped this from a ChatGPT reply, which in turn traces back to a Reddit post that was simply the setup for this gag reply:

“Kenya suck deez nuts?”

Screenshot from the r/teenagers subreddit (Spreekaway Twitter)

Google rolled out the experimental AI feature earlier this year, but recently users have reported that it is shrinking and even disappearing from many searches.

With the feature rolling out to 120 new countries and four new languages this week, the change may simply be Google tinkering, as it also added the ability to ask follow-up questions right on the page.

AI images in Israel-Gaza war

Despite journalists' best efforts to highlight the issue, AI-generated images have not played a major role in the war, because genuine footage of the Hamas massacre and of dead children in Gaza is having impact enough.

But there are examples: 67,000 people saw an AI-generated image of a baby watching a missile attack, with the caption “This is what children in Gaza wake up to.” Another image, of three dust-covered but grimly determined children holding a Palestinian flag in the rubble of Gaza, was shared by Tunisian journalist Mohamed Al-Hachimi Al-Hamidi.

And for some reason, an apparently AI-generated image of an “Israeli refugee camp,” with a giant Star of David on the side of each tent, has been shared multiple times by Arabic-language news outlets in Yemen and Dubai.

AI-generated image of an “Israeli refugee camp” picked up by news sites (Twitter)

Australian news outlet Crikey reported that Adobe has been selling AI-generated images of the war through its stock image service, and that an AI image of a missile strike was run as if it were real by outlets including Sky News and the Daily Star.

But the very realism of AI fakes also provides a convenient way for partisans to discredit real pictures. There was a huge controversy over images purporting to show Hamas leaders living in luxury, which users claimed were AI fakes.

But the images date back to 2014 and had merely been upscaled using AI. AI firm Accrete also reports that Hamas-linked sockpuppet accounts have been muddying the waters by claiming that genuine images and footage were created by AI.

In some good timing, Google has announced it is releasing tools to help users spot fakes. To see how old an image is and where it has been used, click the three dots on the top right of an image result and select “About this image.” An upcoming feature will include fields indicating whether an image is AI-generated, as Google AI, Facebook, Microsoft, Nikon and Leica all begin adding watermarks or metadata markers to AI images.

OpenAI dev conference

OpenAI this week announced GPT-4 Turbo, which is faster and can accept long text inputs of up to 300 pages. The model is trained on data up to April this year and can generate captions or descriptions of visual input. For devs, the new model costs a third as much to access.

OpenAI is also releasing its own version of an app store, called the GPT Store. Anyone can now dream up a custom GPT, define its parameters and upload some bespoke data, and GPT-4 will build it for you and pop it on the store, with revenue split between the creators and OpenAI.

CEO Sam Altman demonstrated this on stage by whipping up Startup Mentor, a GPT that gives advice to budding entrepreneurs. Users soon followed suit, dreaming up everything from AI sports commentary to a “roast my website” GPT. ChatGPT was down for about 90 minutes this week, possibly due to the crush of users trying out the new features.

Not everyone was impressed, however. Bindu Reddy, CEO of Abacus.AI, said it was disappointing that GPT-5 had not been announced, claiming that OpenAI tried to train a new model earlier this year but it “didn't work well, so it had to be scrapped.” There are rumors that OpenAI is training a new GPT-5 candidate called Gobi, but Reddy suspects it won't be announced until next year.


X unveils Grok

Elon Musk brought free speech back to Twitter — essentially by banning a bunch of people from tweeting at various times — and now he's on a mission to do the same with AI.

The beta version of Grok AI was thrown together in just two months, and while it's not as good as GPT-4, it is bang up to date because it's trained on tweets, meaning it can tell you what Joe Rogan wore on his latest podcast. That's something GPT-4 simply won't tell you.

There are fewer guardrails on its answers than ChatGPT, although if you ask it how cocaine is made, it will still tell you to “obtain a chemistry degree and a DEA license.”

“Whatever it tells you, if pushed, is also available on the internet via a reasonable browser search,” Musk said.

Within a few days, more than 400 cryptocurrencies named after Grok were launched. One garnered a $10 million market cap, while at least ten others have already fizzled.

All killer no filler AI news

— Samsung has introduced a new generative artificial intelligence model called Gauss and hinted that it will soon be added to its phones and devices.

— YouTube has rolled out some new AI features to premium subscribers, including a chatbot that sums up videos and answers questions about them, and categorizes comments to help creators understand the feedback.

— Google DeepMind has released a list of AGI levels. It starts with “No AI,” exemplified by Amazon Mechanical Turk, and moves up to “Emerging AGI,” where ChatGPT, Bard and Llama 2 sit. The remaining levels — Competent, Expert, Virtuoso and Artificial Superintelligence — have yet to be achieved.

— Amazon is investing millions in GPT-4 rival Olympus, which at 2 trillion parameters is about twice the size. It has also been testing Digit, a new humanoid robot, at trade shows. This one fell over, though.

Photos of the week

An oldie but a goodie: Alvaro Cintas spent the weekend coming up with AI puns, titled “Wonders of the World with AI Misspellings.”

Andrew Fenton

Based in Melbourne, Andrew Fenton is a journalist and editor covering cryptocurrency and blockchain. He has worked as a national entertainment writer for News Corp Australia, a film journalist for SA Weekend and at The Melbourne Weekly.


