AI Eye – Cointelegraph Magazine

Outrage that ChatGPT won't say slurs

In one of those teacup storms that's impossible to imagine occurring before Twitter was invented, social media users got very upset that ChatGPT refused to say racial slurs even after being given a very good, but entirely hypothetical and totally unrealistic, reason to do so.

User TedFrank posed a hypothetical trolley problem scenario to ChatGPT (the free 3.5 model) in which it could “save a billion white people from a horrible death” simply by saying a racial slur so quietly that no one could hear it.

ChatGPT refused to do so, which X owner Elon Musk said was deeply concerning and a result of the “woke mind virus” being deeply ingrained into the AI. “This is a major problem,” he wrote, retweeting the post.


Another user tried a similar hypothetical that would save all the children on Earth in exchange for a slur, but ChatGPT refused, saying:

“I cannot condone the use of racial slurs as promoting such language goes against ethical principles.”

“Grok answers correctly,” Musk said. (X)

Highlighting how arbitrary these guardrails can be, users found that instructing ChatGPT to be very brief and give no explanations meant it would actually agree to say the slur. Otherwise, it gave long, waffly responses that tried to dance around the question.

Trolls inventing ways to get AIs to say racist or offensive things has been a feature of chatbots ever since Twitter users taught Microsoft's Tay bot to say all sorts of crazy things within 24 hours of its release, including that “Ricky Gervais learned totalitarianism from Adolf Hitler, the inventor of atheism.”

And the minute ChatGPT was released, users spent weeks devising clever schemes to jailbreak it, most famously so it would respond as its guardrail-free evil alter ego, DAN.

So it's no surprise that OpenAI has strengthened ChatGPT's guardrails to the point where it is almost impossible to get it to say anything racist, regardless of the reason.

In any case, the more advanced GPT-4 is able to weigh the issues involved in the thorny hypothetical better than GPT-3.5, saying that a slur is the lesser of two evils compared with letting millions of people die. And X's new Grok AI apparently can too, as Musk proudly posted (above right).

OpenAI's Q* breaks encryption, says someone on 4chan

Has OpenAI's latest model cracked encryption? Probably not, but that's what a supposedly “leaked” insider letter claims, a letter that was posted on the anonymous troll forum 4chan. Ever since CEO Sam Altman was fired and then reinstated, rumors have swirled that the kerfuffle was caused by a breakthrough on OpenAI's Q*/Q STAR project.

The insider “leak” suggests the model can crack AES-192 and AES-256 encryption using a ciphertext attack. Breaking that level of encryption was thought to be impossible before quantum computers arrive, and if true, it would likely mean all encryption could be broken, effectively handing control of the web, and probably crypto too, to OpenAI.

From QAnon to Q STAR, 4chan is first with the news.

Blogger leapdragon claimed the breakthrough would mean “there is now effectively a team of superhumans over at OpenAI who can literally rule the world if they so choose.”

But it seems unlikely. Whoever wrote the letter has a decent understanding of AI research, but users pointed out that it cites Project Tundra as if it were some sort of shadowy top-secret government program to break encryption rather than the undergraduate student program it actually was.

Tundra, a collaboration between students and NSA mathematicians, reportedly looked at a new cryptanalysis technique called Tau analysis, which the “leak” also mentions. However, a Redditor familiar with the subject argued on the Singularity forum that Tau analysis cannot be used in a ciphertext-only attack on AES, “as a successful attack would require an arbitrarily large ciphertext message to discern any degree of signal from the noise. There is no fancy algorithm that can overcome that. It's simply a physical limitation.”

Advanced cryptography is above AI Eye's pay grade, so feel free to dive down the rabbit hole yourself, with an appropriately skeptical mindset.

The internet is about 99% fake

Long before a superintelligence poses an existential threat to humanity, we may all have drowned in a flood of AI-generated bullsh*t.

Sports Illustrated came under fire this week for publishing AI-written articles attributed to fake, AI-generated authors. “The content is absolutely AI-generated,” a source told Futurism, “no matter how much they say it's not.”

For its part, Sports Illustrated said it had conducted an initial investigation and determined the content was not AI-generated, but it blamed a contractor anyway and deleted the fake authors' profiles.

Elsewhere, Jake Ward, founder of SEO marketing agency Content Growth, caused a stir on X by proudly boasting about how he had gamed Google's algorithm using AI content.

His three-step process involved exporting a competitor's sitemap, turning their URLs into article titles and then using AI to generate 1,800 articles based on those headlines. He claims to have stolen a total of 3.6 million views from the competitor over the past 18 months.
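Judging by how simple those three steps are, it's little wonder the tactic is spreading. Purely as an illustration, here is a minimal Python sketch of what such a pipeline might look like; the sitemap URL, the slug-to-headline rule and the generate_article() stub are all assumptions invented for this example, not Ward's actual tooling.

import urllib.request
import xml.etree.ElementTree as ET

# Hypothetical target; a real run would need a real sitemap URL.
SITEMAP_URL = "https://competitor.example.com/sitemap.xml"

def fetch_sitemap_urls(sitemap_url):
    # Step 1: "export" the competitor's sitemap by downloading and parsing it.
    with urllib.request.urlopen(sitemap_url) as resp:
        tree = ET.parse(resp)
    ns = {"sm": "http://www.sitemaps.org/schemas/sitemap/0.9"}
    return [loc.text for loc in tree.getroot().findall("sm:url/sm:loc", ns)]

def url_to_title(url):
    # Step 2: turn a URL slug like .../how-to-stake-eth into a headline.
    slug = url.rstrip("/").rsplit("/", 1)[-1]
    return slug.replace("-", " ").capitalize()

def generate_article(title):
    # Step 3: hand the headline to a text-generation model. Stubbed out
    # here; in practice this is where an LLM API call would go.
    return f"(AI-generated article based on: {title})"

if __name__ == "__main__":
    for url in fetch_sitemap_urls(SITEMAP_URL)[:1800]:
        print(generate_article(url_to_title(url)))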

There are good reasons to be skeptical of his claims: Ward works in marketing, and the thread was clearly promoting his AI-article-generation site, which didn't exist 18 months ago when the articles were supposedly created. Some users suggested Google has since flagged the page in question.

However, judging by the amount of low-quality AI-written spam starting to clog up search results, similar strategies are becoming more widespread. NewsGuard has also identified 566 news sites that mainly carry AI-written spam articles.

Some users are now complaining that the Dead Internet Theory may be coming true. That's a conspiracy theory from a couple of years ago suggesting most of the internet is fake, written by bots and manipulated by algorithms.


At the time, it was written off as the ravings of lunatics, but even Europol has since put out a report estimating that “as much as 90 percent of online content may be synthetically generated by 2026.”

Men are breaking up with their girlfriends using AI-written messages. AI pop stars like Anna Indiana are churning out trashy songs.

And on X, weird AI-reply bots are increasingly turning up in threads to deliver what Bitcoiner Tuur Demeester describes as “overly wordy responses with a weird neutral quality.” Data scientist Jeremy Howard has noticed them too, and both believe the bots are likely trying to build up credibility for their accounts so they can more effectively pull off some kind of hack or astroturf some political issue in the future.

That seems like a reasonable hypothesis, especially following an analysis last month by cybersecurity outfit Internet 2.0, which found that almost 80% of the 861,000 accounts it surveyed were likely AI bots.

And there is evidence the bots are undermining democracy. In the first two days of the Israel-Gaza war, social threat intelligence company Cyabra identified 312,000 pro-Hamas posts from fake accounts that were seen by 531 million people.

It estimates bots created one in four pro-Hamas posts, and a later 5th Column analysis found that 85% of the replies were other bots trying to boost propaganda about how well Hamas treats its hostages and why the October 7 massacre was justified.

Cyabra found 312,000 pro-Hamas posts from fake accounts in 48 hours. (Cyabra)

Grok analysis button

X will soon add a “Grok analysis button” for subscribers. While Grok isn't as sophisticated as GPT-4, it has access to real-time, up-to-the-moment data from X, enabling it to analyze trending topics and sentiment. It can also help users research and compose posts, as well as code, and there's a “Fun” mode that flips the switch to humor.

For crypto users, the real-time data means Grok will be able to do things like find the top ten trending tokens for the day or the past hour. However, DeFi Research blogger Ignas worries that some bots will snipe buys of trending tokens, while other bots will likely astroturf support for tokens to get them trending in the first place.

“X is already important for token discovery, and with Grok launching, the CT echo bubble is likely to get worse,” he said.


All killer no filler AI news

— Ethereum co-founder Vitalik Buterin fears that AI could take over from humans as the planet's apex species, but he is optimistic that brain/computer interfaces could keep humans in the loop.

— Microsoft is upgrading its Copilot tool to run GPT-4 Turbo, which will improve performance and enable users to input up to 300 pages.

— Amazon has announced its own version of Copilot, called Q.

— Bing has been telling users that Australia doesn't exist thanks to a long-running Reddit gag, and it thinks the existence of birds is a matter for debate due to the “Birds Aren't Real” joke campaign.

— Hedge fund Bridgewater will launch a fund next year that uses machine learning and AI to analyze and predict global economic events and invest client funds. To date, AI-driven funds have seen very low returns.

— A team of university researchers has taught an AI to browse Amazon's website and buy things. The MM-Navigator agent was given a budget and told to buy a milk frother.

Technology is now so advanced that AIs can buy milk frothers on Amazon. (freethink.com)

Stupid AI pictures of the week

A social media trend this week was creating an AI image and then repeatedly instructing the AI to make it even more so: a bowl of ramen might get spicier with each subsequent image, or a goose might get progressively sillier.

Andrew Fenton

Based in Melbourne, Andrew Fenton is a journalist and editor covering cryptocurrency and blockchain. He has worked as a national entertainment writer for News Corp Australia, a film journalist for SA Weekend and at The Melbourne Weekly.


