Smart Contract Audits and Cybersecurity – Cointelegraph Magazine

Every day this week we're highlighting one real, no-bullshit use case for AI in crypto. Today: the potential of using AI for smart contract auditing and cybersecurity. We're so close, and yet so far.

AI artwork created for the ChatGPT-written TurboToad memecoin. (Twitter)

One of the areas with big potential for AI and crypto in the future is auditing smart contracts and identifying cybersecurity holes. There is only one problem – at the moment, GPT-4 isn't up to the task.

Coinbase tested ChatGPT's capabilities for automated token security reviews earlier this year, and in 25% of cases it wrongly classified high-risk tokens as low-risk. James Edwards, lead cybersecurity researcher at Librehash, believes OpenAI isn't keen for the bot to be used for tasks like this.

“When it comes to smart contracts, I strongly believe OpenAI has quietly reined in some of the bot's capabilities so that people don't rely on it to write deployable smart contracts,” he says – presumably because OpenAI doesn't want to be responsible for any vulnerabilities or exploits.


That's not to say AI has zero potential when it comes to smart contracts. AI Eye spoke with Melbourne digital artist Rhett Mankind back in May. He knew nothing about creating smart contracts, but through trial and error and numerous rewrites, he got ChatGPT to create a memecoin called Turbo that went on to hit a $100 million market cap.

But as CertiK chief security officer Kang Li points out, while you might be able to knock together a smart contract with ChatGPT's help, it is likely to be riddled with logic errors and potential exploits.

“You write something and ChatGPT helps you build it, but because of all these design flaws, when attackers start coming, it's very likely to fail.”

So it's certainly not good enough to use for solo smart contract audits, in which a small mistake can see a project drained of tens of millions – although Li says it is “a useful tool for people doing code analysis”.
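To make Li's point concrete, here is a minimal, hypothetical sketch in Python (not Solidity, and not code from CertiK or any real project) of the kind of logic flaw he is describing: code that works in the happy path but whose ordering of operations an attacker can abuse.

```python
# Hypothetical toy example (not Solidity, and not code from CertiK or any
# audited project): a naive "vault" in the style of LLM-drafted code. It
# works in the happy path but contains a classic logic flaw: the balance is
# updated *after* the external payout, so a malicious callback can re-enter
# withdraw() and drain funds.

class NaiveVault:
    def __init__(self):
        self.balances = {}

    def deposit(self, user, amount):
        self.balances[user] = self.balances.get(user, 0) + amount

    def withdraw(self, user, amount, send_callback):
        if self.balances.get(user, 0) >= amount:
            send_callback(amount)              # external call happens first...
            self.balances[user] -= amount      # ...state is updated last (bug)


# The safer ordering ("checks-effects-interactions") updates state before the
# external call - exactly the kind of pattern a human auditor checks for.
class SaferVault(NaiveVault):
    def withdraw(self, user, amount, send_callback):
        if self.balances.get(user, 0) >= amount:
            self.balances[user] -= amount
            send_callback(amount)
```

A callback that re-enters withdraw() before the balance is updated can drain the naive version – precisely the sort of edge case a statistical code generator tends to miss and a human auditor is trained to spot.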

According to Richard Ma of blockchain security firm Quantstamp, the main issue with GPT-4's current ability to audit smart contracts is that its training data is far too general.

Also read: Real AI use case in crypto, number 1 – the best currency for AI is crypto

“Because ChatGPT is trained on a lot of servers and there is very little data on smart contracts, it's better at hacking servers than smart contracts,” he explains.

So companies are racing to train models on years of data about smart contract exploits and hacks so they can learn to spot them.

“There are new models where you can put in your own data, and that's partly what we've done,” he says.

“We have a really big internal database of all the different types of exploits. I started the company six years ago, and we've been tracking all kinds of hacks, so this data is valuable for training AI.”
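The article doesn't describe Quantstamp's pipeline, but the general idea Ma points at – turning a proprietary database of labelled exploits into supervised training examples for a code model – can be sketched roughly like this (the field names and file format are hypothetical):

```python
import json

# Hypothetical exploit records - these field names are illustrative only and
# are not Quantstamp's actual schema.
exploits = [
    {
        "contract_source": "function withdraw(uint amount) public { ... }",
        "vulnerability": "reentrancy",
        "description": "External call is made before the balance update.",
    },
]

# Convert each record into a prompt/completion pair suitable for supervised
# fine-tuning of a code model.
with open("audit_finetune.jsonl", "w") as f:
    for ex in exploits:
        pair = {
            "prompt": "Audit the following smart contract code:\n" + ex["contract_source"],
            "completion": f"Vulnerability: {ex['vulnerability']}. {ex['description']}",
        }
        f.write(json.dumps(pair) + "\n")
```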

The race to create an AI smart contract auditor

Edwards is working on a similar project and has nearly finished building an open-source WizardCoder AI model that incorporates the Mando Project repository of smart contract vulnerabilities. It also uses Microsoft's CodeBERT pretrained programming-languages model to help spot problems.
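How Edwards' model uses CodeBERT isn't spelled out, but as a rough sketch of the underlying technique, a pretrained code model such as microsoft/codebert-base (loaded here via Hugging Face's transformers library) can turn a code snippet into an embedding that can then be compared against embeddings of known-vulnerable code:

```python
# Minimal sketch - assumes the transformers and torch packages are installed.
# It only shows how a pretrained code model can turn source code into a
# vector; how Edwards' WizardCoder pipeline actually uses such embeddings is
# not described in the article.
import torch
from transformers import AutoTokenizer, AutoModel

tokenizer = AutoTokenizer.from_pretrained("microsoft/codebert-base")
model = AutoModel.from_pretrained("microsoft/codebert-base")

snippet = 'function withdraw(uint amount) public { msg.sender.call{value: amount}(""); }'

inputs = tokenizer(snippet, return_tensors="pt", truncation=True)
with torch.no_grad():
    outputs = model(**inputs)

# Mean-pool the token embeddings into a single vector that could be compared
# (e.g. by cosine similarity) against embeddings of known exploits.
embedding = outputs.last_hidden_state.mean(dim=1)
print(embedding.shape)  # torch.Size([1, 768])
```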

According to Edwards, in testing so far the AI has been able to “audit contracts with a much higher level of accuracy than one could expect from GPT-4.”

The bulk of his work has been in creating a custom dataset of smart contract exploits that identifies vulnerabilities down to the responsible lines of code. The next big trick is training the model to recognize patterns and similarities.

“Ideally, you want the model to be able to pull together relationships between functions, variables, context, etc., that a human might not be able to draw when looking at the same data.”
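Edwards doesn't share the dataset's format, but a minimal illustration of what a line-level labelled exploit record might look like (every name and field here is hypothetical) is:

```python
from dataclasses import dataclass

# Hypothetical record format - the article does not specify Edwards' schema.
@dataclass
class LabeledExploit:
    contract_name: str
    source_lines: list        # full contract source, split into lines
    vulnerable_lines: list    # 1-based indices of the responsible lines
    vulnerability_class: str  # e.g. "reentrancy", "integer overflow"
    notes: str = ""

example = LabeledExploit(
    contract_name="ExampleVault",
    source_lines=[
        "function withdraw(uint amount) public {",
        '    (bool ok, ) = msg.sender.call{value: amount}("");',
        "    balances[msg.sender] -= amount;",
        "}",
    ],
    vulnerable_lines=[2, 3],
    vulnerability_class="reentrancy",
    notes="External call happens before the balance update.",
)
```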

While he admits it's not yet as good as a human auditor, it can already make a strong first pass to speed up the auditor's work and make it more comprehensive.

“It helps the way LexisNexis helps a lawyer. Except it's even more effective,” he says.

Don't believe the hype

Near co-founder Illia Polosukhin is an expert in both AI and blockchain.

According to Near co-founder Illia Polosukhin, smart contract exploits are often extreme edge cases – the one-in-a-billion scenario that causes a smart contract to behave in unexpected ways.

But LLMs based on predicting the next word approach the problem from the opposite direction, Polosukhin says.

“Current models are trying to get the most statistically probable outcome, right? And when you think about smart contracts or protocol engineering, you have to think about all the edge cases,” he explains.

Given Polosukhin's competitive programming background and his recent focus on AI, the team has developed processes to detect these unusual edge cases.

“There were more formal search processes around the output of the code. So I don't think it's completely impossible, and there are now startups that are really investing in working with code and its accuracy,” he says.
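As a generic illustration of the "search the edge cases" idea Polosukhin describes (and not Near's actual tooling), a property-based test can hammer generated code with unusual inputs until an assumption breaks. This sketch assumes the hypothesis package is installed:

```python
# Generic illustration of searching for edge cases with property-based
# testing (assumes the hypothesis package is installed). This is not Near's
# or any auditor's actual tooling.
from hypothesis import given, strategies as st

balances = {}

def transfer(sender, recipient, amount):
    # Naive transfer logic a code generator might produce: it never checks
    # for negative amounts.
    if balances.get(sender, 0) >= amount:
        balances[sender] -= amount
        balances[recipient] = balances.get(recipient, 0) + amount

@given(st.integers(min_value=-1000, max_value=1000))
def test_sender_balance_never_increases(amount):
    balances.clear()
    balances["alice"] = 100
    transfer("alice", "bob", amount)
    # Property: a transfer should never increase the sender's balance.
    assert balances["alice"] <= 100  # fails when amount is negative

if __name__ == "__main__":
    try:
        test_sender_balance_never_increases()
    except AssertionError:
        print("Edge case found: a negative transfer amount increases the sender's balance.")
```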

But Polosukhin doesn't think AI “will be as good as humans at auditing for the next two years.” It's going to take a little longer.

Also read: Real AI use cases in crypto, number 2 – AIs can run DAOs

Andrew Fenton

Based in Melbourne, Andrew Fenton is a journalist and editor covering cryptocurrency and blockchain. He has worked as a national entertainment writer for News Corp Australia, as a film journalist for SA Weekend, and at The Melbourne Weekly.


