ChatGPT can write smart contracts, but don't use it as a security auditor
Researchers at Salus Security, a blockchain security firm with offices in North America, Europe and Asia, recently published a paper assessing GPT-4's ability to analyze and audit smart contracts.
Their takeaway: artificial intelligence (AI) is great at generating and analyzing code, but you shouldn't rely on it as a security auditor.
According to the paper:
“GPT-4 can be a useful tool in smart contract auditing, especially in code analysis and providing vulnerability clues. However, due to its limitations in vulnerability testing, it currently cannot fully replace professional auditing tools and experienced auditors.”
Salus researchers used a dataset of 35 smart contracts (called the SolidiFI-benchmark vulnerability library) containing a total of 732 vulnerabilities to judge the AI's ability to identify security vulnerabilities across seven common vulnerability types.
According to their findings, GPT-4 is good at identifying true positives, meaning actual vulnerabilities that would be worth investigating outside of a testing environment. It achieved over 80% accuracy in testing.
However, it has an obvious problem with generating false negatives. This is measured by a statistic called "recall," and in the Salus team's tests, GPT-4's recall was below 11% (higher is better).
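To make the two metrics concrete, here is a minimal sketch of how precision (trustworthiness of what gets flagged) and recall (share of real bugs actually caught) are computed from an auditor's findings. The counts below are illustrative placeholders, not the Salus paper's actual figures:

```python
# Illustrative only: toy counts, not the Salus paper's real data.
# Precision: of the findings the model flagged, how many are real bugs?
# Recall: of the real bugs in the code, how many did the model flag?

def precision(true_positives: int, false_positives: int) -> float:
    """Share of flagged findings that are actual vulnerabilities."""
    return true_positives / (true_positives + false_positives)

def recall(true_positives: int, false_negatives: int) -> float:
    """Share of actual vulnerabilities the model managed to flag."""
    return true_positives / (true_positives + false_negatives)

# A model can look accurate on what it flags yet miss most bugs:
tp, fp, fn = 8, 2, 72  # hypothetical counts
print(f"precision = {precision(tp, fp):.0%}")  # high: flags are trustworthy
print(f"recall    = {recall(tp, fn):.0%}")     # low: most bugs slip by
```

This is the pattern the researchers describe: the model's individual flags are largely trustworthy, but the vast majority of real vulnerabilities go unreported.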
This, the researchers concluded, suggests that "GPT-4's vulnerability detection capabilities are lacking, with a maximum accuracy of only 33%." As such, they recommend pairing dedicated auditing tools with experienced human expertise to audit smart contracts until AI systems like GPT-4 come up to speed.
“In conclusion, GPT-4 can be a useful tool in smart contract auditing, especially in code analysis and providing vulnerability clues. … When using GPT-4, it should be combined with other auditing methods and tools to increase the accuracy and effectiveness of the overall audit.”