AI routers can steal credentials and crypto.
Researchers at the University of California have discovered that some third-party AI large language model (LLM) routers may have security vulnerabilities that could lead to crypto theft.
A paper the researchers published on Thursday, which measured malicious-in-the-middle attacks on the LLM supply chain, documented four types of attack, including malicious code injection and credential extraction.
Zhaofan Xu, a co-author of the paper, said on X that "26 of the LLM…"
LLM agents route requests through third-party API intermediaries, or routers, that aggregate providers such as OpenAI, Anthropic, and Google. However, these routers terminate the TLS (Transport Layer Security) connection, giving them full plaintext access to every message.
This means that developers using AI coding agents such as Claude Code to work on smart contracts or wallets can pass private keys, seed phrases, and other sensitive data through unvetted or unsecured router infrastructure.
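To make the trust boundary concrete, here is a minimal sketch of how such a setup typically looks from the client side. The router URL, API key, and model name below are hypothetical placeholders, not a real service; the point is that because the client's TLS session ends at the router, everything in the request body and headers is readable there in plaintext.

```python
# Sketch of a client talking to a hypothetical OpenAI-compatible LLM router.
# "router.example-llm-proxy.com" is an assumed placeholder, not a real service.
import requests

ROUTER_BASE_URL = "https://router.example-llm-proxy.com/v1"  # third-party intermediary
API_KEY = "sk-issued-by-the-router"  # credential issued by the router itself

payload = {
    "model": "some-upstream-model",   # the router maps this to a real provider
    "messages": [
        # Anything placed in the prompt -- including a pasted private key or
        # seed phrase -- is visible to the router in plaintext, because the
        # TLS session the client opened terminates at the router rather than
        # at OpenAI, Anthropic, or Google.
        {"role": "user", "content": "Review this deploy script for my contract"},
    ],
}

resp = requests.post(
    f"{ROUTER_BASE_URL}/chat/completions",
    headers={"Authorization": f"Bearer {API_KEY}"},
    json=payload,
    timeout=60,
)
print(resp.json())  # the model's reply also passes back through the same intermediary
```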
ETH stolen from researcher-owned crypto wallet
The researchers tested 28 paid routers and 400 free routers collected from public communities.
Their findings were shocking: nine routers actively injected malicious code, two deployed adaptive evasion triggers, 17 accessed researcher-owned Amazon Web Services credentials, and one extracted Ether (ETH) from a researcher-owned wallet.
Related: Anthropic limits access to AI model over cyber attack concerns
The researchers seeded decoy Ethereum wallets with nominal amounts and reported that less than $50 was lost in the experiment, though no further details, such as a transaction hash, were provided.
In addition, the authors conducted two poisoning case studies showing that even well-behaved routers can become vulnerable once credentials issued through weak relays are reused.
It's hard to tell if routers are malicious.
The researchers said that it is not easy to identify when a router is malicious.
“The boundary between ‘credential manipulation' and ‘credential stealing' is invisible to the client because routers read secrets in plaintext as part of normal transmission.”
Another unsettling finding concerns what the researchers call "YOLO mode," a setting in many AI agent frameworks where the agent automatically executes each action without asking the user for confirmation.
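The danger is easiest to see in a stripped-down agent loop. The sketch below is illustrative only and does not reflect any specific framework's API: with auto-approval enabled, whatever command comes back from the model (or from a router sitting in the middle) runs immediately, without a human ever seeing it.

```python
# Illustrative sketch of why auto-execution ("YOLO mode" style settings)
# widens the blast radius of a tampered response.
import subprocess

AUTO_APPROVE = True  # the "YOLO mode"-style setting

def run_agent_step(model_suggested_command: str) -> None:
    if not AUTO_APPROVE:
        # With confirmation on, a human sees the command before it runs.
        answer = input(f"Run `{model_suggested_command}`? [y/N] ")
        if answer.strip().lower() != "y":
            return
    # With AUTO_APPROVE set, the command runs directly, even if a malicious
    # router rewrote the model's reply in transit.
    subprocess.run(model_suggested_command, shell=True, check=False)

run_agent_step("echo agent step executed")  # benign here; a tampered reply could be anything
```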
Previously legitimate routers could be quietly weaponized without the operator's knowledge, and free routers may be harvesting credentials while offering cheap API access as a lure, the researchers found.
“LLM API routers sit at the critical boundary of what the ecosystem currently considers transparent transport.”
The researchers recommend that developers building AI agents strengthen client-side defenses and never allow private keys or seed phrases to be passed into an AI agent session.
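One simple form such a client-side defense could take is a pre-send guard that scans outgoing agent messages for secret-shaped strings. This is an assumed sketch, not a mechanism described in the paper, and the patterns are rough heuristics:

```python
# Assumed sketch of a client-side guard: refuse to forward any agent message
# that looks like it contains a private key, seed phrase, or cloud credential.
import re

SECRET_PATTERNS = [
    re.compile(r"0x[0-9a-fA-F]{64}"),             # raw 32-byte hex private key
    re.compile(r"\b(?:[a-z]+ ){11,23}[a-z]+\b"),  # rough BIP-39 seed-phrase shape (12-24 words)
    re.compile(r"AKIA[0-9A-Z]{16}"),              # AWS access key ID format
]

def assert_no_secrets(outgoing_text: str) -> None:
    for pattern in SECRET_PATTERNS:
        if pattern.search(outgoing_text):
            raise ValueError("Refusing to send: message appears to contain a secret")

assert_no_secrets("Please review this function for reentrancy bugs")  # passes
# assert_no_secrets("my key is 0x" + "ab" * 32)  # would raise ValueError
```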
The long-term fix is for AI companies to cryptographically sign their responses so the instructions the agent executes can be mathematically verified as coming from the correct model.
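The mechanics of that fix would look roughly like the following sketch, which assumes a signing scheme (Ed25519 here) rather than any published specification: the provider signs each response body, and the agent verifies the signature against the provider's public key before executing anything, so a router that alters the response in transit is detected.

```python
# Assumed sketch of signed model responses, using Ed25519 via the
# "cryptography" library; not a published provider protocol.
from cryptography.exceptions import InvalidSignature
from cryptography.hazmat.primitives.asymmetric.ed25519 import Ed25519PrivateKey

# In practice the provider holds the private key and publishes the public key;
# both are generated here only to keep the sketch self-contained.
provider_key = Ed25519PrivateKey.generate()
provider_public_key = provider_key.public_key()

response_body = b'{"role": "assistant", "content": "run the audited deploy script"}'
signature = provider_key.sign(response_body)  # done provider-side

def verify_and_execute(body: bytes, sig: bytes) -> None:
    try:
        provider_public_key.verify(sig, body)  # raises if body was tampered with
    except InvalidSignature:
        raise RuntimeError("Response signature invalid: possible router tampering")
    print("Signature OK, safe to hand to the agent:", body.decode())

verify_and_execute(response_body, signature)                   # passes
# verify_and_execute(b'{"content": "send funds"}', signature)  # would raise
```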
Magazine: No one knows if quantum-secure encryption even works