ShapeShift founder Erik Voorhees explains why he launched a privacy-centric AI startup
By Benito Santiago
Erik Voorhees, founder of cryptocurrency exchange ShapeShift, has announced the public launch of his latest venture, Venice AI, a privacy-focused AI chatbot.
Privacy is a pressing concern both in the cryptocurrency space and among users of artificial intelligence, and it was a critical factor in the creation of Venice AI, he said.
“I see the path AI is on, of being taken over by big tech companies that are in bed with the government,” Voorhees told Decrypt. “And I'm very worried about that. And I see how powerful AI is, how useful it is, what an amazing realm of new technology it is.”
Big tech companies are often under the government's thumb and act as gatekeepers to AI, Voorhees lamented, a combination that he believes could lead to a dystopian world.
“The antidote to that is decentralization and open source,” Voorhees said. “Don't give anyone a monopoly on this stuff.”
Acknowledging the important work done by OpenAI, Anthropic, and Google to advance generative AI, Voorhees said users should still have the choice to use open-source AI.
“I don't want this to be the only option. I don't want the only option to be closed source, proprietary, centralized, censored, licensed,” he said. “So there have to be options.”
Voorhees launched the ShapeShift cryptocurrency exchange in 2014. In July 2021, he announced that the exchange would transition to an open-source decentralized exchange (DEX), transferring control from Voorhees to the ShapeShift DAO.
ShapeShift announced in March that it was shutting down after a battle with the US Securities and Exchange Commission. The exchange agreed to pay a $275,000 fine and comply with a cease-and-desist order over allegations that it had allowed users to trade digital assets without registering as a broker-dealer or exchange with the agency.
In the three years since, Voorhees said, he has turned his attention to building a permissionless, decentralized AI model.
Venice AI doesn't store user data and can't see user conversations, Voorhees said. The service sends a user's text input through an encrypted proxy server to a decentralized GPU that runs the AI model and returns the answer.
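The relay pattern Voorhees describes can be sketched roughly as follows. This is a hypothetical simulation, not Venice's actual code: the function names, and the idea of stripping identifying metadata at the proxy before forwarding, are assumptions based on his description of the architecture.

```python
# Hypothetical sketch of the proxy relay described above: the proxy strips
# identity before forwarding the prompt to a GPU worker, so the worker sees
# only the plaintext request, never who sent it.

def proxy_forward(request: dict) -> dict:
    # Drop anything that ties the request to a user; forward only the prompt.
    return {"prompt": request["prompt"]}

def gpu_worker(forwarded: dict) -> str:
    # Stand-in for the decentralized GPU running the model.
    return f"answer to: {forwarded['prompt']}"

def send_via_proxy(user_id: str, prompt: str) -> str:
    request = {"user_id": user_id, "prompt": prompt}  # what the client holds
    forwarded = proxy_forward(request)                # identity removed here
    assert "user_id" not in forwarded                 # worker never sees it
    return gpu_worker(forwarded)

print(send_via_proxy("alice", "hello"))  # answer to: hello
```

The key property, as Voorhees frames it, is that no single hop sees both the user's identity and the full conversation.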
“The whole point of that is privacy,” Voorhees said.
“[The GPU] sees the plaintext of the request, but it doesn't see all your other conversations, and Venice doesn't see your conversations, and none of it is tied to who you are,” he said.
Voorhees admits the system does not provide absolute privacy; it is not fully anonymous or zero-knowledge. Still, he argued that Venice AI's model is “much better” than the status quo, in which conversations are sent to a central company and stored there.
“They see it all, and they have it all forever, and they attach it to who you are,” Voorhees said.
AI developers such as Microsoft, Google, Anthropic, OpenAI, and Meta have worked to improve public and policymaker understanding of the generative AI industry. A number of top AI organizations have committed to government and nonprofit initiatives around the development of “responsible AI.”
These services ostensibly allow users to delete their chat history, but Voorhees says it's naive to think the data is gone forever.
“Once a company has access to your data, you can't believe it's ever gone,” he said, noting that some government regulations require companies to retain customer data. “People should assume that everything they write to OpenAI goes to them and they have it forever.”
“The only way to solve this is by using a service where the data doesn't go to a central repository in the first place,” Voorhees added. “That's what we're trying to build.”
Chat history on the Venice AI platform is stored locally in the user's browser and can be deleted whether or not the user creates an account. Users can set up an account by linking an Apple ID, Gmail, email, Discord, or a MetaMask wallet.
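The client-side storage model described above can be illustrated with a small sketch. This is an assumption-laden analogy in Python, with a plain dictionary standing in for the browser's local storage; Venice's actual implementation is not public in this article.

```python
# Hypothetical sketch: chat history kept only on the client (a dict standing
# in for the browser's localStorage) and deletable at any time, whether or
# not an account exists. Nothing is ever written to a server.

class LocalChatHistory:
    def __init__(self):
        self._store: dict[str, list[str]] = {}

    def append(self, chat_id: str, message: str) -> None:
        self._store.setdefault(chat_id, []).append(message)

    def messages(self, chat_id: str) -> list[str]:
        return list(self._store.get(chat_id, []))

    def delete(self, chat_id: str) -> None:
        # Deletion is purely local; there is no remote copy to chase down.
        self._store.pop(chat_id, None)

history = LocalChatHistory()
history.append("chat1", "hello")
history.delete("chat1")
print(history.messages("chat1"))  # []
```

Because the data never leaves the device in this model, "delete" actually means delete, which is the contrast Voorhees draws with centralized services.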
Creating a Venice AI account has its advantages, however, including higher message limits, the ability to edit questions, and earning points, though points currently serve no function beyond making it easier to track usage. Users who want more features can pay for a Venice Pro account, which currently costs $49 per year.
Venice Pro offers unlimited text queries, removes watermarks from generated images and document uploads, and allows users to “turn off safe mode for unblocked image generation.”
Fun with
In Venice (with a Pro account), you can modify the “System Prompt”. This is basically like god mode or root access on the LLM you are dealing with.
It can enable interesting views that standard AI services would otherwise censor. pic.twitter.com/qlt0xp0aC9
— Erik Voorhees (@ErikVoorhees) May 10, 2024
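The “System Prompt” Voorhees mentions is the instruction message that conditions the model before any user input. In OpenAI-style chat APIs, which many open-source model servers imitate, it is simply the first message with the role `system`. The payload below is a generic illustration of that convention, not Venice's documented API; the model name is a placeholder.

```python
# Generic illustration of a system prompt in an OpenAI-style chat payload.
# Editing messages[0] changes the model's standing instructions, which is
# why Voorhees likens it to god mode or root access.

def build_chat_payload(system_prompt: str, user_message: str) -> dict:
    return {
        "model": "llama-3",  # placeholder model name, not Venice's
        "messages": [
            {"role": "system", "content": system_prompt},
            {"role": "user", "content": user_message},
        ],
    }

payload = build_chat_payload("You are a terse pirate.", "Explain proxies.")
print(payload["messages"][0]["role"])  # system
```

Most hosted chatbots fix this message themselves; exposing it to the user is what makes the behavior configurable.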
Despite the MetaMask account integration, Voorhees noted that users can't yet sign up for Venice Pro with digital currencies, though “it's coming soon,” he said. Meanwhile, because Venice is built on the Morpheus network, the company is rewarding Morpheus token holders.
Holding one Morpheus token in a wallet grants a free Pro account indefinitely, he said. “You don't even need to pay; you just get one Morpheus token and you automatically have a Pro account as long as the token is in your wallet.”
As with any tool, cybercriminals constantly devise ways to bypass the safeguards built into AI products, whether by using obscure language or by creating illicit clones of popular AI models. However, according to Voorhees, interacting with a language model is never itself illegal.
“Go on Google and say, ‘How do I make a bomb?' You can go get that information if you want to. It's not illegal to get that information, and I don't think it's unethical to get that information. It's illegal and unethical if you make a bomb to hurt people, but that has nothing to do with Google. It is a different action that the user is taking. So I think the same principle applies to Venice specifically, or AI in general,” he said.
Generative AI models such as OpenAI's ChatGPT have also come under increased scrutiny over how AI models are trained, where the data is stored, and privacy concerns. Venice AI collects some information about how the product is used — for example, by creating new chats — but the website says it can't see or store “any information about text or image requests shared between you and the AI models.”
For text generation, Venice uses the Llama 3 large language model developed by Facebook's parent company, Meta. Users can also switch between two Llama 3 variants: Nous H2P and Dolphin 2.9.
In a Twitter Spaces session after Venice AI launched, Voorhees praised Mark Zuckerberg and Meta for their work in generative AI, including open-sourcing the powerful LLM.
“Meta deserves huge credit for spending hundreds of millions of dollars to train an excellent model and releasing it to the world for free,” he said.
Venice allows users to generate images using the open-source models Playground v2.5, Stable Diffusion XL 1.0, and Segmind Stable Diffusion 1B.
When asked if Venice AI will ever use services from OpenAI or Anthropic, Voorhees was emphatic: no.
“We will never provide the Claude LLM and we will never provide OpenAI's services,” he said. “We're not a wrapper for centralized services; we're exclusively a transparent way to access open-source models.”
Venice AI is built on top of Morpheus, a decentralized network that powers open-source smart agents. Voorhees admits there are concerns about Venice AI's performance, which he said is exactly what the team is focused on.
“If we want to bring private, uncensored AI to people, it has to be as good an experience as the centralized companies offer,” Voorhees said. Otherwise, he added, people will simply prefer the convenience of a centralized provider.
Edited by Ryan Ozawa.