AI researchers want to solve the bot problem by asking for an ID to use the internet
Artificial intelligence researchers fear that AI bots will eventually take over the Internet and spread like a digital invasive species. Rather than trying to limit the spread of bots and AI-generated content, one group of researchers has decided to take the opposite approach: verifying the humans instead.
In a recently published preprint paper, dozens of researchers advocate a system in which people would have their humanity verified by another human in order to obtain “personhood credentials.”
The big idea is to create a system that verifies someone is a person without revealing their identity or any other personal information. If this sounds familiar to those in the crypto community, it's because the research builds on “proof-of-personhood” blockchain technologies.
Digital authentication
Services such as Netflix or Xbox Game Pass charge a fee, so they can typically rely on users' financial institutions to handle authentication. This rules out anonymity, but for most people that's fine; it's generally considered part of the cost of doing business.
Other services, such as free platforms that cannot rely on user payments, must take additional steps to limit bots and duplicate accounts, ensuring that each user is either a human or, at the very least, a non-human customer in good standing.
As of August 2024, for example, ChatGPT's safeguards would likely prevent it from being used to register hundreds of Reddit accounts. And while some AI systems can beat CAPTCHA-style humanity checks, it would still take a concerted effort to walk one through the process of verifying an email address and opening an account.
However, the main argument presented by the group – which includes researchers from companies such as OpenAI, Microsoft and a16z Crypto, as well as academic institutions including the Harvard Society of Fellows, Oxford and MIT – is that these limitations will only hold for so long.
Within a few years, humans will probably face the reality that, unless they can look someone in the eye, there will be no way to determine whether or not they are dealing with a real person.
Anonymity
The researchers argue for a system that designates certain organizations or facilities as issuers. These issuers would employ humans to verify the personhood of individuals. Once a person is verified, the issuer grants them credentials. Presumably, the issuer would be restricted from tracking how those credentials are used. It is not clear how robust such a system would be against cyberattacks and quantum-assisted decryption.
On the other end, organizations interested in serving credentialed humans could choose to grant accounts only to individuals holding those credentials. In effect, this would limit everyone to one account per service and keep bots out.
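The paper leaves the concrete cryptography open, but one well-known way to get the properties described above (an issuer certifies a credential without being able to link it to where it is later used) is a Chaum-style blind signature. The Python sketch below is a minimal illustration of that idea under those assumptions, with toy key sizes and hypothetical function names; it is not the scheme the researchers propose.

```python
# Illustrative sketch only: a Chaum-style RSA blind signature, one possible way
# an issuer could certify a credential without being able to link it to later
# use. Demo-sized keys; not secure, and not the paper's actual scheme.
import hashlib
import math
import secrets

# --- Issuer key material (small Mersenne primes for demo purposes only) ---
p = (1 << 127) - 1                     # 2^127 - 1, prime
q = (1 << 89) - 1                      # 2^89 - 1, prime
n = p * q                              # public modulus
e = 65537                              # public exponent
d = pow(e, -1, (p - 1) * (q - 1))      # private signing exponent

def blind(token: bytes):
    """User hashes a secret token and blinds it so the issuer can't read it."""
    m = int.from_bytes(hashlib.sha256(token).digest(), "big") % n
    while True:
        r = secrets.randbelow(n - 2) + 2
        if math.gcd(r, n) == 1:        # blinding factor must be invertible mod n
            break
    return m, r, (m * pow(r, e, n)) % n

def issuer_sign(blinded: int) -> int:
    """Issuer signs after verifying humanity in person, never seeing the token."""
    return pow(blinded, d, n)

def unblind(blind_sig: int, r: int) -> int:
    """User removes the blinding factor, leaving a valid signature on m."""
    return (blind_sig * pow(r, -1, n)) % n

def service_verify(m: int, sig: int) -> bool:
    """Any service can check the credential against the issuer's public key."""
    return pow(sig, e, n) == m

# --- End-to-end flow ---
token = secrets.token_bytes(16)            # the user's secret credential token
m, r, blinded = blind(token)
sig = unblind(issuer_sign(blinded), r)     # issuance: one humanity check
print("credential accepted:", service_verify(m, sig))
```

Because the issuer only ever sees the blinded value, it cannot match a credential presented to a service back to the person it verified. Note that one-account-per-person enforcement would be a service-side policy on top of this: a service could, for instance, reject a credential value it has already seen.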
According to the paper, determining which specific pseudonymous credential scheme would be most effective is beyond the scope of the study, as is addressing the myriad problems that could arise with any such scheme. The team acknowledges these challenges, however, and issues a call to action for further research.
Related: US Financial Services Committee leaders seek ‘regulatory sandbox’ for AI