As web3 and the metaverse continue to grow, apps and services may have to contend with an influx of duplicate accounts seeking to steal user identities to defraud and deceive. Experts believe many of these malicious actors will be AI-based. But that may already be starting to change. In recent months, web3 outfit Identity Labs launched NFID, a decentralized, passwordless identity and login tool that lets users verify their identity by linking their phone number to their account. The platform uses zero-knowledge (zk) cryptography, a technology that can prove a claim about data is valid without revealing any other personally identifying information. NFID is built on Dfinity’s Internet Computer blockchain.
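The core idea behind zero-knowledge cryptography can be illustrated with a classic building block: a Schnorr proof of knowledge. The sketch below (not NFID's actual protocol, and using toy group parameters that are far too small for real use) shows a prover convincing a verifier that they know a secret value without ever transmitting it.

```python
import hashlib
import secrets

# Toy group parameters, assumed for illustration only; real systems use
# large standardized groups or elliptic curves.
p = 1019          # safe prime: p = 2q + 1
q = 509           # prime order of the subgroup generated by g
g = 4             # generator of the order-q subgroup

def challenge(y: int, t: int) -> int:
    # Fiat-Shamir heuristic: derive the challenge by hashing public values,
    # making the proof non-interactive.
    data = f"{g}:{p}:{y}:{t}".encode()
    return int.from_bytes(hashlib.sha256(data).digest(), "big") % q

def prove(x: int):
    """Prove knowledge of secret x with y = g^x mod p, without revealing x."""
    y = pow(g, x, p)
    r = secrets.randbelow(q)    # one-time random nonce
    t = pow(g, r, p)            # commitment
    c = challenge(y, t)         # challenge
    s = (r + c * x) % q         # response: blinds x with the nonce r
    return y, (t, s)

def verify(y: int, proof) -> bool:
    t, s = proof
    c = challenge(y, t)
    # Accept iff g^s == t * y^c (mod p); only someone who knows x can
    # produce a matching (t, s) pair.
    return pow(g, s, p) == (t * pow(y, c, p)) % p

secret = 123                        # the prover's secret, never transmitted
public_key, proof = prove(secret)
print(verify(public_key, proof))    # True: claim verified, secret stays private
```

The verifier learns only that the prover knows the secret behind `public_key`; the values `t` and `s` reveal nothing about `secret` itself, which is the property identity systems like NFID rely on to confirm users without exposing personal data.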
Digital identity for web3 and metaverse

According to Identity Labs founder Dan Ostrovsky, enabling what he calls unique “proof-of-humanity” may be key to eliminating AI adversaries and opportunists and guarding against the risk of fraud in web3 and the metaverse. “By leveraging zero-knowledge cryptography, biometrics, and other verification methods to confirm a user’s identity, NFID ensures that a person is who they say they are while safeguarding user privacy,” Ostrovsky told MetaNews.

He described “proof of humanity” as a concept that proves humans are who they say they are when interacting with applications in the digital realm. The idea is to prevent people, or non-humans as it were, from abusing internet systems through multiple accounts.

Digital identity is the cornerstone of web3 and the metaverse, according to Ostrovsky, as it enables trust and security in decentralized systems. In web3, digital identities will govern interactions between users and the metaverse, as well as financial transactions.

Digital identities can take two forms. The first is a digital version of an official physical ID document, such as a passport, stored in a mobile crypto wallet. The other is a credential for accessing online services such as DeFi apps, NFT marketplaces, and other web3 services. In both cases, digital identities verify the identity of the user and ensure they have the necessary permissions to access certain services or perform certain actions. But the rise of AI poses a significant threat to web3 and metaverse activities.
AI security risks

As AI becomes more advanced, it will become increasingly difficult to distinguish real identities from fake ones, according to experts. AI has the potential to undermine the security and privacy of digital identities. For example, it can be used to create deepfakes: realistic but fake images, videos, or voice recordings used to impersonate someone else. Deepfakes can be deployed to create false digital identities, which cybercriminals could leverage to commit fraud or other malicious activities. AI can also be used to analyze large amounts of data, identifying patterns and vulnerabilities in digital ID systems that hackers can exploit. To combat this threat, Ostrovsky suggests developing new technologies that can detect and prevent the use of fake identities. This could include the use of biometric data, such as facial recognition or fingerprint scanning, to verify the identity of users.
“The ubiquity of digital avatars in the coming metaverse will likely result in an uptick in fraud and phishing attacks,” he told MetaNews. This may already be common practice on social platforms like Twitter, he said, adding: “The ability to easily imitate these avatars could catch many off guard, tricking them into thinking they’re interacting with a friend when they’re actually conversing with a fraudster harvesting details to pull off social engineering scams.” Ostrovsky emphasized the importance of privacy in digital identity.
“Users need to have control over their own data and be able to decide who has access to it,” he said. This means that digital ID systems need to be designed with privacy in mind, and users should have the ability to revoke access to their data at any time.
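The revocation principle Ostrovsky describes can be sketched in a few lines. This is a minimal illustration of the idea, not NFID's implementation; the class and method names are assumptions made for the example.

```python
from dataclasses import dataclass, field

@dataclass
class IdentityWallet:
    """Toy model of a user-controlled identity: the owner decides
    which services may read their data, and can withdraw consent."""
    owner: str
    grants: set = field(default_factory=set)   # services currently allowed

    def grant_access(self, service: str) -> None:
        self.grants.add(service)

    def revoke_access(self, service: str) -> None:
        self.grants.discard(service)           # revocation takes effect immediately

    def can_access(self, service: str) -> bool:
        return service in self.grants

wallet = IdentityWallet(owner="alice")
wallet.grant_access("nft-marketplace")
print(wallet.can_access("nft-marketplace"))    # True: access granted by the user
wallet.revoke_access("nft-marketplace")
print(wallet.can_access("nft-marketplace"))    # False: access withdrawn at any time
```

The key design point is that the grant set lives with the user's wallet, not with the service, so access is consent-based and revocable rather than permanent.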