Neal Stephenson’s classic 1992 dystopian novel Snow Crash inspired today’s tech industry in many ways. Google Earth is said to have been inspired by the novel’s “Earth” software, which lets people interactively visualize the whole world. Snow Crash is also the origin of the “Metaverse,” Stephenson’s name for an immersive virtual world; Mark Zuckerberg, of course, adopted the term last year to describe his ambition to create a virtual reality platform and a new generation of computer interfaces.

Visions for the metaverse include ways for people to work, socialize, and interact using virtual reality (VR) and augmented reality (AR) technologies (collectively called extended reality, or XR). While Meta has attempted to claim the term metaverse as its own, the area has also become a research focus for Google, Apple, Microsoft, Sony, and TikTok’s parent company ByteDance, among others, all of whom have launched or announced plans to launch XR hardware or services in the near future. This next generation of devices will be more sophisticated than today’s gadgets, which means their data collection capabilities will grow as well, along with new and substantial risks to our human rights and fundamental freedoms.

Metaverse

There’s no single definitive understanding of what a “Metaverse” or “Metaverse(s)” might be. In its ambiguity, the term has become a placeholder for many things, often reflecting the interests of the person using it. Some, like Apple, have avoided the term entirely for their XR products. The major point of overlap in describing a “metaverse,” however, is the idea of additional virtual environments connected to the internet becoming important parts of our day-to-day lives in the real world.

One prominent vision of this is the fictional OASIS from Ready Player One, a virtual society where people can play massively multiplayer online games (MMOs) using VR gadgets. Another popular conception emphasizes spatial computing and AR devices, creating a shared “annotation” of virtual objects in the real world. Others, however, define the term metaverse as making the internet a reflection of the physical world to facilitate work, socializing, and commerce, perhaps supported by a metaverse “tax.” Visions of the metaverse often intersect with another ambiguous term for the next iteration of the internet: Web3.

Many of these ideas are not new in themselves; Second Life, launched nearly 20 years ago, fits many proposed definitions. The current excitement, however, comes from the belief that prior iterations of virtual worlds remained niche due to technical limitations that XR is now overcoming, rather than a lack of interest or demand from the public.

Stephenson’s Snow Crash was a cyberpunk dystopia—a warning, not a suggestion. It shouldn’t be used as a blueprint for a future internet. The Snow Crash metaverse is a heavily gated and radically commercialized world, where inequity is embedded in the system’s underlying infrastructure—it’s a world where poor people have to flicker and stagger through cyberspace in low-resolution, low-frame-rate avatars. This is not the metaverse that we want to see. It’s imperative that we fight back now to prevent systemic digital inequity from infecting these new worlds before they become fully integrated with our everyday lives.

2022 Victory

When Meta (then called Facebook) bought industry leader Oculus in 2014, VR users were concerned about being forced to use their Facebook account, featuring their real name (or, later, their “authentic identity”), in the virtual worlds they visited with their Oculus headsets. At first, Facebook promised Oculus owners that they could go on using their devices without a Facebook login. Then, shortly before the release of the Oculus Quest 2, the company changed course: Quest devices would henceforth require a Facebook account and could no longer be used pseudonymously.

EFF and others kept up the pressure on Meta over this broken promise, fighting for users’ privacy, until August, when Meta finally relented. VR users now have a path to “unlink” their devices from Facebook by creating separate Meta and Horizon accounts (Horizon is Meta’s social universe app), neither of which is subject to Facebook’s strict “real names” policies.

According to Meta CTO Andrew "Boz" Bosworth and Meta VP of Oculus Mark Rabkin, the new Meta account holds only login credentials, payment information, and other settings you want to share between Facebook, Instagram, Horizon, or WhatsApp, and there is no limit on how many of these accounts a user can create. Users can likewise create unlimited Horizon profiles (the successor to the old Oculus accounts, used for VR headsets), each with different names, avatars, and social graphs.

Meta’s VR customers made it clear: they want the ability to express themselves in ways that suit their needs, from separating their work and personal lives to using pseudonyms to ensure their safety. Meta’s reversal of its decision should be a lesson to its competitors: users will not tolerate being corralled into privacy-invading procedures, and companies that try will face revolts.

Biometric Inferences and Privacy Protections

The fight for the right to pseudonymity and privacy was just an opening skirmish: there are bigger battles coming. We have sounded the alarm over the serious privacy dangers lurking in XR. 

Headset sensors can gather new, extraordinarily invasive forms of behavioral data, especially users’ involuntary and reactive physical behaviors. Our joint statement on Human Rights on the Metaverse and our February submission to the UN Office of the High Commissioner for Human Rights explain why XR surveillance is a serious human rights matter.

The new generation of hardware, meant to be worn on the user’s body, poses serious privacy risks both to users and to bystanders. In our submission to OHCHR, we wrote:

“XR headsets are often designed with body-worn and environmental sensors which can collect unprecedented amounts of data about their user and their context. New sensors can make XR technology the frontier of more intimate forms of surveillance. These include monitoring vocal patterns, facial expressions or gazes, and when coupled with other technology like smartwatches, even heartbeats, and body temperature. Body-worn sensors can also track the unconscious responses that a user’s body makes, like eye movements, head motions, and hand gestures. This tracking can be needed for making virtual scenes feel natural, but can also reveal sensitive medical and psychological information, which some companies may choose to store on their own servers while others on the device itself.”

Data about our bodies and vitals is incredibly personal, and even the raw data can be sensitive. This goes beyond current web data collection, which tracks whether you click a link and how far you scroll; gaze tracking can note exactly which parts of an article you read and which images you look at. In XR, where nearly your entire field of view can be accounted for, this becomes a record of exactly what you interacted with and when.

The problem grows when this data is paired with questionable pseudo-science to make inferences that some claim reveal people’s beliefs, attitudes, and interests. Such inferences, driven by emerging machine learning models that purport to detect a person’s health conditions or “emotions,” can be used in ways that harm the user even when the predictions are inaccurate. Worse, these methods are applied to unconscious or involuntary physical behaviors, making it impossible for users to control them or freely consent to how these tools are used.

Eye-tracking, for example, can capture involuntary actions and reactions to stimuli, such as how often we blink or how long we look at something, and that data can feed directly into targeted advertising. This use of your information opens the door to even creepier and more invasive versions of the user categorization already used in targeted ads today. Even worse, users won’t be able to truly consent to the monetization of their own involuntary bodily responses to stimuli.
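To make the risk concrete, here is a minimal, hypothetical sketch in TypeScript of how a stream of raw gaze samples could be distilled into an “attention profile” suitable for ad targeting. All the types, names, and numbers are our own illustration, not any vendor’s actual API:

```typescript
// Hypothetical sketch: turning raw gaze samples into ad-targeting features.
// Everything here is illustrative, not a real headset SDK.

interface GazeSample {
  timestampMs: number; // when the sample was taken
  x: number;           // gaze position in normalized view coordinates (0..1)
  y: number;
}

interface ContentRegion {
  id: string;          // e.g. "ad:sports-car" or "article:paragraph-3"
  xMin: number; xMax: number;
  yMin: number; yMax: number;
}

// Accumulate how long the user's gaze dwelled on each piece of content.
function dwellTimes(samples: GazeSample[], regions: ContentRegion[]): Map<string, number> {
  const totals = new Map<string, number>();
  for (let i = 1; i < samples.length; i++) {
    const dt = samples[i].timestampMs - samples[i - 1].timestampMs;
    const { x, y } = samples[i];
    for (const r of regions) {
      if (x >= r.xMin && x <= r.xMax && y >= r.yMin && y <= r.yMax) {
        totals.set(r.id, (totals.get(r.id) ?? 0) + dt);
      }
    }
  }
  return totals;
}

// Illustrative scene layout and a short gaze log (~60 Hz samples).
const sceneRegions: ContentRegion[] = [
  { id: "ad:sports-car",     xMin: 0.0, xMax: 0.5, yMin: 0.0, yMax: 0.5 },
  { id: "article:headline",  xMin: 0.5, xMax: 1.0, yMin: 0.0, yMax: 0.5 },
];
const gazeLog: GazeSample[] = [
  { timestampMs: 0,  x: 0.20, y: 0.30 },
  { timestampMs: 16, x: 0.21, y: 0.31 },
  { timestampMs: 32, x: 0.70, y: 0.20 },
];

// A few lines like these are enough to rank a user's involuntary attention.
const profile = dwellTimes(gazeLog, sceneRegions);
const topInterest = [...profile.entries()].sort((a, b) => b[1] - a[1])[0];
console.log(topInterest); // e.g. ["ad:sports-car", 32]
```

Nothing in this pipeline requires a click or any deliberate action from the user; the “signal” being harvested is the involuntary gaze itself.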

For AR systems like smart glasses, sensors reach out into the world nearby, detecting, recording, and photographing everyone and everything in the vicinity. Without proper safeguards, users could unknowingly record conversations or videos in unethical or illegal ways. If recordings are automatically uploaded to centralized cloud servers that law enforcement could plunder, it may give the state unprecedented power to snoop on a user's private life. But it shouldn't be this way.

The time to figure out appropriate safeguards is now, before the collection of biometric, anatomical, and behavioral data and other personal information begins in earnest and goes mainstream. These are key issues for the future of XR.

The data available to XR companies is powerful and sensitive, and it is not adequately protected by existing data protection laws. In the global patchwork of privacy laws, Article 9 of the EU’s General Data Protection Regulation offers strong protections against the collection and processing of biometric data: it prohibits processing biometric data that uniquely identifies a person unless one of its narrow exceptions applies (for example, if the user gives informed and freely given consent). Even this falls short, though, because it covers only personal data resulting from a person’s physical, physiological, or behavioral characteristics (such as a face, iris scan, fingerprint, or voice) that is used to uniquely identify or single out an individual. Inferences drawn from granular data, such as eye movement or head inclination, may therefore fall outside the definition of biometric data. That said, if the data allows an inference concerning health or sexual orientation, it should still be protected under Article 9.

Our allies in European civil society are aware of these gaps, and they’ve begun the work of fixing them by seeking improvements to the European Union’s draft Artificial Intelligence Act (AI Act). EDRi and Access Now have proposed several amendments to the AI Act. Their submissions include several improvements: fixing the overly narrow definitions of emotion recognition and biometric categorization; a prohibition on using artificial intelligence for emotion recognition; and a prohibition on discriminatory forms of biometric categorization, which sorts people or groups based on data about their bodies and behaviors.

Access Now and EDRi warned that the definition of emotion recognition under Article 3(34) of the AI Act is too narrow: it is limited to biometric data and fails to encompass physiological data that does not meet the Act’s high bar of uniquely identifying a person. That means service providers could argue that such information falls outside the scope of the AI Act’s protections.

Looking Forward

In 2023, the European Union plans to launch a metaverse regulatory initiative, and the Japanese government has also signaled interest in the topic. In addition to the privacy dimension, the EU will look at competition issues. EC Internal Market Commissioner Thierry Breton wrote (emphasis added):

Private metaverses should develop based on interoperable standards, and no single private player should hold the key to the public square or set its terms and conditions. Innovators and technologies should be allowed to thrive unhindered. [...] We have also learned a lesson from this work: we will not witness a new Wild West or new private monopolies.

EFF agrees that interoperability is key to a future of privacy without monopolies, and we will advocate for this approach as these metaverse systems are developed. Interoperable standards need to go beyond sharing avatars and hats between games; they must give users meaningful control over who they trust with their data, privacy, and safety online, and the practical ability to leave a platform that doesn’t protect them.
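What could “meaningful control” look like in practice? As one purely hypothetical illustration (not a real or proposed standard), an interoperable profile format could make consent explicit, revocable, and portable, so that leaving a platform takes your data and your trust decisions with you:

```typescript
// Hypothetical sketch of a portable, user-controlled profile for an
// interoperable metaverse. An illustration of the principle, not a standard.

type DataCategory = "avatar" | "social-graph" | "gaze" | "motion" | "voice";

interface ConsentGrant {
  platform: string;            // e.g. "example-world.social" (illustrative)
  categories: DataCategory[];  // only the data this platform may access
  expiresAt?: string;          // consent can lapse instead of living forever
}

interface PortableProfile {
  pseudonym: string;           // no "real names" requirement
  avatarRef: string;           // pointer to user-hosted avatar data
  grants: ConsentGrant[];      // explicit, revocable, per-platform consent
}

// Leaving a platform means revoking its grant; the user keeps everything else.
function revoke(profile: PortableProfile, platform: string): PortableProfile {
  return { ...profile, grants: profile.grants.filter(g => g.platform !== platform) };
}

// Usage: the profile, not the platform, is the source of truth.
const me: PortableProfile = {
  pseudonym: "GlitchWitch",
  avatarRef: "https://example.net/avatars/glitchwitch",
  grants: [{ platform: "example-world.social", categories: ["avatar"] }],
};
const afterLeaving = revoke(me, "example-world.social");
```

The design point is that trust decisions live with the user and travel with them, rather than being locked inside any one company’s account system.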

Advocates will need to keep pushing for policy and product design that prioritize robust transparency, modernized statutes, and privacy-by-design engineering. If today's tech industry wants the metaverse to be an essential virtual layer of our daily lives, it must be built, starting today, in a way that respects the human rights and autonomy of all people. We can and must have extended reality with extended privacy. The future is tomorrow, so let's make it one we'd want to live in.

This article is part of our Year in Review series. Read other articles about the fight for digital rights in 2022.