
GUEST ESSAY: Privacy risks introduced by the ‘metaverse’ — and how to combat them

As digital technologies become more immersive and more tightly integrated with our daily lives, the attacks on user privacy that accompany them grow correspondingly more intrusive.

Related: The case for regulating facial recognition

Virtual reality (VR) is well positioned to become a natural continuation of this trend. While VR devices have existed in some form since well before the internet, the true ambition of major corporations to turn these devices into massively connected social “metaverse” platforms has only recently come to light.

These platforms, by their very nature, turn every single gaze, movement, and utterance of a user into a stream of data, instantaneously broadcast to other users around the world in the name of facilitating real-time interaction. Until recently, however, the VR privacy threat remained entirely theoretical.
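For a sense of what that stream actually contains, here is a minimal sketch, in Python and with hypothetical field names, of the pose telemetry a typical VR platform broadcasts many times per second:

```python
from dataclasses import dataclass

@dataclass
class PoseFrame:
    """One tick of VR telemetry (hypothetical schema for illustration).

    A social VR platform typically streams a frame like this
    30-90 times per second for every participant in a session.
    """
    timestamp: float                        # seconds since session start
    head: tuple[float, float, float]        # headset position (x, y, z), meters
    gaze: tuple[float, float, float]        # view direction unit vector
    left_hand: tuple[float, float, float]   # controller positions
    right_hand: tuple[float, float, float]
    mic_active: bool                        # whether voice is being transmitted
```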

Berkeley RDI is a preeminent source of open-access metaverse privacy research. To test the true extent of data collection in VR, we designed a simple 30-person user study called MetaData. Users were asked to play an innocent-looking “escape room” game in VR while, in the background, machine learning scripts secretly observed their activity and tried to extract as much information about them as possible.

The game was deliberately designed to coax out more information than users would otherwise knowingly reveal, a threat unique to XR environments. In fact, most of the Montreal Cognitive Assessment (MoCA) test was hidden within the escape room.


In the end, within just a few minutes of gameplay, the adversarial program had accurately inferred over 25 personal data attributes, ranging from environmental data like room size and geolocation to anthropometric and behavioral measurements like height, wingspan, and reaction time.
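To make concrete how such measurements fall out of ordinary telemetry, here is a simplified sketch of our own, not the study’s actual code, assuming pose frames shaped like the PoseFrame example above. The headset’s standing height approximates stature, and the widest observed controller separation approximates wingspan:

```python
import numpy as np

def infer_anthropometrics(frames):
    """Estimate height and wingspan from a sequence of PoseFrame objects.

    Simplified illustration: a headset worn by a standing user sits at
    roughly eye height, and controllers held at full extension reveal
    arm span. Constants below are rough assumptions, not study values.
    """
    head_heights = np.array([f.head[1] for f in frames])          # y is up
    hand_spans = np.array([
        np.linalg.norm(np.subtract(f.left_hand, f.right_hand))
        for f in frames
    ])
    eye_height = np.percentile(head_heights, 95)   # robust "standing" height
    height = eye_height + 0.11                     # eye-to-crown offset (~11 cm)
    wingspan = float(hand_spans.max())             # arms fully outstretched
    return height, wingspan
```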

Why, one may wonder, should I care if my use of VR reveals my height or reaction time? In short, what matters is not just which attributes can be directly observed, but also what that data can in turn be used to infer.

For example, by combining height, wingspan, and voice frequency, a user’s gender is revealed with a high degree of accuracy. Likewise, the combination of vision, reaction time, and memory can reveal a user’s age to within a year. The sheer scale of data attributes available in VR makes such inferences more accurate and abundant than on any conventional platform, such as web or mobile applications.
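As a toy illustration of such attribute-chaining, with invented numbers and a generic off-the-shelf classifier rather than anything from the study, an adversary could combine inferred height, wingspan, and voice pitch into a single prediction:

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

# Toy training data: [height_m, wingspan_m, voice_f0_hz] per user.
# These rows are invented placeholders; a real adversary would train
# on a large labeled corpus.
X_train = np.array([[1.62, 1.60, 210.0],
                    [1.81, 1.85, 120.0],
                    [1.68, 1.66, 195.0],
                    [1.77, 1.80, 115.0]])
y_train = np.array([1, 0, 1, 0])          # labels for the toy rows

model = LogisticRegression().fit(X_train, y_train)

# Attributes harvested from a few minutes of VR telemetry:
observed = np.array([[1.75, 1.78, 135.0]])
print(model.predict_proba(observed))      # the adversary's confidence
```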


And instead of having to combine numerous data sources (like a smartphone, laptop, and wearable device) to build a user profile, VR constitutes a one-stop shop for all of the biometric, environmental, behavioral, and demographic data an application could ever hope to harvest.

The story is not entirely pessimistic, however. In a follow-up work called “MetaGuard,” we present a promising solution to our VR data privacy woes. Using a statistical technique called local differential privacy, we allow users to “go incognito” in the metaverse to obscure their identity and hinder tracking between sessions, just as they might on the web.

In fact, MetaGuard goes far beyond “incognito mode” on the web, protecting not just metadata but the telemetry data itself. It does so by literally warping the coordinate system of the virtual world to hinder the accuracy of adversarial measurements, while achieving a provably optimal balance between privacy and usability impact. The result: a 94.6 percent reduction in the ability to deanonymize VR users, even at the lowest supported privacy setting.
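The core idea can be sketched in a few lines. What follows is our simplified rendition, not MetaGuard’s actual implementation: it clamps Laplace noise rather than sampling a true bounded Laplace distribution, and the range constants are illustrative. The noisy value is drawn once per session on the user’s device, and the world’s vertical axis is rescaled so every measurement an adversary takes reflects the fake height:

```python
import numpy as np

def ldp_noisy_height(true_height, epsilon, lo=1.50, hi=2.00):
    """Perturb height with Laplace noise, clamped to a plausible range.

    Local differential privacy: the noise is added on the user's own
    device, so the platform only ever sees the perturbed value.
    Smaller epsilon means more noise and stronger privacy.
    """
    sensitivity = hi - lo                  # width of the attribute's range
    noisy = true_height + np.random.laplace(scale=sensitivity / epsilon)
    return float(np.clip(noisy, lo, hi))

def warp_vertical(y, true_height, noisy_height):
    """Rescale a vertical coordinate so the user appears noisy_height
    tall; every downstream measurement inherits the same distortion."""
    return y * (noisy_height / true_height)

# Once per session: pick the noisy persona, then warp every frame.
true_h = 1.83
fake_h = ldp_noisy_height(true_h, epsilon=1.0)
warped_head_y = warp_vertical(1.72, true_h, fake_h)  # headset y-coordinate
```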

MetaGuard is by no means a complete solution to privacy concerns in VR. Instead, it is a first step towards solving a dangerous technological disparity: despite posing an unprecedented degree of privacy risk, VR currently lacks even the most basic privacy tools.

We hope our work begins to shed light on the risks that lie ahead and encourages practitioners to advance research at the intersection of data privacy and VR.

About the essayist: Vivek Nair is an NSF CyberCorps Scholar, NPSC Fellow, Hertz Foundation Fellow, IC3 Researcher, and an EECS Ph.D. student researching applied cryptography at UC Berkeley. Gonzalo Munilla Garrido is a researcher at the BMW Group and CS Ph.D. student researching differential privacy at TU Munich. Nair and Garrido are members of the UC Berkeley Center for Responsible, Decentralized Intelligence, a preeminent source of open-access metaverse security and privacy research.

 (Editor’s note: This work was supported by Berkeley RDI, the NSF, the NPSC, and the Hertz Foundation. Opinions expressed in this material are exclusively those of the authors and not the supporting entities.)
