
Deceived by Immersion: A Systematic Analysis of Deceptive Design in Extended Reality

Published: 14 May 2024


Abstract

The well-established deceptive design literature has focused on conventional user interfaces. With the rise of extended reality (XR), understanding deceptive design’s unique manifestations in this immersive domain is crucial. However, existing research lacks a comprehensive, cross-disciplinary analysis of how XR technologies enable new forms of deceptive design. Our study systematically reviews the literature on deceptive design in XR environments, using thematic synthesis to identify key themes. We found that XR’s immersive capabilities and extensive data collection enable subtle and powerful manipulation strategies. We identified eight themes outlining these strategies and discuss existing countermeasures. Our findings show the unique risks of deceptive design in XR, highlighting implications for researchers, designers, and policymakers. We propose future research directions that explore unintentional deceptive design, data-driven manipulation solutions, user education, and the link between ethical design and policy regulations.


1 INTRODUCTION

Extended Reality (XR), which comprises Virtual Reality (VR), Augmented Reality (AR), and Mixed Reality (MR) technologies, has generated substantial research and industry interest—especially in the games industry—since 2012 [59]. XR is currently experiencing rapid growth [2]. The literature has highlighted the potential of XR to enhance gaming and socialization [79, 81], arts and design [67], e-commerce advertisements [51], and education [57]. The rise of XR technology has prompted discussions on deceptive design (also known as “dark patterns”)—user interface design that researchers deem manipulative [6, 49]—from experts in engineering [39, 73], security and privacy [8, 21], cognitive science [20], and the humanities and social sciences [67].

The use of XR technology enables new deception opportunities that are not present in other digital environments. For example, e-commerce companies can induce artificial emotions and target vulnerable users to influence their purchasing decisions [51]. Recently, researchers from different disciplines have begun to study XR deceptive designs and propose corresponding solutions. However, a high-level research overview of deceptive design strategies in XR is lacking. More specifically, there is limited research that addresses how XR and deceptive design tactics influence each other, and what new harms XR deceptive designs bring to users. While scholarly work focusing on XR deceptive design is still in its infancy, we believe that it is critical to seed discussions and provide analytical clarity on the deceptive design strategies identified thus far. Since XR technologies are not yet widely adopted by consumers, researchers, designers, and policymakers can benefit from early constructive feedback and design recommendations aimed at reducing the potential harms of XR technologies.

As deceptive design enters XR technologies, concerns have extended to personal security [39, 73], safety [12], and social [64, 67], ethical [64, 67], and political [67] aspects, both during and after engagement with XR. XR uses various technologies that interact with multiple human senses (e.g., hearing, vision, touch) to deliver an immersive experience. Thus, deceptive design could be even more problematic in XR than in non-XR environments because users are more deeply engaged and therefore more prone to manipulation [20].

Although extensive research on deceptive design has already elucidated its harms in non-XR environments [49], its use in XR technology is still not well understood. XR manufacturers are already introducing advertisements (e.g., Meta announced experiments with in-headset VR ads [50]), and other deceptive design tactics will likely follow. Thus, reviewing current knowledge on XR deceptive design and providing a guideline for future research is an urgent necessity. Delaying research until XR deceptive design is a more developed field would expose users to risks that have not yet been studied. In this work, we synthesize the XR deceptive design strategies identified in previous work to derive insights for researching and designing countermeasures and to develop new ways for the safe and ethical use of XR technologies.

In our literature analysis, we began by asking how previous works defined deceptive design in their research. Despite the growing focus on XR, there is little literature on deceptive design in this context. This gap indicates an urgent need to understand how deceptive design manifests in XR, because the technology’s unique traits affect how users can be manipulated. Thus, to build a strong foundation, our RQ1 asks, How has the existing literature defined deceptive design in the context of XR? Deceptive design found on websites, games, and mobile apps often relies on interface design elements (e.g., a countdown timer) [7, 23]. Research indicates that XR’s immersive capabilities [86], including multi-sensory feedback [8, 51], have the potential to modify users’ choice architecture [68]. Therefore, RQ2 asks, How can XR amplify the effects of deceptive design? XR has unique potential for manipulation through sensory feedback, so it is vital to investigate whether new deceptive designs that do not rely on visual elements might emerge in this domain. Thus, RQ3 asks, What deceptive design strategies can be present in XR? With the possibility of both amplified and novel deceptive design tactics in XR, we must understand the potential risks this poses to users. Therefore, RQ4 asks, What risks can XR deceptive design pose to users?

To answer our research questions, our methodology involves a systematic analysis of the literature using thematic synthesis. We considered 187 candidate articles from a search in four bibliographic databases spanning various research disciplines. Following the rigorous and systematic process of Page et al. [61], we screened the abstracts and full papers, eliminating 31 duplicates and 143 articles that solely discussed deceptive design or XR. Our final sample comprised 13 articles that discussed the application of deceptive design in XR technologies.

We make the following contributions. First, we provide an analysis of how deceptive design is defined within XR research, focusing on distinguishing nudging from persuasion. This promotes conceptual clarity and shows where future deceptive design research should focus, particularly on unintentional user decisions. Second, we show how XR’s immersive qualities can be uniquely exploited for deception, providing concrete examples of how manipulation may differ from other platforms. Third, we present a categorization of deceptive design strategies found in XR through our thematic analysis. Fourth, we analyze the interplay between deceptive design and existing XR risks, revealing how they mutually exacerbate potential user harms. Fifth and finally, we synthesize prevention techniques from XR studies and provide actionable recommendations for XR designers, policymakers, and educators to mitigate deceptive design risks. Through this work, we propose future research directions: studying XR deceptive design arising from unintentional design decisions and the difference between manipulative (e.g., tricking) and benevolent (e.g., nudging) design strategies, understanding how XR poses risks to users, developing more transparent XR data use practices, and creating educational strategies on deceptive design practices in XR.


2 THEORETICAL BACKGROUND AND MOTIVATION

Deceptive design (also known as “dark patterns”) gained attention in 2010 when Brignull [6] first introduced the concept. Deceptive design generally describes design patterns that distort or impair users’ decision-making ability, making them engage in undesired behaviors or make choices they would not otherwise make [7, 18, 49]. Researchers have focused on its applications in various fields, including websites, mobile apps, games, and gamification [15, 40, 88]. However, its investigation in XR environments has been limited. XR encompasses VR, AR, and MR [51]. VR lets users interact with virtual objects by providing visual and auditory feedback in a fully immersive virtual environment [8, 64, 73, 86]. By creating a virtual world that replaces reality, VR blocks users’ perception of the real world [51]. Unlike VR, AR overlays virtual content onto the physical world to enhance the environment rather than completely obscuring it [8, 38]. Compared to VR and AR, the conception of MR is varied [71]. In this work, we adopt the most widely used MR explanation from Speicher et al. [71] and the XR Safety Initiative—that is, MR offers hybrid reality, an environment in which physical and virtual settings coexist and interact in real time. VR/AR/MR technologies such as headsets and controllers (e.g., Oculus Rift and Microsoft HoloLens), smart glasses (e.g., Magic Leap), handheld devices (e.g., smartphones), and virtual projectors are all considered XR [86]. The technology creates the potential for new forms of manipulation that have not been investigated in detail in the literature.

2.1 Harms of Deceptive Design in Non-XR and XR Environments

Research has widely documented the concerns raised by deceptive design in various digital contexts [15, 26, 40, 88]. Deceptive design undermines user autonomy when making decisions [49, 75], potentially causes users to suffer financial loss, such as paying for things they did not mean to [6, 88], or limits their ability to make informed buying decisions [4, 6]. Users can also experience privacy risks from bad defaults, hard-to-access privacy-respecting options, or designs that take advantage of their fear and drive them away from privacy-respecting options [6, 24, 49, 77]. The data collected from devices can also become an exploitable source used to the detriment of users [27]. Moreover, some deceptive designs create cognitive burdens, such as requiring users to spend extra time, energy, and attention to obstruct them from selecting their desired choices (e.g., hard-to-cancel subscriptions [6] and cookie consent with hard-to-find deny options [26, 70]). Users’ susceptibility to deceptive design is influenced by factors such as the frequency of its occurrence, its misleading behavior and UI appearance, and users’ perceived trustworthiness and level of frustration [43]. Even when users are aware of the impact of deceptive design, their uncertainty about the actual harm discourages them from taking self-protective actions [4]. In addition, users’ dependency on services can lead to a resigned attitude, making it hard for them to avoid deceptive designs [45].

XR’s sophisticated tracking capabilities create unprecedented risks for amplifying the harms of deceptive design [8, 51]. XR sensors track minute user movements, generating rich visual, audio, and haptic feedback [8]. These data empower inferences about users’ physical and mental states [54, 58], habitual patterns [46, 54, 62], and even cognitive, emotional, and personal vulnerabilities [8, 51, 58, 60]. Users often reveal biometric and demographic data to achieve optimal XR functionality [47]. Bad actors can weaponize this sensitive information for deanonymization and targeted manipulation [29, 54, 62]. Manufacturers and third-party companies could potentially exploit these deceptive design tactics for profit [17], altering users’ behaviors, emotions, and decision-making processes [8, 51, 58]. Despite these dangers, our understanding of how deceptive design manifests within this unique context remains limited. Additionally, users often lack awareness of XR data collection practices and their potential misuse [30, 60], hindering their ability to take self-protective measures [58]. Given XR’s rapid adoption in diverse spheres of life [12], research must urgently examine XR deceptive design beyond e-commerce and advertising. We must critically examine its implications for social interactions, political manipulation, and other potentially harmful applications.

2.2 Expanding Deceptive Design Taxonomies and Characteristics

Taxonomies of deceptive design are extensive and have grown over the years. Building upon Harry Brignull’s 2010 effort, Zagal et al. [88] outlined four major categories of deceptive game design in 20,000 mobile games. In 2014, Greenberg et al. [27] analyzed deceptive design that exploits proxemic interactions to the detriment of users. In their 2018 study, Gray et al. [23] further enriched Brignull’s 2010 classifications into a taxonomy with five major categories. In 2019, Mathur et al. [48] conducted a large-scale analysis of shopping websites and identified seven categories of deceptive design. In the same year, Fitton and Read [19] and Karlsen [33] investigated deceptive design in mobile apps and web-based games, drawing inspiration from Zagal et al. [88]. Based on the taxonomy of Gray et al. [23], Di Geronimo et al. [13] studied user perception of deceptive design in mobile games and apps, and Gray et al. [26] discussed deceptive design in cookie consent banners. In 2023, Brignull [7] expanded the classification of Mathur et al. [48] into eight deceptive strategies and seven deceptive design types. Building upon this, Mildner et al. [52, 53] examined deceptive design presented on social media platforms. Furthermore, King et al. [34] investigated players’ perception of deceptive design in 3D interfaces, and the literature review of Monge Roffarello et al. [55] identified 11 types of deceptive design that are capable of capturing user attention.

Combining the diverse deceptive design taxonomies identified in previous research, Mathur et al. [49] summarized six deceptive design attributes: (1) asymmetric designs that impose unequal burdens on the choices available to users (e.g., choices that benefit the service are featured prominently), (2) covert designs that push users toward certain options using mechanisms that users cannot recognize, (3) deceptive designs that induce false beliefs in users through misleading statements and intentional omissions, (4) information hiding designs that obscure or delay the presentation of information that is necessary for decision making, (5) restrictive designs that reduce or eliminate the choices available to users, and (6) disparate treatment designs that treat a particular group of users differently from others (e.g., providing additional resources after payment [88]). All of these attributes modify users’ choice architectures (i.e., the decision-making space [76]) and effectively manipulate users [1, 76]. Gray et al. [25] further synthesized the deceptive design taxonomies in academic literature and regulatory frameworks (e.g., [7, 18, 23, 42, 48]) and developed a domain-agnostic ontology with six high-level deceptive patterns: nagging, obstruction, sneaking, interface interference, forced action, and social engineering [25].

With the existing taxonomies of deceptive design in mind, our research aims to identify the deceptive mechanisms and characteristics that are prevalent in XR environments so that similarities and differences to other platforms can be identified and studied.


3 METHODOLOGY

3.1 Systematic Analysis of the Literature

Our systematic analysis methodology for database searching, screening, and extraction was based on PRISMA [61], whose clear guidelines facilitate literature review and meta-analysis processes. We started with 187 initial publication records and arrived at a final sample of 13 papers (Figure 1). To answer our research questions, we then synthesized the extracted data using thematic synthesis [80]. This section provides the details of our systematic analysis. We present our thematic synthesis approach in Section 3.2.

3.1.1 Databases, Search Queries, and Duplicate Removal.

We began with the development of the search protocol, where we defined the search query and databases. To ensure the inclusiveness of our search, we targeted four bibliographic databases: Scopus, the ACM Guide to Computing Literature, IEEE Xplore, and JSTOR. Both the ACM Guide to Computing Literature and IEEE Xplore offer a strong focus on technology- and engineering-related publications, Scopus covers multi-disciplinary publications, and JSTOR provides access to humanities and social sciences journals. Given that XR is a cluster of technological innovations and deceptive design takes advantage of human weaknesses, we decided to include all four databases in our initial search. Together, the search results from all four databases offered a good balance of depth and breadth for our review.

In addition, we defined a set of search terms using keywords that frequently appeared in deceptive design definitions (as described in Section 2.2), as well as terms frequently used to describe XR technology. We conducted multiple iterations of searches with these keywords to test and refine the combinations of search terms and to ensure that the research outcomes fit our scope. Table 1 presents the final search query for individual databases.

Table 1. Final Search Queries in the Syntax of Each Database

| Database | Search Query |
| --- | --- |
| The ACM Guide to Computing Literature* | Abstract: [(“dark patterns” OR “dark pattern” OR “deceive” OR “deceptive” OR “manipulative” OR “abusive”) AND (“extended reality” OR “virtual reality” OR “augmented reality” OR “mixed reality”)] |
| Scopus | [ABS** (“dark pattern”) OR ABS (“dark patterns”) OR ABS (“deceive”) OR ABS (“deceptive”) OR ABS (“manipulative”) OR ABS (“abusive”)] AND [ABS (“extended reality”) OR ABS (“virtual reality”) OR ABS (“augmented reality”) OR ABS (“mixed reality”)] AND [LIMIT-TO (DOCTYPE, “cp”) OR LIMIT-TO (DOCTYPE, “ar”) OR LIMIT-TO (DOCTYPE, “re”) OR LIMIT-TO (DOCTYPE, “ch”)] AND [LIMIT-TO (LANGUAGE, “English”)] |
| IEEE Xplore* | [(“Abstract”:“dark pattern”) OR (“Abstract”:“dark patterns”) OR (“Abstract”:“deceive”) OR (“Abstract”:“deceptive”) OR (“Abstract”:“manipulative”) OR (“Abstract”:“abusive”)] AND [(“Abstract”:“extended reality”) OR (“Abstract”:“virtual reality”) OR (“Abstract”:“augmented reality”) OR (“Abstract”:“mixed reality”)] |
| JSTOR* | [(“dark pattern”) OR (“dark patterns”) OR (“deceive”) OR (“deceptive”) OR (“manipulative”) OR (“abusive”)] AND [(“extended reality”) OR (“virtual reality”) OR (“augmented reality”) OR (“mixed reality”)] AND la:(eng OR en) |
  • *To ensure the consistency of research queries across the four databases, the corresponding filters for research article, extended abstract, short paper, book chapter, conference paper, book review, or journal paper, and English language were applied through interface features.

  • **“ABS” is the search syntax for abstract search in the Scopus library.

  • Note: The filters we set up for all databases were kept as consistent as possible. The search was conducted twice over each database on September 23 and 27, 2022.


Our search was based on “Abstracts” instead of “Full Text,” given that Scopus and JSTOR do not allow “Full Text” searches; to keep our search query as consistent as possible, the “Abstract” search was applied to all four databases. Further, our search terms excluded nouns such as “manipulation” and “deception” because these terms were frequently used to describe experimental manipulations in controlled experiments, skills in surgical procedures (e.g., laparoscopic manipulation skill), and brain activities in neuroscience (e.g., brain activity in deception and truth telling). Moreover, acronyms of XR technologies and terms such as “trick,” “steer,” “mislead,” and “subvert preferences” were excluded, since these were frequently used in other disciplines to represent irrelevant topics. Our final search queries include terms that describe XR technology (“extended reality,” “virtual reality,” “augmented reality,” and “mixed reality”) and terms that describe deceptive design (“dark pattern,” “dark patterns,” “deceive,” “deceptive,” “manipulative,” and “abusive”). We include the detailed search query for each database in Table 1.

Finally, we gathered a total of 187 publications: 181 articles from the bibliographic databases and 6 workshop papers. We downloaded the titles and abstracts of the 187 publications into a spreadsheet, manually inspected each, and excluded 31 duplicates with identical titles and DOIs. We thus arrived at \(n=156\) unique records for the next phase.
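For illustration, this deduplication step can be reproduced with a few lines of code. The following is a minimal sketch, assuming the exported records live in a hypothetical search_results.csv with title and doi columns (the inspection in our process was manual):

```python
import pandas as pd

# Hypothetical export of the 187 downloaded records; the file and column
# names are illustrative, not part of the original protocol.
records = pd.read_csv("search_results.csv")  # columns: title, doi, source

# Normalize titles and DOIs so formatting differences across databases
# do not mask true duplicates.
records["title_norm"] = records["title"].str.lower().str.strip()
records["doi_norm"] = records["doi"].str.lower().str.strip()

# Keep the first occurrence of each (title, DOI) pair; in our sample,
# this step reduced 187 records to 156.
unique = records.drop_duplicates(subset=["title_norm", "doi_norm"])
print(f"{len(records)} records, {len(records) - len(unique)} duplicates removed, "
      f"{len(unique)} unique records retained")
```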

3.1.2 Screening and Eligibility Scoping.

With three coders, we first screened the 156 publications’ titles and abstracts based on the exclusion criteria below. Each publication was screened by two coders; in cases of disagreement, the publication (4 publications in total) was passed on to the third coder, who acted as a tie-breaker. The exclusion criteria were as follows:

The paper is not about VR/AR/MR technologies or XR technologies as a whole.

The paper is not about deceptive design or any design that manipulates users.

The paper is not about the application of deceptive design in XR technologies.

The paper is not a “Research Article,” “Extended Abstract,” “Short Paper,” “Book Chapter,” “Conference Paper,” “Book Review,” or “Journal Paper.”

The paper is not in English.

The full text of the paper is not retrievable.

A total of 18 publications were retained after the screening process. We then conducted a full-text screening of each, following the same exclusion criteria. Two coders screened the publications individually, and the third coder acted as a tie-breaker. We excluded 5 publications that only mentioned XR deceptive design as an example in the abstract but did not closely discuss the details in the paper. We retained \(n=13\) records after the screening process. The final sample includes 7 records from the four bibliographic databases and 6 records from the CHI workshops. We then conducted a thematic synthesis on these 13 publications.

3.2 Thematic Synthesis Approach

We used thematic synthesis [80], a rigorous method grounded in thematic analysis [9] for identifying and developing “themes” from qualitative data in the literature. Our process involved four stages: (1) data extraction, (2) coding text, (3) developing descriptive themes, and (4) developing analytical themes [80]:

(1) The first step of our thematic synthesis process was data extraction. We downloaded the PDFs and extracted the content of the 13 publications into text files using Adobe Acrobat. These 13 text files were then uploaded to Dovetail for our thematic synthesis.

(2) Two coders independently conducted line-by-line coding of all publications using an inductive approach [9]. During the first round of coding, the two coders independently coded the first 2 publications. A third coder resolved disagreements and facilitated consensus building. By repeating this process, the two coders each coded the remaining 11 publications.

(3) Initial codes were discussed in weekly meetings, leading to a collaboratively refined codebook. After 4 weeks, we finalized a codebook with 114 codes. Figure 2 presents an illustrative example of our line-by-line coding process.

(4) We grouped codes into descriptive themes that closely summarized the content of the 13 publications. Through affinity mapping, discussion, and iterative refinement, we developed eight analytical themes that directly address our research questions. Table 3 maps these final themes to each publication.

Fig. 1. This PRISMA flow diagram [61] presents all phases of our systematic analysis of the literature, from the identification of articles to the final articles we included.

Fig. 2. Example of our line-by-line coding process on a snippet from Bonnail et al. [5]. From left to right: the original snippet and the respective thematic coding. The colors of the codes correspond to the respective themes shown in Table 3.

Table 2. Overview of the Selected Publications by Year

Columns give counts per database (ACM, IEEE, Scopus, Other) and publication type (Conference, Journal, Workshop), averages for the metrics (Pages, Authors, Citations), and counts per contribution type (Empirical, Artifact, Theory, Literature, Argument).

| Year | Total | ACM* | IEEE* | Scopus | Other** | Conference | Journal | Workshop | Pages | Authors | Citations | Empirical | Artifact | Theory | Literature | Argument |
| --- | --- | --- | --- | --- | --- | --- | --- | --- | --- | --- | --- | --- | --- | --- | --- | --- |
| 2012 | 1 | 1 | 0 | 1 | 0 | 0 | 1 | 0 | 16 | 13 | 7 | 0 | 1 | 0 | 0 | 0 |
| 2018 | 1 | 0 | 1 | 1 | 0 | 1 | 0 | 0 | 17 | 4 | 85 | 1 | 0 | 0 | 0 | 0 |
| 2020 | 1 | 1 | 0 | 1 | 0 | 1 | 0 | 0 | 15 | 4 | 1 | 0 | 1 | 0 | 0 | 0 |
| 2021 | 4 | 2 | 0 | 4 | 0 | 2 | 2 | 0 | 17.5 | 3.3 | 7.5 | 0 | 2 | 1 | 1 | 0 |
| 2022 | 6 | 0 | 0 | 0 | 6 | 1 | 0 | 5 | 3.8 | 2.7 | 0.5 | 0 | 1 | 0 | 0 | 5 |
| All | 13 | 4 | 1 | 7 | 6 | 5 | 3 | 5 | 10.8 | 3.8 | 9.7 | 1 | 5 | 1 | 1 | 5 |
  • *Selected publications from IEEE Xplore and ACM Digital Library also appeared in Scopus. We removed the duplicates but labeled these publications as from multiple sources.

  • **“Other” includes publications from the 1st Workshop on Novel Challenges of Safety, Security, and Privacy in Extended Reality.

  • Note: Publications for the year 2022 are only included until May 2022. Aggregated values are counts for the Database and Publication Type columns, and averages for the Metrics columns. Citation numbers were retrieved from Google Scholar on November 25, 2022.


Table 3. Overview of Our Eight Themes (e.g., T2) and 15 Sub-Themes (e.g., T2.1), with the Number of the 13 Selected Publications in Which Each Was Mentioned

The 13 publications, in order of year: Nijholt et al. [57] (2012); Lebeck et al. [38] (2018); Torstensson et al. [81] (2020); Schlembach and Clewer [67], Lee et al. [39], Mhaidli and Schaub [51], and Ramirez et al. [64] (2021); Krauß [36], Cummings and Shore [12], Buck and McDonnell [8], Bonnail et al. [5], Su et al. [73], and Franklin [20] (2022).

| Group | Theme (T)/Sub-theme | Publications (count) |
| --- | --- | --- |
| XR Effects | T1: Adverse effects of user experience | 10 |
| | T2: Exacerbated user manipulation: | |
| | T2.1: the illusion of objectivity in XR experience | 1 |
| | T2.2: obscuring reality, disguising risks | 3 |
| | T2.3: data fuels privacy risks and manipulation | 5 |
| | T2.4: undesired access, undesired data use | 4 |
| | T2.5: insecurity worsens user manipulation and privacy concerns | 3 |
| | T2.6: persistent exposure to manipulation | 2 |
| Strategies | T3: Psychological manipulation: | |
| | T3.1: false memory implantation | 2 |
| | T3.2: artificial prosthetic memory and empathy-based manipulation | 3 |
| | T3.3: hyperpersonalization | 2 |
| | T4: Reality distortion | 6 |
| | T5: User perception tricking: | |
| | T5.1: blurry boundary between virtual and reality | 2 |
| | T5.2: perception hacking | 2 |
| Risks | T6: Privacy and security risks | 7 |
| | T7: Changes in views, beliefs, morals, and politics | 6 |
| | T8: Manipulation prevention techniques: | |
| | T8.1: prevent false memories and empathy-based manipulation | 2 |
| | T8.2: security and privacy recommendations | 5 |
| | T8.3: call for research efforts | 3 |
| | T8.4: improve user literacy | 3 |

We present our results in the following sections.


4 RESULTS

The first paper relating to XR deceptive design was published in November 2012, after which there was a 5-year gap before the second publication in 2018. Since 2020, the number of publications has increased, reaching 6 publications by May 2022 (the cutoff for our sample). This trend reflects the growth in the development and use of XR technology and the increasing research attention on deceptive design since 2010. Among the 13 publications, our Scopus search results included 4 publications that also appeared in the ACM Digital Library and 1 that also appeared in IEEE Xplore. In total, 5 publications contributed an artifact, including functional systems, prototypes, or hypothetical scenarios of deceptive XR interfaces. Five workshop short abstracts contributed an argument discussing the significance of deceptive design problems in XR. Only a few publications made empirical, theoretical, or literature review contributions. Most publications appeared at conferences or workshops (10 out of 13), likely because technology innovation moves quickly. We present a summary of the databases, types of publications, paper metrics, and types of contributions in Table 2.

4.1 RQ1: Overview of Deceptive Design Definitions in XR Research

Overall, existing studies on deceptive design in XR largely leverage strategies and patterns identified in previous studies of web, PC, and mobile app contexts but extend them by exploring novel implementations enabled by the immersive nature of XR environments. As in non-XR deceptive design definitions, the literature on deceptive design in XR often distinguished “deception” from “nudging” and “persuasion.” For example, in their paper on manipulative XR advertising, Mhaidli and Schaub [51] distinguished persuasion from manipulation by whether it serves user interests: they defined “persuasion” as users making product purchases after examining and debating advertisers’ information, whereas “deception” was defined as advertisers manipulating consumers into doing things they do not want to do or otherwise would not do. Both deception and nudging change user decisions. However, “nudging” affects the decision-making context to influence people’s decisions without changing their preferences; it lets people “go their own way” [64, p. 530], whereas deception misleads and manipulates users.

Previous literature has emphasized the role of designer intent in orchestrating deceptive design [6, 23]. Three papers (i.e., Cummings and Shore [12], Krauß [36], and Mhaidli and Schaub [51]) described deceptive design based on definitions from previous studies. For example, Krauß [36] adopted the early definition of deceptive design from Brignull [6] used for websites and apps. Similarly, Cummings and Shore [12] adopted the deceptive design definition from Gray et al. [23], which is broader and UX-practitioner focused. We also observed several papers discussing the benevolent use of deceptive strategies in XR, such as the game by Torstensson et al. [81] that teaches children about online risks by incorporating deceptive game components that lure participants into sharing personal information. In VR sports training, virtual characters were constructed with deceptive body actions (e.g., fake body movements that hide the final running direction), which increased players’ capacity to recognize or replicate these moves against actual opponents. In the same study, Torstensson et al. [81] discussed deceitful behaviors and dialogues of virtual characters for social training, which mimic real-world negotiation tactics with uncooperative opponents. Overall, we did not identify any publications focused on redefining deceptive design based on use cases from XR environments. In recent deceptive design scholarship, researchers have concluded that user manipulation may not always be intentional [7, 18, 24]. However, only two papers briefly hypothesized the possible occurrence of deceptive design from non-manipulative design decisions [36, 64]. In our work, we focus on analyzing deceptive design characteristics instead of the intention of the XR designer.

Our systematic analysis of this literature derived eight main themes and 15 sub-themes summarized in Table 3 that describe unique deceptive design strategies in XR. In reporting the results that follow, the high-level themes in the headings correspond to the themes (T1–T8) summarized in Table 3. The sub-themes related to T2, T3, T5, and T8 are labeled in line (e.g., T2.1, T2.2).

4.2 RQ2: How Could XR Amplify the Effects of Deceptive Design?

In our analysis, 10 out of 13 publications discussed XR’s ability to create an immersive, emotionally engaging, and interactive experience that can be used to amplify user manipulation.

4.2.1 T1: Adverse Effects of User Experience.

We aggregated from 10 publications four ways in which XR technologies improve user experience: (1) product previewing, (2) experience previewing, (3) memory reminiscence, and (4) realistic experience and sensations. However, these publications simultaneously raise concerns about the potential for user deception under the guise of improving user experience. Among these publications, there was strong agreement that XR technologies can enhance user experience with immersive, interactive, and photorealistic virtual environments. For instance, in their study on XR advertising, Mhaidli and Schaub [51] mentioned the IKEA Place AR-based app that lets customers digitally sample furniture in their living area to better comprehend its size, design, and function. Similarly, before booking a hotel room, clients can join a virtual tour to “sample” the experience [51]. These preview features, while seemingly convenient, could present an idealized version of reality, potentially leading to uninformed purchases and disappointment when encountering the actual product [51]. In addition, the hypothesis of Bonnail et al. [5] and the analysis by Schlembach and Clewer [67] of a VR documentary suggest that XR has the potential to enhance personal and historical memory reminiscence, as it provides “realistic” reconstructions of past events in 3D, and Krauß [36] described a VR documentary (“Meeting You”) that enables people to spend time with deceased friends and family “as if they were still alive” (p. 1). This also raises ethical concerns regarding the difficulty of distinguishing between real and false memories, and the potential for impersonation and psychological manipulation [67]. Several papers noted that XR technology can produce stereoscopic images, music, and convincing haptic feedback to make people feel psychologically present in a virtual environment [11, 64, 84]. Beyond presence, Ramirez et al. [64] further noted in their theoretical argument paper that carefully designed VR simulations can generate “virtually real experiences,” where users engage with the virtual experiences as if they were real. This effect, achieved through a combination of high context-realism (i.e., the degree to which the rules, settings, and appearance of a simulated world respond to users as if they were real) and perspectival fidelity (i.e., a simulation that contributes to generating a user’s viewpoint), may further blur the line between the virtual and the real and enables XR to be used for manipulation purposes [64].

4.2.2 T2: XR Exacerbates User Manipulation.

Nine publications discussed how XR features increase user manipulation. These publications consistently argue that XR-based deceptive designs are more convincing and manipulative compared to non-XR deceptive designs [5, 20, 38, 51]. From our literature analysis, we identified six such XR features that can contribute to user manipulation.

T2.1: The Illusion of Objectivity in XR Experience. To maximize the level of immersion, many XR games and applications promise a first-person experience that allows users to “see” the virtual world from the perspective of a certain individual, animal, or entity. An example is the 6x9 VR game that simulates a prisoner’s first-person experience of solitary confinement [28]. However, Ramirez et al. [64] argued that XR can never produce an objective first-person experience because simulations always suffer from semantic variance [68] and structural intersectionality [10]. The potential for creators to inject their own perspectives into the content raises ethical concerns around XR simulations that claim to deliver a first-person experience [64].

T2.2: Obscuring Reality, Disguising Risks. As mentioned in Section 2, XR has the ability to obscure physical objects and substitute them with a virtual environment. In a multi-user scenario, Buck and McDonnell [8] discussed the potential for one user to mislead another about their physical surroundings. This concern was echoed by Lebeck et al. [38], whose AR user interviews illustrated users’ concern that XR allows dangerous physical items (e.g., a gun) to be hidden behind virtual objects, causing harm to people using XR. Moreover, Mhaidli and Schaub [51] noted that in XR, users cannot simply look away as they can with a 2D screen, especially when they are unaware of the threats.

T2.3: Data Fuels Privacy Risks and Manipulation. Several publications emphasized that XR technologies process large amounts of data to support the user experience, such as data describing users’ body and eye movements [8, 38, 39], gestures [39, 51], physical appearance [8, 51], behavioral patterns (e.g., gait, mannerisms) [8, 38, 51], and physical surroundings [8, 38, 51]. The user experiments conducted by Miller et al. [54] and Pfeuffer et al. [62] revealed that XR users can be deanonymized based on their behavioral patterns, a type of data that is commonly considered non-private. These data can be a valuable resource for predicting user preferences and vulnerabilities, which enables the development of tailored manipulation strategies [51].
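To make concrete how such behavioral traces could enable deanonymization, the sketch below trains a standard classifier to re-identify users from summary statistics of synthetic head-motion data. It is a hypothetical illustration of the general approach, not the pipeline of Miller et al. [54] or Pfeuffer et al. [62]; the features, user count, and motion model are all invented:

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)

def motion_features(trace: np.ndarray) -> np.ndarray:
    """Summarize a (time, 3) head-position trace into simple statistics."""
    velocity = np.diff(trace, axis=0)
    return np.concatenate([trace.mean(axis=0), trace.std(axis=0),
                           np.abs(velocity).mean(axis=0)])

# Synthetic stand-in data: 20 users, 30 sessions each. Each user gets a
# habitual "style" offset, mimicking idiosyncratic posture and movement.
X, y = [], []
for user in range(20):
    style = rng.normal(size=3)
    for _ in range(30):
        trace = style + 0.01 * rng.normal(size=(500, 3)).cumsum(axis=0)
        X.append(motion_features(trace))
        y.append(user)

X_train, X_test, y_train, y_test = train_test_split(
    np.array(X), np.array(y), test_size=0.3, random_state=0, stratify=y)
clf = RandomForestClassifier(n_estimators=100, random_state=0).fit(X_train, y_train)
print(f"Re-identification accuracy: {clf.score(X_test, y_test):.2f}")
```

Even this crude model typically separates the synthetic users well, which illustrates why motion traces should be treated as identifying information rather than harmless telemetry.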

T2.4: Undesired Access, Undesired Data Use. Beyond single-user environments, three studies focused on multi-user interaction and application data misuse. For instance, the user studies conducted by Lebeck et al. [38] documented cases where users modify each other’s choice architectures by manipulating (e.g., hiding) each other’s virtual objects. Moreover, Lebeck et al. [38] revealed several problematic multi-user behaviors, such as users placing inappropriate virtual objects in the shared space or onto others’ avatars, or damaging others’ virtual objects without permission. Regarding information misuse, Lebeck et al. [38] identified users’ concerns about accidentally disclosing private information without understanding that others can see it, and bystanders’ concerns about being unwilling participants in others’ XR experiences. Buck and McDonnell [8] further warned that these data might be abused by XR applications or users, causing unexpected privacy risks and social consequences. This theme differs from the previous one in that the user is neither actively involved in nor aware of the data collection process.

T2.5: Insecurity Worsens User Manipulation and Privacy Concerns. Several papers highlighted that current XR systems lack security controls that usually exist in mobile, PC, or web environments, leaving users vulnerable to manipulation and data privacy threats. For instance, due to the absence of the same-origin policy in WebVR [16], Lee et al. [39] prototyped a defense system to prevent attackers from injecting ads or altering content on VR webpages. Moreover, the experiment of Su et al. [73] demonstrated cases where attackers could steal user information through side-channel attacks on VR hardware, and Buck and McDonnell [8] expressed concerns about the migration of Internet scams into the metaverse.

T2.6: Persistent Exposure to Manipulation. Two papers further expressed concerns about user manipulation arising from a constant immersive experience [12, 51]. As companies are now developing daily-use XR equipment (e.g., the Mojo smart contact lens), Mhaidli and Schaub [51] hypothesized that future users will wear AR devices constantly throughout their daily lives. As a result, Cummings and Shore [12] and Mhaidli and Schaub [51] warned that users will experience constant data collection and surveillance, and their exposure to targeted advertising and XR-based manipulation will also become persistent.

We argue that Theme 2 reveals XR’s potential to enable deceptive design across all six attributes [48, 49]. The illusion of objectivity can be employed to induce false beliefs through users’ perceived objectivity of their experiences. XR’s capacity to obscure reality may facilitate “information hiding.” User data from XR sensors enables “covert” and “asymmetric” deceptive design that exploits users’ cognitive biases and preferences. The insecurity of XR systems allows for all types of deceptive design that exploit XR design elements and users’ false beliefs in the authenticity of XR content.

4.3 RQ3: What Deceptive Design Strategies Can Be Present in XR?

Ten publications detailed XR deceptive design strategies, but none offered a taxonomy. As presented in Table 3, we classified those deceptive design strategies into three themes: (1) psychological manipulation, (2) reality distortion, and (3) tricking user perception. In this section, we introduce each theme and how the manipulation is enhanced by XR features.

4.3.1 T3: Psychological Manipulation.

Eight publications discussed psychological manipulation through XR technologies [5, 51, 67], which we detail in this section.

T3.1: False Memory Implantation. A central feature of XR technology is immersion, where users are fully engaged and absorbed within a virtual environment [86]. Bonnail et al. [5] and Schlembach and Clewer [67] emphasized the memory manipulation risks posed by the immersive experience. As detailed in Theme 1 (Section 4.2.1), XR is capable of supporting memory reminiscence with “realistic” multi-dimensional reconstructions. However, Bonnail et al. [5] described the theory behind human memory flaws and argued that distorted XR reconstructions can lead to false memories. Specifically, humans suffer from source confusion and thus may confuse where their memories originated [69]; for example, a conversational rumor may be mistaken for TV news [5, 69]. Based on a UK VR advertising company’s “memory relive” service (Momento), Bonnail et al. [5] hypothesized a scenario of a VR wedding reconstruction in which the VR service provider alters the champagne and wedding dress brands, and customers falsely believe that they enjoyed this champagne during their wedding and buy from the brand again.

T3.2: Artificial Prosthetic Memory and Empathy-Based Manipulation. Drawing on the literature on memory, Schlembach and Clewer [67] mentioned that VR can be used to create prosthetic memories of historical events. Prosthetic memory is a public cultural memory that people acquire from movies and television [37]. It fosters a personal connection to others’ historical experiences and can create empathy, social responsibility, and political alliances across race, class, and gender [37]. For example, people who had no personal connection to the 9/11 victims could become vicariously traumatized after watching broadcast media and films that exposed them to the victims’ suffering [32]. The humanities and social sciences literature has suggested that VR offers a route to empathy because it allows people to “walk in another’s shoes” and “see the world through another’s eyes” [66]. Through the examination of two VR films, Schlembach and Clewer [67] warned against “empathy generation” through VR technology: if people can be “made to ‘feel’ something they will be changed by it and so will their behaviour.” They illustrated this argument through the example of a VR game (6x9) that simulates the psychological effects of extreme isolation resulting from solitary confinement, including distorted vision, hallucinations, floating, and screaming [28, 67]. This immersive experience evoked public empathy toward prisoners and led to the reform of legal and political decisions (i.e., in 2016, the Obama administration introduced federal changes to curb solitary confinement) [78].

In parallel, drawing from the literature on the ethics of choice architecture, Ramirez et al. [64] argued that VR-based empathy-enhancing simulations are always unethical because they can never objectively provide the actual experience of another, as we discussed in Theme 2 in Section 4.2.2. Thus, they may mislead users “about the nature of their virtual experiences” and manipulate them into “changing their thoughts, beliefs, and behaviours,” consequently influencing their “future judgments about their own values, policies, and so on” [64, pp. 530–532]. In addition, taking inspiration from a U.S. military recruitment game (America’s Army), Mhaidli and Schaub [51] hypothesized that such games in VR may be more powerful in driving players’ inclination toward enlisting. They argued that the immersive and realistic experiences of VR could create a sense of “living” in the military, and the enjoyment of the game might “bias” players to view joining the armed forces positively [51].

T3.3: Hyper-Personalized User Manipulation. Personalization is an e-commerce strategy that employs data to enhance the experience of people with similar traits through tailored content, services, and features [51, 83]. Hyper-personalization refers to creating highly customized content tailored to an individual [51, 85]. As detailed in Theme 2 (Section 4.2.2), XR’s data collection capabilities enable inference of user states and facilitate manipulation that takes advantage of user data. Drawing inspiration from IKEA’s AR furniture preview application, Mhaidli and Schaub [51] discussed how AR-driven hyper-personalization can manipulate people’s purchase decisions by making “the photorealistic rendering of the furniture seem brighter and more colorful than real life.” Two papers echoed this discussion and illustrated other possible cases of data-based user manipulation in XR [8, 51]: psychological manipulation through idealized interaction partners (meeting users’ preferences) based on biometric data [8], emotional manipulation of users’ investment decisions using holographic recreations of their trusted people based on shared images [51], and physically navigating the user to a restaurant based on physiological signals of hunger [51].
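To ground this scenario, the following minimal sketch shows how a preview renderer could covertly bias a product’s appearance toward a user’s inferred preferences. Everything here (the profile signal, the boost factor, the parameter names) is invented for illustration and is not drawn from any cited system:

```python
from dataclasses import dataclass

@dataclass
class RenderParams:
    brightness: float = 1.0  # 1.0 = faithful to the physical product
    saturation: float = 1.0

def personalize_preview(params: RenderParams, color_affinity: float) -> RenderParams:
    """Covertly exaggerate brightness/saturation for users whose inferred
    profile (e.g., gaze dwell time on vivid objects) suggests they respond
    to bright colors. color_affinity is assumed to lie in [0, 1].

    A faithful preview would return `params` unchanged; the gap between
    what is rendered and what ships is the deceptive element.
    """
    boost = 1.0 + 0.3 * color_affinity  # up to +30%: an invented factor
    return RenderParams(brightness=params.brightness * boost,
                        saturation=params.saturation * boost)

# A user profiled with a strong affinity for vivid colors sees an
# exaggerated preview; a low-affinity user sees a faithful one.
print(personalize_preview(RenderParams(), color_affinity=0.8))
print(personalize_preview(RenderParams(), color_affinity=0.0))
```

The asymmetry is invisible to the user: nothing in the interface signals that the preview deviates from the physical product, which is precisely what makes such hyper-personalization covert.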

In summary, we classify this theme as “deceptive,” “covert,” and “information hiding” [48, 49], as it takes advantage of users’ false belief in the authenticity of XR simulations, hides information about the ads’ inclusion, and manipulates them through mechanisms such as the photorealistic graphics and sensory feedback that are not directly apparent. We also argue that this theme partially but not entirely aligns with the pre-established “asymmetric” attribute of Mathur et al. [49] as the XR simulation subtly influences user decisions without any interface friction.

4.3.2 T4: Reality Distortion.

Several publications discussed user manipulation through seamless reality distortion combined with the idea that XR equipment will be worn daily [5, 38, 51]. Drawing upon the deceptive ads in the 2020 U.S. presidential campaign, Mhaidli and Schaub [51] suggested a potential scenario where future politicians could exploit daily-worn XR devices, concealing evidence of poverty and economic downturn in a city with photorealistic graphics. Bonnail et al. [5] echoed this scenario by proposing a potential use of VR technologies for marketing, where people’s uploaded photos could be modified to incorporate brand logos and create false memories of experiencing a product. The AR user study of Lebeck et al. [38] further emphasized users’ safety concerns in AR-assisted driving, where deceptive representations of reality, such as hidden pedestrians or a misrepresented neighboring vehicle, could cause harm to both the driver and people nearby. Distorted reality can also lead to psychological addiction because people “identify with the bodies that they virtually inhabit, even when those bodies are very different from their own” [64, p. 532].

We classify this theme as “asymmetric,” “covert,” and “deceptive” because it increases people’s exposure to ads, utilizes subtle influence mechanisms in XR that are unapparent to users, and capitalizes on people’s false beliefs by modifying the representation of reality [48, 49].

4.3.3 T5: Tricking User Perception.

While Theme 1 (Section 4.2.1) discussed XR’s ability to offer immersive virtual experiences, four publications highlighted how XR can be used to manipulate user perception [8, 39, 51, 73].

T5.1: Extreme Realism Blurs the Boundary between Virtual and Reality. Mhaidli and Schaub [51] suggested that a skillfully crafted commercial with highly realistic visuals might fool viewers into thinking it was real; AR users may not realize that other people’s shirts include digital brand logos. Although current XR systems cannot yet generate such realistic simulations, Mhaidli and Schaub [51] and Buck and McDonnell [8] anticipated that it will be possible in the near future. In addition, Mhaidli and Schaub [51] and Krauß [36] noted that XR’s product previewing feature can sway users’ buying decisions (e.g., with brighter and more colorful furniture). The virtually real experience in XR makes the preview seem authentic, resulting in disappointment when the product is of lower quality [51].

T5.2: Perception Hacking Manipulates User Input. XR technologies use visual, auditory, and haptic feedback to let users engage with the virtual world; the lab experiment of Su et al. [73] and the user study of Lee et al. [39] both showed that perception hacking in XR can effectively deceive people. A well-known non-XR example of perception hacking is clickjacking, which misleads users into accidentally pressing ads on websites (i.e., disguised ads) [6, 31]. Our analysis synthesized three types of XR perception hacking: (1) cursorjacking [39, 73], (2) blind spot tracking [39], and (3) auxiliary display abuse [39].

As a variation of clickjacking on the web, cursorjacking in XR manipulates visual cues of object movements (e.g., rendered hand movements) so that they differ from proprioceptive cues (e.g., the physical sensing of hand movements) [39, 73]. Research indicates that humans trust visual perception over their other senses, even when the discrepancy between them is small [3, 35]. The user study of Lee et al. [39] showed that an angular offset between the visual cues and the user’s hands can redirect hand movement in VR and deceive users into interacting with undesired virtual objects. Taking inspiration from ad impression fraud on websites and mobile apps [41, 72], Lee et al. [39] demonstrated blind spot tracking in a 360-degree immersive XR environment, which uses gaze analysis to display ads beyond people’s line of sight. Similarly, Lee et al. [39] demonstrated auxiliary display abuse, which occurs when participants are immersed in XR and therefore fail to notice ads on a secondary monitor (e.g., a PC monitor).
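The cursorjacking mechanism lends itself to a simple geometric illustration. The sketch below is our simplified 2D reconstruction of an angular-offset redirection, not the implementation of Lee et al. [39]: the rendered hand is rotated by a small angle about the user’s body origin, so a user who visually steers the rendered hand toward an intended target ends up moving the physical hand toward an attacker-chosen one:

```python
import numpy as np

def render_offset_hand(real_hand_xy: np.ndarray, offset_deg: float) -> np.ndarray:
    """Rotate the tracked hand position about the body origin before rendering.

    Because users weight vision over proprioception, steering the rendered
    hand a few degrees off course makes them compensate with their physical
    hand in the opposite direction."""
    theta = np.radians(offset_deg)
    rotation = np.array([[np.cos(theta), -np.sin(theta)],
                         [np.sin(theta),  np.cos(theta)]])
    return rotation @ real_hand_xy

# Physical hand reaching straight ahead (0.5 m in the body frame);
# the headset renders it 5 degrees off to one side.
real = np.array([0.0, 0.5])
shown = render_offset_hand(real, offset_deg=5.0)
print(f"physical hand at {real}, rendered at {shown.round(3)}")
```

Keeping the offset to only a few degrees is consistent with the finding above that small visual–proprioceptive discrepancies go unnoticed, which is what makes the redirection covert rather than merely broken tracking.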

In short, we classify this theme as “asymmetric,” “covert,” “deceptive,” and “information hiding” [48, 49], as it increases users’ exposure to specific brand logos, tricks users into interacting with undesired virtual objects through subtle influence mechanisms, capitalizes on people’s false beliefs about their perceived body movements, and hides the existence of ads outside of their line of sight.

4.4 RQ4: What Risks Could XR Deceptive Design Pose to Users?

We synthesized two risks that XR deceptive design poses to users: privacy and security risks, and changes to users’ feelings, behaviors, and moral and political orientations. This section discusses both risks and how they are unique to XR. Eight publications proposed solutions to prevent XR deceptive design [12, 20, 38, 57]. We also synthesized XR manipulation prevention strategies and discuss them at the end of this section.

4.4.1 T6: XR Deceptive Design Worsens Privacy and Security Risks.

Several publications expressed a concern that deceptive designs in XR may lead to greater and harder-to-defend privacy loss compared to those in non-XR environments [5, 12, 36]. For example, to enjoy an XR memory reconstruction service, customers must share personal images and video recordings, as “the more video provided, the more accurate the construction” [5, p. 3]. Based on diverse user data, XR produces superior experiences compared to non-XR systems, making it more challenging for users to prioritize privacy [12, 74]. In addition, XR applications can deceive users into disclosing private information to digital humans or those who resemble their loved ones [8, 36, 51]. Further, Cummings and Shore [12] proposed that privacy concerns may hinder users from reporting encountered deceptive design, potentially providing a “shield” for such issues in XR.

There was also strong agreement that deceptive design in XR can lead to security risks [8, 38, 73]. For example, Su et al. [73] and Buck and McDonnell [8] proposed that perception hacking (Theme 5 in Section 4.3.3) can deceive people into accidentally granting privileges. Lee et al. [39] further noted that the insecurity of XR browsers leaves room for ad fraud, and Su et al. [73] mentioned that attackers can exploit these security weaknesses to steal sensitive user data or distort XR content.

4.4.2 T7: XR Deceptive Design Leads to Changes in Users’ Views, Beliefs, Morals, and Politics.

Another theme frequently mentioned in the publications is that misleading design induces artificial feelings that change people’s judgment of values, views, beliefs, morals, and politics [20, 51, 67]. For instance, the analysis by Schlembach and Clewer [67] of the 6x9 VR simulation game [28] showed that empathy-based manipulations are effective instruments of political and social control. Mhaidli and Schaub [51] mentioned another example of concealing poverty and economic decline to show a falsely bright economic environment for political campaigns. Users’ false beliefs in XR’s objectivity lead to “made” empathetic responses to manipulative simulations, which in turn affect their future thoughts, attitudes, and emotions [64]. Furthermore, idealized digital humans, as suggested by Buck and McDonnell [8] and Franklin [20], can potentially be used for “political duress” or for manipulating users into engaging in risky or unlawful behaviors, such as online gambling.

Several publications mentioned that people’s manipulated attitudes, beliefs, thoughts, and perceptions can influence their behaviors [8, 20, 38, 39, 64]. For instance, the distorted previews of products and experiences [51], the use of a hyper-personalized ad spokesman [8, 51], and the integration of ads in XR memory reconstructions [5] can all impact people’s buying decisions. In addition, Franklin [20] gave an example that players’ behaviors in video gameplay can “spill over” into their behaviors outside the game (i.e., virtual spillover [63]). Thus, they argued that the same effect may occur among XR users, especially when XR exposure becomes constant [20].

4.4.3 T8: XR Manipulation Prevention Techniques.

Our analysis found eight publications suggesting solutions to protect people from XR deceptive design [5, 38, 57, 64]. For example, Franklin [20] and Nijholt et al. [57] recommended using speech and text analysis, physiological sensors, and intelligent software to detect and remove behavior- and preference-changing methods. To address multi-user problems, Lebeck et al. [38] suggested creating methods and policies that prevent XR devices from overlaying virtual content onto other users’ content, and that give users control over view and edit permissions for their personal virtual objects and spaces.
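To make the permission idea concrete, the following is a minimal sketch of what per-object view/edit access control along the lines Lebeck et al. [38] describe might look like. All names (Permission, VirtualObject, can_render) are our own illustrative choices, not an API from the reviewed work.

```python
from dataclasses import dataclass, field
from enum import Flag, auto

class Permission(Flag):
    NONE = 0
    VIEW = auto()
    EDIT = auto()

@dataclass
class VirtualObject:
    owner: str
    grants: dict[str, Permission] = field(default_factory=dict)
    default: Permission = Permission.NONE  # deny by default for non-owners

    def allowed(self, user: str, needed: Permission) -> bool:
        if user == self.owner:
            return True  # owners keep full control of their objects
        return needed in self.grants.get(user, self.default)

# A compliant renderer would consult the policy before drawing or editing,
# so one user's device cannot silently overlay or alter another's content.
def can_render(obj: VirtualObject, viewer: str) -> bool:
    return obj.allowed(viewer, Permission.VIEW)

note = VirtualObject(owner="alice", grants={"bob": Permission.VIEW})
assert can_render(note, "bob") and not can_render(note, "carol")
```

Deny-by-default is the design choice doing the work in this sketch: a user who has not been granted access can neither see nor edit a personal virtual object.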

T8.1: Preventing False Memories and Empathy-Based Manipulation. In the case of memory manipulation, Bonnail et al. [5] proposed basing XR memory reconstruction on user consent, having service providers openly disclose potential modifications, and marking updated items in scenes to prevent source confusion and false recollections. Since empathy-based manipulation plays a crucial role in XR deceptive design, Ramirez et al. [64] proposed changing simulation goals from empathy-provoking first-person experiences to bystander perspectives. This reduces manipulation and engages users sympathetically instead of empathetically [64].
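One way to operationalize that disclosure, sketched here under our own assumptions rather than as Bonnail et al.’s [5] implementation, is to attach provenance metadata to every scene element so that anything the user did not originally capture can be surfaced:

```python
from dataclasses import dataclass
from enum import Enum

class Provenance(Enum):
    ORIGINAL = "captured in the user's own footage"
    RECONSTRUCTED = "synthesized to fill gaps"
    INSERTED = "added by the service (e.g., an ad)"

@dataclass
class SceneElement:
    label: str
    provenance: Provenance

def disclosure_report(scene: list[SceneElement]) -> list[str]:
    """List every element the user did not originally capture."""
    return [f"{e.label}: {e.provenance.value}"
            for e in scene if e.provenance is not Provenance.ORIGINAL]

scene = [SceneElement("family dinner table", Provenance.ORIGINAL),
         SceneElement("window view", Provenance.RECONSTRUCTED),
         SceneElement("soda bottle", Provenance.INSERTED)]
print(disclosure_report(scene))  # surfaces the two non-original items
```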

T8.2: Security and Privacy Recommendations. Five publications provided privacy and security recommendations for developing future XR technologies [8, 12, 39]. For example, Cummings and Shore [12] indicated that XR users need to trust the information they consume and the services that use their data in order to share information freely and safely. Buck and McDonnell [8] argued that power should be given to users, who should be able to opt out of data collection at any moment. In addition, Su et al. [73] advocated for enhancing security in XR systems, and Lee et al. [39] called for creating XR-specific security policies. Cummings and Shore [12] specifically mentioned that children should be protected with special laws and standards, and three publications agreed that XR data collection and misuse for malicious purposes should be prohibited [8, 12, 51].

T8.3: Call for Academic Research Efforts to Better Understand the Problems. Three studies called for research on problems in XR deceptive design [8, 20, 51]. For instance, Buck and McDonnell [8] suggested researching psychological user manipulation in XR ads and the effects of avatars’ physical properties. Buck and McDonnell [8] and Franklin [20] recommended researching how digital interaction translates into people’s daily lives. Mhaidli and Schaub [51] called for research on which preference changes are ethical and which are manipulative, based on user benefit, and proposed studying the privacy policies of XR advertisers and developers to understand XR data risks.

T8.4: Improve User Literacy to Identify and Resist Manipulation. Three publications emphasized educating users to raise their awareness of, and resilience to, deceptive design and its risks [8, 12, 51]. Specifically, Mhaidli and Schaub [51] proposed improving users’ literacy in recognizing photorealistic XR ads with distorted previews and hyper-personalized avatars. They argued that enhancing users’ literacy in XR ads could improve their resistance to manipulation [51]. Buck and McDonnell [8] and Cummings and Shore [12] called for user education on XR dataflows and their privacy implications. According to Mhaidli and Schaub [51], developing effective educational interventions will require academic and industrial research.


5 DISCUSSION AND TAKEAWAYS

Our systematic analysis of the literature synthesized existing research on XR deceptive design. We identified deceptive design definitions in XR studies, XR features that can amplify user manipulation, XR design patterns that demonstrate deceptive attributes [48, 49], potential risks to users, and solutions proposed in the existing literature. In this section, we outline directions for future research and offer recommendations to XR designers and policymakers.

5.1 Comparing Deceptive Design in XR and Other Platforms

Our eight themes of XR deceptive design converge substantially with established taxonomies in deceptive design research on websites [23, 25, 49], games [49, 88], and 3D interactions [27]. For instance, we found possible XR variations of “toying with emotion” [23], such as artificial prosthetic memory that exploits empathy [67] and hyper-personalized interaction partners that evoke positive feelings [51]. The latter may also be viewed as a form of “social engineering” [25] and an “attention grabber,” because people tend to focus on content aligned with their interests [27]. In addition, digital humans mimicking people’s trusted and loved ones can be viewed as a form of “friend spam” in XR [23, 88]. The risk that XR data may be used for user profiling is a form of “disguised data collection” [27]. While not fully aligning with established taxonomies, perception hacking in XR resembles interface strategies such as “sneaking” or “forced action” that deceive users into engaging with unwanted virtual objects and behaviors [23, 25]. However, our analysis also unveiled possible new forms of deceptive design that emerge from XR’s immersive nature. Examples include false personal memory implantation, blind-spot tracking, and reality distortion, which capitalize on flaws in human memory and limits of human vision, as well as on photorealistic simulation, 360-degree displays, and the coexistence of the virtual and the real in certain XR systems.

Using the attribute classification of Mathur et al. [49], our literature analysis showed that the majority of deceptive designs in XR demonstrate attributes such as “covert,” “deceptive,” and “information hiding,” similar to the major attributes demonstrated by web-based deceptive design [48]. However, in XR, deceptive design employs subtler influence mechanisms and can more accurately target people’s weaknesses based on extensive data. The resulting manipulations are novel, unfamiliar, and challenging for people to detect.

5.2 Future Research Directions in XR Deceptive Design

Research in deceptive design is maturing, and XR has experienced rapid growth in the past few years. However, deception mechanisms that leverage XR’s immersive capabilities still require in-depth investigation. In the following, we highlight the most urgent action items that need research attention. It is our hope that our results provide a direction for future research to identify new manipulations in XR and develop appropriate countermeasures.

5.2.1 A Better Understanding of Deceptive Design in XR.

The findings of RQ1 (see Section 4.1) showed that existing research primarily relied on deceptive design definitions that emphasize designer intentions (e.g., [8, 57, 81]). This trend shows a need for XR deceptive design research that uses an updated definition and focuses on designs resulting from unintentional design decisions. Moreover, to effectively mitigate harmful deceptive design in XR, it is critical to understand to what extent design mechanisms are considered benevolent (e.g., nudging) and to what extent they are considered deceptive (e.g., tricking). Gray et al. [22] and Gray et al. [24] suggested that users’ perceptions of manipulation can help identify borderline practices that may not be strictly problematic or illegal. During our literature analysis, we found papers that tried to differentiate “nudging” and “manipulation” based on user interest and freedom of choice (e.g., [51, 64]). Therefore, we suggest that future deceptive design research should include a comprehensive user-centric interaction or perception analysis. This may include exploring questions such as these: How do users perceive XR deceptive design for different purposes? Which designs are considered manipulative and should not be built, and which are acceptable? Without a clear understanding of users’ perspectives, developing effective and user-friendly countermeasures to deception will be challenging.

Our analysis uncovered deceptive designs from non-XR environments that are now extended to XR. We also observed new forms of deceptive designs that emerged from XR features. However—given that XR is still in its infancy and XR devices have yet to become mainstream—most of these studies were based on hypothetical scenarios of potential future problems. More problematic issues are likely to emerge with the advancement of XR technology. Therefore, we call for further research on new forms of XR deceptive designs and their impacts on users. That way, countermeasures can be developed to address these challenges before they can seriously harm people.

5.2.2 A More Transparent Awareness of XR Data Use Practices.

Among the publications we reviewed, there was noteworthy agreement that XR data enables deceptive design [8, 38, 54]. The extensive collection of user data through XR devices creates opportunities for precise user profiling, which can be used for hyper-personalization and manipulation [8, 51, 73]. To better comprehend and prevent this issue, we suggest that future research start by asking what data types XR sensors can collect. The XR Safety Initiative’s first XR data classification roundtable [87] and Dick [14], both in 2021, laid the groundwork for examining how VR and AR use data. However, several questions still need to be answered: What user information can be inferred from XR sensor data (especially from MR sensors)? How can these data be exploited in XR deceptive design? How does the use of these data impact user autonomy, safety, and privacy? What controls do users want to have over these data? Addressing these questions and developing appropriate protections for XR technology users require the continued efforts of researchers, developers, and designers.
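To make these questions concrete, here is a hedged illustration of how raw XR sensor streams might map to inferable user attributes, loosely in the spirit of the data classification efforts above. The mapping is our own illustrative assumption, not an established taxonomy:

```python
# Illustrative, not exhaustive: which attributes might be inferred from
# which raw XR sensor streams. Entries are assumptions for discussion.
RAW_TO_INFERRED = {
    "eye tracking":       ["attention and interest", "fatigue"],
    "head/hand motion":   ["identity (behavioral biometrics [62])", "motor traits"],
    "room-scale cameras": ["home layout", "bystanders", "socioeconomic cues"],
    "microphone":         ["identity", "emotional state", "private conversations"],
}

def inference_risks(sensors: list[str]) -> set[str]:
    """Union of attributes potentially inferable from an app's sensor set."""
    return {attr for s in sensors for attr in RAW_TO_INFERRED.get(s, [])}

# An app requesting eye tracking plus microphone access could, in
# principle, profile both attention and emotional state:
print(sorted(inference_risks(["eye tracking", "microphone"])))
```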

5.2.3 A Deeper Education on Deceptive Design Practices in XR.

Last, our findings in T8 (Section 4.4.3) emphasized the need for user education on deceptive design in XR [8, 12, 51]. Research has shown that users, despite being able to recognize deceptive design, often adopt a resigned attitude because of their dependence on the services [45]. Relatedly, Bongard-Blanchy et al. [4] found that users’ vague awareness of the concrete harms to themselves made them fail to realize the necessity of taking self-protective action. Therefore, we argue that it is crucial to design effective educational interventions for users. Such interventions should instruct people that XR may not always reflect reality. Users should be taught about cues that differentiate fictional simulations from realistic representations. Moreover, it is essential to inform people about XR data tracking, potential privacy issues, and XR mechanisms that could manipulate them into sharing their data. Most importantly, users should understand the possible consequences and risks of XR deceptive design. They should also receive training to help them detect scams, digital (i.e., fake) humans, and bots that pretend to be someone they trust. Future research is also needed to explore optimal educational content design for improving user understanding, measuring user learning outcomes, and assessing user resilience against deceptive design in XR.

5.3 Implications to Designers

Previous studies have offered several design recommendations for mitigating empathy-based deceptive design in immersive environments. As discussed in Section 4.4.3, Ramirez et al. [64] proposed promoting sympathy-based feelings instead of empathetic feelings when designing immersive environments. Other scholars have recommended implementing auditing systems to detect manipulative designs [20, 57]. We believe that all of these suggestions require more research to fully understand how deceptive design is evolving in XR and how design mechanisms affect users’ perceptions of empathy and sympathy.

Additionally, Cummings and Shore [12] and Mhaidli and Schaub [51] both highlighted the critical role of trust and transparency in XR content. With this in mind, we believe that future XR design should make it easier for people to distinguish fictional simulations from representations of reality. One approach could be to incorporate visual cues within the simulations [5]. Such cues would help prevent confusion and ensure that users can tell fact from fiction.
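As a minimal sketch of that approach, under our own assumptions about the rendering pipeline, a renderer hook could attach a persistent marker to any element flagged as synthetic. The draw and badge hooks below are placeholders, not a real engine API:

```python
from dataclasses import dataclass

@dataclass
class Element:
    label: str
    synthetic: bool  # True for anything not captured from reality

def render_with_cues(scene: list[Element], draw, badge) -> None:
    """Render every element; badge synthetic ones with a visible cue."""
    for e in scene:
        draw(e)
        if e.synthetic:
            badge(e, "synthetic")  # persistent marker the app cannot hide

# Toy usage with print-based stand-ins for engine hooks:
render_with_cues(
    [Element("captured street", False), Element("virtual billboard", True)],
    draw=lambda e: print("draw:", e.label),
    badge=lambda e, tag: print("badge:", e.label, "->", tag),
)
```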

Several studies have emphasized how essential user empowerment is in XR systems. This includes integrating user consent, giving users control over their personal virtual objects and spaces, and transparently informing users about, and enabling them to opt out of, data collection through XR devices. Previous research has shown that users are not opposed to data collection per se, but rather to the collection of sensitive data or collection without consent [56]. As the number of XR sensors continues to grow, we further suggest that XR systems be set up so that people opt in to, rather than opt out of, sharing information. Systems should also give people clear instructions on how to revoke consent and how to access and control the collected data.
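The following is a minimal sketch, under our own assumptions, of what such an opt-in default could look like in an XR runtime: every sensor starts disabled, consent is recorded per sensor, and revocation takes effect immediately. The class and method names are hypothetical, not from any shipping XR SDK:

```python
class SensorConsent:
    """Per-sensor consent ledger with opt-in defaults."""

    def __init__(self, sensors: list[str]):
        # Opt-in: nothing is collected until the user explicitly agrees.
        self._granted: dict[str, bool] = {s: False for s in sensors}

    def grant(self, sensor: str) -> None:
        self._granted[sensor] = True

    def revoke(self, sensor: str) -> None:
        # Revocation must be as easy as granting and effective at once.
        self._granted[sensor] = False

    def may_collect(self, sensor: str) -> bool:
        return self._granted.get(sensor, False)

consent = SensorConsent(["eye_tracking", "microphone", "hand_tracking"])
assert not consent.may_collect("eye_tracking")   # off by default
consent.grant("eye_tracking")
assert consent.may_collect("eye_tracking")
consent.revoke("eye_tracking")
assert not consent.may_collect("eye_tracking")   # revocation is immediate
```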

Although XR privacy concerns have received attention in the literature, many articles have also highlighted how crucial it is to implement adequate security controls to protect users from potential cyberattacks and manipulations. This can be achieved by integrating established security measures from non-XR technologies [39, 73]. We further propose that specific protections be developed for different user groups. For instance, because XR is increasingly used in high school education, it is important to implement child-specific protections.

5.4 Implications to Policymakers

We found that the results of several publications have implications for creating security and privacy policies for XR that go beyond design recommendations.

Our findings in Section 4.4.3 revealed a need for regulating the collection and use of XR data, especially data from unwillingly involved parties (e.g., other users and bystanders). A recent analysis of privacy regulations shows that current frameworks are inadequate for addressing the possible risks of XR technologies [14]. The challenge of balancing innovation with regulation is not a novel issue; technology frequently outpaces regulatory development. In light of the privacy concerns that XR technologies raise, we thus suggest that policymakers conduct a thorough review of how well existing policies and regulatory frameworks work in the XR space. The assessments of U.S. regulations [14], of XR users’ privacy concerns and protection-seeking strategies [30], and of the Oculus VR privacy policies [82] serve as starting points. However, more research is needed into regulatory frameworks in other countries and into policies for other XR devices and applications. We suggest that policymakers consider questions such as whether existing rules and policies can protect people’s privacy and security, especially children’s, within the XR domain.

Creating an immersive experience in XR requires collecting user data [8, 11]. From the publications we analyzed, we identified the need for XR-specific security policy [39] and for regulation of malicious data use [8, 12]. One analysis found that 70% of Oculus VR dataflows were not properly disclosed and that 69% were used for purposes unrelated to the core functionality of the device [82]. Thus, we suggest that policymakers develop regulations that foster the development of these technologies while preventing the use of data for manipulation, and that they focus on reinforcing existing regulations and ensuring that companies comply with what they promise.

Last, as part of user empowerment, we propose that XR policies and consent notices presented to users be easy to read (e.g., as the California Consumer Privacy Act (CCPA)18 requires for non-XR technologies) so that people can quickly understand the information without having to learn complicated terms.

5.5 Limitations

Our research is limited in several respects. The nascent nature of deceptive design research in XR has resulted in a scarcity of existing literature in this field. Consequently, our literature review is based on a limited set of sources. In addition, given the novelty of XR technology, most of these publications only discussed hypothetical scenarios derived from non-XR use cases. Nonetheless, we consider our preliminary review of the XR deceptive design literature crucial for several reasons. First, the anticipated rapid development of XR technology suggests that the potential deceptive design scenarios and associated concerns identified in this study are likely to materialize in the near future. Second, the limited sample size of our literature review highlights the critical need for further academic inquiry into the complexities and potential harms of deceptive design within XR environments. By fostering a deeper understanding of these issues, we can pave the way for effective strategies to safeguard users from exploitation as malicious practices within XR domains begin to take root.

Regarding our methodology, we acknowledge the instability of literature database search results reported in previous work (e.g., [44, 65]), where searches conducted on different days returned different numbers of records. To address this issue, we adjusted the search syntax for each of the four literature databases we selected. We also used a combination of search queries and user interface features to maintain consistency across all databases. Our search was conducted on multiple days to check the accuracy and comprehensiveness of the results. Despite these efforts, we acknowledge that our outcomes might not fully eliminate the possible impact of this limitation.

We also note that three coders conducted our thematic synthesis, and thus the resulting themes were shaped by their perspectives and experience in HCI, games, and privacy and cybersecurity research. While we acknowledge the potential for bias, we also recognize the inherent strength that this diverse background brings to the analysis. The coders’ experiences equipped them with valuable insights and frameworks for interpreting the core issues associated with XR deceptive design. Nevertheless, this potential for bias should be considered when interpreting the study’s findings.

Finally, we acknowledge that our research focuses on the early stages of XR deceptive design. It aims to create a foundational understanding and set the stage for future investigations. We recognize that as the field matures and further scholarship emerges, a re-examination of the literature may reveal a more comprehensive and holistic perspective on XR deceptive design.


6 CONCLUSION

In this article, we presented the results of a systematic analysis of XR deceptive design research. Following the PRISMA guideline [61], we started with 187 publications from four bibliography databases and analyzed 13 relevant papers published between 2012 and 2022. Our analysis identified eight themes that answer four research questions. RQ1 identified how XR researchers define and distinguish deception from related concepts such as nudging and persuasion. This highlights the need for future researchers to focus more on XR deceptive design arising from unintentional design decisions and on the difference between manipulative and benevolent design strategies. RQ2 and RQ3, answered by Themes 1 through 5, explored the novel forms of deceptive design potentially emerging from XR’s immersive nature, unique interface elements, and sensory feedback. For example, we found potential for creators to exploit users’ sense of objectivity by injecting their own perspectives into the XR experience. In addition, the vast amount of user data collected by XR technologies could allow for hyper-personalized manipulation strategies that target people’s vulnerabilities with high accuracy. Furthermore, the ability of certain XR technologies to block out reality raises concerns about potential reality distortion and user perception manipulation. RQ4, answered by Themes 6 through 8, synthesized the potential impacts of deceptive design in XR on users and explored prevention techniques proposed in the literature.

Building on the synthesized knowledge, we propose actionable recommendations for future research, policymakers, and XR designers to address the challenges posed by deceptive design in XR environments. Our analysis revealed both convergence and divergence between existing XR deceptive design research and established taxonomies from other platforms. Notably, XR facilitates the deployment of novel and subtle deceptive design strategies that exploit XR’s unique features and target user vulnerabilities more precisely. The potential for deceptive design using XR data highlights the need for further research to understand and regulate XR data practices and mitigate associated user risks. It is crucial to acknowledge that this study focuses on the early stages of XR deceptive design research. As the field matures, expanded examinations of the literature may reveal additional insights and complexities.

Our findings further translate into actionable implications for XR designers and policymakers. For designers, we advocate fostering sympathy-based responses over empathy-based manipulation and implementing auditing systems to identify potentially deceptive design elements. Building trustworthy and secure XR environments and empowering users through transparent data practices are also important. For policymakers, our findings highlight the need for regulations governing the collection and use of XR data. It is also critical to establish regulatory frameworks that adequately address the potential risks of XR technologies without hindering innovation. It is our hope that, by providing insights from existing research, this article can serve as a foundational guide for researchers, designers, and policymakers, leading to XR experiences that harness technological potential while prioritizing user safety, privacy, and ethical considerations.

ACKNOWLEDGMENTS

We would like to thank graduate researchers Sabrina Alicia Sgandurra and Derrick Wang for their insightful feedback on the manuscript and their expertise in resolving technical issues during our article formatting. We also thank the reviewers for their constructive criticism that ensured a smooth and polished final publication of our research.

Footnotes

1. We follow the ACM Diversity and Inclusion Council’s guideline for inclusive language and adopt the term deceptive design instead of dark patterns in our study (see https://www.acm.org/diversity-inclusion/words-matter).
2. See footnote 1.
3. Meta Oculus Rift S: https://www.oculus.com/rift-s/
4. Microsoft HoloLens: https://www.microsoft.com/en-us/hololens
5. Magic Leap: https://www.magicleap.com/magic-leap-2
6. https://www.scopus.com/
7. https://libraries.acm.org/digital-library/acm-guide-to-computing-literature
8. https://ieeexplore.ieee.org/Xplore
9. https://www.jstor.org/
10. Dovetailapp.com
11. See footnote 3.
12. https://welcon.kocca.kr/en/directory/content/meeting-you--4008
13. Mojo smart contact lens that overlays virtual information onto the physical world (https://www.mojo.vision/mojo-lens).
14. Memento is a UK advertising company that uses AI and VR to enable people to capture and relive their memories (https://www.mementovr.com).
15. America’s Army is the official game of the U.S. Army (https://dacowits.defense.gov/Portals/48/Documents/General%20Documents/RFI%20Docs/Dec2018/USA%20RFI%203%20Attachment.pdf?ver=2018-12-08-000554-463).
16. See footnote 3.
17. During the 2020 U.S. presidential campaign, both candidates issued misleading ads featuring deceptive statements and distorted images (https://cnn.com/2021/11/08/politics/fact-check-house-republican-ad-trump-images-2020/index.html).
18. California Consumer Privacy Act (CCPA): https://oag.ca.gov/privacy/ccpa#::text=The%20CCPA%20requires%20businesses%20to,use%20the%20categories%20of%20information

REFERENCES

[1] Adeyoju Ademola. 2022. Privacy Dark Patterns: A Case for Regulatory Reform in Canada. Retrieved November 28, 2022 from https://www.cba.org/Sections/Privacy-and-Access/Resources/Resources/2022/EssayWinner2022Privacy
[2] Alsop Thomas. 2022. XR: AR, VR, and the metaverse—Statistics & facts. Statista. Retrieved November 27, 2022 from https://www.statista.com/topics/6072/extended-reality-xr/
[3] Azmandian Mahdi, Hancock Mark, Benko Hrvoje, Ofek Eyal, and Wilson Andrew D. 2016. Haptic retargeting: Dynamic repurposing of passive haptics for enhanced virtual reality experiences. In Proceedings of the 2016 CHI Conference on Human Factors in Computing Systems (CHI’16). ACM, New York, NY, USA, 1968–1979.
[4] Bongard-Blanchy Kerstin, Rossi Arianna, Rivas Salvador, Doublet Sophie, Koenig Vincent, and Lenzini Gabriele. 2021. “I am definitely manipulated, even when I am aware of it. It’s ridiculous!”—Dark patterns from the end-user perspective. In Proceedings of the 2021 ACM Designing Interactive Systems Conference (DIS’21). ACM, New York, NY, USA, 763–776.
[5] Bonnail Elise, Tseng Wen-Jie, Lecolinet Eric, Huron Samuel, and Gugenheimer Jan. 2022. Exploring memory manipulation in extended reality using scenario construction. In Proceedings of the CHI Conference on Human Factors in Computing Systems (CHI’22). ACM, New York, NY, USA, 1–5.
[6] Brignull Harry. 2022. Deceptive Design—User Interface Crafted to Trick You. Retrieved November 23, 2022 from https://www.deceptive.design/
[7] Brignull Harry. 2023. Deceptive Patterns—Exposing the Tricks That Tech Companies Use to Control You. Testimonium Ltd., Eastbourne, England.
[8] Buck Lauren and McDonnell Rachel. 2022. Security and privacy in the metaverse: The threat of the digital human. In Proceedings of the 1st Workshop on Novel Challenges of Safety, Security, and Privacy in Extended Reality. ACM, New York, NY, USA, 1–4.
[9] Clarke Victoria, Braun Virginia, and Hayfield Nikki. 2015. Thematic analysis. In Qualitative Psychology: A Practical Guide to Research Methods (2015), 222–248.
[10] Crenshaw Kimberle. 1990. Mapping the margins: Intersectionality, identity politics, and violence against women of color. Stanford Law Review 43 (1990), 1241.
[11] Cummings James J. and Bailenson Jeremy N. 2016. How immersive is enough? A meta-analysis of the effect of immersive technology on user presence. Media Psychology 19, 2 (2016), 272–309.
[12] Cummings James J. and Shore Alexis. 2022. All too real: A typology of user vulnerabilities in extended reality. In Proceedings of the 1st Workshop on Novel Challenges of Safety, Security, and Privacy in Extended Reality. ACM, New York, NY, USA, 1–4.
[13] Geronimo Linda Di, Braz Larissa, Fregnan Enrico, Palomba Fabio, and Bacchelli Alberto. 2020. UI dark patterns and where to find them: A study on mobile applications and user perception. In Proceedings of the 2020 CHI Conference on Human Factors in Computing Systems (CHI’20). ACM, New York, NY, USA, 1–14.
[14] Dick Ellysse. 2021. Balancing User Privacy and Innovation in Augmented and Virtual Reality. Technical Report. Information Technology and Innovation Foundation.
[15] Dillon Roberto. 2020. The Digital Gaming Handbook. CRC Press, Boca Raton, FL.
[16] Mozilla MDN Web Docs. 2022. Same-Origin Policy. Retrieved November 28, 2022 from https://developer.mozilla.org/en-US/docs/Web/Security/Same-origin_policy
[17] Egliston Ben and Carter Marcus. 2023. Examining visions of surveillance in Oculus’ data and privacy policies, 2014–2020. Media International Australia 188, 1 (2023), 52–66.
[18] European Parliament, Council of the European Union. 2022. Regulation (EU) 2022/2065 of the European Parliament and of the Council of 19 October 2022 on a Single Market for Digital Services and Amending Directive 2000/31/EC (Digital Services Act) (Text with EEA relevance). Retrieved December 7, 2023 from https://eur-lex.europa.eu/legal-content/EN/ALL/?uri=celex:32022R2065
[19] Fitton Dan and Read Janet C. 2019. Creating a framework to support the critical consideration of dark design aspects in free-to-play apps. In Proceedings of the 18th ACM International Conference on Interaction Design and Children (IDC’19). ACM, New York, NY, USA, 407–418.
[20] Franklin Matija. 2022. Virtual spillover of preferences and behavior from extended reality. In Proceedings of the CHI Conference on Human Factors in Computing Systems (CHI’22). ACM, New York, NY, USA, 1–3.
[21] Giaretta Alberto. 2022. Security and privacy in virtual reality—A literature survey. arXiv abs/2205.00208 (2022). https://api.semanticscholar.org/CorpusID:248496745
[22] Gray Colin M., Chen Jingle, Chivukula Shruthi Sai, and Qu Liyang. 2021. End user accounts of dark patterns as felt manipulation. Proceedings of the ACM on Human-Computer Interaction 5, CSCW2 (Oct. 2021), Article 372, 25 pages.
[23] Gray Colin M., Kou Yubo, Battles Bryan, Hoggatt Joseph, and Toombs Austin L. 2018. The dark (patterns) side of UX design. In Proceedings of the 2018 CHI Conference on Human Factors in Computing Systems (CHI’18). ACM, New York, NY, USA, 1–14.
[24] Gray Colin M., Chamorro Lorena Sanchez, Obi Ike, and Duane Ja-Nae. 2023. Mapping the landscape of dark patterns scholarship: A systematic literature review. In Companion Publication of the 2023 ACM Designing Interactive Systems Conference. ACM, New York, NY, USA, 188–193.
[25] Gray Colin M., Santos Cristiana, and Bielova Nataliia. 2023. Towards a preliminary ontology of dark patterns knowledge. In Extended Abstracts of the 2023 CHI Conference on Human Factors in Computing Systems (CHI EA’23). ACM, New York, NY, USA, Article 286, 9 pages.
[26] Gray Colin M., Santos Cristiana, Bielova Nataliia, Toth Michael, and Clifford Damian. 2021. Dark patterns and the legal requirements of consent banners: An interaction criticism perspective. In Proceedings of the 2021 CHI Conference on Human Factors in Computing Systems (CHI’21). ACM, New York, NY, USA, Article 172, 18 pages.
[27] Greenberg Saul, Boring Sebastian, Vermeulen Jo, and Dostal Jakub. 2014. Dark patterns in proxemic interactions: A critical perspective. In Proceedings of the 2014 Conference on Designing Interactive Systems (DIS’14). ACM, New York, NY, USA, 523–532.
[28] The Guardian. 2016. 6x9: A Virtual Experience of Solitary Confinement. Retrieved January 4, 2023 from https://www.theguardian.com/world/ng-interactive/2016/apr/27/6x9-a-virtual-experience-of-solitary-confinement
[29] Gugenheimer Jan, Tseng Wen-Jie, Mhaidli Abraham Hani, Rixen Jan Ole, McGill Mark, Nebeling Michael, Khamis Mohamed, Schaub Florian, and Das Sanchari. 2022. Novel challenges of safety, security and privacy in extended reality. In Extended Abstracts of the 2022 CHI Conference on Human Factors in Computing Systems (CHI EA’22). ACM, New York, NY, USA, Article 108, 5 pages.
[30] Hadan Hilda, Wang Derrick, Nacke Lennart, and Zhang-Kennedy Leah. 2024. Privacy in immersive extended reality: Exploring user perceptions, concerns, and coping strategies. In Proceedings of the 2024 CHI Conference on Human Factors in Computing Systems. ACM, New York, NY, USA, 1–20.
[31] Kantor Ilya. 2022. The clickjacking attack. JavaScript.info. Retrieved January 6, 2023 from https://javascript.info/clickjacking
[32] Kaplan E. Ann. 2005. Trauma Culture: The Politics of Terror and Loss in Media and Literature. Rutgers University Press, New Brunswick, NJ.
[33] Karlsen Faltin. 2019. Exploited or engaged? Dark game design patterns in Clicker Heroes, Farmville 2, and World of Warcraft. In Transgression in Games and Play. MIT Press, Cambridge, MA, 219–233.
[34] King John, Fitton Dan, and Cassidy Brendan. 2023. Investigating players’ perceptions of deceptive design practices within a 3D gameplay context. Proceedings of the ACM on Human-Computer Interaction 7, CHI PLAY (Oct. 2023), Article 407, 17 pages.
[35] Kohli Luv, Whitton Mary C., and Brooks Frederick P. 2012. Redirected touching: The effect of warping space on task performance. In Proceedings of the 2012 IEEE Symposium on 3D User Interfaces (3DUI’12). IEEE, 105–112.
[36] Krauß Veronika. 2022. Exploring dark patterns in XR. In Proceedings of the CHI Conference on Human Factors in Computing Systems (CHI’22). ACM, New York, NY, USA, 1–2.
[37] Landsberg Alison. 2004. Prosthetic Memory: The Transformation of American Remembrance in the Age of Mass Culture. Columbia University Press, New York, NY.
[38] Lebeck Kiron, Ruth Kimberly, Kohno Tadayoshi, and Roesner Franziska. 2018. Towards security and privacy for multi-user augmented reality: Foundations with end users. In Proceedings of the 2018 IEEE Symposium on Security and Privacy (SP’18). IEEE, 392–408.
[39] Lee Hyunjoo, Lee Jiyeon, Kim Daejun, Jana Suman, Shin Insik, and Son Sooel. 2021. AdCube: WebVR ad fraud and practical confinement of third-party ads. In Proceedings of the 30th USENIX Security Symposium (USENIX Security’21). 2543–2560. https://www.usenix.org/conference/usenixsecurity21/presentation/lee-hyunjoo
[40] Lewis Chris. 2014. Irresistible Apps: Motivational Design Patterns for Apps, Games, and Web-Based Communities. Springer, Berlin, Germany.
[41] Liu Bin, Nath Suman, Govindan Ramesh, and Liu Jie. 2014. DECAF: Detecting and characterizing ad fraud in mobile apps. In Proceedings of the 11th USENIX Symposium on Networked Systems Design and Implementation (NSDI’14). 57–70.
[42] Luguri Jamie and Strahilevitz Lior Jacob. 2021. Shining a light on dark patterns. Journal of Legal Analysis 13, 1 (2021), 43–109.
[43] Bhoot Aditi M., Shinde Mayuri A., and Mishra Wricha P. 2021. Towards the identification of dark patterns: An analysis based on end-user reactions. In Proceedings of the 11th Indian Conference on Human-Computer Interaction (IndiaHCI’20). ACM, New York, NY, USA, 24–33.
[44] MacArthur Cayley, Grinberg Arielle, Harley Daniel, and Hancock Mark. 2021. You’re making me sick: A systematic review of how virtual reality research considers gender & cybersickness. In Proceedings of the 2021 CHI Conference on Human Factors in Computing Systems. ACM, New York, NY, USA, 1–15.
[45] Maier Maximilian. 2019. Dark Patterns—An End User Perspective. Master’s Thesis. Umea University.
[46] Maloney Divine, Freeman Guo, and Robb Andrew. 2021. Social virtual reality: Ethical considerations and future directions for an emerging research space. In Proceedings of the 2021 IEEE Conference on Virtual Reality and 3D User Interfaces Abstracts and Workshops (VRW’21). IEEE, 271–277.
[47] Maloney Divine, Zamanifard Samaneh, and Freeman Guo. 2020. Anonymity vs. familiarity: Self-disclosure and privacy in social virtual reality. In Proceedings of the 26th ACM Symposium on Virtual Reality Software and Technology. ACM, New York, NY, USA, 1–9.
[48] Mathur Arunesh, Acar Gunes, Friedman Michael J., Lucherini Eli, Mayer Jonathan, Chetty Marshini, and Narayanan Arvind. 2019. Dark patterns at scale: Findings from a crawl of 11K shopping websites. Proceedings of the ACM on Human-Computer Interaction 3, CSCW (2019), 1–32.
[49] Mathur Arunesh, Kshirsagar Mihir, and Mayer Jonathan. 2021. What makes a dark pattern... dark? Design attributes, normative considerations, and measurement methods. In Proceedings of the 2021 CHI Conference on Human Factors in Computing Systems. ACM, New York, NY, USA, 1–18.
[50] Meta. 2021. Testing In-Headset VR Ads. Meta Quest Blog. Retrieved February 15, 2023 from https://www.meta.com/blog/quest/testing-in-headset-vr-ads/
[51] Mhaidli Abraham Hani and Schaub Florian. 2021. Identifying manipulative advertising techniques in XR through scenario construction. In Proceedings of the 2021 CHI Conference on Human Factors in Computing Systems. ACM, New York, NY, USA, 1–18.
[52] Mildner Thomas, Freye Merle, Savino Gian-Luca, Doyle Philip R., Cowan Benjamin R., and Malaka Rainer. 2023. Defending against the dark arts: Recognising dark patterns in social media. In Proceedings of the 2023 ACM Designing Interactive Systems Conference (DIS’23). ACM, New York, NY, USA, 2362–2374.
[53] Mildner Thomas, Savino Gian-Luca, Doyle Philip R., Cowan Benjamin R., and Malaka Rainer. 2023. About engaging and governing strategies: A thematic analysis of dark patterns in social networking services. In Proceedings of the 2023 CHI Conference on Human Factors in Computing Systems (CHI’23). ACM, New York, NY, USA, Article 192, 15 pages.
[54] Miller Mark Roman, Herrera Fernanda, Jun Hanseul, Landay James A., and Bailenson Jeremy N. 2020. Personal identifiability of user tracking data during observation of 360-degree VR video. Scientific Reports 10, 1 (2020), 1–10.
[55] Roffarello Alberto Monge, Lukoff Kai, and Russis Luigi De. 2023. Defining and identifying attention capture deceptive designs in digital interfaces. In Proceedings of the 2023 CHI Conference on Human Factors in Computing Systems (CHI’23). ACM, New York, NY, USA, Article 194, 19 pages.
[56] Naeini Pardis Emami, Bhagavatula Sruti, Habib Hana, Degeling Martin, Bauer Lujo, Cranor Lorrie Faith, and Sadeh Norman. 2017. Privacy expectations and preferences in an IoT world. In Proceedings of the 13th Symposium on Usable Privacy and Security (SOUPS’17). 399–412.
[57] Nijholt Anton, Arkin Ronald C., Brault Sébastien, Kulpa Richard, Multon Franck, Bideau Benoit, Traum David, Hung Hayley, Santos Eugene, Li Deqing, Yu Fei, Zhou Lina, and Zhang Dongsong. 2012. Trends & controversies. IEEE Intelligent Systems 27, 6 (2012), 60–75.
[58] O’Brolcháin Fiachra, Jacquemard Tim, Monaghan David, O’Connor Noel, Novitzky Peter, and Gordijn Bert. 2016. The convergence of virtual reality and social networks: Threats to privacy and autonomy. Science and Engineering Ethics 22, 1 (2016), 1–29.
[59] Oculus. 2012. Oculus Rift: Step into the Game. Retrieved December 31, 2022 from https://www.kickstarter.com/projects/1523379957/oculus-rift-step-into-the-game
[60] O’Hagan Joseph, Saeghe Pejman, Gugenheimer Jan, Medeiros Daniel, Marky Karola, Khamis Mohamed, and McGill Mark. 2023. Privacy-enhancing technology and everyday augmented reality: Understanding bystanders’ varying needs for awareness and consent. Proceedings of the ACM on Interactive, Mobile, Wearable and Ubiquitous Technologies 6, 4 (2023), 1–35.
[61] Page Matthew J., McKenzie Joanne E., Bossuyt Patrick M., Boutron Isabelle, Hoffmann Tammy C., Mulrow Cynthia D., Shamseer Larissa, Tetzlaff Jennifer M., Akl Elie A., Brennan Sue E., Chou Roger, Glanville Julie, Grimshaw Jeremy M., Hrobjartsson Asbjorn, Lalu Manoj M., Li Tianjing, Loder Elizabeth W., Mayo-Wilson Evan, McDonald Steve, McGuinness Luke A., Stewart Lesley A., Thomas James, Tricco Andrea C., Welch Vivian A., Whiting Penny, and Moher David. 2021. The PRISMA 2020 statement: An updated guideline for reporting systematic reviews. Systematic Reviews 10, 1 (2021), 1–11.
[62] Pfeuffer Ken, Geiger Matthias J., Prange Sarah, Mecke Lukas, Buschek Daniel, and Alt Florian. 2019. Behavioural biometrics in VR: Identifying people from body motion and relations in virtual reality. In Proceedings of the 2019 CHI Conference on Human Factors in Computing Systems. ACM, New York, NY, USA, 1–12.
[63] Quwaider Muhannad, Alabed Abdullah, and Duwairi Rehab. 2019. The impact of video games on the players behaviors: A survey. Procedia Computer Science 151 (2019), 575–582.
[64] Ramirez Erick Jose, Elliott Miles, and Milam Per-Erik. 2021. What it’s like to be a ____: Why it’s (often) unethical to use VR as an empathy nudging tool. Ethics and Information Technology 23, 3 (2021), 527–542.
[65] Rogers Katja, Karaosmanoglu Sukran, Altmeyer Maximilian, Suarez Ally, and Nacke Lennart E. 2022. Much realistic, such wow! A systematic literature review of realism in digital games. In Proceedings of the CHI Conference on Human Factors in Computing Systems. ACM, New York, NY, USA, 1–21.
[66] Rose Mandy. 2018. The immersive turn: Hype and hope in the emergence of virtual reality as a nonfiction platform. Studies in Documentary Film 12, 2 (2018), 132–149.
[67] Schlembach Raphael and Clewer Nicola. 2021. ‘Forced empathy’: Manipulation, trauma and affect in virtual reality film. International Journal of Cultural Studies 24, 5 (2021), 827–843.
[68] Selinger Evan and Whyte Kyle Powys. 2010. Competence and trust in choice architecture. Knowledge, Technology & Policy 23, 3 (2010), 461–482.
[69] Shapiro Michael A. and Lang Annie. 1991. Making television reality: Unconscious processes in the construction of social reality. Communication Research 18, 5 (1991), 685–705.
[70] Soe Than Htut, Nordberg Oda Elise, Guribye Frode, and Slavkovik Marija. 2020. Circumvention by design—Dark patterns in cookie consent for online news outlets. In Proceedings of the 11th Nordic Conference on Human-Computer Interaction: Shaping Experiences, Shaping Society (NordiCHI’20). ACM, New York, NY, USA, Article 19, 12 pages.
[71] Speicher Maximilian, Hall Brian D., and Nebeling Michael. 2019. What is mixed reality? In Proceedings of the 2019 CHI Conference on Human Factors in Computing Systems. ACM, New York, NY, USA, 1–15.
[72] Stone-Gross Brett, Stevens Ryan, Zarras Apostolis, Kemmerer Richard, Kruegel Chris, and Vigna Giovanni. 2011. Understanding fraudulent activities in online ad exchanges. In Proceedings of the 2011 ACM SIGCOMM Conference on Internet Measurement Conference. ACM, New York, NY, USA, 279–294.
[73] Su Zihao, Shezan Faysal Hossain, Tian Yuan, Evans David, and Heo Seongkook. 2022. Perception hacking for 2D cursorjacking in virtual reality. In Proceedings of the 1st Workshop on Novel Challenges of Safety, Security, and Privacy in Extended Reality. ACM, New York, NY, USA, 1–5.
[74] Sun Yongqiang, Wang Nan, and Shen Xiao-Liang. 2021. Calculus interdependency, personality contingency, and causal asymmetry: Toward a configurational privacy calculus model of information disclosure. Information & Management 58, 8 (2021), 103556.
[75] Susser Daniel, Roessler Beate, and Nissenbaum Helen. 2019. Online manipulation: Hidden influences in a digital world. Georgetown Law Technology Review 4 (2019), 1.
[76] Thaler Richard H., Sunstein Cass R., and Balz John P. 2013. Choice Architecture. Vol. 2013. Princeton University Press, Princeton, NJ.
[77] The Consumer Council of Norway (Forbrukerrådet). 2018. Deceived by design—How tech companies use dark patterns to discourage us from exercising our rights to privacy. ConPolicy. Retrieved January 4, 2023 from https://www.conpolicy.de/en/news-detail/deceived-by-design-how-tech-companies-use-dark-patterns-to-discourage-us-from-exercising-our-right
[78] The United States Department of Justice Archives. 2016. Report and Recommendations Concerning the Use of Restrictive Housing. Retrieved January 4, 2023 from https://www.justice.gov/archives/dag/report-and-recommendations-concerning-use-restrictive-housing
[79] Thomas Bruce H. 2012. A survey of visual, mixed, and augmented reality gaming. Computers in Entertainment 10, 1 (2012), 1–33.
[80] Thomas James and Harden Angela. 2008. Methods for the thematic synthesis of qualitative research in systematic reviews. BMC Medical Research Methodology 8, 1 (2008), 1–10.
[81] Torstensson Niklas, Susi Tarja, Wilhelmsson Ulf, and Lebram Mikael. 2020. Wizard of Oz and the design of a multi-player mixed reality game. In Proceedings of the International Conference on Human-Computer Interaction. 218–232.
[82] Trimananda Rahmadi, Le Hieu, Cui Hao, Ho Janice Tran, Shuba Anastasia, and Markopoulou Athina. 2022. OVRseen: Auditing network traffic and privacy policies in Oculus VR. In Proceedings of the 31st USENIX Security Symposium (USENIX Security’22). 3789–3806. https://www.usenix.org/conference/usenixsecurity22/presentation/trimananda
[83] Vesanen Jari. 2007. What is personalization? A conceptual framework. European Journal of Marketing 41, 5–6 (2007), 409–418.
[84] Williams Kevin D. 2014. The effects of dissociation, game controllers, and 3D versus 2D on presence and enjoyment. Computers in Human Behavior 38 (2014), 142–150.
[85] Williams Shane. 2022. The potential power of data: Hyper-personalization for each and every client. Forbes Technology Council. Retrieved January 6, 2023 from https://www.forbes.com/sites/forbestechcouncil/2022/06/22/the-potential-power-of-data-hyper-personalization-for-each-and-every-client/?sh=3d162b4c5545
[86] XR Safety Initiative (XRSI). 2020. The XRSI Definitions of Extended Reality (XR). XRSI Standard Publication XR-001. XR Safety Initiative (XRSI).
[87] XR Safety Initiative (XRSI). 2021. Virtual Worlds, Real Risks and Challenges—1st XR Data Classification Roundtable Report. Technical Report. XR Safety Initiative (XRSI).
[88] Zagal José P., Björk Staffan, and Lewis Chris. 2013. Dark patterns in the design of games. In Foundations of Digital Games 2013. ACM, New York, NY, USA, 1–8.
