A collection of research and articles about dark patterns, user behaviour and law
The rise of manipulative web design techniques poses significant challenges to ethical UI/UX design, influencing user behavior in ways that lead to privacy breaches, unwanted purchases, and other adverse outcomes. These techniques often exploit psychological biases, making them difficult to detect. This paper categorizes and details 10 types of manipulative design strategies, highlighting those requiring user action or awareness to coerce unintended decisions. Use cases and countermeasures are also explored, laying a foundation for assessing their impact and advancing ethical web design practices.
Dark patterns are manipulative UI design tactics that push users toward decisions against their best interests. While countermeasures like legislation, content modification, and user education have been explored, serious games for teaching dark pattern detection remain underexplored. This study presents a web-based game prototype for adults and evaluates its effectiveness through a user study. Findings show improved detection skills among participants, indicating increased awareness of manipulative designs. Although classification abilities showed mixed results, game log data confirmed enhanced detection accuracy, highlighting the game's potential as an educational tool.
Social media, used by 5.04 billion people in 2024, is ubiquitous, with average daily use of 2 hours and 23 minutes (We Are Social, 2024). Although it provides apparent gratification, it is linked to negative mental-health effects and to the development of a dependency comparable to that induced by addictive substances, amplified by factors such as anxiety and the Fear Of Missing Out (FOMO) (Allcott et al., 2022; Erhel et al., 2024). Platforms reinforce this dependency through dark patterns, strategies that exploit cognitive biases to capture attention. The DARKUPI project explores these mechanisms, proposing a typology of dark patterns on social media and analyzing their link to digital dependency through a survey of 2,547 users in France.
The digital economy, in Russia and globally, increasingly draws lawmakers' attention, particularly regarding "dark patterns"—manipulative design tactics influencing user behavior. BigTech companies, like TikTok, leverage such mechanisms to retain users, exemplified by TikTok's endless, dynamically adjusted recommendation feed, creating an "immersion effect." This article analyzes Russian and international regulatory approaches to these practices, with a focus on legislative initiatives protecting consumer rights and combating manipulative designs. While Russian law emphasizes consumer protection and unfair competition, Western countries have introduced specialized norms targeting dark patterns. The article aims to evaluate existing regulations and propose adaptations of effective foreign practices to the Russian legal framework.
Dark patterns, manipulative design tactics, are prevalent in video game monetization systems, particularly those relying on microtransactions. These tactics exploit psychological behaviors to maximize spending, raising ethical concerns and contributing to the negative perception of in-game purchases. This study examines how dark patterns in games differ from other industries, advocating for revised taxonomies that account for gaming’s unique mechanics. It highlights the potential link between these patterns and gambling-like addiction disorders, especially among vulnerable populations, and calls for deeper research to understand their scale, effectiveness, and harm compared to ethical monetization practices.
The opaque nature of transformer-based models, particularly in detecting dark patterns—deceptive design tactics undermining user autonomy—calls for integrating uncertainty quantification to enhance trust. This study proposes a differential fine-tuning approach using uncertainty quantification techniques, such as Spectral-normalized Neural Gaussian Processes (SNGPs) and Bayesian Neural Networks (BNNs), at the classification head of pre-trained models. Evaluating these methods against a dense neural network baseline, the research examines performance, prediction certainty, and environmental impact. Findings show that uncertainty quantification maintains model performance, improves transparency, and provides measurable confidence in predictions without uniformly increasing environmental costs. By enhancing explainability and trust, this approach supports informed decision-making and helps mitigate the influence of dark patterns in user interfaces, highlighting its importance for ethically sensitive applications.
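The core idea of attaching an uncertainty-aware head to a classifier can be illustrated with a toy Monte-Carlo sketch: sample the weights of a Bayesian linear head from Gaussian posteriors, average the softmax outputs, and report predictive entropy as a confidence signal. This is a minimal pure-Python illustration of the principle behind BNN heads, not the study's actual SNGP/BNN implementation; all weights and distributions below are invented for demonstration.

```python
import math
import random

def softmax(logits):
    m = max(logits)
    exps = [math.exp(x - m) for x in logits]
    s = sum(exps)
    return [e / s for e in exps]

def bayesian_head_predict(features, weight_means, weight_stds,
                          n_samples=200, rng=None):
    """Monte-Carlo prediction with a toy Bayesian linear head:
    sample weights from independent Gaussians, average the softmax
    outputs, and report predictive entropy as an uncertainty estimate."""
    rng = rng or random.Random(0)
    n_classes = len(weight_means)
    mean_probs = [0.0] * n_classes
    for _ in range(n_samples):
        logits = []
        for c in range(n_classes):
            # sample one weight vector per class (bias omitted for brevity)
            w = [rng.gauss(mu, sd)
                 for mu, sd in zip(weight_means[c], weight_stds[c])]
            logits.append(sum(wi * xi for wi, xi in zip(w, features)))
        probs = softmax(logits)
        mean_probs = [m + p / n_samples for m, p in zip(mean_probs, probs)]
    entropy = -sum(p * math.log(p) for p in mean_probs if p > 0)
    return mean_probs, entropy
```

With tight weight posteriors the averaged prediction is confident (low entropy); inflating the posterior standard deviations pushes the averaged probabilities toward uniform and the entropy up, which is exactly the "measurable confidence" signal the abstract refers to.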
Dark patterns, deceptive design techniques in technology, challenge policymakers due to their impact on user autonomy and financial well-being. This study addresses gaps in research by identifying and quantifying dark patterns across shopping, health and fitness, and education apps in Canada. Analyzing the top three apps per domain, darkness scores were assigned based on identified patterns. Temu (7.5), Yuka-Food & Cosmetic Scanner (5), and Duolingo (6.5) had the highest scores in their categories, with shopping apps showing the highest average darkness scores. The most common tactic was modifying decision space. This study highlights the prevalence and variation of dark patterns across domains, emphasizing the need for quantifiable metrics to guide regulatory efforts and inform researchers and practitioners.
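One way a "darkness score" of this kind could be computed is as a severity-weighted tally of the pattern instances found in an app. The pattern names and weights below are purely illustrative assumptions, not the study's actual rubric:

```python
# Hypothetical severity weights per dark-pattern type (illustrative only).
SEVERITY = {
    "modifying_decision_space": 1.5,
    "nagging": 1.0,
    "sneaking": 2.0,
    "forced_action": 2.0,
}

def darkness_score(observed_patterns):
    """Sum severity weights over the pattern instances observed in one app;
    unknown pattern types default to a weight of 1.0."""
    return sum(SEVERITY.get(p, 1.0) for p in observed_patterns)

score = darkness_score(["modifying_decision_space", "nagging", "sneaking"])
# 1.5 + 1.0 + 2.0 = 4.5
```

A quantifiable metric along these lines would let regulators compare apps and domains on a common scale, which is the gap the study points to.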
Social media (SM), a key feature of the digital age, has evolved from its Web 2.0 roots into a powerful tool for communication, consumerism, and cultural exchange. Initially limited to static internet pages and basic functionalities like email, SM now boasts 4.6 billion users globally, transforming individual and organizational life. It plays a critical role in digital transformation, enabling remote work, fostering virtual collaboration, and replacing traditional office spaces with technology-driven environments. The COVID-19 pandemic further accelerated SM's use in onboarding, training, and community building, enhancing workplace belonging and productivity. Despite its growing importance, research on SM in human resource development (HRD) remains limited. This chapter explores SM’s integration into HRD, addressing its role in wellness, visibility, and virtual workforce development, while bridging gaps in understanding its impact on modern workplaces.
Dark patterns, manipulative design tactics often found in social media, push users toward oversharing or excessive use, while bright patterns guide users toward personal goals. This thesis explores bright patterns as a solution to social media issues by inverting 15 dark pattern types specific to the platform. Surveys revealed that more deceptive patterns were less attractive, with Forced Access rated as the worst. Three bright patterns were designed to counter Nagging, False Hierarchy, and Toying with Emotion, aiming to reduce screen time, slow fake news, and promote diverse content. User tests showed bright patterns were less annoying and scored better overall, highlighting their potential to repurpose dark patterns for user benefit, though ethical concerns about targeted manipulation remain.
Dark patterns, manipulative design elements in digital interfaces, raise ethical concerns by prioritizing service providers' interests over those of users. A recent study introduces the Dark Pattern Analysis Framework (DPAF) to tackle classification inconsistencies, detection tool limitations, and dataset gaps. It identifies 68 dark pattern types with detailed annotations and highlights detection tools' low coverage (45.5%) and datasets' limited comprehensiveness (44%). By standardizing classifications and merging datasets, the research offers critical insights to improve dark pattern detection and drive future advancements in the field.
Digital marketing often faces ethical challenges, particularly with the use of manipulative "dark patterns" such as trick questions, sneak-into-basket, and privacy zuckering, which erode trust and harm reputations. Ethical design emphasizes transparency and user consent, aligning with principles like privacy, accuracy, property rights, and accessibility outlined by Mason. Regulations such as GDPR and PECR enforce responsible data handling and protect users from deceptive practices, while intellectual property laws safeguard content and designs. Companies must ensure ethical data collection, genuine online reviews, and clear communication to maintain trust and compliance. Prioritizing transparency, addressing negative feedback, and avoiding dark patterns are vital for building customer loyalty and managing a positive reputation in the digital landscape.
In contemporary e-commerce, cognitive exploitation has emerged as a critical issue, with suppliers leveraging techniques like nudges, dark patterns, and cognitive biases to manipulate consumer decisions. This article explores the impact of informational asymmetry, fatigue, and selective attention manipulation on consumer vulnerability. It advocates for updated regulations, supplier accountability, consumer education, and proactive roles for regulators and the judiciary. Ultimately, the article calls for a multidisciplinary approach to foster a fairer, more transparent digital marketplace and ensure robust consumer protection.
The rise of big data, machine learning, and AI has transformed digital advertising, enabling precise targeting and tailored content based on user behaviors and demographics. While innovative, these practices raise significant legal and ethical concerns regarding privacy, non-discrimination, and digital autonomy. In response, EU lawmakers have introduced regulations to address these issues. This chapter explores the evolution of EU digital advertising laws, key challenges in their enforcement, and ongoing debates, offering insights and recommendations for ensuring fair and responsible advertising practices.
A recent study examines the influence of dark patterns in online shopping malls, focusing on their impact on consumer attitudes, brand trust, and continued usage intentions after recognizing these manipulative tactics. The findings reveal notable differences in how men and women perceive dark patterns and their subsequent attitudes as consumers. Brand trust emerged as a significant factor, influencing both consumer attitudes and the likelihood of continued use despite awareness of dark patterns. Moreover, consumer attitudes acted as a mediator between brand trust and retention intentions. Interestingly, the level of experience with dark patterns did not moderate the relationship between brand trust and consumer attitudes, suggesting that trust remains a critical determinant regardless of familiarity with manipulative design strategies.
A recent exploration into deceptive design highlights the innovative use of Trickery, a narrative-based serious game designed to expose and educate players about manipulative interface tactics. Incorporating seven gamified deceptive patterns, the game provides players with firsthand experience of the consequences these patterns impose. Through an explorative gameplay study and an accompanying online survey, researchers examined player behavior and the perceived effectiveness of the gamified deceptive patterns in raising awareness. The findings revealed diverse player motivations and justifications when navigating deceptive patterns, as well as critical factors for successfully integrating such patterns into gameplay. This approach offers a promising avenue for enhancing user understanding and resistance to manipulative design practices.
A recent study sheds light on the pervasive use of dark patterns in mobile games, revealing their exploitation of players through temporal, monetary, social, and psychological mechanisms. By analyzing user-generated data from 1,496 games, researchers identified a troubling presence of these manipulative strategies not only in games commonly viewed as problematic but also in those perceived as harmless. This quantitative research underscores ethical concerns surrounding current revenue models in gaming, particularly their impact on vulnerable populations. Highlighting the importance of ethical design, the study advocates for community-based efforts and collaboration among users and industry professionals to foster healthier gaming environments.
E-commerce's growing popularity brings increased challenges for consumers, particularly with the rise of dark patterns—manipulative design elements that prioritize business profits over consumer well-being. While EU consumer protection laws, such as the Unfair Commercial Practices Directive and Consumer Rights Directive, aim to curb exploitative practices, dark patterns present unique challenges. These include exploiting behavioral biases that current laws, focused on information remedies for rational consumers, fail to address; low compliance in digital environments due to the laws' technologically-neutral design; and a gap between the widespread use of dark patterns and limited enforcement efforts. Strengthening digital market oversight with technical solutions, such as computational detection of dark patterns, could enhance consumer protection. This thesis examines how EU law can effectively regulate dark patterns through both legal and technical innovations.
Mobile apps play a critical role in daily life but frequently employ dark patterns—manipulative design techniques like visual tricks or coercive language—to influence user behavior. While current research relies on manual methods to detect these patterns, the process is slow and cannot keep pace with the rapid evolution of apps. AppRay, a novel system, addresses these challenges by combining task-oriented app exploration with automated dark pattern detection. Utilizing large language models and contrastive learning, AppRay identifies both static and dynamic dark patterns efficiently, significantly reducing manual effort. Supported by two comprehensive datasets, AppRay demonstrates strong performance in uncovering deceptive practices across various app interfaces.
A recent study examined the risks of political chatbots as peer-to-peer propaganda tools, focusing on a Facebook Messenger chatbot from Benjamin Netanyahu’s 2019 campaign. Using the Walkthrough Method, researchers identified “dark cycles” involving Reconnaissance (data collection), Training (repetitive messaging and dark patterns), and Activation (task directives). The findings reveal the power dynamics these chatbots exploit, raising concerns about their evolving role in political influence and surveillance.
A recent study examined how Large Language Models (LLMs) adjust language to create personalized persuasive content based on personality traits. By analyzing outputs across 19 LLMs, researchers found distinct patterns: more anxiety-related words for neuroticism, achievement-focused language for conscientiousness, and fewer cognitive process words for openness. The findings reveal LLMs' potential to tailor persuasive communication, with varying success across personality traits.
The 14th Design Thinking Research Symposium (DTRS14), hosted by Mälardalen University, focuses on how design can drive sustainable futures by addressing global challenges like climate change, social inequality, and resource management. Aligned with the UN's Sustainable Development Goals, it emphasizes interdisciplinary collaboration and human-centered design methods such as co-design and systems thinking. The symposium showcases global insights on using design to foster sustainability and societal progress.
A recent study of 16 U.S. gig work platforms reveals extensive data collection, sharing with up to 60 third parties, and privacy dark patterns that heighten risks for workers. Findings include reversible Social Security Number (SSN) hashes and off-platform nagging tactics. Platforms addressed disclosed SSN issues, confirmed by independent audits. The study calls for stronger privacy protections and offers its data for further research.
A recent study explores how web developers using GPT-4 may unintentionally create deceptive designs (DD) in website features. Involving 20 participants, the study tasked users with generating product and checkout pages for a fictitious webshop using ChatGPT, followed by adjustments to boost sales. All 20 websites ended up containing DD patterns, averaging 5 per site, with GPT-4 offering no warnings. Notably, only 4 participants expressed ethical concerns, with most viewing the designs as effective and acceptable. These findings underscore potential ethical and legal risks in relying on LLM-generated design recommendations without careful oversight.
Antitrust laws often struggle to curb Big Tech’s anticompetitive business models, where dark patterns complicate enforcement. Austrian criminal law may fill this gap by addressing potentially deceptive practices. Amazon’s Prime and BuyBox, Google’s Search and Bard, and mechanisms like Activision Blizzard’s lootboxes and Engagement Optimised Matchmaking (EOMM) algorithms may constitute commercial fraud under §§ 146, 148 StGB, as they appear designed to mislead consumers for profit.
This special issue aims to deepen research on deceptive designs—manipulative digital features that exploit cognitive biases to influence user choices without full consent. As these designs proliferate on digital platforms and are enhanced by AI and immersive technologies, regulatory efforts across the EU, US, and India seek to curb their use. This issue invites interdisciplinary studies on the behavioral impacts of deceptive designs, effects on vulnerable users, regulatory strategies, and theoretical advancements in cognitive biases. Insights from psychology, economics, HCI, and law will support a framework for understanding and mitigating these designs in evolving digital contexts.
This chapter presents research on user experience in video games, aiming to identify core parameters that address user needs. Key factors—usability, engageability, motivation, emotion, immersion, and commitment—were analyzed as essential elements of a positive gaming experience. A specific heuristic evaluation was developed, incorporating these principles along with targeted questions to enhance video game assessments. This extended evaluation, optional and tailored for gaming, provides a refined tool for analyzing user experience in video games.
As the data ecosystem becomes increasingly intertwined with daily life, every action generates value within the data economy, raising the need for authorities to protect diverse and sometimes conflicting stakeholder interests. Despite existing regulations, effective compliance remains challenging due to a lack of suitable technical and organizational tools and adequately prepared oversight bodies. This paper explores how FACT principles, FAIR principles, and Open Personal Data Stores could guide ethical pathways for the data economy. Through ethical, economic, and legal lenses, it presents findings from literature review and content analysis, offering recommendations to help stakeholders better integrate ethics into the data economy.
Social robots, designed to foster trust and companionship, often incorporate elements that can lead to user manipulation and addiction. To address these risks, this thesis proposes using Vulnerability Theory to assess Human-Robot Interaction by balancing dependency with resilience, ensuring ethical, user-centered design. The "Safety and Intimacy in Human-Robot Interaction Pipeline" serves as a tool to analyze these interactions, supporting regulations aimed at protecting users from manipulation.
This study presents the design of a field experiment examining the effects of online tracking, targeting, ad-blocking, and anti-tracking technologies on consumer behavior and economic outcomes. While the online data industry promotes the benefits of tracking for targeted advertising, privacy concerns persist regarding the extensive tracking of consumers without their knowledge. The experiment aims to analyze how these technologies influence online browsing, shopping behavior, and purchasing decisions, focusing on metrics like spending, product prices, search time, and purchase satisfaction. The study outlines the rationale, experimental design, and data collection plans for these investigations.
Protecting the rights of algorithmic consumers is crucial for the healthy growth of the digital economy. Dark patterns, which manipulate consumer decision-making through platform interfaces, disrupt traditional marketing methods. These patterns exploit cognitive biases, reducing consumer autonomy and rationality. They can be categorized as inducing, deceptive, or manipulative, each with varying impacts, such as compromising personal information and undermining consumer autonomy. The U.S. and the EU tackle dark patterns through legislation, with the U.S. focusing on specialized laws and the EU combining rules and standards. A governance approach tailored to local contexts, involving regulation, protection, and collaboration, is needed to balance platform operators' interests and safeguard consumers, fostering a healthier digital economy.
A study on dark patterns in social media reveals their influence on user engagement and impulse buying. Based on 492 participants from PSUT in Jordan, the research found that time, effort, pleasure, and social acceptance positively affect engagement, while non-routine behavior has a negative impact. Engagement, in turn, boosts impulse buying. The study emphasizes the need for ethical marketing, as dark patterns can erode trust and harm long-term relationships.
This thesis explores how dark design patterns in casual mobile games exploit cognitive biases. While ethical concerns have been discussed, the cognitive mechanisms behind these patterns are less understood. Through interviews with mobile game players, the study finds that monetary tactics like hidden costs are more easily recognized than social or temporal patterns, with loss aversion being a key bias exploited. Many players express frustration with these manipulative designs. The thesis calls for greater awareness and discussions on ethical design and regulatory measures in mobile gaming.
A recent study explores the concept of Interactive Dark Patterns (IDPs), a subset of dark patterns that engage users through multiple steps or screens to manipulate their choices. While dark patterns have been widely studied since Harry Brignull's identification of them in 2010, the interactive nature of some patterns has been overlooked. This research aims to empower ordinary mobile application users by increasing their ability to recognize and avoid these deceptive design tactics through targeted training interventions. The study found that such training significantly reduced user entrapment, with the entrapment rate dropping from 79% to 27% post-training, highlighting the effectiveness of awareness-building in mitigating the impact of IDPs. These findings have broader implications for mobile app developers, UX/UI designers, and digital ethicists, emphasizing the importance of ethical design practices.
Dark patterns, which manipulate consumer behavior, have become widespread in digital markets, influencing how online consumers perceive, behave, and make purchases. A novel empirical study reveals that individuals across all demographic groups are vulnerable to these manipulative tactics, with little evidence that common markers of consumer vulnerability, such as income, education, or age, significantly reduce susceptibility. These findings support broad restrictions on the use of dark patterns, as proposed in the EU's Digital Services Act, to protect all consumers. The study also finds that dark patterns are markedly less effective when added friction, such as a required payment step, is introduced; they work best in low-friction scenarios, particularly when online providers store payment information for 'single-click' purchases.
A recent evaluation of alcohol-industry-funded (AIF) digital tools, such as blood alcohol calculators and consumption trackers, has revealed significant concerns regarding misinformation and manipulative design tactics. These tools, distributed by organizations like Drinkaware and Drinkwise, were found to contain health misinformation and "dark patterns" that potentially influence users towards increased alcohol consumption. Compared to non-industry-funded tools, AIF tools provided significantly less accurate health feedback (33% vs 100%), omitted crucial information about cancer and cardiovascular disease, and promoted industry-friendly narratives. Moreover, these tools employed techniques like priming nudges and social norming to encourage consumption, offering fewer behavior change techniques and restricted user input options. The study concludes that AIF tools may function as covert marketing channels, misleading users and promoting continued alcohol consumption through manipulative "Dark Apps" design strategies.
In recent years, concerns have intensified regarding the influence that digital architectures exert on individuals and societies, particularly through the use of dark patterns—manipulative interface designs embedded in digital services. While these tactics can affect anyone, traditional legal frameworks often focus on certain vulnerable groups, such as children. However, empirical research indicates that vulnerability to dark patterns is shaped by various factors, beyond the traditional group-based classifications. This article aims to deepen the understanding of this issue by offering a multidisciplinary analysis of the factors contributing to vulnerability, evaluating the feasibility of risk assessments in three EU legal frameworks—the GDPR, the Digital Services Act, and the Artificial Intelligence Act—and proposing strategies to enhance resilience to manipulative digital designs.
Recent research highlights the growing use of dark patterns—malicious interface design strategies that push users into making decisions against their best interests—across apps and websites. While previous studies have primarily focused on how these manipulative tactics affect adults, children are increasingly exposed to them. To explore how dark patterns impact younger audiences, researchers conducted a study with 66 fifth-grade students (aged 10–11) at a German school. The study revealed that many children could recognize simple manipulative tactics, such as complex wording and color-based manipulations, but most struggled to identify bad privacy defaults, indicating a particular vulnerability to privacy-related dark patterns.
This literature review explores the complex interactions between dating app ecosystems, advertising strategies, and the integration of dark patterns—subtle design techniques that influence user behavior. Through an analysis of existing research, the review highlights how dating apps strategically use advertising to boost user engagement, shape interactions, and increase revenue. It also addresses the ethical concerns surrounding the use of dark patterns, questioning the fine line between guiding user decisions and manipulating behavior. Additionally, the review examines user data privacy, focusing on how dating apps collect and utilize personal data for targeted advertising, while raising concerns about potential privacy risks and the regulatory measures in place to protect users. By synthesizing these themes, the article contributes to an ongoing discussion about responsible tech design, emphasizing user well-being, transparency, and ethical standards in the digital space.
The growing prevalence of cyberbullying on online social platforms has highlighted the need for effective detection and mitigation strategies. This study introduces a comprehensive approach to preprocessing and analyzing Twitter data to identify instances of cyberbullying. The research begins with loading a dataset of tweets labeled as positive or negative and conducting exploratory data analysis to understand sentiment distribution. The text data is then preprocessed through steps such as noise removal (eliminating URLs, mentions, punctuation), stopword removal, and lemmatization, which enhances the quality of the dataset. The study also examines word and character count distributions to gain insights into tweet lengths. This methodology lays the groundwork for further investigation into the patterns of cyberbullying, contributing to the creation of data-driven solutions to combat such behavior and fostering a safer online environment.
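The preprocessing steps the abstract lists (URL and mention removal, punctuation stripping, stopword removal, and length statistics) can be sketched with the Python standard library alone. This is a minimal illustration under assumed regular expressions and a tiny stopword list, not the study's actual pipeline; lemmatization is noted but omitted to stay dependency-free:

```python
import re

# A tiny illustrative stopword list; a real pipeline would use e.g. NLTK's.
STOPWORDS = {"the", "a", "an", "is", "are", "to", "of", "and", "you", "i"}

def preprocess(tweet):
    """Minimal cleaning pipeline: strip URLs and @mentions, drop punctuation,
    lowercase, and remove stopwords. (Lemmatization, e.g. via NLTK's
    WordNetLemmatizer, would follow here in a fuller pipeline.)"""
    text = re.sub(r"https?://\S+", " ", tweet)    # remove URLs
    text = re.sub(r"@\w+", " ", text)             # remove @mentions
    text = re.sub(r"[^\w\s]", " ", text).lower()  # strip punctuation
    return [t for t in text.split() if t not in STOPWORDS]

def length_stats(tweets):
    """Word- and character-count distributions for the exploratory step."""
    word_counts = [len(t.split()) for t in tweets]
    char_counts = [len(t) for t in tweets]
    return word_counts, char_counts

tokens = preprocess("@user You are the worst!!! see https://t.co/abc")
# -> ['worst', 'see']
```

The order of the substitutions matters: URLs must be removed before punctuation stripping, or the scheme's `://` would be broken apart and leave fragments behind.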
The increased time spent by users in virtual environments and the influence of AI-driven choice architectures have raised concerns about how AI systems can subtly persuade users' actions. This issue is examined through the concept of "dark patterns," which are interface designs that manipulate user behavior in ways that may not be beneficial to them. Although regulatory measures have been introduced in various regions, including India, the article argues that these regulations are insufficient to address AI-powered dark patterns. Such patterns operate at a deeper behavioral level, making users' choices appear uninfluenced by manipulation. The article concludes by advocating for a more comprehensive approach to tackle the persuasive tactics employed by AI systems.
The California Consumer Privacy Act (CCPA) mandates that businesses offer consumers a clear way to opt out of the sale and sharing of their personal information. However, many businesses leverage control over the opt-out process to impose obstacles on consumers, including the use of dark patterns. The enactment of the California Privacy Rights Act (CPRA) aims to strengthen the CCPA and explicitly prohibits certain dark patterns in these processes. Despite these regulatory efforts, research shows that websites continue to employ a variety of dark patterns, some of which exploit legal loopholes, highlighting the need for further action by policymakers.
This book examines the evolution of legal design, which uses design methods to make legal systems more accessible. Initially focused on problem-solving, legal design now incorporates speculative design, proactive law, and insights from fields like cognitive science and philosophy. Featuring twelve essays from the 2023 Legal Design Roundtable, the book offers diverse perspectives from academics and professionals, exploring new approaches and practical applications. It’s a valuable resource for those interested in innovative, human-centered approaches to law.
Dark patterns are deceptive design practices that impair users' ability to make autonomous and informed decisions, as defined by the European Digital Services Act (DSA). These patterns manipulate users into actions that benefit service providers, such as accepting disadvantageous terms, making unwanted financial transactions, or disclosing personal information. Despite increased regulatory attention and scholarly interest, dark patterns continue to proliferate across digital platforms. This thesis examines dark patterns across various digital contexts, including web and mobile interfaces, IoT devices, and social robots, combining human-computer interaction studies with legal analysis to identify opportunities for mitigating their negative impact.
Technological progress has led to the blending of market techniques from various economic sectors, resulting in the use of diverse rules and instruments and the emergence of novel market practices. This disruptive innovation, especially evident in the crypto-asset market, challenges the application and interpretation of existing rules against market malpractice. After analyzing digital market practices like gamification and dark patterns, this thesis examines the compatibility and adaptability of rules on unfair commercial practices within the evolving digital landscape of the crypto market.
This thesis explores how dark patterns in cookie consent dialogues influence user behavior. An experiment using eye tracking revealed that while wording on button labels significantly affected task completion times, visual design changes did not. Participants generally read from left to right, but individual habits played a larger role in their interactions, indicating that established design conventions are more influential than minor visual tweaks in guiding user decisions.
This study examines the hidden costs and benefits of zero-price digital services, highlighting that while users derive significant value from "free" apps, they also face challenges like procrastination, sleep deprivation, and reduced focus. Based on survey data from 196 participants in Linköping, Sweden, the research reveals a growing consumer preference for paid services over free ones, suggesting a shift in attitudes towards digital payment models. The findings underscore the need for greater corporate transparency and user awareness about non-monetary costs, as well as a balanced approach to user protection and innovation in the digital economy.
In the world of online shopping, "dark patterns" are deceptive design tactics that manipulate consumers into unintended actions like purchases or subscriptions. This study examines their prevalence on fashion websites, finding that 78.4% use these tactics, with Nagging and Limited-time Messages being the most common. Fashion sites use more visual dark patterns than general e-commerce sites, and there's a link between these tactics and website popularity. While user reactions to dark patterns vary, prior trust in a brand can reduce their negative impact.
This chapter explores the challenges posed by data-driven technologies to consumer consent, particularly in the context of manipulative practices known as dark patterns. It begins by examining various dark design strategies that exploit consumer vulnerabilities, especially cognitive biases. The chapter then critiques the limitations of the European information-based approach, which forms the foundation of data protection and consumer law, arguing that current regulations are ill-equipped to address digital vulnerabilities. The analysis highlights the inadequacy of a "one-size-fits-all" model like the GDPR and advocates for a more holistic approach to consumer consent protection. It suggests that integrating legal considerations into the design phase of technological architectures could enhance the protection of consumers' authentic choices and prevent manipulative practices.
This study examines the impact of "dark patterns," interface designs that nudge consumers into sharing data under regulations like the GDPR. A field experiment shows that, even without dark patterns, consumers accept cookies more than half the time. Dark patterns, especially those hiding consent options behind extra clicks, significantly influence choices. Larger, well-known firms see slightly higher consent rates, but site popularity doesn't affect the impact of dark patterns. The study also finds no evidence of choice fatigue from repeated pop-ups.
"Dark patterns," deceptive designs that steer users into actions benefiting service providers, are common in digital marketing. While deceived users often face financial or time costs, non-deceived users, those who recognize and avoid these patterns, may also experience stress and frustration due to the extra effort required. This study focuses on these non-deceived users, exploring how the effort to avoid dark patterns can negatively impact usability and potentially erode trust in service providers.
Advancements in Mixed Reality (MR) technology have made it more accessible to consumers, but this has also led to an increase in dark patterns—manipulative design tactics that deceive users. This research examines these tactics in MR environments, analyzing 80 applications and identifying five key dark patterns: Hidden Costs, Misinformation, Button Camouflage, Forced Continuity, and Disguised Ads. The study highlights the harmful impact of these patterns on user trust and decision-making.
Although numerous Human-Computer Interaction (HCI) studies have empirically investigated the harms caused by dark patterns, and policymakers and regulators acknowledge these harms as significant, they have yet to be thoroughly examined from a legal perspective. This paper addresses this gap by identifying and analyzing the harms associated with dark patterns (DP), focusing on their role in the emerging European 'dark patterns acquis'. The paper organizes existing knowledge on dark pattern harms from HCI research and proposes a taxonomy of these harms. It also bridges the discussion of dark pattern harms in HCI with the legal frameworks for assessing harms under European data protection, consumer law, and competition law.
This research highlights that the growing number and diversity of dark patterns means that each specific form, and its legal assessment, must be evaluated based on how it is used and the intentions behind its use.
This book contains the refereed proceedings of the 12th Annual Privacy Forum on Privacy Technologies and Policy (APF 2024), held in Karlstad, Sweden, on September 4–5, 2024. The conference featured 12 full papers, carefully selected from 60 submissions, and aimed to bring together experts from policy, academia, and industry to discuss privacy and data protection. The 2024 conference particularly focused on the General Data Protection Regulation (GDPR) and emerging legislation around European Data Spaces and Artificial Intelligence. Chapters 3, 9, and 12 are licensed under the Creative Commons Attribution 4.0 International License, as detailed in the respective chapters.
As video games have become a mainstream form of entertainment, they have sparked new media concerns, including gaming addiction, screen time effects, gambling-like mechanics, dark patterns, and online toxicity. Additionally, issues like harassment, discrimination, and poor working conditions in the gaming industry are gaining attention. To address these concerns, the first Ethical Games Conference in 2024 brought together research on ethical issues in gaming, aiming to create evidence-based guidelines for the industry and regulators. This special issue features selected papers and opinion pieces, highlighting challenges and exploring how games can be leveraged for positive social impact.
"Light Up" is an educational game designed to expose the prevalence of UX dark patterns on websites. It simulates real-world scenarios where users are manipulated into actions they might avoid, helping players identify and understand these deceptive tactics. The game aims to raise awareness of the harm caused by dark patterns and empower users to resist them, protecting their privacy, finances, and autonomy online.
Digital nudging has gained prominence as a research topic in information systems, typically viewed as a positive engagement strategy. However, this paper critically examines how digital nudging can offend users' dignity. Using CARE theory, which suggests people react negatively to dignity affronts, the study analyzes 42 interviews from a three-month data collection involving a mobile app with daily digital nudges. The findings highlight that digital nudges can provoke forfeit, flight, or fight responses, and may even become dark patterns under certain conditions, despite responsible design. The paper contributes theoretically by conceptualizing digital nudging, offers empirical insights into its dual nature, and provides practical design guidelines to avoid dignity affronts.
As digital interfaces grow more prevalent, ethical concerns, particularly around dark patterns—manipulative design tactics used to influence user behavior—have become a critical area of study. This research introduces the Dark Pattern Analysis Framework (DPAF), which offers a taxonomy of 64 dark patterns. Current detection tools and datasets only cover 50% of these patterns, revealing significant gaps. The findings underscore the need for improvements in the classification and detection of dark patterns, offering key insights for future research.
The widespread use of manipulative designs, or dark patterns, in everyday applications and their impact on users is raising concerns among policymakers and scholars. These designs employ techniques that nudge users into making decisions they might not choose if fully informed, causing various types of harm. The integration of these mechanisms with other platform features makes it difficult for users to recognize the manipulation. Understanding the effects of manipulative designs is crucial for developing protective countermeasures, yet their empirical study poses significant methodological challenges, not least because users often do not perceive the manipulation at all. This paper reflects on these challenges through three case studies, highlighting key issues and providing methodological insights for the empirical study of manipulative designs.
Nowadays, websites commonly use two concepts to influence user behavior: deceptive patterns and nudges. In the literature, these concepts are distinguished by their goals and effects—deceptive patterns manipulate users, while nudges encourage better decision-making. However, from a technical perspective, it is unclear if they differ in their implementation. This paper presents a methodology developed to determine whether it is possible to automatically differentiate between deceptive patterns and nudges when crawling a web page. Our findings suggest that there is no need to distinguish between the two concepts, as they are implemented using the same techniques.
Navigating the web has become increasingly difficult for users due to manipulative UI design patterns known as "dark patterns," which lead users to act against their best interests. These tactics are prevalent, yet users remain largely unaware of them. Existing detection methods, including machine learning algorithms, struggle to generalize across all dark patterns due to their varied definitions and implementations. This paper proposes crowdsourcing as a solution to detect and flag dark patterns. Crowdsourcing leverages users' collective experiences to identify these manipulative designs more effectively. The authors introduce Neighborhood Watch, a Chrome extension that allows users to tag dark patterns on websites and view tags submitted by others. This system promotes more conscientious browsing and reduces susceptibility to dark patterns. Despite some limitations, the study concludes that crowdsourcing can effectively protect users from manipulative interfaces.
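A crowdsourced approach like the one described can be sketched with a simple aggregation rule: an element is only flagged once several distinct users have tagged it, which damps the effect of a single mistaken report. The class below is an illustrative sketch of that idea, not the actual Neighborhood Watch implementation; the names (`TagStore`, `submit_tag`, `is_flagged`) and the threshold value are assumptions for illustration.

```python
from collections import defaultdict


class TagStore:
    """Hypothetical backing store for crowd-sourced dark-pattern tags,
    keyed by (page URL, element selector)."""

    def __init__(self, threshold=3):
        self.threshold = threshold
        # (url, selector) -> set of distinct user ids who tagged it
        self.tags = defaultdict(set)

    def submit_tag(self, url, selector, user_id):
        """Record one user's report of a suspected dark pattern.
        Using a set means repeat reports by the same user don't count twice."""
        self.tags[(url, selector)].add(user_id)

    def is_flagged(self, url, selector):
        """An element is flagged once reports from distinct users
        reach the threshold."""
        return len(self.tags[(url, selector)]) >= self.threshold


store = TagStore(threshold=3)
store.submit_tag("https://shop.example", "#countdown-banner", "user-a")
store.submit_tag("https://shop.example", "#countdown-banner", "user-b")
assert not store.is_flagged("https://shop.example", "#countdown-banner")
store.submit_tag("https://shop.example", "#countdown-banner", "user-c")
assert store.is_flagged("https://shop.example", "#countdown-banner")
```

A browser extension following this design would submit tags from a content script and overlay flags on elements whose report count crosses the threshold.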
Dark patterns in user interfaces have attracted global attention from various disciplines. This study highlights a meta-level issue: demographic biases in dark pattern research. It examines the origins of published research and participant demographics, revealing a bias favoring English-speaking North America and Europe. Addressing these biases is crucial for ensuring inclusivity and rigor in the field.
Commercial health apps have become more accessible and popular, serving purposes such as enhancing health literacy, enabling continuous health tracking, and facilitating community engagement. However, concerns have arisen about the privacy, commodification, and exploitation of data generated by these apps. Less is known about deceptive design patterns and coercive practices from the users' perspective. This study uses pregnancy tracking apps as a case study and presents preliminary findings on user experiences. We argue that health apps require a nuanced consideration of deceptive design practices because (1) these patterns can uniquely intersect with users' vulnerabilities in the health context, and (2) the implications can extend beyond financial losses and privacy invasion, impacting users' health and well-being.
Privacy dark patterns are design tactics used by online services to reduce users' online privacy. These patterns either facilitate institutional data collection or increase others' access to personal data. This study examines how social networking sites popular with teens—Snapchat, TikTok, Instagram, Twitter, and Discord—use these tactics to steer users into reducing their social privacy. We analyzed recordings of account registrations, settings configurations, and logins/logouts for each SNS. Our content analysis identified two major dark pattern types—Obstruction and Obfuscation—and seven subtypes. We discuss why social media companies promote social sharing through design and the challenges of regulating these privacy dark patterns.
The use of persuasive designs to influence user behavior is now ubiquitous in digital contexts, giving rise to ethically questionable practices known as 'dark patterns.' While various taxonomies of dark patterns exist, there is a lack of frameworks that address how these designs are embedded not only in user interfaces but also in the functionality and strategy of digital systems. This paper proposes a framework for a Layered Analysis of Persuasive Designs, grounded in Garrett’s five-layer model of user experience (UX) design and Fogg’s Behavior Design Model. The framework identifies a toolkit of 48 design elements that can be used to operationalize problematic persuasion in digital contexts, highlighting the autonomy impact of each element. This framework aims to assist designers and policymakers in identifying and evaluating (potential) dark patterns within digital systems from an autonomy perspective.
The issue of Dark Patterns, or "Deceptive Design," is gaining recognition in literature. However, their widespread presence across various domains complicates interdisciplinary communication and collaboration. Existing taxonomies of these patterns often overlap and address them at different levels of abstraction, hindering cross-domain discourse. This is problematic given the growing evidence of the adverse effects of such designs on users. Additionally, the fine line between manipulative dark patterns and intuitive, protective, and defensive interface designs further complicates the issue. Current taxonomies primarily define patterns but struggle to distinguish between manipulative and benevolent implementations in specific contexts. This work proposes a method to differentiate between these applications by analyzing previously identified patterns for their properties, consequences, and contexts of application. This paper presents our progress toward creating a taxonomy-independent evaluation process for identifying and describing Dark Patterns.
Regulatory responses to dark patterns often depend on expert evaluations of design interfaces to determine if users are being manipulated or deceived. This article unpacks expert assessments of dark patterns used to solicit user consent and argues that regulatory actions should explicitly address whose expertise is being consulted. It concludes by discussing the value of deliberative mechanisms in broadening the range of both experts and expertise modes for identifying, evaluating, and regulating dark patterns.
Dark patterns refer to design practices that trick or manipulate users into making certain choices. One in four internet users encounter dark patterns. This paper examines key guidelines issued by government commissions or authorities worldwide, including those from the United States, South Korea, India, the European Union, California, Australia, the United Kingdom, Kenya, and Argentina. A comparative analysis of these guidelines highlights national standards, types of dark patterns, and adherence norms. The study reveals minimal enforcement efforts by the relevant authorities to counter dark patterns. It advocates for global collaboration to establish universal guidelines against dark patterns, overseen by an international authority or commission.
As of January 2024, 5.35 billion people, or 66.2 percent of the world's population, are online. With attention spans reduced to 8 seconds, digital businesses struggle to acquire, engage, and retain users. Many companies use dark patterns—deceptive UI elements that trick users into actions like signing up for services or making purchases. This thesis investigates various dark patterns, such as roach motel, malicious nudging, urgency/scarcity, bait and switch, and confirm-shaming, categorized into “pressure” and “trickery” tactics. While these methods help companies meet business goals, they undermine user trust. To combat this, the thesis proposes developing a Chrome extension to detect and highlight scarcity and urgency dark patterns. This tool aims to raise user awareness and promote a more transparent internet experience.
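Scarcity and urgency messages tend to follow recognizable phrasings ("only 2 left in stock", "offer ends today"), which is what makes them a tractable first target for automated flagging. The sketch below illustrates one plausible detection core for such an extension, keyword/regex matching over page text; the phrase lists are invented examples and not the thesis's actual detection logic.

```python
import re

# Illustrative phrase patterns for the two families the extension targets.
URGENCY = re.compile(
    r"(hurry|ends (in|soon|today)|last chance|only \d+ (hours?|minutes?) left)",
    re.IGNORECASE,
)
SCARCITY = re.compile(
    r"(only \d+ left( in stock)?"
    r"|\d+ (people|others) (are )?(viewing|looking at)"
    r"|almost (sold out|gone)|selling fast)",
    re.IGNORECASE,
)


def classify(text):
    """Return which dark-pattern families a snippet of page text matches."""
    labels = []
    if URGENCY.search(text):
        labels.append("urgency")
    if SCARCITY.search(text):
        labels.append("scarcity")
    return labels


print(classify("Hurry! Only 2 left in stock"))  # both families match
```

In a real Chrome extension, a content script would run logic like this over visible text nodes and highlight matching elements; regex matching alone will miss reworded variants, which is why more robust detectors move to machine learning classifiers.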
Internet users are constantly bombarded with digital nudges like defaults, friction, and reinforcement. When these nudges lack transparency, are not optional, or do not benefit the user, they become 'dark patterns', categorized under the acronym FORCES (Frame, Obstruct, Ruse, Compel, Entangle, Seduce). Psychological principles like negativity bias, the curiosity gap, and fluency are exploited to make social content viral, while covert tactics such as astroturfing, meta-nudging, and inoculation are used to create false consensus. The power of these techniques is poised to grow with advances in predictive algorithms, generative AI, and virtual reality. Although digital nudges can be used altruistically to protect against manipulation, their effectiveness remains inconsistent.
As technology advances, the regulation of dark pattern practices has become crucial. These deceptive tactics manipulate users into unfavorable actions, like complicating service unsubscribes or highlighting consent buttons to obscure transparency. The European Union has responded with several legislative acts, including the GDPR, the Digital Markets Act, and the Digital Services Act. However, these regulations often overlap, creating ambiguities and redundancies. This article examines these challenges and proposes solutions, such as harmonization and centralization, to streamline the regulatory framework. The goal is to protect users from manipulative practices and ensure informed decision-making in the digital realm.
This cumulative thesis investigates the intentions behind digital interfaces, focusing on exploitative "dark patterns" that manipulate user behavior. While existing HCI research has identified various dark patterns, this work synthesizes findings to develop comprehensive frameworks and tools for better understanding and mitigating their effects. Through qualitative and quantitative studies, the thesis explores dark patterns in social networking sites (SNS), examines user perceptions and challenges in recognizing these patterns, and contributes to design theory by identifying where dark patterns manifest. The Responsible Design Triangle model is introduced, highlighting the interdependencies between design, user behavior, and guidelines, to promote ethical digital interface design.
This research investigates the dark patterns users encounter when subscribing to or unsubscribing from online services. While previous studies have described dark patterns in digital contexts, this study provides a detailed analysis of such patterns in online subscriptions. By examining ten case studies of sign-up and cancellation processes on streaming platforms and software services, the research identifies deceptive designs and asymmetric efforts through user flow data and visual artifacts. The findings highlight the prevalence and complexity of dark patterns, offering insights for stakeholders, future design standards, and policy recommendations to improve consumer protection in online subscriptions.
This paper examines the role of dark patterns within TikTok, a rapidly growing social media platform. Utilizing principles from behavioral economics and the existing literature on online choice architecture (OCA), the study investigates how TikTok employs dark patterns to engage users and explores the implications for data protection, algorithmic practices, and market dynamics. The paper uses the "walkthrough method" to conduct a case study, detailing the TikTok user experience and identifying potential dark patterns used by the app. It discusses the challenges in distinguishing dark patterns from legitimate commercial practices, especially considering the user impact. Additionally, the paper examines how dark patterns and OCA are addressed in the Latin American legal landscape and proposes next steps for the ongoing debate based on the study's findings.
Over the past two decades, the focus of design patterns has shifted from encouraging best practices to discouraging harmful ones. Dark and deceptive UX patterns that monetize engagement while perpetuating structural inequities are now prevalent. This study uses a visual case study of a childcare worker platform to critically examine these patterns. Through Care Layering, a form of critical documentation, the study highlights how UX patterns, when viewed as culturally-situated resources, reveal both limitations and opportunities in gig work platform engagement. The discussion emphasizes how Care Layering can help designers achieve greater accountability in UX design.
This research paper explores the widespread use of dark patterns in UI and UX design, uncovering the ethical implications of these manipulative practices. Dark patterns are deceptive design elements that influence user behavior for the benefit of designers or third parties. By examining various examples and their impact on user decision-making, the paper emphasizes the importance of recognizing and understanding these patterns to make informed choices in the digital landscape.
Current online contract practices often involve situations where parties do not understand their rights and obligations under these contracts. This article examines how complex online contracts complicate, and sometimes impede, people's ability to make strategic, autonomous decisions. It also addresses how legal design approaches can shed light on this complexity and help tackle dark patterns in online contracting, reducing transaction costs while improving legal quality, business sustainability, and competitive advantage.
This summarises a roundtable on ongoing and emerging consumer risks associated with dark commercial patterns online organised as part of the 99th Session (Part 2) of the Committee on Consumer Policy (CCP) on 6 November 2020. It featured panellists from academia, consumer protection authorities, and a consumer organisation, the Norwegian Consumer Council. It begins with an overview of the main themes that emerged from the discussion, including examples and categories of dark commercial patterns and their defining attributes; evidence of their prevalence online; consumer vulnerability; and tools and approaches available to consumer protection authorities and policy makers to identify and mitigate them. It then provides details of the presentations by each of the panellists, before concluding with suggested next steps.
The white paper examines ethical issues in popular Android and iOS apps used by adolescents, across categories including education, gaming, communication, social, and dating. Additional categories, including music and audio, entertainment, and movies and series, were covered in subsequent parts of the study. Ethical issues were also examined in other platforms such as Twitter, Reddit, Quora, and Google. The author further examines ethical issues in four key areas: privacy, age-appropriateness, human-in-the-loop, and user interface. In conclusion, ethical considerations in developing and deploying apps for children and adolescents are found to be necessary and cannot be undermined, given mobile apps' influence on them.
The article reviews recent work on dark patterns and demonstrates that the literature does not reflect a singular concern or consistent definition, but rather a set of thematically related considerations. Drawing from scholarship in psychology, economics, ethics, philosophy, and law, the authors articulate a set of normative perspectives for analyzing dark patterns and their effects on individuals and society, and show how future research on dark patterns can go beyond subjective criticism of user interface designs and apply empirical methods grounded in normative perspectives.
The authors present automated techniques that enable experts to identify dark patterns on a large set of websites. Using these techniques, they study shopping websites, which often use dark patterns to influence users into making more purchases or disclosing more information than they would otherwise. Examining these dark patterns for deceptive practices, they find 183 offending websites. They also uncover 22 third-party entities that offer dark patterns as a turnkey solution. Finally, they develop a taxonomy of dark pattern characteristics that describes their underlying influence and their potential harm to user decision-making. Based on these findings, they make recommendations for stakeholders, including researchers and regulators, to study, mitigate, and minimize the use of these patterns.
With the nascent rise of the voice intelligence industry, consumer engagement is evolving. The expected shift from navigating digital environments by a “click” of a mouse or a “touch” of a screen to “voice commands” has set digital platforms for a race to become leaders in voice-based services. The European Commission's inquiry into the consumer IoT sector revealed that the development of the market for general-purpose voice assistants is spearheaded by a handful of big technology companies, highlighting the concerns over the contestability and growing concentration in these markets. This article posits that voice assistants are uniquely positioned to engage in dynamically personalized steering – hypernudging – of consumers toward market outcomes. It examines hypernudging by voice assistants through the lens of abuse of dominance prohibition enshrined in article 102 TFEU, showcasing that advanced user influencing, such as hypernudging, could become a vehicle for engaging in a more subtle anticompetitive self-preferencing.
In this paper, the authors examine the extent to which common UI dark patterns can be automatically recognized in modern software applications. They introduce AIDUI, a novel automated approach that uses computer vision and natural language processing techniques to recognize a set of visual and textual cues in application screenshots that signify the presence of ten unique UI dark patterns, allowing for their detection, classification, and localization. To evaluate this approach, they constructed CONTEXTDP, the current largest dataset of fully-localized UI dark patterns that spans 175 mobile and 83 web UI screenshots containing 301 dark pattern instances. Overall, this work demonstrates the plausibility of developing tools to aid developers in recognizing and appropriately rectifying deceptive UI patterns.
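The textual side of an approach like this can be illustrated by associating each dark-pattern class with characteristic phrases and matching extracted UI text against them. The cue lists and function names below are invented for illustration and are not AIDUI's actual lexicons or pipeline (which also uses computer vision for visual cues).

```python
# Hypothetical per-class textual cues; a real system would learn or
# curate far larger lexicons and combine them with visual features.
TEXT_CUES = {
    "confirmshaming": ["no thanks, i hate saving money", "i don't want"],
    "false_urgency": ["offer ends", "limited time"],
    "forced_continuity": ["free trial", "auto-renew"],
}


def detect_patterns(ui_texts):
    """Map OCR/DOM-extracted UI strings to candidate dark-pattern classes."""
    hits = set()
    for text in ui_texts:
        lowered = text.lower()
        for pattern, cues in TEXT_CUES.items():
            if any(cue in lowered for cue in cues):
                hits.add(pattern)
    return sorted(hits)


print(detect_patterns(["Start your FREE TRIAL", "Offer ends tonight!"]))
# -> ['false_urgency', 'forced_continuity']
```

Text cues alone cannot localize a pattern on screen, which is why the paper pairs them with visual detection over screenshots.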
“How does the end user perceive, experience, and respond to dark patterns?” This is the research question driving this inquiry. The paper contributes to an increased awareness of the phenomenon of dark patterns by exploring how users perceive and experience them. To this end, the authors chose a qualitative research approach using focus groups and interviews. Their analysis shows that participants were moderately aware of these deceptive techniques, several of which were perceived as sneaky and dishonest. Participants further expressed a resigned attitude toward such techniques and primarily blamed businesses for their occurrence. Users also cited their dependency on services employing these practices, which makes it difficult to fully avoid dark patterns.
Many tech companies exploit psychological vulnerabilities to design digital interfaces that maximize the frequency and duration of user visits. Consequently, users often report feeling dissatisfied with time spent on such services. Prior work has developed typologies of damaging design patterns (or dark patterns) that contribute to financial and privacy harms, which has helped designers to resist these patterns and policymakers to regulate them. However, there is a missing collection of similar problematic patterns that lead to attentional harms. To close this gap, the authors conducted a systematic literature review for what they call ‘attention capture damaging patterns’ (ACDPs). They analyzed 43 papers to identify their characteristics, the psychological vulnerabilities they exploit, and their impact on digital wellbeing. They propose a definition of ACDPs and identify eleven common types, from Time Fog to Infinite Scroll. The typology offers technologists and policymakers a common reference to advocate, design, and regulate against attentional harms.
This article discusses the results of the authors’ two large-scale experiments in which representative samples of American consumers were exposed to dark patterns. The research showed the susceptibility of certain groups, particularly less-educated consumers, to dark patterns, and identified the dark patterns most likely to nudge consumers into decisions they are likely to regret or misunderstand. Hidden information, trick questions, and obstruction were shown to be particularly manipulative strategies.
This article analyzes the definition of dark patterns introduced by the California Privacy Rights Act (CPRA), the first legislation explicitly regulating dark patterns in the United States. The authors discuss the factors that make defining and regulating privacy-focused dark patterns challenging, review current regulatory approaches, consider the challenges of measuring and evaluating dark patterns, and provide recommendations for policymakers. They argue that California’s model offers the opportunity for the state to take a leadership role in regulating dark patterns generally and privacy dark patterns specifically, and that the CPRA’s definition of dark patterns, which relies on outcomes and avoids targeting issues of designer intent, presents a potential model for others to follow.
This submission assesses the extent to which extant and forthcoming legislation is equipped to address the pernicious practice of dark patterns through the prism of the digital fairness review.
In light of the regulation of manipulative interfaces by the United States, questions have been raised about the advisability of national or even European regulation of the exploitation of our cognitive biases by designers of digital interfaces. The article then examines the extent to which legislation regulates abusive practices, i.e. dark patterns that exploit cognitive biases. Finally, the author proposes that consideration could be given to a principle governing the purpose of capturing attention, in particular the collection of attention for specific, explicit, and legitimate purposes, and the absence of further processing in a manner incompatible with the purposes initially intended.
The author in this paper takes an interesting stance relating to dark patterns and online privacy: that current online consent mechanisms do not permit data subjects to think, decide, and choose according to their internal beliefs, thereby impairing essential individual freedoms or capabilities. Cognitive limitations, information overload, information sufficiency, lack of intervenability, and lack of free choice are identified as major shortcomings of consent in privacy. Based on these findings, the author proposes a methodology to evaluate old or new design measures to improve consent and restore the freedoms of thought, decision, and choice.
Dark patterns are common in everyday digital experiences, and they present a new challenge to emerging global privacy laws, particularly the European Union (EU) data protection framework and the General Data Protection Regulation (GDPR). The author contends that while there is an apparent lack of legal tools to deal with dark patterns, the current framework can be amended to identify and curb them, especially through a refinement of the requisites for lawfulness of consent and the reformulation of the fairness principle in data protection.
This article outlines and explores the limits of dark patterns, which it describes as a specific ethical phenomenon that supplants user value in favour of shareholder value. It analyses the corpus of practitioner-defined dark patterns and identifies the ethical concerns raised in these examples, as well as the examples that fall under a wider range of ethical issues practitioners frequently conflated under the umbrella term of dark patterns. The researchers also acknowledge that UX designers may be complicit in these manipulative or unreasonably persuasive techniques, and conclude with implications for educating user experience designers and a proposal for broadening research on the ethics of user experience.
In two preregistered online experiments, the authors investigated the effects of three common design nudges (default, aesthetic manipulation, obstruction) on users’ consent decisions and their perception of control over their personal data. In the first experiment (N = 228) they explored the effects of design nudges towards the privacy-unfriendly option (dark patterns); in the second, they reversed the direction of the nudges towards the privacy-friendly option, which they title “bright patterns”. Overall, the findings suggest that many current implementations of cookie consent requests do not enable meaningful choices by internet users and are thus not in line with the intention of EU policymakers. The authors also explore how policymakers could address the problem.
In the past years, regulators around the world found a new focal point in addressing online information asymmetries. This focal point is dark patterns. As this inspired a widespread interest that brings together human-computer interaction, web measurement, data protection, consumer protection, competition law, and behavioral economics – to name a few relevant disciplines – the authors decided to focus this consumer update on this topic. With the help of Cristiana Santos, who is an expert in the conceptualization and detection of dark patterns as privacy violations, they have written a short summary of the research in the field, regulatory concerns, as well as brief critical reflections showing where more attention should be paid.
The research conducted for this study is significant because it shows that dark patterns are prevalent and increasingly used by traders of all sizes, not only large platforms. According to the mystery shopping exercise, 97% of the most popular websites and apps used by EU consumers deployed at least one dark pattern; moreover, it was rare to find a dark pattern used in isolation, as multiple patterns were often featured on a single site. Altogether, the study sheds light on the prevalence of dark patterns in the digital world and on how many internet players have a hand in this phenomenon.
The world seemingly becomes more consumer-friendly with each generation as businesses take more and more measures to ensure customers are left with a positive experience. Despite our technological advancement and understanding of human nature, it’s still common to see deception in digital products. To understand digital deception (and better define it), we surveyed 536 people to measure and discuss people’s understanding of this matter.
Default decisions are prevalent and influential in areas ranging from retirement program design and organ donation policies to consumer choice. While past research has attributed the power of these no-action defaults to effort and implied endorsement, little work has examined reference dependence: how the default choice can serve as a reference point for determining whether the other choices are evaluated positively or negatively. In this article, the researchers demonstrate how reference dependence can increase the effectiveness of defaults.
The concept of dark patterns remains ill-conceptualized in the Digital Services Act, the first EU legislative act to tackle dark patterns head-on. The author identifies several concerning aspects of the DSA's prohibition of dark patterns, notably its reference to manipulation as a source of consumer harm. 'Manipulation' is not defined in the regulation and is a new EU legal term; many philosophers have reflected on its meaning and on how it manifests in digital environments, and the jury is still out on this question. The article assesses this and related shortcomings, including those in the IMCO Committee's proposed amendment of the Unfair Commercial Practices Directive to better target dark patterns.
The central idea of this working paper is that issues of safety and fairness can no longer be regulated with consumer choice as the primary protection. Instead, consumers need a privacy law that stops harmful business practices before they cause significant harm. Two concepts are explored to address both current and emerging data harms: a duty of care (or best-interests duty) and a privacy safety regime. Borrowing from product intervention powers and product safety interventions, the CPRC proposes options that would allow governments and regulators to stop or limit obviously harmful uses of data, as well as a process for regulators to proactively restrict and test new harmful practices as they evolve. It concludes that the law needs to require more effort from businesses to assess whether their collection, sharing, and use of data produces fair outcomes for their customers.