A collection of research and articles about dark patterns, user behavior and law
The concept of "design patterns," originating in architecture, has evolved into digital design fields, with "anti-patterns" emerging to describe ineffective or harmful solutions. In 2010, UX expert Harry Brignull introduced "dark patterns" to define intentionally deceptive UI designs that manipulate users for corporate gain, launching darkpatterns.org to raise awareness. Research in Human-Computer Interaction (HCI) has since documented their prevalence across platforms like e-commerce and social media, highlighting their evolving nature and the challenges they pose for regulation. This chapter examines dark patterns, their mechanisms, and their impact on user behavior, focusing on e-commerce "shopping dark patterns" and exploring regulatory hurdles and potential solutions.
Lior Strahilevitz, the Sidley Austin Professor of Law at the University of Chicago, has revisited the issue of dark patterns—manipulative online tactics designed to exploit consumer behavior—in his latest paper, "Can Consumers Protect Themselves Against Dark Patterns?" Building on his highly influential 2021 work that helped shape the public discourse around these deceptive practices, Strahilevitz now shifts focus toward potential solutions. Collaborating with experts in computer science and social psychology, he developed a realistic Netflix-like platform to study consumer responses to manipulative interfaces under real-world conditions. The research investigates critical questions, including the effectiveness of dark patterns in persuading users to share private information, particularly when they are multitasking or under cognitive load. Coauthored with Matthew Kugler, Marshini Chetty, and Chirag Mahapatra, the paper highlights UChicago’s hallmark interdisciplinary and empirical research approach, offering fresh insights into combating increasingly sophisticated dark patterns.
A recent study explored the challenges users face in resisting dark patterns—manipulative design tactics used in e-commerce to influence user decisions. By analyzing expert opinions from 17 industry professionals and employing grey influence analysis (GINA), the research ranked the significance of 11 identified challenges. Key findings revealed that lack of user awareness, limited control over cognitive biases, and preference for short-term benefits are the most influential barriers. On the other hand, factors like the normalization of aggressive marketing, lack of collective user action, and legal challenges were found to have minimal impact. The study emphasizes the importance of targeted consumer education to mitigate cognitive biases and advocates for ethical design practices in e-commerce to foster trust and transparency. It is among the first to apply GINA in this context, offering unique insights into the cascading effects of user challenges in manipulative digital environments.
A recent study compared the prevalence of Deceptive Patterns—manipulative interface elements that can lead to financial, temporal, and privacy-related losses—across iOS and Android mobile apps. Despite the perception of iOS as a safer platform due to Apple's ecosystem controls, the analysis of 143 apps revealed that iOS apps exhibited more instances of Deceptive Patterns than their Android counterparts (1,477 vs. 1,398). The study identified four specific types of Deceptive Patterns with significant discrepancies between the platforms, suggesting potential influences from app store guidelines, developer tools, and the increased use of A/B testing. These findings underscore the need for further examination of platform-specific user protections to address risks associated with Deceptive Patterns.
Dark patterns are a global issue, but Indonesia has yet to address them effectively. This article analyzes dark patterns on digital platforms through the lens of Indonesia’s consumer protection laws, arguing that existing regulations are insufficient. While some laws indirectly prohibit certain dark patterns, their application is limited and difficult to enforce. The article recommends explicitly prohibiting the use of manipulative design interfaces that undermine consumer autonomy or exploit behavioral biases. These regulations need not define “dark patterns” directly but should clearly target such deceptive practices to strengthen consumer protection in Indonesia.
This book examines the rising influence of dark patterns—manipulative digital designs that exploit personal data to influence consumer behavior—through the lens of consumer autonomy and legal protections. Divided into three sections, it explores tools for safeguarding autonomy, types of dark patterns, and the role of ethical design in empowering consumers. While offering valuable insights into issues like manipulation, personalization, and algorithmic discrimination, the book’s limited space occasionally restricts depth. Nonetheless, it provides a timely analysis of how businesses increasingly leverage these tactics, highlighting the importance of addressing them in law and design.
The concept of "bad design" is widely discussed but lacks a universally accepted definition, encompassing everything from aesthetic flaws to exploitative dark patterns. This study uses netnography to analyze public opinions on bad designs collected from Reddit forums, revealing that such designs can arise from unintentional negligence or intentional exploitation for specific benefits. Bad designs, spanning industrial products to government systems, often harm users, the environment, or businesses. To address this, the study introduces the "Bad Design Canvas," a tool to help designers and students identify and evaluate design flaws, encouraging a more mindful approach to design. While bad design may never be entirely eradicated, the responsibility lies with practitioners to recognize and mitigate its impact.
Dark patterns, also known as deceptive patterns, are manipulative tactics used by companies to trick users into agreements or actions that benefit the company, often at the user's expense. These tactics include making it difficult to cancel services, promoting manipulative agreements, and employing intrusive pop-ups or sponsored ads. Both small companies and multinational tech giants have been accused of using dark patterns, which are widely criticized but hold significant value for businesses. This paper examines the scope of dark patterns, their role in business transactions, and recent regulatory developments in the European Union, the United States, and India. While India’s laws are more consumer-centric, this analysis highlights the balance between user protection and the business importance of dark patterns.
The evolving field of digital nudges has gained significant attention over the past decade, highlighting the need to understand how these practices adapt to changing digital environments. This paper presents a systematic narrative literature review on the evolution of digital nudges, identifying key themes such as AI-powered nudges, dark nudges, and the importance of ethics, transparency, and accountability. The findings propose a revised definition of digital nudges, introduce a taxonomy wheel to clarify terminology, and offer actionable insights for practitioners to design and implement digital nudges responsibly.
Dark patterns are deceptive interface techniques that manipulate users, undermining their autonomy and decision-making. These tactics compel users to share personal data, pay higher prices, or make canceling subscriptions and exercising rights difficult. Exploiting quick, low-effort decision-making (System 1, Kahneman), they erode trust in the digital space and threaten its sustainability. While Europe has established clear regulations, the U.S. remains more restrained. This study, part of the TTLF project "Towards an Empowerment-Based Regulation of Dark Patterns," examines these practices, their effectiveness, and existing regulatory frameworks, highlighting gaps and setting the stage for further research.
This article examines the need to include the issue of dark patterns in Brazil's regulatory agenda, considering their harmful potential and international experiences. The research employs an explanatory methodology through the hypothetical-deductive method, based on bibliographic, legislative, and jurisprudential analysis. The findings reveal that dark patterns violate consumer rights, privacy, competition, and the protection of minors, while also causing market failures by exploiting the lack of adequate information and individuals' cognitive limitations. These practices undermine both market efficiency and fundamental rights, highlighting the urgency of addressing this issue within Brazil's regulatory framework.
This thesis explores the use of dark patterns—deceptive user interface designs that exploit cognitive biases to influence user behavior—by examining which types of companies employ them and their motivations. Through case studies, academic research, and content analysis of selected websites, the study identifies patterns in companies' use of these tactics. The research highlights that neither company size nor founding date reliably predicts the likelihood of employing dark patterns. Instead, a company’s brand identity and marketing strategy significantly influence its decision to use these deceptive practices. The thesis also discusses global regulatory efforts to address dark patterns and includes examples of high-profile cases that have drawn public and media attention.
A literature review explores the growing popularity of mindfulness applications and the potential threat posed by dark patterns embedded within them. Using Fogg’s Behavior Model and Seven Persuasive Technologies as a framework, the review examines how dark patterns are defined, classified, and integrated into these apps, as well as their impact on users. Findings reveal a significant risk of dark patterns exploiting users without their awareness, raising ethical concerns. The review also highlights limitations in current research and suggests directions for future studies to address these issues and promote ethical design in mindfulness applications.
A recent dissertation investigates vulnerability to manipulative designs, or "dark patterns," which undermine users' autonomy in digital environments. Framing vulnerability as a power imbalance influenced by social, contextual, and agency-related factors, the research explores how specific groups—teenagers, older adults, and young adults with limited digital skills—experience and resist these designs. Through workshops and qualitative studies, it highlights the social and contextual drivers of harm and identifies challenges in designing countermeasures. The work provides insights for UX/UI practitioners, legal scholars, and policymakers, offering guidelines and intervention strategies to address user vulnerability and reframe online manipulation as a critical issue of user protection.
Dark patterns, which manipulate users into decisions that may not align with their best interests, have prompted efforts to counteract their impact through legislation, detection tools, and user education. However, the potential of serious games to teach users about identifying and classifying these deceptive designs has been largely unexplored. To address this, researchers developed a web-based prototype game targeting adults and conducted a comprehensive user study to evaluate its effectiveness. The study revealed that participants improved their ability to detect dark patterns, becoming more aware of manipulative design practices. While results regarding their classification skills were mixed, game log data confirmed significant improvement in detection abilities, highlighting the game's potential as an educational tool.
Deceptive and manipulative design patterns, often used to mislead users or prioritize third-party interests, are prevalent in software, web, and applications. A common example is burying data-sharing clauses in obscure text boxes, undermining user experience and privacy. Despite their widespread adoption, ethical concerns persist, and comprehensive studies on the topic remain limited. To bridge this gap, a systematic literature review characterizes deceptive patterns as a sociotechnical issue involving both human and technical dimensions. This research utilizes the Semiotic Framework to dissect a deceptive pattern and introduces a catalog designed to help users identify and understand these practices. Evaluations with HCI specialists highlight the catalog's potential to raise awareness, while adopting a sociotechnical perspective offers deeper insights into the impact and analysis of deceptive patterns.
A recent study examines the critical role of cybersecurity in Industrial Internet of Things (IIoT) platforms and Industrial Cyber-Physical Systems (ICPS), focusing on the influence of Persuasive and Dark Patterns on user interaction. By analyzing user behavior within two versions of the same application—one utilizing Dark Patterns and the other employing Persuasive Patterns—the research provides valuable insights into operator interactions in industrial environments. The findings reveal significant differences in user experience and navigation efficiency, emphasizing the profound impact of interface design choices on industrial process monitoring and control.
A recent study delves into the pervasive issue of dark patterns—deceptive user interface designs crafted to manipulate users into unintended actions—highlighting their impact on both users and developers. Using a mixed-method approach, the research surveyed 66 end users and 38 developers, while also analyzing 2,556 GitHub discussions. Findings reveal that users frequently encounter dark patterns online, often feeling negative emotions and facing limited options to avoid them. Developers, on the other hand, cite external pressures as a driving factor in implementing such designs, acknowledging their detrimental effects on trust and user experience. GitHub discussions predominantly focus on identifying and preventing dark patterns, with a generally negative sentiment towards their use. The study underscores the need for greater awareness and ethical practices in UI design to curb the proliferation of deceptive patterns in digital environments.
The rise of manipulative web design techniques poses significant challenges to ethical UI/UX design, influencing user behavior in ways that lead to privacy breaches, unwanted purchases, and other adverse outcomes. These techniques often exploit psychological biases, making them difficult to detect. This paper categorizes and details 10 types of manipulative design strategies, highlighting those requiring user action or awareness to coerce unintended decisions. Use cases and countermeasures are also explored, laying a foundation for assessing their impact and advancing ethical web design practices.
Social media, used by 5.04 billion people in 2024, is ubiquitous, with an average daily use of 2 hours and 23 minutes (We Are Social, 2024). Although it provides apparent satisfaction, it is linked to negative effects on mental health and to the development of a dependency comparable to that on addictive substances, amplified by factors such as anxiety and the Fear Of Missing Out (FOMO) (Allcott et al., 2022; Erhel et al., 2024). Platforms reinforce this dependency through dark patterns, strategies that manipulate cognitive biases to capture attention. The DARKUPI project explores these mechanisms, proposing a typology of dark patterns on social media and analyzing their link to digital dependency based on a survey of 2,547 users in France.
The digital economy, in Russia and globally, increasingly draws lawmakers' attention, particularly regarding "dark patterns"—manipulative design tactics influencing user behavior. BigTech companies, like TikTok, leverage such mechanisms to retain users, exemplified by TikTok's endless, dynamically adjusted recommendation feed, creating an "immersion effect." This article analyzes Russian and international regulatory approaches to these practices, with a focus on legislative initiatives protecting consumer rights and combating manipulative designs. While Russian law emphasizes consumer protection and unfair competition, Western countries have introduced specialized norms targeting dark patterns. The article aims to evaluate existing regulations and propose adaptations of effective foreign practices to the Russian legal framework.
Dark patterns, manipulative design tactics, are prevalent in video game monetization systems, particularly those relying on microtransactions. These tactics exploit psychological behaviors to maximize spending, raising ethical concerns and contributing to the negative perception of in-game purchases. This study examines how dark patterns in games differ from other industries, advocating for revised taxonomies that account for gaming’s unique mechanics. It highlights the potential link between these patterns and gambling-like addiction disorders, especially among vulnerable populations, and calls for deeper research to understand their scale, effectiveness, and harm compared to ethical monetization practices.
The opaque nature of transformer-based models, particularly in detecting dark patterns—deceptive design tactics undermining user autonomy—calls for integrating uncertainty quantification to enhance trust. This study proposes a differential fine-tuning approach using uncertainty quantification techniques, such as Spectral-normalized Neural Gaussian Processes (SNGPs) and Bayesian Neural Networks (BNNs), at the classification head of pre-trained models. Evaluating these methods against a dense neural network baseline, the research examines performance, prediction certainty, and environmental impact. Findings show that uncertainty quantification maintains model performance, improves transparency, and provides measurable confidence in predictions without uniformly increasing environmental costs. By enhancing explainability and trust, this approach supports informed decision-making and helps mitigate the influence of dark patterns in user interfaces, highlighting its importance for ethically sensitive applications.
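The study's specific heads (SNGPs, BNNs) are not reproduced here, but the core idea—attaching a measurable confidence to each "dark pattern" prediction—can be sketched minimally. The snippet below, with illustrative logits, computes predictive entropy over several stochastic forward passes; a high-entropy prediction would be routed to human review rather than trusted outright.

```python
import numpy as np

def softmax(logits):
    """Numerically stable softmax over the last axis."""
    z = logits - logits.max(axis=-1, keepdims=True)
    e = np.exp(z)
    return e / e.sum(axis=-1, keepdims=True)

def predictive_entropy(prob_samples):
    """Entropy (nats) of the mean predictive distribution
    across several stochastic forward passes."""
    mean_probs = prob_samples.mean(axis=0)
    return -np.sum(mean_probs * np.log(mean_probs + 1e-12), axis=-1)

# Hypothetical logits from 3 stochastic passes of a binary
# {benign, dark-pattern} classification head on one UI snippet each.
confident = softmax(np.array([[[4.0, -4.0]], [[4.1, -3.9]], [[3.9, -4.1]]]))
uncertain = softmax(np.array([[[0.2, -0.1]], [[-0.1, 0.3]], [[0.1, 0.0]]]))

h_conf = predictive_entropy(confident)[0]
h_unc = predictive_entropy(uncertain)[0]
# Low entropy -> trust the label; high entropy -> flag for human review.
assert h_conf < h_unc
```

For a binary head the entropy is bounded by ln 2, which gives a natural scale for choosing a review threshold.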
Dark patterns, deceptive design techniques in technology, challenge policymakers due to their impact on user autonomy and financial well-being. This study addresses gaps in research by identifying and quantifying dark patterns across shopping, health and fitness, and education apps in Canada. Analyzing the top three apps per domain, darkness scores were assigned based on identified patterns. Temu (7.5), Yuka-Food & Cosmetic Scanner (5), and Duolingo (6.5) had the highest scores in their categories, with shopping apps showing the highest average darkness scores. The most common tactic was modifying decision space. This study highlights the prevalence and variation of dark patterns across domains, emphasizing the need for quantifiable metrics to guide regulatory efforts and inform researchers and practitioners.
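The study's scoring rubric is not reproduced in the summary; as a minimal sketch of how per-domain averages could be compared, the snippet below uses the three quoted scores (Temu 7.5, Yuka 5, Duolingo 6.5) and fills the remaining apps with hypothetical values.

```python
# Per-app darkness scores. Only the three named scores come from the summary;
# the other app names and values are illustrative placeholders.
scores = {
    "shopping":  {"Temu": 7.5, "AppB": 4.0, "AppC": 3.5},
    "health":    {"Yuka-Food & Cosmetic Scanner": 5.0, "AppE": 2.0, "AppF": 1.5},
    "education": {"Duolingo": 6.5, "AppH": 3.0, "AppI": 2.5},
}

def domain_average(domain_scores):
    """Mean darkness score across the apps in one domain."""
    return sum(domain_scores.values()) / len(domain_scores)

averages = {domain: domain_average(s) for domain, s in scores.items()}
# With these illustrative fillers, shopping has the highest average,
# matching the study's qualitative finding.
assert max(averages, key=averages.get) == "shopping"
```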
Social media (SM), a key feature of the digital age, has evolved from its Web 2.0 roots into a powerful tool for communication, consumerism, and cultural exchange. Initially limited to static internet pages and basic functionalities like email, SM now boasts 4.6 billion users globally, transforming individual and organizational life. It plays a critical role in digital transformation, enabling remote work, fostering virtual collaboration, and replacing traditional office spaces with technology-driven environments. The COVID-19 pandemic further accelerated SM's use in onboarding, training, and community building, enhancing workplace belonging and productivity. Despite its growing importance, research on SM in human resource development (HRD) remains limited. This chapter explores SM’s integration into HRD, addressing its role in wellness, visibility, and virtual workforce development, while bridging gaps in understanding its impact on modern workplaces.
Dark patterns, manipulative design tactics often found in social media, push users toward oversharing or excessive use, while bright patterns guide users toward personal goals. This thesis explores bright patterns as a solution to social media issues by inverting 15 dark pattern types specific to the platform. Surveys revealed that more deceptive patterns were less attractive, with Forced Access rated as the worst. Three bright patterns were designed to counter Nagging, False Hierarchy, and Toying with Emotion, aiming to reduce screen time, slow fake news, and promote diverse content. User tests showed bright patterns were less annoying and scored better overall, highlighting their potential to repurpose dark patterns for user benefit, though ethical concerns about targeted manipulation remain.
Dark patterns, manipulative design elements in digital interfaces, raise ethical concerns by prioritizing service providers' interests over users. A recent study introduces the Dark Pattern Analysis Framework (DPAF) to tackle classification inconsistencies, detection tool limitations, and dataset gaps. It identifies 68 dark pattern types with detailed annotations and highlights detection tools' low coverage (45.5%) and datasets' limited comprehensiveness (44%). By standardizing classifications and merging datasets, the research offers critical insights to improve dark pattern detection and drive future advancements in the field.
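A coverage figure like the 45.5% quoted above is simply the fraction of taxonomy types a tool can detect. The sketch below illustrates the calculation against the 68-type taxonomy; the detected set is hypothetical, not DPAF's actual tool data.

```python
# Coverage of a detection tool against a dark pattern taxonomy.
# The 68-type total comes from the summary; the detected set is illustrative.
TAXONOMY_SIZE = 68

def coverage(detected_types, taxonomy_size=TAXONOMY_SIZE):
    """Percentage of taxonomy types covered by a tool or dataset."""
    return 100.0 * len(set(detected_types)) / taxonomy_size

detected = {f"type_{i}" for i in range(31)}  # hypothetical tool covering 31 types
print(round(coverage(detected), 1))  # ~45.6, close to the reported 45.5%
```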
Digital marketing often faces ethical challenges, particularly with the use of manipulative "dark patterns" such as trick questions, sneak-into-basket, and privacy zuckering, which erode trust and harm reputations. Ethical design emphasizes transparency and user consent, aligning with principles like privacy, accuracy, property rights, and accessibility outlined by Mason. Regulations such as GDPR and PECR enforce responsible data handling and protect users from deceptive practices, while intellectual property laws safeguard content and designs. Companies must ensure ethical data collection, genuine online reviews, and clear communication to maintain trust and compliance. Prioritizing transparency, addressing negative feedback, and avoiding dark patterns are vital for building customer loyalty and managing a positive reputation in the digital landscape.
In contemporary e-commerce, cognitive exploitation has emerged as a critical issue, with suppliers leveraging techniques like nudges, dark patterns, and cognitive biases to manipulate consumer decisions. This article explores the impact of informational asymmetry, fatigue, and selective attention manipulation on consumer vulnerability. It advocates for updated regulations, supplier accountability, consumer education, and proactive roles for regulators and the judiciary. Ultimately, the article calls for a multidisciplinary approach to foster a fairer, more transparent digital marketplace and ensure robust consumer protection.
The rise of big data, machine learning, and AI has transformed digital advertising, enabling precise targeting and tailored content based on user behaviors and demographics. While innovative, these practices raise significant legal and ethical concerns regarding privacy, non-discrimination, and digital autonomy. In response, EU lawmakers have introduced regulations to address these issues. This chapter explores the evolution of EU digital advertising laws, key challenges in their enforcement, and ongoing debates, offering insights and recommendations for ensuring fair and responsible advertising practices.
A recent study examines the influence of dark patterns in online shopping malls, focusing on their impact on consumer attitudes, brand trust, and continued usage intentions after recognizing these manipulative tactics. The findings reveal notable differences in how men and women perceive dark patterns and their subsequent attitudes as consumers. Brand trust emerged as a significant factor, influencing both consumer attitudes and the likelihood of continued use despite awareness of dark patterns. Moreover, consumer attitudes acted as a mediator between brand trust and retention intentions. Interestingly, the level of experience with dark patterns did not moderate the relationship between brand trust and consumer attitudes, suggesting that trust remains a critical determinant regardless of familiarity with manipulative design strategies.
A recent exploration into deceptive design highlights the innovative use of Trickery, a narrative-based serious game designed to expose and educate players about manipulative interface tactics. Incorporating seven gamified deceptive patterns, the game provides players with firsthand experience of the consequences these patterns impose. Through an explorative gameplay study and an accompanying online survey, researchers examined player behavior and the perceived effectiveness of the gamified deceptive patterns in raising awareness. The findings revealed diverse player motivations and justifications when navigating deceptive patterns, as well as critical factors for successfully integrating such patterns into gameplay. This approach offers a promising avenue for enhancing user understanding and resistance to manipulative design practices.
A recent study sheds light on the pervasive use of dark patterns in mobile games, revealing their exploitation of players through temporal, monetary, social, and psychological mechanisms. By analyzing user-generated data from 1,496 games, researchers identified a troubling presence of these manipulative strategies not only in games commonly viewed as problematic but also in those perceived as harmless. This quantitative research underscores ethical concerns surrounding current revenue models in gaming, particularly their impact on vulnerable populations. Highlighting the importance of ethical design, the study advocates for community-based efforts and collaboration among users and industry professionals to foster healthier gaming environments.
E-commerce's growing popularity brings increased challenges for consumers, particularly with the rise of dark patterns—manipulative design elements that prioritize business profits over consumer well-being. While EU consumer protection laws, such as the Unfair Commercial Practices Directive and Consumer Rights Directive, aim to curb exploitative practices, dark patterns present unique challenges. These include exploiting behavioral biases that current laws, focused on information remedies for rational consumers, fail to address; low compliance in digital environments due to the laws' technologically-neutral design; and a gap between the widespread use of dark patterns and limited enforcement efforts. Strengthening digital market oversight with technical solutions, such as computational detection of dark patterns, could enhance consumer protection. This thesis examines how EU law can effectively regulate dark patterns through both legal and technical innovations.
Mobile apps play a critical role in daily life but frequently employ dark patterns—manipulative design techniques like visual tricks or coercive language—to influence user behavior. While current research relies on manual methods to detect these patterns, the process is slow and cannot keep pace with the rapid evolution of apps. AppRay, a novel system, addresses these challenges by combining task-oriented app exploration with automated dark pattern detection. Utilizing large language models and contrastive learning, AppRay identifies both static and dynamic dark patterns efficiently, significantly reducing manual effort. Supported by two comprehensive datasets, AppRay demonstrates strong performance in uncovering deceptive practices across various app interfaces.
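AppRay itself relies on LLMs and contrastive learning, which are not reproduced here. As a toy stand-in for the "coercive language" category it targets, the sketch below flags UI strings matching a small list of manipulative phrasings (confirmshaming, false scarcity); the phrase list is illustrative, not drawn from the paper.

```python
import re

# Illustrative regexes for coercive UI copy. A real detector (like AppRay)
# would use learned models, not a fixed keyword list.
COERCIVE_PATTERNS = [
    r"\blast chance\b",
    r"\bdon'?t miss out\b",
    r"\bno,? i (?:hate|don'?t want)\b",
    r"\bonly \d+ left\b",
]

def flag_coercive(ui_text):
    """Return the coercive-phrase patterns matched in a UI string, if any."""
    text = ui_text.lower()
    return [p for p in COERCIVE_PATTERNS if re.search(p, text)]

assert flag_coercive("No, I hate saving money")        # confirmshaming
assert flag_coercive("Hurry - only 3 left in stock!")  # false scarcity
assert not flag_coercive("Cancel subscription")        # neutral copy
```

Such heuristics catch only static text; dynamic dark patterns that appear mid-interaction are exactly why the paper pairs detection with automated app exploration.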
A recent study examined the risks of political chatbots as peer-to-peer propaganda tools, focusing on a Facebook Messenger chatbot from Benjamin Netanyahu’s 2019 campaign. Using the Walkthrough Method, researchers identified “dark cycles” involving Reconnaissance (data collection), Training (repetitive messaging and dark patterns), and Activation (task directives). The findings reveal the power dynamics these chatbots exploit, raising concerns about their evolving role in political influence and surveillance.
A recent study examined how Large Language Models (LLMs) adjust language to create personalized persuasive content based on personality traits. By analyzing outputs across 19 LLMs, researchers found distinct patterns: more anxiety-related words for neuroticism, achievement-focused language for conscientiousness, and fewer cognitive process words for openness. The findings reveal LLMs' potential to tailor persuasive communication, with varying success across personality traits.
The 14th Design Thinking Research Symposium (DTRS14), hosted by Mälardalen University, focuses on how design can drive sustainable futures by addressing global challenges like climate change, social inequality, and resource management. Aligned with the UN's Sustainable Development Goals, it emphasizes interdisciplinary collaboration and human-centered design methods such as co-design and systems thinking. The symposium showcases global insights on using design to foster sustainability and societal progress.
A recent study of 16 U.S. gig work platforms reveals extensive data collection, sharing with up to 60 third parties, and privacy dark patterns that heighten risks for workers. Findings include reversible Social Security Number (SSN) hashes and off-platform nagging tactics. Platforms addressed disclosed SSN issues, confirmed by independent audits. The study calls for stronger privacy protections and offers its data for further research.
A recent study explores how web developers using GPT-4 may unintentionally create deceptive designs (DD) in website features. Involving 20 participants, the study tasked users with generating product and checkout pages for a fictitious webshop using ChatGPT, followed by adjustments to boost sales. All 20 websites ended up containing DD patterns, averaging 5 per site, with GPT-4 offering no warnings. Notably, only 4 participants expressed ethical concerns, with most viewing the designs as effective and acceptable. These findings underscore potential ethical and legal risks in relying on LLM-generated design recommendations without careful oversight.
Antitrust laws often struggle to curb Big Tech’s anticompetitive business models, where dark patterns complicate enforcement. Austrian criminal law may fill this gap by addressing potentially deceptive practices. Amazon’s Prime and BuyBox, Google’s Search and Bard, and mechanisms like Activision Blizzard’s lootboxes and Engagement Optimised Matchmaking (EOMM) algorithms may constitute commercial fraud under §§ 146, 148 StGB, as they appear designed to mislead consumers for profit.
This special issue aims to deepen research on deceptive designs—manipulative digital features that exploit cognitive biases to influence user choices without full consent. As these designs proliferate on digital platforms and are enhanced by AI and immersive technologies, regulatory efforts across the EU, US, and India seek to curb their use. This issue invites interdisciplinary studies on the behavioral impacts of deceptive designs, effects on vulnerable users, regulatory strategies, and theoretical advancements in cognitive biases. Insights from psychology, economics, HCI, and law will support a framework for understanding and mitigating these designs in evolving digital contexts.
This chapter presents research on user experience in video games, aiming to identify core parameters that address user needs. Key factors—usability, engageability, motivation, emotion, immersion, and commitment—were analyzed as essential elements of a positive gaming experience. A specific heuristic evaluation was developed, incorporating these principles along with targeted questions to enhance video game assessments. This extended evaluation, optional and tailored for gaming, provides a refined tool for analyzing user experience in video games.
As the data ecosystem becomes increasingly intertwined with daily life, every action generates value within the data economy, raising the need for authorities to protect diverse and sometimes conflicting stakeholder interests. Despite existing regulations, effective compliance remains challenging due to a lack of suitable technical and organizational tools and adequately prepared oversight bodies. This paper explores how FACT principles, FAIR principles, and Open Personal Data Stores could guide ethical pathways for the data economy. Through ethical, economic, and legal lenses, it presents findings from literature review and content analysis, offering recommendations to help stakeholders better integrate ethics into the data economy.
Social robots, designed to foster trust and companionship, often incorporate elements that can lead to user manipulation and addiction. To address these risks, this thesis proposes using Vulnerability Theory to assess Human-Robot Interaction by balancing dependency with resilience, ensuring ethical, user-centered design. The "Safety and Intimacy in Human-Robot Interaction Pipeline" serves as a tool to analyze these interactions, supporting regulations aimed at protecting users from manipulation.
This study presents the design of a field experiment examining the effects of online tracking, targeting, ad-blocking, and anti-tracking technologies on consumer behavior and economic outcomes. While the online data industry promotes the benefits of tracking for targeted advertising, privacy concerns persist regarding the extensive tracking of consumers without their knowledge. The experiment aims to analyze how these technologies influence online browsing, shopping behavior, and purchasing decisions, focusing on metrics like spending, product prices, search time, and purchase satisfaction. The study outlines the rationale, experimental design, and data collection plans for these investigations.
Protecting the rights of algorithmic consumers is crucial for the healthy growth of the digital economy. Dark patterns, which manipulate consumer decision-making through platform interfaces, disrupt traditional marketing methods. These patterns exploit cognitive biases, reducing consumer autonomy and rationality. They can be categorized as inducing, deceptive, or manipulative, each with varying impacts, such as compromising personal information and undermining consumer autonomy. The U.S. and the EU tackle dark patterns through legislation, with the U.S. focusing on specialized laws and the EU combining rules and standards. A governance approach tailored to local contexts, involving regulation, protection, and collaboration, is needed to balance platform operators' interests and safeguard consumers, fostering a healthier digital economy.
A study on dark patterns in social media reveals their influence on user engagement and impulse buying. Based on 492 participants from PSUT in Jordan, the research found that time, effort, pleasure, and social acceptance positively affect engagement, while non-routine behavior has a negative impact. Engagement, in turn, boosts impulse buying. The study emphasizes the need for ethical marketing, as dark patterns can erode trust and harm long-term relationships.
This thesis explores how dark design patterns in casual mobile games exploit cognitive biases. While ethical concerns have been discussed, the cognitive mechanisms behind these patterns are less understood. Through interviews with mobile game players, the study finds that monetary tactics like hidden costs are more easily recognized than social or temporal patterns, with loss aversion being a key bias exploited. Many players express frustration with these manipulative designs. The thesis calls for greater awareness and discussions on ethical design and regulatory measures in mobile gaming.
A recent study explores the concept of Interactive Dark Patterns (IDPs), a subset of dark patterns that engage users through multiple steps or screens to manipulate their choices. While dark patterns have been widely studied since Harry Brignull's identification of them in 2010, the interactive nature of some patterns has been overlooked. This research aims to empower ordinary mobile application users by increasing their ability to recognize and avoid these deceptive design tactics through targeted training interventions. The study found that such training significantly reduced user entrapment, with the entrapment rate dropping from 79% to 27% post-training, highlighting the effectiveness of awareness-building in mitigating the impact of IDPs. These findings have broader implications for mobile app developers, UX/UI designers, and digital ethicists, emphasizing the importance of ethical design practices.
Dark patterns, which manipulate consumer behavior, have become widespread in digital markets, influencing how online consumers perceive, behave, and make purchases. A novel empirical study reveals that individuals across all demographic groups are vulnerable to these manipulative tactics, with little evidence suggesting that common markers of consumer vulnerability, such as income, education, or age, significantly reduce susceptibility. These findings support broad restrictions on the use of dark patterns, as proposed in the EU’s Digital Services Act, to protect all consumers. The study also finds that dark patterns lose much of their effectiveness when friction, such as a required payment step, is introduced; they work best when no further user action is needed, notably when online providers store payment information for 'single-click' purchases.
A recent evaluation of alcohol-industry-funded (AIF) digital tools, such as blood alcohol calculators and consumption trackers, has revealed significant concerns regarding misinformation and manipulative design tactics. These tools, distributed by organizations like Drinkaware and Drinkwise, were found to contain health misinformation and "dark patterns" that potentially influence users towards increased alcohol consumption. Compared to non-industry-funded tools, AIF tools provided significantly less accurate health feedback (33% vs 100%), omitted crucial information about cancer and cardiovascular disease, and promoted industry-friendly narratives. Moreover, these tools employed techniques like priming nudges and social norming to encourage consumption, offering fewer behavior change techniques and restricted user input options. The study concludes that AIF tools may function as covert marketing channels, misleading users and promoting continued alcohol consumption through manipulative "Dark Apps" design strategies.
In recent years, concerns have intensified regarding the influence that digital architectures exert on individuals and societies, particularly through the use of dark patterns—manipulative interface designs embedded in digital services. While these tactics can affect anyone, traditional legal frameworks often focus on certain vulnerable groups, such as children. However, empirical research indicates that vulnerability to dark patterns is shaped by various factors, beyond the traditional group-based classifications. This article aims to deepen the understanding of this issue by offering a multidisciplinary analysis of the factors contributing to vulnerability, evaluating the feasibility of risk assessments in three EU legal frameworks—the GDPR, the Digital Services Act, and the Artificial Intelligence Act—and proposing strategies to enhance resilience to manipulative digital designs.
Recent research highlights the growing use of dark patterns—malicious interface design strategies that push users into making decisions against their best interests—across apps and websites. While previous studies have primarily focused on how these manipulative tactics affect adults, children are increasingly exposed to them. To explore how dark patterns impact younger audiences, researchers conducted a study with 66 fifth-grade students (aged 10–11) at a German school. The study revealed that many children could recognize simple manipulative tactics, such as complex wording and color-based manipulations, but most struggled to identify bad privacy defaults, indicating a particular vulnerability to privacy-related dark patterns.
This literature review explores the complex interactions between dating app ecosystems, advertising strategies, and the integration of dark patterns—subtle design techniques that influence user behavior. Through an analysis of existing research, the review highlights how dating apps strategically use advertising to boost user engagement, shape interactions, and increase revenue. It also addresses the ethical concerns surrounding the use of dark patterns, questioning the fine line between guiding user decisions and manipulating behavior. Additionally, the review examines user data privacy, focusing on how dating apps collect and utilize personal data for targeted advertising, while raising concerns about potential privacy risks and the regulatory measures in place to protect users. By synthesizing these themes, the article contributes to an ongoing discussion about responsible tech design, emphasizing user well-being, transparency, and ethical standards in the digital space.
The growing prevalence of cyberbullying on online social platforms has highlighted the need for effective detection and mitigation strategies. This study introduces a comprehensive approach to preprocessing and analyzing Twitter data to identify instances of cyberbullying. The research begins with loading a dataset of tweets labeled as positive or negative and conducting exploratory data analysis to understand sentiment distribution. The text data is then preprocessed through steps such as noise removal (eliminating URLs, mentions, punctuation), stopword removal, and lemmatization, which enhances the quality of the dataset. The study also examines word and character count distributions to gain insights into tweet lengths. This methodology lays the groundwork for further investigation into the patterns of cyberbullying, contributing to the creation of data-driven solutions to combat such behavior and fostering a safer online environment.
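The preprocessing steps described above can be sketched in a few lines. The stopword list and lemma map below are toy stand-ins for self-containment; a real pipeline would typically draw on, e.g., NLTK's stopword corpus and WordNetLemmatizer:

```python
import re

# Toy stopword list and lemma map standing in for full NLP resources.
STOPWORDS = {"the", "a", "an", "is", "are", "to", "of", "and", "you", "this"}
LEMMAS = {"bullies": "bully", "bullying": "bully", "users": "user", "tweets": "tweet"}

def preprocess(tweet: str) -> list[str]:
    """Noise removal (URLs, mentions, punctuation), stopword removal, lemmatization."""
    text = tweet.lower()
    text = re.sub(r"https?://\S+", "", text)   # strip URLs
    text = re.sub(r"@\w+", "", text)           # strip @mentions
    text = re.sub(r"[^\w\s]", "", text)        # strip punctuation
    tokens = [t for t in text.split() if t not in STOPWORDS]
    return [LEMMAS.get(t, t) for t in tokens]
```

Word and character counts for the length-distribution analysis can then be computed directly from the returned token lists.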
The increasing time users spend in virtual environments and the influence of AI-driven choice architectures have raised concerns about how AI systems can subtly steer users' actions. This issue is examined through the concept of "dark patterns," which are interface designs that manipulate user behavior in ways that may not be beneficial to them. Although regulatory measures have been introduced in various regions, including India, the article argues that these regulations are insufficient to address AI-powered dark patterns. Such patterns operate at a deeper behavioral level, making users' choices appear uninfluenced by manipulation. The article concludes by advocating for a more comprehensive approach to tackle the persuasive tactics employed by AI systems.
The California Consumer Privacy Act (CCPA) mandates that businesses offer consumers a clear way to opt out of the sale and sharing of their personal information. However, many businesses leverage control over the opt-out process to impose obstacles on consumers, including the use of dark patterns. The enactment of the California Privacy Rights Act (CPRA) aims to strengthen the CCPA and explicitly prohibits certain dark patterns in these processes. Despite these regulatory efforts, research shows that websites continue to employ a variety of dark patterns, some of which exploit legal loopholes, highlighting the need for further action by policymakers.
This book examines the evolution of legal design, which uses design methods to make legal systems more accessible. Initially focused on problem-solving, legal design now incorporates speculative design, proactive law, and insights from fields like cognitive science and philosophy. Featuring twelve essays from the 2023 Legal Design Roundtable, the book offers diverse perspectives from academics and professionals, exploring new approaches and practical applications. It’s a valuable resource for those interested in innovative, human-centered approaches to law.
Dark patterns are deceptive design practices that impair users' ability to make autonomous and informed decisions, as defined by the European Digital Services Act (DSA). These patterns manipulate users into actions that benefit service providers, such as accepting disadvantageous terms, making unwanted financial transactions, or disclosing personal information. Despite increased regulatory attention and scholarly interest, dark patterns continue to proliferate across digital platforms. This thesis examines dark patterns across various digital contexts, including web and mobile interfaces, IoT devices, and social robots, combining human-computer interaction studies with legal analysis to identify opportunities for mitigating their negative impact.
Technological progress has led to the blending of market techniques from various economic sectors, resulting in the use of diverse rules and instruments and the emergence of novel market practices. This disruptive innovation, especially evident in the crypto-asset market, challenges the application and interpretation of existing rules against market malpractice. After analyzing digital market practices like gamification and dark patterns, this thesis examines the compatibility and adaptability of rules on unfair commercial practices within the evolving digital landscape of the crypto market.
This thesis explores how dark patterns in cookie consent dialogues influence user behavior. An experiment using eye tracking revealed that while wording on button labels significantly affected task completion times, visual design changes did not. Participants generally read from left to right, but individual habits played a larger role in their interactions, indicating that established design conventions are more influential than minor visual tweaks in guiding user decisions.
This study examines the hidden costs and benefits of zero-price digital services, highlighting that while users derive significant value from "free" apps, they also face challenges like procrastination, sleep deprivation, and reduced focus. Based on survey data from 196 participants in Linköping, Sweden, the research reveals a growing consumer preference for paid services over free ones, suggesting a shift in attitudes towards digital payment models. The findings underscore the need for greater corporate transparency and user awareness about non-monetary costs, as well as a balanced approach to user protection and innovation in the digital economy.
In the world of online shopping, "dark patterns" are deceptive design tactics that manipulate consumers into unintended actions like purchases or subscriptions. This study examines their prevalence on fashion websites, finding that 78.4% use these tactics, with Nagging and Limited-time Messages being the most common. Fashion sites use more visual dark patterns than general e-commerce sites, and there's a link between these tactics and website popularity. While user reactions to dark patterns vary, prior trust in a brand can reduce their negative impact.
This chapter explores the challenges posed by data-driven technologies to consumer consent, particularly in the context of manipulative practices known as dark patterns. It begins by examining various dark design strategies that exploit consumer vulnerabilities, especially cognitive biases. The chapter then critiques the limitations of the European information-based approach, which forms the foundation of data protection and consumer law, arguing that current regulations are ill-equipped to address digital vulnerabilities. The analysis highlights the inadequacy of a "one-size-fits-all" model like the GDPR and advocates for a more holistic approach to consumer consent protection. It suggests that integrating legal considerations into the design phase of technological architectures could enhance the protection of consumers' authentic choices and prevent manipulative practices.
This study examines the impact of "dark patterns," interface designs that nudge consumers into sharing data under regulations like the GDPR. A field experiment shows that, even without dark patterns, consumers accept cookies more than half the time. Dark patterns, especially those hiding consent options behind extra clicks, significantly influence choices. Larger, well-known firms see slightly higher consent rates, but site popularity doesn't affect the impact of dark patterns. The study also finds no evidence of choice fatigue from repeated pop-ups.
"Dark patterns," deceptive designs that steer users into actions that benefit service providers, are common in digital marketing. While deceived users often face financial or time costs, non-deceived users—those who recognize and avoid these patterns—may also experience stress and frustration due to the extra effort required. This study focuses on these non-deceived users, exploring how the effort to avoid dark patterns can negatively impact usability and potentially erode trust in service providers.
Advancements in Mixed Reality (MR) technology have made it more accessible to consumers, but this has also led to an increase in dark patterns—manipulative design tactics that deceive users. This research examines these tactics in MR environments, analyzing 80 applications and identifying five key dark patterns: Hidden Costs, Misinformation, Button Camouflage, Forced Continuity, and Disguised Ads. The study highlights the harmful impact of these patterns on user trust and decision-making.
Although numerous Human-Computer Interaction (HCI) studies have empirically investigated the harms caused by dark patterns, and policymakers and regulators acknowledge these harms as significant, those harms have yet to be thoroughly examined from a legal perspective. This paper addresses this gap by identifying and analyzing the harms associated with dark patterns (DP), focusing on their role in the emerging European 'dark patterns acquis'. The paper organizes existing knowledge on dark pattern harms from HCI research and proposes a taxonomy of these harms. It also bridges the discussion of dark pattern harms in HCI with the legal frameworks for assessing harms under European data protection, consumer law, and competition law.
This research highlights that the growing number and diversity of dark patterns means that each specific form, and its legal assessment, must be evaluated based on how it is used and the intentions behind its use.
This book contains the refereed proceedings of the 12th Annual Privacy Forum on Privacy Technologies and Policy (APF 2024), held in Karlstad, Sweden, on September 4–5, 2024. The conference featured 12 full papers, carefully selected from 60 submissions, and aimed to bring together experts from policy, academia, and industry to discuss privacy and data protection. The 2024 conference particularly focused on the General Data Protection Regulation (GDPR) and emerging legislation around European Data Spaces and Artificial Intelligence. Chapters 3, 9, and 12 are licensed under the Creative Commons Attribution 4.0 International License, as detailed in the respective chapters.
As video games have become a mainstream form of entertainment, they have sparked new media concerns, including gaming addiction, screen time effects, gambling-like mechanics, dark patterns, and online toxicity. Additionally, issues like harassment, discrimination, and poor working conditions in the gaming industry are gaining attention. To address these concerns, the first Ethical Games Conference in 2024 brought together research on ethical issues in gaming, aiming to create evidence-based guidelines for the industry and regulators. This special issue features selected papers and opinion pieces, highlighting challenges and exploring how games can be leveraged for positive social impact.
"Light Up" is an educational game designed to expose the prevalence of UX dark patterns on websites. It simulates real-world scenarios where users are manipulated into actions they might avoid, helping players identify and understand these deceptive tactics. The game aims to raise awareness of the harm caused by dark patterns and empower users to resist them, protecting their privacy, finances, and autonomy online.
Digital nudging has gained prominence as a research topic in information systems, typically viewed as a positive engagement strategy. However, this paper critically examines how digital nudging can offend users' dignity. Using CARE theory, which suggests people react negatively to dignity affronts, the study analyzes 42 interviews from a three-month data collection involving a mobile app with daily digital nudges. The findings highlight that digital nudges can provoke forfeit, flight, or fight responses, and may even become dark patterns under certain conditions, despite responsible design. The paper contributes theoretically by conceptualizing digital nudging, offers empirical insights into its dual nature, and provides practical design guidelines to avoid dignity affronts.
As digital interfaces grow more prevalent, ethical concerns, particularly around dark patterns—manipulative design tactics used to influence user behavior—have become a critical area of study. This research introduces the Dark Pattern Analysis Framework (DPAF), which offers a taxonomy of 64 dark patterns. Current detection tools and datasets only cover 50% of these patterns, revealing significant gaps. The findings underscore the need for improvements in the classification and detection of dark patterns, offering key insights for future research.
The widespread use of manipulative designs, or dark patterns, in everyday applications and their impact on users is raising concerns among policymakers and scholars. These designs employ techniques that nudge users into making decisions they might not choose if fully informed, causing various types of harm. The integration of these mechanisms with other platform features makes it difficult for users to recognize the manipulation. Understanding the effects of manipulative designs is crucial for developing protective countermeasures, but their empirical study is methodologically challenging, not least because users often do not perceive the manipulation in the first place. This paper reflects on these challenges through three case studies, highlighting key issues and providing methodological insights for the empirical study of manipulative designs.
Nowadays, websites commonly use two concepts to influence user behavior: deceptive patterns and nudges. In the literature, these concepts are distinguished by their goals and effects—deceptive patterns manipulate users, while nudges encourage better decision-making. However, from a technical perspective, it is unclear if they differ in their implementation. This paper presents a methodology developed to determine whether it is possible to automatically differentiate between deceptive patterns and nudges when crawling a web page. Our findings suggest that there is no need to distinguish between the two concepts, as they are implemented using the same techniques.
Navigating the web has become increasingly difficult for users due to manipulative UI design patterns known as "dark patterns," which lead users to act against their best interests. These tactics are prevalent, yet users remain largely unaware of them. Existing detection methods, including machine learning algorithms, struggle to generalize across all dark patterns due to their varied definitions and implementations. This paper proposes crowdsourcing as a solution to detect and flag dark patterns. Crowdsourcing leverages users' collective experiences to identify these manipulative designs more effectively. The authors introduce Neighborhood Watch, a Chrome extension that allows users to tag dark patterns on websites and view tags submitted by others. This system promotes more conscientious browsing and reduces susceptibility to dark patterns. Despite some limitations, the study concludes that crowdsourcing can effectively protect users from manipulative interfaces.
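The crowdsourcing approach rests on a simple aggregation idea: an interface element is surfaced as a likely dark pattern once enough distinct users have tagged it. A toy sketch of such a server-side store — the class name, threshold, and fields are hypothetical, not Neighborhood Watch's actual design:

```python
from collections import defaultdict

FLAG_THRESHOLD = 3  # hypothetical: distinct users required before a tag is shown to others

class TagStore:
    """Toy aggregation of crowdsourced dark-pattern tags."""
    def __init__(self):
        # (url, element, kind) -> set of reporting user ids; sets dedupe repeat tags.
        self._tags = defaultdict(set)

    def tag(self, user_id: str, url: str, element: str, kind: str) -> None:
        self._tags[(url, element, kind)].add(user_id)

    def flagged(self, url: str) -> list[tuple[str, str]]:
        """Return (element, kind) pairs on this URL tagged by enough distinct users."""
        return [(el, kind) for (u, el, kind), users in self._tags.items()
                if u == url and len(users) >= FLAG_THRESHOLD]
```

Requiring multiple independent reports before displaying a tag is one plausible guard against spam or mistaken tags, one of the limitations the authors acknowledge.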
Dark patterns in user interfaces have attracted global attention from various disciplines. This study highlights a meta-level issue: demographic biases in dark patterns research. It examines the origins of published research and participant demographics, revealing a bias favoring English-speaking North America and Europe. Addressing these biases is crucial for ensuring inclusivity and rigor in the field.
Commercial health apps have become more accessible and popular, serving purposes such as enhancing health literacy, enabling continuous health tracking, and facilitating community engagement. However, concerns have arisen about the privacy, commodification, and exploitation of data generated by these apps. Less is known about deceptive design patterns and coercive practices from the users' perspective. This study uses pregnancy tracking apps as a case study and presents preliminary findings on user experiences. We argue that health apps require a nuanced consideration of deceptive design practices because (1) these patterns can uniquely intersect with users' vulnerabilities in the health context, and (2) the implications can extend beyond financial losses and privacy invasion, impacting users' health and well-being.
Privacy dark patterns are design tactics used by online services to reduce users' online privacy. These patterns either facilitate institutional data collection or increase others' access to personal data. This study examines how social networking sites popular with teens—Snapchat, TikTok, Instagram, Twitter, and Discord—use these tactics to steer users into reducing their social privacy. We analyzed recordings of account registrations, settings configurations, and logins/logouts for each SNS. Our content analysis identified two major dark pattern types—Obstruction and Obfuscation—and seven subtypes. We discuss why social media companies promote social sharing through design and the challenges of regulating these privacy dark patterns.
The use of persuasive designs to influence user behavior is now ubiquitous in digital contexts, giving rise to ethically questionable practices known as 'dark patterns.' While various taxonomies of dark patterns exist, there is a lack of frameworks that address how these designs are embedded not only in user interfaces but also in the functionality and strategy of digital systems. This paper proposes a framework for a Layered Analysis of Persuasive Designs, grounded in Garrett’s five-layer model of user experience (UX) design and Fogg’s Behavior Design Model. The framework identifies a toolkit of 48 design elements that can be used to operationalize problematic persuasion in digital contexts, highlighting the autonomy impact of each element. This framework aims to assist designers and policymakers in identifying and evaluating (potential) dark patterns within digital systems from an autonomy perspective.
The issue of Dark Patterns, or "Deceptive Design," is gaining recognition in literature. However, their widespread presence across various domains complicates interdisciplinary communication and collaboration. Existing taxonomies of these patterns often overlap and address them at different levels of abstraction, hindering cross-domain discourse. This is problematic given the growing evidence of the adverse effects of such designs on users. Additionally, the fine line between manipulative dark patterns and intuitive, protective, and defensive interface designs further complicates the issue. Current taxonomies primarily define patterns but struggle to distinguish between manipulative and benevolent implementations in specific contexts. This work proposes a method to differentiate between these applications by analyzing previously identified patterns for their properties, consequences, and contexts of application. This paper presents our progress toward creating a taxonomy-independent evaluation process for identifying and describing Dark Patterns.
Regulatory responses to dark patterns often depend on expert evaluations of design interfaces to determine if users are being manipulated or deceived. This article unpacks expert assessments of dark patterns used to solicit user consent and argues that regulatory actions should explicitly address whose expertise is being consulted. It concludes by discussing the value of deliberative mechanisms in broadening the range of both experts and expertise modes for identifying, evaluating, and regulating dark patterns.
Dark patterns refer to design practices that trick or manipulate users into making certain choices; an estimated one in four internet users encounters them. This paper examines key guidelines issued by government commissions or authorities worldwide, including those from the United States, South Korea, India, the European Union, California, Australia, the United Kingdom, Kenya, and Argentina. A comparative analysis of these guidelines highlights national standards, types of dark patterns, and adherence norms. The study reveals minimal enforcement efforts by the relevant authorities to counter dark patterns. It advocates for global collaboration to establish universal guidelines against dark patterns, overseen by an international authority or commission.
As of January 2024, 5.35 billion people, or 66.2 percent of the world's population, are online. With attention spans reduced to 8 seconds, digital businesses struggle to acquire, engage, and retain users. Many companies use dark patterns—deceptive UI elements that trick users into actions like signing up for services or making purchases. This thesis investigates various dark patterns, such as roach motel, malicious nudging, urgency/scarcity, bait and switch, and confirm-shaming, categorized into “pressure” and “trickery” tactics. While these methods help companies meet business goals, they undermine user trust. To combat this, the thesis proposes developing a Chrome extension to detect and highlight scarcity and urgency dark patterns. This tool aims to raise user awareness and promote a more transparent internet experience.
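The proposed extension targets Chrome (i.e., JavaScript); as a language-neutral illustration of the core idea, scarcity and urgency cues can be matched against a curated phrase list. The patterns below are illustrative examples only, not the thesis's actual rule set:

```python
import re

# Illustrative urgency/scarcity phrases; a real detector would use a larger curated list
# and likely combine phrase matching with countdown-timer and stock-counter heuristics.
URGENCY = re.compile(
    r"\b(hurry|only \d+ left|ends (today|soon)|limited[- ]time|last chance)\b",
    re.IGNORECASE,
)

def find_urgency_cues(page_text: str) -> list[str]:
    """Return every matched urgency/scarcity phrase, in order of appearance."""
    return [m.group(0) for m in URGENCY.finditer(page_text)]
```

In an extension, matches like these would then be highlighted in the DOM to draw the user's attention to the pressure tactic rather than blocked outright.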
Internet users are constantly bombarded with digital nudges like defaults, friction, and reinforcement. When these nudges lack transparency, are not optional, or do not benefit the user, they become 'dark patterns', categorized under the acronym FORCES (Frame, Obstruct, Ruse, Compel, Entangle, Seduce). Psychological principles like negativity bias, the curiosity gap, and fluency are exploited to make social content viral, while covert tactics such as astroturfing, meta-nudging, and inoculation are used to create false consensus. The power of these techniques is poised to grow with advances in predictive algorithms, generative AI, and virtual reality. Although digital nudges can be used altruistically to protect against manipulation, their effectiveness remains inconsistent.
As technology advances, regulating dark pattern practices has become crucial. These deceptive tactics manipulate users into unfavorable actions, for example by making it difficult to unsubscribe from a service or by visually emphasizing consent buttons to undermine transparency. The European Union has responded with several legislative acts, including the GDPR, the Digital Markets Act, and the Digital Services Act. However, these regulations often overlap, creating ambiguities and redundancies. This article examines these challenges and proposes solutions, such as harmonization and centralization, to streamline the regulatory framework. The goal is to protect users from manipulative practices and ensure informed decision-making in the digital realm.
This cumulative thesis investigates the intentions behind digital interfaces, focusing on exploitative "dark patterns" that manipulate user behavior. While existing HCI research has identified various dark patterns, this work synthesizes findings to develop comprehensive frameworks and tools for better understanding and mitigating their effects. Through qualitative and quantitative studies, the thesis explores dark patterns in social networking sites (SNS), examines user perceptions and challenges in recognizing these patterns, and contributes to design theory by identifying where dark patterns manifest. The Responsible Design Triangle model is introduced, highlighting the interdependencies between design, user behavior, and guidelines, to promote ethical digital interface design.
This research investigates the dark patterns users encounter when subscribing to or unsubscribing from online services. While previous studies have described dark patterns in digital contexts, this study provides a detailed analysis of such patterns in online subscriptions. By examining ten case studies of sign-up and cancellation processes on streaming platforms and software services, the research identifies deceptive designs and asymmetric efforts through user flow data and visual artifacts. The findings highlight the prevalence and complexity of dark patterns, offering insights for stakeholders, future design standards, and policy recommendations to improve consumer protection in online subscriptions.
This paper examines the role of dark patterns within TikTok, a rapidly growing social media platform. Utilizing principles from behavioral economics and the existing literature on online choice architecture (OCA), the study investigates how TikTok employs dark patterns to engage users and explores the implications for data protection, algorithmic practices, and market dynamics. The paper uses the "walkthrough method" to conduct a case study, detailing the TikTok user experience and identifying potential dark patterns used by the app. It discusses the challenges in distinguishing dark patterns from legitimate commercial practices, especially considering the user impact. Additionally, the paper examines how dark patterns and OCA are addressed in the Latin American legal landscape and proposes next steps for the ongoing debate based on the study's findings.
Over the past two decades, the focus of design patterns has shifted from encouraging best practices to discouraging harmful ones. Dark and deceptive UX patterns that monetize engagement while perpetuating structural inequities are now prevalent. This study uses a visual case study of a childcare worker platform to critically examine these patterns. Through Care Layering, a form of critical documentation, the study highlights how UX patterns, when viewed as culturally situated resources, reveal both limitations and opportunities in gig work platform engagement. The discussion emphasizes how Care Layering can help designers achieve greater accountability in UX design.
This research paper explores the widespread use of dark patterns in UI and UX design, uncovering the ethical implications of these manipulative practices. Dark patterns are deceptive design elements that influence user behavior for the benefit of designers or third parties. By examining various examples and their impact on user decision-making, the paper emphasizes the importance of recognizing and understanding these patterns to make informed choices in the digital landscape.
Current online contract practices often leave parties without a clear understanding of their rights and obligations. This article examines how complex online contracts complicate, and sometimes impede, strategic and autonomous decision-making. It also discusses how legal design approaches can shed light on this complexity and help tackle dark patterns in online contracting, thereby reducing transaction costs and improving legal quality, business sustainability, and competitive advantage.
This document summarises a roundtable on ongoing and emerging consumer risks associated with dark commercial patterns online, organised as part of the 99th Session (Part 2) of the Committee on Consumer Policy (CCP) on 6 November 2020. It featured panellists from academia, consumer protection authorities, and a consumer organisation, the Norwegian Consumer Council. It begins with an overview of the main themes that emerged from the discussion, including examples and categories of dark commercial patterns and their defining attributes; evidence of their prevalence online; consumer vulnerability; and tools and approaches available to consumer protection authorities and policy makers to identify and mitigate them. It then details each panellist's presentation before concluding with suggested next steps.
The white paper examines ethical issues in popular Android and iOS apps used by adolescents, in categories including education, gaming, communication, social, and dating. Additional categories, including music and audio, entertainment, and movies and series, were covered in subsequent parts of the study, and ethical issues were also researched on platforms such as Twitter, Reddit, Quora, and Google. The author examines these issues across four key dimensions: privacy, age-appropriateness, human-in-the-loop, and user interface. The paper concludes that ethical considerations in developing and deploying apps for children and adolescents are necessary and cannot be overlooked, given mobile apps' influence on them.
The article reviews recent work on dark patterns and demonstrates that the literature reflects not a singular concern or consistent definition but a set of thematically related considerations. Drawing on scholarship in psychology, economics, ethics, philosophy, and law, the authors articulate a set of normative perspectives for analyzing dark patterns and their effects on individuals and society, and show how future research can move beyond subjective criticism of user interface designs by applying empirical methods grounded in those normative perspectives.
The authors present automated techniques that enable experts to identify dark patterns across a large set of websites. Using these techniques, they study shopping websites, which often use dark patterns to influence users into making more purchases or disclosing more information than they otherwise would. Examining these dark patterns for deceptive practices, they identify 183 websites that engage in deception, and they uncover 22 third-party entities that offer dark patterns as a turnkey solution. Finally, they develop a taxonomy of dark pattern characteristics that describes their underlying influence and their potential to harm user decision-making. Based on these findings, they make recommendations for stakeholders, including researchers and regulators, to study, mitigate, and minimize the use of these patterns.
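The abstract above summarizes the detection pipeline only at a high level. One idea behind large-scale detection of this kind is that dark-pattern wordings tend to recur as templates across many sites, so text segments shared by multiple websites are good candidates for expert review. A minimal sketch of that segment-frequency idea (the function and names are illustrative assumptions, not the authors' code):

```python
from collections import Counter

def frequent_segments(pages: dict[str, list[str]], min_sites: int = 2) -> dict[str, int]:
    """Count short text segments that recur across multiple sites.

    `pages` maps a site name to the text segments scraped from it.
    Segments appearing on at least `min_sites` distinct sites are
    returned as candidates for expert review.
    """
    counts: Counter[str] = Counter()
    for segments in pages.values():
        # Deduplicate within a site so each site counts a segment once.
        for seg in {s.strip().lower() for s in segments}:
            counts[seg] += 1
    return {seg: n for seg, n in counts.items() if n >= min_sites}
```

A real pipeline would add a crawler, segment extraction from rendered pages, and fuzzy clustering rather than exact matching, with expert labeling of the resulting clusters.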