Here is everything you need to know about dark and fair patterns:
WHAT ARE DARK PATTERNS?
The term was coined in 2010 by Harry Brignull, a UX designer with a PhD in cognitive science, to refer to “tricks used in websites and apps that make you do things that you didn’t mean to”.
There is a rich academic literature defining and categorizing them, showing how they work and spread, which risks they create, and which laws they breach.
Our R&D work focused in particular on creating a “taxonomy”, i.e. a categorization of dark patterns that is easy for all stakeholders to use in solving the issue. Based on 16 existing taxonomies, we identified 7 categories of dark patterns. For each category, we provide a definition, the main cognitive biases manipulated, and the main risks.
CATEGORY 1 (cf. C.M. Gray: Interface Interference, Nagging)
DEFINITION
The visual interface traps users.
MAIN COGNITIVE BIASES
Contrast effect, Functional fixedness
MAIN RISKS
Cognitive
CATEGORY 2 (cf. C.M. Gray: Forced Action)
DEFINITION
Sequence of events, clicks, or flows that force the user to do or give more than they intended.
MAIN COGNITIVE BIASES
Loss aversion, Hyperbolic discounting
MAIN RISKS
Control, freedom, monetary, personal data
CATEGORY 3 (cf. C.M. Gray: Social Engineering, Nagging)
DEFINITION
Emotional, social, time-related or other triggers that push the user toward a given behavior.
MAIN COGNITIVE BIASES
Restraint bias, Hyperbolic discounting, Bandwagon effect
MAIN RISKS
Control, freedom, monetary, personal data, cognitive
CATEGORY 4 (cf. C.M. Gray: Sneaking)
DEFINITION
Selective disclosure of information.
MAIN COGNITIVE BIASES
Framing effect, Anchoring bias
MAIN RISKS
Control, cognitive, monetary, personal data
CATEGORY 5 (cf. C.M. Gray: Interface Interference)
DEFINITION
Default settings are against the interests of the user.
MAIN COGNITIVE BIASES
Default effect, Optimistic bias
MAIN RISKS
Control, freedom, monetary, personal data
CATEGORY 6 (cf. C.M. Gray: Obstruction)
DEFINITION
Making the user’s tasks, path to information, preferences or choices unnecessarily complex or long.
MAIN COGNITIVE BIASES
Overchoice
MAIN RISKS
Control, monetary, freedom, cognitive, personal data
CATEGORY 7 (cf. C.M. Gray: Sneaking)
DEFINITION
The language confuses, manipulates or impedes the user.
MAIN COGNITIVE BIASES
Framing effect
MAIN RISKS
Cognitive, control, freedom, personal data
Here are very concrete examples of what dark and fair patterns look like. Our fair patterns are interfaces that empower users to make informed and free choices. They are currently being assessed by 10 independent experts in neurosciences, UX, privacy, behavioral economics… before user testing.
Dark pattern examples: pre-checked boxes, having to accept a set of parameters without being able to individualize them, wording such as “Accept all” with no possibility to choose at a granular level.
Fair pattern examples: empty boxes, individualized settings that allow the user to choose in each single case.
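To make the contrast tangible for product teams, here is a minimal TypeScript sketch of the two approaches to consent defaults. All names and purposes are hypothetical illustrations, not an actual implementation:

```typescript
// A minimal sketch; all names are hypothetical.
interface ConsentSettings {
  analytics: boolean;
  advertising: boolean;
  thirdPartySharing: boolean;
}

// Dark pattern: every box is pre-checked, so doing nothing means sharing everything.
const darkDefaults: ConsentSettings = {
  analytics: true,
  advertising: true,
  thirdPartySharing: true,
};

// Fair pattern: boxes start empty, and each purpose is toggled individually
// instead of a single all-or-nothing "Accept all" switch.
const fairDefaults: ConsentSettings = {
  analytics: false,
  advertising: false,
  thirdPartySharing: false,
};

function setPurpose(
  settings: ConsentSettings,
  purpose: keyof ConsentSettings,
  value: boolean
): ConsentSettings {
  return { ...settings, [purpose]: value };
}

// The user opts in to analytics only; the other purposes stay off.
console.log(setPurpose(fairDefaults, "analytics", true));
```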
Dark pattern examples: sneaking into the basket (adding products without the user’s action), hidden costs, forced continuity (automatic renewal after the free trial period), camouflaged advertising.
Fair pattern examples: explaining the consequences of a choice, clearly distinguishing between optional and mandatory, objectively presenting options (including when an option is not in the user’s interest), providing information to compare similar options, no hidden costs.
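As an illustration of sneaking versus transparent pricing, here is a minimal TypeScript sketch; the cart model and fee amount are hypothetical:

```typescript
// A minimal sketch; all names and the fee amount are hypothetical.
interface CartItem {
  name: string;
  price: number;
  addedByUser: boolean; // false would mean the item was sneaked into the basket
}

// Dark pattern: a service fee that never appears before the final step.
function darkCheckoutTotal(items: CartItem[]): number {
  const hiddenServiceFee = 4.99;
  return items.reduce((sum, item) => sum + item.price, 0) + hiddenServiceFee;
}

// Fair pattern: only items the user actually added are counted,
// every line is itemized, and the displayed total is the real total.
function fairCheckoutTotal(items: CartItem[]): { lines: string[]; total: number } {
  const userItems = items.filter((item) => item.addedByUser);
  const total = userItems.reduce((sum, item) => sum + item.price, 0);
  return {
    lines: userItems.map((item) => `${item.name}: ${item.price.toFixed(2)}`),
    total,
  };
}

const basket: CartItem[] = [
  { name: "Flight ticket", price: 120, addedByUser: true },
  { name: "Travel insurance", price: 15, addedByUser: false }, // sneaked in
];
console.log(darkCheckoutTotal(basket)); // 139.99, with no warning
console.log(fairCheckoutTotal(basket)); // { lines: ["Flight ticket: 120.00"], total: 120 }
```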
Dark pattern examples: the path to cancel is far more difficult than the path to buy or subscribe, or another interface channel is required to revert an action.
Dark pattern examples: social testimonials, scarcity (low-stock messages), limited-time offers, repetitive incentives, urgency messages.
Fair pattern examples: sufficient information for users to perform the action they intended, always giving them the choice.
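Here is a minimal TypeScript sketch of the difference between fabricated and data-driven urgency; the messages and the threshold are hypothetical:

```typescript
// Dark pattern: an urgency banner fabricated regardless of real inventory.
function darkStockBanner(): string {
  return "Hurry! Only 2 left in stock!";
}

// Fair pattern: the banner is driven by actual data, and showing nothing is an option.
function fairStockBanner(actualStock: number): string | null {
  return actualStock <= 2 ? `Only ${actualStock} left in stock` : null;
}

console.log(darkStockBanner()); // always urgent, whatever the stock
console.log(fairStockBanner(500)); // null: no artificial pressure
```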
Dark pattern examples: language discontinuity, words that influence the user’s decision, trick questions, leaving the user in the dark, ambiguous wording or information.
Fair pattern examples: short and clear sentences, in a neutral tone that will not trigger emotion in the user or affect the choice they wish to make.
Dark pattern examples: users end up buying a service they did not intend to (e.g. seat selection), end up subscribing when they thought they were making a one-off purchase or “trying for free”, or end up “consenting” to more data sharing than they wanted.
Fair pattern examples: clearly explaining with whom the personal data will be shared, whether an action is optional or required, and whether a given service is added as a suggestion.
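To illustrate forced continuity versus an explicit opt-in, here is a minimal TypeScript sketch; the trial model and messages are hypothetical:

```typescript
// A minimal sketch; all names are hypothetical.
interface Trial {
  userId: string;
  endsAt: Date;
}

// Dark pattern: the free trial silently rolls over into a paid subscription.
function darkTrialEnd(trial: Trial): string {
  return `Charging ${trial.userId}: the subscription auto-renewed without any new consent.`;
}

// Fair pattern: the user is warned before the trial ends, and nothing is charged
// unless they explicitly opt in.
function fairTrialEnd(trial: Trial, userOptedIn: boolean): string {
  if (!userOptedIn) {
    return `Trial for ${trial.userId} ended on ${trial.endsAt.toDateString()}; no charge.`;
  }
  return `Subscription for ${trial.userId} starts, based on an explicit opt-in.`;
}

const trial: Trial = { userId: "user-42", endsAt: new Date("2023-06-01") };
console.log(darkTrialEnd(trial));
console.log(fairTrialEnd(trial, false));
```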
Dark pattern examples: attention diversion, hacking users’ reflexes to influence action, false hierarchy, sending the wrong signals.
Fair pattern examples: symbols and design elements that are in line with the user’s expectations and habits, equivalent salience between options (including when an option is not in the company’s interest), a format that adequately reflects the content, and respect for ergonomics, accessibility and legibility rules.
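Equivalent salience can be checked directly in a design system’s tokens. Here is a minimal TypeScript sketch; the style values are hypothetical:

```typescript
// A minimal sketch; the style values are hypothetical.
interface ButtonStyle {
  fontSizePx: number;
  background: string;
  textColor: string;
}

type Choice = "accept" | "decline";

// Dark pattern (false hierarchy): "Accept" shouts, "Decline" almost disappears.
const darkStyles: Record<Choice, ButtonStyle> = {
  accept: { fontSizePx: 18, background: "#0a7d2c", textColor: "#ffffff" },
  decline: { fontSizePx: 10, background: "#f5f5f5", textColor: "#c0c0c0" },
};

// Fair pattern: equivalent salience between options, even when declining
// is not in the company's interest.
const fairStyles: Record<Choice, ButtonStyle> = {
  accept: { fontSizePx: 16, background: "#ffffff", textColor: "#000000" },
  decline: { fontSizePx: 16, background: "#ffffff", textColor: "#000000" },
};

console.log(darkStyles, fairStyles);
```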
Our brain is a real “decision machine”, for everything from the simplest decisions to the most sophisticated. Daniel Kahneman, Nobel Prize winner in economics, identified 2 main “systems” through which our brain makes decisions: “System 1”, which is fast and intuitive and requires little cognitive effort, and “System 2”, which is slower and analytical but demands high cognitive effort. System 1 relies on heuristics and “cognitive biases”, which act like “mental shortcuts” for making decisions.
Cognitive biases are systematic, irrational patterns that affect our judgment and the way we make decisions. In some contexts, they can lead to more effective actions or faster decisions: they reduce the amount of information to be considered and simplify the processes of judgment and decision-making.
Overall, 180 biases have been described to date, categorized into 4 main types.
The trouble is that, because they are predictable patterns of decision-making, they can be manipulated. That is precisely what dark patterns do: they manipulate some of our cognitive biases to make us act in a certain, predictable way, sometimes against our own interest.
Sounds like a conspiracy theory? Unfortunately, there is solid scientific evidence of how dark patterns manipulate our biases.
Our R&D Lab analyzed 16 existing taxonomies in 2022, all of which have a lot of merit: some focus on describing the problem, others aim at identifying the legal and regulatory grounds to fight dark patterns, and others assess the types and severity of harms. It is amazing work, done by researchers over about a decade, but it is quite complex to dive into and fairly difficult to use without spending a lot of time.
Our categories aim at fighting dark patterns and solving the issue.
We did not stop there: we collaborated with Colin Gray, Cristiana Santos, Nataliia Bielova and Thomas Mildner to match their ontology of dark patterns with our countermeasures. This way, each dark pattern in the ontology has a corresponding fair pattern.
This visualization clarifies how we created a new taxonomy for dark patterns, based on the work of scientists who had previously created their own classifications. Its formalization is inspired by the work of Michael Kritsch.
© Amurabi 2023