Here is everything you need to know about dark and fair patterns:

WHAT ARE DARK PATTERNS?
This term was coined in 2010 by Harry Brignull, a UX designer with a PhD in cognitive science, referring to “tricks used in websites and apps that make you do things that you didn’t mean to”.

There is a rich academic literature defining and categorizing them, showing how they work and spread, which risks they create, and which laws they breach.

We defined 7 categories of dark patterns

Our R&D work focused in particular on creating a “taxonomy”, i.e. a categorization of dark patterns that is easy for all stakeholders to use to solve the issue. Based on 16 existing taxonomies, we identified 7 categories of dark patterns. For each category, we provide a definition, the main cognitive biases it manipulates, and the main risks it creates.

Here are very concrete examples of what dark and fair patterns look like. Our fair patterns are interfaces that empower users to make informed and free choices.

They are currently being assessed by 10 independent experts in neuroscience, UX, privacy, behavioral economics… before user testing.

Dark patterns

Harmful default

Default settings are against the interests of the user
(Cf. C.M. Gray: Interface Interference)

For example: pre-checked boxes, having to accept a whole set of parameters without being able to adjust them individually, or wording such as “Accept all” with no option to make a more granular choice.

Fair patterns

Neutral or protective default

For adults: default settings are neutral.
For minors: default settings are protective.

For example: unchecked boxes, and individualized settings that allow the user to choose for each single case.
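As an illustration, here is a minimal sketch in TypeScript (the element id and wording are hypothetical, not taken from any real product) of how the same consent checkbox can embody a harmful default versus a neutral default:

```typescript
// Minimal illustration of default settings in a consent form.
// The id "marketing-consent" and the label text are hypothetical.

function renderConsentCheckbox(preChecked: boolean): HTMLLabelElement {
  const label = document.createElement("label");
  const box = document.createElement("input");
  box.type = "checkbox";
  box.id = "marketing-consent";
  // Harmful default: preChecked = true silently opts the user in.
  // Neutral default: preChecked = false leaves the choice to the user.
  box.checked = preChecked;
  label.append(box, " I agree to receive marketing emails");
  return label;
}

// Dark pattern: the box is ticked before the user has decided anything.
document.body.append(renderConsentCheckbox(true));

// Fair pattern: the box starts empty; any opt-in is an explicit user action.
document.body.append(renderConsentCheckbox(false));
```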

Missing information

Selective disclosure of information
(Cf. C.M. Gray: Sneaking)

For example: sneaking items into the basket (adding products without the user’s action), hidden costs, forced continuity (automatic renewal after the free trial period), camouflaged advertising.

Adequate information

Sufficient information for the users’ intended action.
Additional suggestions clearly identified as such.

For example: explaining the consequences of a choice, clearly distinguishing between optional and mandatory, objectively presenting options including when it’s not in the user’s interest, providing information to compare similar options, no hidden costs.
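As an illustration of the “no hidden costs” idea, here is a minimal sketch (the item names and amounts are hypothetical, not a real checkout API) of an order summary that discloses every fee before the user confirms:

```typescript
// Hypothetical order summary that shows every cost up front,
// instead of sneaking fees in at the last step.

interface LineItem {
  label: string;
  amountCents: number;
  optional: boolean; // optional add-ons are flagged, never pre-added
}

const order: LineItem[] = [
  { label: "Concert ticket", amountCents: 5000, optional: false },
  { label: "Booking fee", amountCents: 350, optional: false },
  { label: "VAT (20%)", amountCents: 1070, optional: false },
  // An optional suggestion is shown, but only charged if the user adds it.
  { label: "Ticket insurance (optional)", amountCents: 600, optional: true },
];

const total = order
  .filter((item) => !item.optional)
  .reduce((sum, item) => sum + item.amountCents, 0);

for (const item of order) {
  console.log(`${item.label}: €${(item.amountCents / 100).toFixed(2)}`);
}
console.log(`Total charged today: €${(total / 100).toFixed(2)}`);
```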

Maze

Make the user’s tasks, paths to information, preferences, or choices unnecessarily complex or long
(Cf. C.M. Gray: Obstruction)

For example: the path to cancel is far more difficult than the path to buy or subscribe, or another interface channel is required to revert an action.

Seamless path

Ensure users’ tasks, paths to information, preferences or choices are as easy when they are in the user’s interest as when they are in the company’s interest.

For example: cancelling is as easy as subscribing, and any action can be reverted through the same interface channel in which it was taken.
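To picture this symmetry, here is a minimal sketch (the service and method names are hypothetical) where cancelling takes exactly the same single step as subscribing:

```typescript
// Hypothetical subscription service where cancelling is exactly as
// easy as subscribing: one step, same channel, no extra hoops.

interface SubscriptionService {
  subscribe(userId: string): void;
  cancel(userId: string): void; // the mirror of subscribe, not buried elsewhere
}

class FairSubscriptionService implements SubscriptionService {
  private subscribers = new Set<string>();

  subscribe(userId: string): void {
    this.subscribers.add(userId);
  }

  // No phone call, no separate channel, no retention maze:
  // the cancel path is symmetrical to the subscribe path.
  cancel(userId: string): void {
    this.subscribers.delete(userId);
  }
}

const service = new FairSubscriptionService();
service.subscribe("user-42");
service.cancel("user-42"); // same effort in both directions
```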

Push & pressure

Emotional, time-related, social or other triggers used to induce or push a given behavior
(Cf. C.M. Gray: Social Engineering, Nagging)

For example: social testimonials, scarcity (low-stock messages), limited-time offers, repeated incentives, urgency messages.

Non-intrusive information

Absence of triggers pushing a behavior not initiated by the user

For example: sufficient information for users to perform the action they intended to do, always giving them the choice.

Misleading or obstructing language

Language is confusing, manipulative or obstructive
(Cf. C.M. Gray: Sneaking)

For example: language discontinuity, words that influence the user’s decision, trick questions, leaving the user in the dark, ambiguous wording or information.

Plain and empowering language

Language so clear that users easily find what they need, understand it upon first reading and understand the consequences of their choices

For example: short and clear sentences, and a neutral tone that will not trigger any emotion in the user or affect the choice they wish to make.

More than intended

Make users do more and/or share more than they intended
(Cf. C.M. Gray: Forced Action)

For example: users end up buying a service they did not intend to (seat selection), end up subscribing when they thought they were making a one-off purchase or “trying for free”, or end up “consenting” to more data sharing than they wanted.

Free action

Empower users to understand the consequences of their choices (especially in terms of spending more or sharing more personal data).

For example: clearly explaining with whom the personal data will be shared, whether an action is optional or required, and whether a given service is added as a suggestion.

Distorted UX

The visual interface traps users
(Cf. C.M. Gray: Interface Interference, Nagging)

For example: attention diversion, hacking users’ reflexes to influence their actions, false hierarchy, sending the wrong signals.

Fair UX

The visual interface respects users’ intended actions and choices

For example: symbols and design elements are in line with the user’s expectations and habits, salience is equivalent between options (including when an option is not in the company’s interest), format adequately reflects content, and ergonomics, accessibility and legibility rules are respected.
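For instance, the “equivalent salience” idea can be sketched as follows (a minimal illustration with hypothetical labels and styles): both options share the same size, colour and prominence, so neither choice is visually privileged.

```typescript
// Both choices get identical styling, so the interface does not
// nudge the user toward either option.

function makeChoiceButton(text: string, onClick: () => void): HTMLButtonElement {
  const button = document.createElement("button");
  button.textContent = text;
  // Identical styling for every option: equal salience.
  Object.assign(button.style, {
    padding: "12px 24px",
    fontSize: "16px",
    background: "#e0e0e0",
    color: "#111",
    border: "1px solid #888",
    borderRadius: "4px",
  });
  button.addEventListener("click", onClick);
  return button;
}

const accept = makeChoiceButton("Accept cookies", () => console.log("accepted"));
const decline = makeChoiceButton("Decline cookies", () => console.log("declined"));
document.body.append(accept, decline);
```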

If you believe you’re not one to fall into these traps, think again.

Dark patterns and cognitive biases:
What you need to know

Our brain is a real “decision machine”, making decisions from the simplest to the most sophisticated. Daniel Kahneman, winner of the Nobel Prize in economics, identified 2 main “systems” through which our brain makes decisions: “System 1”, which is fast, intuitive and requires little cognitive effort, and “System 2”, which is slower and uses analytical strategies but requires high cognitive effort. System 1 relies on heuristics and “cognitive biases”, which act like “mental shortcuts” for making decisions.

Cognitive biases are systematic, irrational patterns that affect our judgment and the way we make decisions. In some contexts they can lead to more effective actions or faster decisions: they reduce the amount of information to be considered and simplify the processes of judgment and decision-making.

Overall, some 180 biases have been described to date, categorized into 4 main types:

  • Information overload: biases that arise from too much information
  • Lack of meaning: biases that arise because the brain needs to make sense of things
  • Need to act fast: biases that help us gain time and resources
  • Figuring out what needs to be remembered for later: biases relating to the limits of memory.

The trouble is that, because they are predictable patterns of decision-making, they can be manipulated. That is precisely what dark patterns do: they exploit some of our cognitive biases to make us act in a certain, predictable way, sometimes against our own interest.

Sounds like a conspiracy theory? Unfortunately, there is solid scientific evidence of how dark patterns manipulate our biases.

Here are all the reasons why you should care

How we built these categories

From scientific taxonomies to usable categories for problem-solving

Our R&D Lab analyzed 16 existing taxonomies in 2022, all of which have a lot of merit: some focus more on describing the problem, others aim at identifying the legal and regulatory grounds to fight dark patterns, and others assess the types and severity of harms. It is amazing work done by researchers over about a decade, but quite complex to dive into and fairly difficult to use without spending a lot of time.
Our categories are designed to fight dark patterns and solve the issue. They are:

  • easy to understand, memorize and use by designers, developers, marketers, lawyers, regulators…
  • easy to act upon: we created fair patterns for each of the dark ones
  • future-proof: covering the next forms of dark patterns yet to be invented, because they are rooted in the cognitive biases on which dark patterns can play one way or another.

 

We did not stop there: we collaborated with Colin Gray, Cristiana Santos, Nataliia Bielova and Thomas Mildner to match their ontology of dark patterns with our countermeasures. This way, each dark pattern in the ontology has a corresponding fair pattern.

This visualization clarifies how we created a new taxonomy for dark patterns, based on the work of scientists who had previously created their own classification. Its formalization is inspired by the work of Michael Kritsch.

How to read it

To download the latest version of this visualization, based on C.M. Gray's Ontology

To see our sources

Our clients