Privacy Icons

Privacy policies are difficult for many people to understand. This project aims to develop symbols to simplify these guidelines.


The use of digital services results in the accumulation of a multitude of personal data. According to the General Data Protection Regulation (GDPR), information about the processing of this data must be provided in a "concise, transparent, intelligible, and easily accessible form" (Art. 12(1)). This is typically done in the form of privacy policies. In practice, however, these documents are often barely intelligible.


The Privacy Icons Project, based on GDPR Art. 12(7), aims to develop risk-based icons to "provide a meaningful overview of the intended processing".

To develop effective icons, it is necessary to first clarify which concepts need to be conveyed and visualised accordingly. In this regard, a risk-based approach differs from other icon projects: it centers on the potential negative impacts of data processing (risks) and thus effectively deals with the design of warnings.

The following primary research questions arise for the effective development of warnings:

  1. What are relevant information categories for risk communication in the processing of personal data online?
  2. Which potentially adverse consequences can arise from specific events in the processing of personal data online?
  3. How can consequences in the processing of personal data be avoided or mitigated?

Further research questions concern the specific visualisation of corresponding information categories as well as the design of an effective warning system.


To adequately address the research questions mentioned above, the use of different methods was necessary:

Literature Review & Model Development to establish a theoretical basis for understanding the context-dependent process of risk formation in the processing of personal data and to formalise it for further analysis.

Systematic Qualitative Content Analysis of GDPR and expert interviews to identify relevant information categories.

Delphi Study to sort and weigh the different information categories.

Quantitative Analysis of code frequencies (from the content analysis) as well as evaluations collected in the Delphi study.
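The quantitative step above can be sketched in a few lines: counting how often each category code was assigned during the content analysis, and ranking categories by their mean expert rating from the Delphi study. The category names and ratings below are purely illustrative placeholders, not the project's actual code book or data.

```python
from collections import Counter
from statistics import mean

# Hypothetical coded segments from the qualitative content analysis;
# each entry is the category code assigned to one text segment.
coded_segments = [
    "data_recipient", "processing_purpose", "data_recipient",
    "storage_duration", "processing_purpose", "data_recipient",
]

# Code frequencies: how often each information category occurred.
frequencies = Counter(coded_segments)
print(frequencies.most_common())
# → [('data_recipient', 3), ('processing_purpose', 2), ('storage_duration', 1)]

# Hypothetical Delphi ratings (relevance on a 1–5 scale) from four experts;
# categories are weighted and sorted by their mean rating.
delphi_ratings = {
    "data_recipient":     [5, 4, 5, 4],
    "processing_purpose": [5, 5, 4, 5],
    "storage_duration":   [3, 4, 3, 3],
}
ranking = sorted(delphi_ratings, key=lambda c: mean(delphi_ratings[c]), reverse=True)
print(ranking)
# → ['processing_purpose', 'data_recipient', 'storage_duration']
```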


In the latest project publication, several results are presented:

1. Contextual Model of Perceived Privacy Risk that extends the concept of perceived risks with Helen Nissenbaum's theory of Contextual Integrity:


2. Overview of possible causes and negative consequences in the processing of personal data, distinguishing between latent (dashed line) and tangible (gray shaded) consequences (while tangible consequences are tied to specific contexts and clearly perceptible, latent consequences often remain unknown to, or occur without the awareness of, those affected):


3. Overview of relevant information categories for communicating privacy risks:



The results of the project are intended to be used in the future for the development of a warning system that visually represents relevant information without overwhelming users. It will rely on the developed categories to train a Large Language Model (LLM) for the automatic and scalable analysis of privacy policies.
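As a rough illustration of what such an automatic analysis might look like, the sketch below maps clauses of a privacy policy to risk-relevant information categories. It uses a trivial keyword baseline with hypothetical category names; the envisioned system would instead fine-tune an LLM on policy text annotated with the project's categories.

```python
# Hypothetical category names with illustrative keyword triggers;
# a real system would replace this lookup with a trained classifier.
CATEGORY_KEYWORDS = {
    "data_recipient":     ["share", "third part", "recipient"],
    "processing_purpose": ["purpose", "in order to", "advertis"],
    "storage_duration":   ["retain", "store", "duration"],
}

def classify_clause(clause: str) -> list[str]:
    """Return all risk-relevant categories whose keywords match the clause."""
    text = clause.lower()
    return [cat for cat, kws in CATEGORY_KEYWORDS.items()
            if any(kw in text for kw in kws)]

print(classify_clause(
    "We may share your data with third parties for advertising purposes."))
# → ['data_recipient', 'processing_purpose']
```

Each detected category could then be rendered as the corresponding privacy icon, turning an opaque policy text into a compact visual warning.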