Contesting Automated Decisions:

A View of Transparency Implications

Emre Bayamlioglu


Keywords: Algorithmic Transparency, Automated Decisions, GDPR Article 22, Explainable AI, Techno-Regulation

This paper identifies the essentials of a ‘transparency model’ that aims to scrutinise automated data-driven decision-making systems not through the mechanisms of their operation but through the normativity embedded in their behaviour/action. First, transparency-related concerns and challenges inherent in machine learning are conceptualised as ‘informational asymmetries’, leading to the conclusion that the transparency requirements for the effective contestation of automated decisions go far beyond the mere disclosure of algorithms. Next, the essential components of a rule-based ‘transparency model’ are described as: i) the data as ‘decisional input’; ii) the ‘normativities’ embodied in the system at both the inference and the decision (rule-making) level; iii) the context and further implications of the decision; and iv) the accountable actors.
Emre Bayamlioglu is a researcher at the Tilburg Institute for Law, Technology, and Society (TILT) and an external fellow of the Research Group on Law, Science, Technology & Society (LSTS) at Vrije Universiteit Brussels. For correspondence: <>.

