The search returned 2 results.

Machine Learning in Medicine: Opening the New Data Protection Black Box (journal article, open access)

Agata Ferretti, Manuel Schneider, Alessandro Blasimme

European Data Protection Law Review, Volume 4 (2018), Issue 3, pp. 320–332

Artificial intelligence (AI) systems, especially those employing machine learning methods, are often considered black boxes, that is, systems whose inner workings and decisional logics remain fundamentally opaque to human understanding. In this article, we set out to clarify what the new General Data Protection Regulation (GDPR) says on profiling and automated decision-making employing opaque systems. More specifically, we focus on the application of such systems in the domain of healthcare. We conducted a conceptual analysis of the notion of opacity (black box) using concrete examples of existing or envisaged medical applications. Our analysis distinguishes among three forms of opacity: (i) lack of disclosure, (ii) epistemic opacity, and (iii) explanatory opacity. For each type of opacity, we discuss where it originates and how it can be dealt with according to the GDPR in the context of healthcare. This analysis can offer insights regarding the contested issue of the explainability of AI systems in medicine, and its potential effects on the patient-doctor relationship.

Keywords: Artificial Intelligence, Machine Learning, Black Box, Medicine, GDPR, Transparency


Contesting Automated Decisions: A View of Transparency Implications (journal article)

Emre Bayamlioglu

European Data Protection Law Review, Volume 4 (2018), Issue 4, pp. 433–446

This paper identifies the essentials of a ‘transparency model’ which aims to scrutinise automated data-driven decision-making systems not by the mechanisms of their operation but rather by the normativity embedded in their behaviour/action. First, transparency-related concerns and challenges inherent in machine learning are conceptualised as ‘informational asymmetries’, concluding that the transparency requirements for the effective contestation of automated decisions go far beyond the mere disclosure of algorithms. Next, essential components of a rule-based ‘transparency model’ are described as: i) the data as ‘decisional input’, ii) the ‘normativities’ contained by the system both at the inference and decision (rule-making) level, iii) the context and further implications of the decision, and iv) the accountable actors.

Keywords: Algorithmic Transparency, Automated Decisions, GDPR Article 22
