Machine Learning in Medicine: Opening the New Data Protection Black Box

Agata Ferretti, Manuel Schneider, Alessandro Blasimme

DOI https://doi.org/10.21552/edpl/2018/3/10

This work is distributed under the Creative Commons Attribution 4.0 International licence (CC BY 4.0).

Keywords: artificial intelligence, machine learning, black box, medicine, GDPR, transparency


Artificial intelligence (AI) systems, especially those employing machine learning methods, are often considered black boxes, that is, systems whose inner workings and decisional logics remain fundamentally opaque to human understanding. In this article, we set out to clarify what the new General Data Protection Regulation (GDPR) says about profiling and automated decision-making employing opaque systems. More specifically, we focus on the application of such systems in the domain of healthcare. We conducted a conceptual analysis of the notion of opacity (black box) using concrete examples of existing or envisaged medical applications. Our analysis distinguishes among three forms of opacity: (i) lack of disclosure, (ii) epistemic opacity, and (iii) explanatory opacity. For each type of opacity, we discuss where it originates and how the GDPR addresses it in the context of healthcare. This analysis offers insights into the contested issue of the explainability of AI systems in medicine and into its potential effects on the patient-doctor relationship.

Agata Ferretti, equal contributor, PhD candidate at the Health Ethics and Policy Lab, ETH Zurich. Manuel Schneider, equal contributor, PhD candidate at the Health Ethics and Policy Lab, ETH Zurich. Dr Alessandro Blasimme, Senior Researcher in Bioethics at the Health Ethics and Policy Lab, ETH Zurich. For correspondence: alessandro.blasimme@hest.ethz.ch.
