- Volume 4 (2018), Issue 3
- Pages 320–332
Machine Learning in Medicine:
Opening the New Data Protection Black Box
Open Access
This work is distributed under the Creative Commons Attribution 4.0 International Licence (CC BY 4.0).
Artificial intelligence (AI) systems, especially those employing machine learning methods, are often considered black boxes: systems whose inner workings and decision logic remain fundamentally opaque to human understanding. In this article, we clarify what the new General Data Protection Regulation (GDPR) says about profiling and automated decision-making that employ such opaque systems, focusing specifically on their application in healthcare. We conduct a conceptual analysis of the notion of opacity (the black box), using concrete examples of existing or envisaged medical applications. Our analysis distinguishes three forms of opacity: (i) lack of disclosure, (ii) epistemic opacity, and (iii) explanatory opacity. For each type of opacity, we discuss where it originates and how it can be addressed under the GDPR in the context of healthcare. This analysis offers insights into the contested issue of the explainability of AI systems in medicine and its potential effects on the patient-doctor relationship.
Keywords: Artificial Intelligence, Machine Learning, Black Box, Medicine, GDPR, Transparency