Creating Transparency in Algorithmic Processes



Bernadette Boscoe

DOI: https://doi.org/10.21552/delphi/2019/1/5

This work is distributed under the Creative Commons Attribution 4.0 International Licence (CC BY 4.0).



In recent years, innovations such as self-driving cars and image recognition have brought attention to artificial intelligence (AI). Machine learning (ML), an application of artificial intelligence, consists of algorithms that analyse data and make predictions. ML is increasingly being used to make critical decisions about individuals, such as whether they should be granted parole or whether they qualify for a bank loan. These new technologies are largely unregulated, and their processes are insufficiently transparent. Often the predictive algorithms at the core of these technologies are created by private companies and black-boxed, meaning their internal workings are hidden from external oversight. What is worse, even the engineers designing these algorithms do not fully understand how the processes work. As modern society becomes more computational and data-driven, with computers being used to make critical decisions affecting individuals, transparency in algorithmic processes is more important than ever. Any ethical and just society requires some degree of openness and access in its technologies. While it may not be possible to explain an algorithmic process in its entirety, we can employ what I will call checkpoints to advance our understanding at various stages of the machine-learning process. Transparent checkpoints can afford policymakers, sociologists, philosophers, information scientists and other non-computer scientists the opportunity to critically evaluate algorithmic processes in the interest of ethical concerns such as fairness and neutrality. The goal of this article is to explain machine-learning concepts in a way that will be useful to policymakers and other practitioners. I will give examples of ways biases can be embedded in, introduced into, and reinforced by the original data, and I will introduce six checkpoints at which black-boxed algorithms can be made transparent, taking care to show how these checkpoints advance our understanding of ethical concerns in machine-learning systems.
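To make the kind of system described above concrete, consider a minimal sketch in Python using scikit-learn. The loan-application features and data here are invented for illustration and are not drawn from the article; the final loop stands in for one possible transparency checkpoint, in which the learned weights are exposed so a non-engineer can ask which inputs drive a decision.

# A minimal sketch of a predictive loan-approval model, with invented
# (hypothetical) features: income, debt ratio, years employed.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
feature_names = ["income", "debt_ratio", "years_employed"]

# Synthetic applicant data and synthetic "approved" labels, correlated
# with income and employment history for illustration only.
X = rng.normal(size=(500, 3))
y = (X[:, 0] + X[:, 2] - X[:, 1] + rng.normal(scale=0.5, size=500) > 0).astype(int)

model = LogisticRegression().fit(X, y)

# The model makes a critical decision about an individual.
applicant = np.array([[0.2, 1.5, -0.3]])
print("approve loan?", bool(model.predict(applicant)[0]))

# Checkpoint: inspect the learned weights, so an outside reviewer can
# ask whether the model leans on features that proxy for protected
# attributes, rather than treating the system as a black box.
for name, weight in zip(feature_names, model.coef_[0]):
    print(f"{name}: {weight:+.2f}")

A simple linear model like this is directly inspectable; for more opaque models the same checkpoint idea applies, but the inspection step requires additional explanation techniques.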

Bernadette Boscoe is a PhD Candidate at the Department of Information Studies, University of California, Los Angeles. This article was adapted from a presentation delivered at the Herrenhausen Conference: Transparency and Society—Between Promise and Peril (June 2018). An extended version of this article was published earlier this year in Henning Blatt et al (eds), Jahrbuch für Informationsfreiheit und Informationsrecht 2018 (Lexxion Publisher 2019).
