
How (Un)intelligent Is Our Collaboration With Artificial Intelligence?


I. Introduction

In 1950, computer scientist Alan Turing proposed the Turing Test: a test of machine intelligence requiring that a person, using only the replies to questions posed to both, should be unable to distinguish a machine from a human being. It has since become an important method for evaluating AI. The test has also attracted criticism, with critics questioning whether it is a proper way of measuring machine intelligence and asking what we actually mean by intelligence.

Singularity theorists like Ray Kurzweil predict that ‘artificial superintelligence’ will surpass all human intelligence and trigger runaway technological growth, resulting in unfathomable changes to human civilization. According to critical experts in the field of artificial intelligence, our expectations of AI are predominantly based on misguided conceptions of the machine's potential rather than on its technical performance. As a consequence, human intelligence is too quickly attributed to computers. Philosopher Luciano Floridi even claims that true artificial intelligence does not exist.1 Some AI experts prefer the term ‘statistics on steroids’ or ‘statistics 2.0’, pointing out that computers have not become more intelligent; they merely have more computing power, access to more data and greater interconnection through the Internet of Things.2

Discussing whether a machine is intelligent is relevant, but more urgent is the question of whether our collaborations with machines are intelligent. An intelligent collaboration with AI requires complementary traits, since there is no point in teamwork when all actors possess the same qualities. Yet this is precisely what Singularity theorists predict: that the differences between human brains and artificial brains will disappear, because human intelligence is no more special than computer intelligence. Every scientific breakthrough makes our species less unique. With the advent of the telescope we turned out not to be the center of the universe, with a better understanding of geology we turned out not to be direct creations of God, and now we ought to believe that artificial intelligence is ‘bumping us from our throne’.3

Aside from a hurt ego, there are more important reasons why humans feel the urge to distinguish themselves from computers. Consciousness, common sense, intuition, willpower, intentionality, creativity, imagination, morality, emotional intelligence and phenomenological experience are capacities that computers lack. Few computer scientists have taken into account AI's lack of an evolutionary history as physical beings: unlike computers, humans have the ability to learn things that natural selection did not pre-program us with.4 Distinctive human capacities are extremely important, not to put ourselves back on a throne, but because we need them to complement and collaborate with AI and to guide the further development of AI applications.

II. Unintelligent Collaborations with AI

Politicians and lawyers argue for more research on, and supervision of, the self-learning aspect of artificial intelligence, so that we can ensure computers keep doing what we want. But our computers stopped doing what we want the moment surveillance capitalism5 knocked at our door, reducing us to commodities for the data market. The moment we switch on our devices, the algorithms of the Big Five6 are in charge, gluing us to our screens and keeping us clicking, liking and swiping: generating data-fuel to train their artificial intelligence.

In her book The Age of Surveillance Capitalism (2019), philosopher Shoshana Zuboff shows how the monetization of data, captured by monitoring and predicting people's movements and behaviors on- and offline, shapes our environments, behaviors and choices without our being aware of it. Algorithmic predictions about the things we want to buy, watch, read and click are not successful because big data knows us better than our friends do, as is often suggested; the predictions are correct because they restrict and guide our choices. Algorithms are often trained on data from users who were already exposed to algorithmic recommendations, which creates pernicious feedback loops.7 Based on our clicks, algorithms pin us down to a few categories, for example ‘white-depressed-xenophobic-heterosexual-cat-lover desiring to have children’, which then form the basis for all results shown to that person. As a result, our self-perception changes and our capacity to explore and redefine our identity narrows.
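
To make this mechanism concrete, here is a minimal, hypothetical simulation in Python of the feedback loop Chaney et al describe (all numbers and names are invented for illustration): a recommender trained only on the clicks it partly caused converges on a narrow slice of a user's interests, even when those interests are in fact spread evenly.

```python
import random

random.seed(42)

# The user's true interests are spread evenly across ten topics.
true_interest = {topic: 1.0 for topic in range(10)}

# The recommender's scores start uniform and are updated only from observed clicks.
model_score = {topic: 1.0 for topic in range(10)}

def recommend(k=3):
    """Show the k topics the model currently scores highest."""
    return sorted(model_score, key=model_score.get, reverse=True)[:k]

for step in range(1000):
    shown = recommend()
    # The user can only click on what is shown, even though their
    # true interests cover all topics equally.
    clicked = random.choices(shown, weights=[true_interest[t] for t in shown])[0]
    # Training on exposure-biased clicks reinforces what was already shown.
    model_score[clicked] += 1.0

print(recommend(10))  # a handful of topics dominate; the rest never resurface
```

After a thousand rounds, the few topics that happened to be shown first have accumulated all the clicks, so the remaining ones are never recommended again: the prediction is ‘correct’ only because it constrained the choice.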

Of course, algorithms can do wonderful things, such as predicting the development of cancer and thereby enabling better treatment. But when it comes to predicting human behavior there are many proven pitfalls, since not all aspects of life are suited to quantification. Mathematician Cathy O'Neil describes algorithms as ‘opinions embedded in code’. Her bestseller Weapons of Math Destruction (2016) put algorithmic injustice on the international agenda. Algorithmic models can suffer from bias and mix up correlation and causation. The models are based on majorities and averages, excluding minority perspectives and minority traits. Societal inequalities and stereotypes are not only reflected in algorithms, they are also hardwired into these systems and spread on a larger scale. By prioritizing only the measurable aspects of behavior and cramming the non-measurable aspects into simplified algorithmic models, we lose sight of ambiguity and diversity. These problems have not stopped governments from implementing algorithmic predictions in smart cities, predictive policing and social welfare systems. Instead of judging humans on the basis of what they are doing, Western governments and companies increasingly judge humans on the basis of what they might do. Although these predictions are not always accurate, they guide how citizens are approached in the online and offline world.
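
A minimal, hypothetical sketch of O'Neil's point about majorities and averages (the population, the decision rule and all numbers are invented): a single ‘average’ rule can look highly accurate overall while being systematically wrong for a minority whose pattern differs.

```python
import random

random.seed(0)

# Hypothetical population: 95% majority, 5% minority with an inverted pattern.
def sample():
    x = random.random()
    if random.random() < 0.95:
        return ("majority", x, x > 0.5)   # for the majority, the label tracks x
    return ("minority", x, x <= 0.5)      # for the minority, the opposite holds

data = [sample() for _ in range(10_000)]

# One decision rule for everyone, implicitly fit to the majority's pattern.
def predict(x):
    return x > 0.5

for group in ("majority", "minority"):
    rows = [(x, y) for g, x, y in data if g == group]
    accuracy = sum(predict(x) == y for x, y in rows) / len(rows)
    print(group, round(accuracy, 2))

# Near-perfect accuracy for the majority, near-zero for the minority,
# while the overall average still looks excellent.
```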

According to Zuboff, humans are slowly transforming into automatons: becoming just as predictable, controllable and programmable as machines. Alongside Zuboff, a growing number of mathematicians, lawyers, historians, media theorists and social scientists problematise data-driven technologies.8 Their advice is to stop worrying about a superintelligence that will replace us, and to start worrying about the devices, sensors and algorithms that are replacing human decision-making without possessing proper understanding, mental states, interpretation, emotional intelligence, semantic skills, consciousness or self-awareness.

Political scientist Virginia Eubanks investigated the impact of data mining, algorithms and predictive risk models on poor and working-class people in America. With examples from their everyday lives, she describes how government data have imposed a regime of surveillance, profiling, exclusion and punishment. While data technologies are often praised by policymakers as a way to deliver services to the poor more efficiently, Eubanks shows that they worsen inequality. The ‘digital poorhouse’, as she calls it, allows us to manage the poor rather than take responsibility for eradicating poverty. Instead of more data, the poor need better resources.9

To prevent ourselves from relying too readily on dysfunctional AI systems like the ones Eubanks describes, we need a better understanding of the capacities and weaknesses of humans as well as computers. A complementary collaboration rests on understanding the differences between the two.

III. Human-like Machines

Search engines and computer systems that calculate when someone is eligible for a specific insurance or medical treatment are generally labeled artificially intelligent. According to critics, these types of computer systems are not necessarily intelligent, since there is more to intelligence than pattern recognition and computation. When we look at language, reasoning ability, consciousness, planning and common sense, that kind of AI remains reserved for science fiction films like Ex Machina and Her, even though popular media give us the impression that we are already surrounded by computers that can think, feel and understand in the same ways we do.

An example is image recognition, at which computers are said to be as skilled as humans. In popular media, we don't hear about computers mistaking a road sign with stickers on it for a refrigerator. Computers are getting better at categorising images, but they do not understand what they see, nor do they recognise images in the sophisticated manner people do. Humans are able to distinguish relevant from irrelevant information and are therefore not distracted by a few stickers. Our common sense enables us to react adequately in situations we have never encountered before. Computers are better at playing chess and Go, but when it comes to intuitively understanding social contexts, a two-year-old surpasses a computer.10

Although categorising and recognising are not the same thing, they are often mixed up. An example is the ‘AI gaydar’ study, in which an algorithm ‘recognised’ all men with accentuated eyebrows as homosexual. In practice, homosexuality turned out to be more complex than detecting deviating eyebrows. Likewise the iPhone X, which unlocks through facial recognition, does not appear to recognise all faces, for example those of children and of people who look physically alike.

Robot engineers intentionally design robots with face, speech and emotion recognition technology in such a way that it is easy for us to forget their mechanical nature. Robot engineer Pascale Fung claims to build ‘robots with a heart and soul’. According to Fung, robots are empathic and able to understand human language and emotions.11 Claims of this kind are misleading, because social robots do not have empathy; they simulate empathy. Nor do they understand our language; they simulate understanding it.

Even if simulated robot empathy can be as valuable as human empathy, these differences matter because they can help us establish a complementary collaboration with computers. Moreover, in dealing with robotic look-alikes and talk-alikes, we get to know ourselves and are challenged to refine our definition of what a human being is.

IV. Machine-like Humans

The most distinctive and most hotly debated characteristic of humans compared to computers is consciousness. Although neuroscientists do not know what consciousness is or exactly how it works, Singularity theorists predict that one day computers will have it. Whether this comes true is determined not so much by progress in AI as by our understanding of the concept. It is easy to claim that machines will have consciousness if you define consciousness as accepting new information, storing and retrieving old information, and cognitively processing it all into perceptions and actions. This definition leaves no room for the perspective that creativity, empathy and our sense of freedom do not arise from logic or calculation.

Computer scientists who predict that computers will have consciousness argue that there is nothing more to humans than computable aspects. Where Zuboff argues that we are slowly transforming into automatons, they argue that we are not becoming machines; we simply are machines. Robot engineers like Fung claim, for example, that simulating empathy is the same as having empathy and that decoding language is the same as understanding language. Consequently, robots are considered suitable candidates for social jobs, such as waiting tables. Of course robots are capable of taking orders, but there is more to the job than that: a good waiter can gauge the atmosphere, empathise with guests and respond to unexpected situations. To let a serving machine do its work, by contrast, customers must adjust their behavior in advance: structured movements, clear facial expressions and unambiguous language. To make sure a machine understands us properly, we adjust our behavior and environment to the standards of the machine, becoming more machine-like ourselves.

Whether or not you consider human beings computable machines, we are increasingly surrounded by data-driven technologies and robots that standardize our behavior and address us as rational actors. Smartphones, health apps, wearable technologies, digital assistants, social robots and smart toys are not neutral devices. They represent social, emotional and health regimes that eliminate irrational behavior and encourage us to behave in accordance with the moral standards programmed into these devices: don't eat another cookie, your sugar intake is at 25 grams; call a friend, you haven't been socially active on your smartphone for two days; watch this funny cat clip, your facial expression has a sadness score of 9 points. These technologies promote the cultural ideal of what sociologist Arlie Hochschild calls ‘the managed heart’: the idea that we can use these technologies to continuously control our emotions and behavior in a rational manner. To determine which neighborhoods are unsafe, what we want to buy, watch and listen to, or whom we want to date or hire for a job, we no longer have to think or rely on our senses; we outsource the decision to algorithms that guide our choices and confirm our own worldviews. Why opt out if you can spend your entire life in a warm bath of filter bubbles and quantified simplifications?
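
Those moral standards are, quite literally, thresholds in code. A minimal, hypothetical sketch in Python of the nudging logic the examples above describe (every rule, field name and cut-off here is invented for illustration):

```python
from dataclasses import dataclass

@dataclass
class UserState:
    sugar_grams_today: float
    days_since_social_activity: int
    sadness_score: int  # e.g. from facial-expression analysis, scale 0-10

def nudges(state: UserState) -> list[str]:
    """Hard-coded 'moral standards': every threshold is an opinion embedded in code."""
    messages = []
    if state.sugar_grams_today >= 25:            # who decided 25 grams is the limit?
        messages.append("Don't eat another cookie.")
    if state.days_since_social_activity >= 2:    # who decided two days is too long?
        messages.append("Call a friend.")
    if state.sadness_score >= 9:                 # who decided sadness needs fixing?
        messages.append("Watch this funny cat clip.")
    return messages

print(nudges(UserState(sugar_grams_today=30,
                       days_since_social_activity=3,
                       sadness_score=9)))
```

None of these cut-offs is discovered in the data; each one is chosen by a designer, which is exactly what O'Neil means by ‘opinions embedded in code’.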

V. Intelligent Collaborations with AI

The answer is: because it makes us more machine-like and less able to establish intelligent, complementary collaborations with AI. The differences between human brains and artificial brains may indeed shrink, not because of artificial superintelligence, but because we are becoming more programmable. Of course, we are by nature technological creatures and have always been shaped by technology. But we have not always been shaped by data-driven technologies run by surveillance capitalism; never in the history of mankind have human beings been quantified, measured and monitored on such a large scale.

Provided we stop nudging humans to behave more machine-like and start investing in machine learning and human learning alike, AI will not create a useless class of humans. Rather, it will enable us to outsource repetitive and dull tasks, creating more room for human value and significance in our own work. We need to stop optimizing humans for the data market and explore when and how humans are becoming more programmable. Who is doing the thinking as we depend on data-driven technologies, and who controls these technologies? We need to consider which human skills are lost, distinguishing situations where this is problematic from those where it is not, and which new skills are gained. Humane technology movements are already asking these questions, developing civil technologies that guarantee inclusion, serendipity and autonomy. An example of technology that allows complementary collaboration with AI is IBM's Project Debater: a computer that shows us divergent viewpoints on a particular topic so that we can sharpen our own thinking.

Additionally, we need more nuanced understandings of what artificial intelligence is and how our perceptions of AI are shaped by marketing discourses from Silicon Valley and the Big Five. To do so, we need an interdisciplinary debate about what we think of as intelligence and consciousness, taking into account that surveillance capitalism and AI-systems are also changing our (perceptions of) intelligence and consciousness.

In order to establish an intelligent collaboration with AI, we should neither underestimate nor overestimate the abilities of computers. In a number of contexts AI systems outperform human experts: Stanford researchers, for example, have developed an algorithm that can diagnose pneumonia better than radiologists. But, as Cathy O'Neil and Virginia Eubanks have demonstrated, not all problems, decisions and predictions are suited to quantification and AI solutions. The field of AI derives mainly from a mathematical and engineering tradition in which humans do not occupy a central position. Since AI is used in many non-mathematical domains, such as communication, social welfare and the service industry, AI researchers and engineers need to focus more on divergent cultural logics and irrationalities, respecting the ambiguity of our world in everyday contexts.

Lastly, and this is rarely mentioned in discussions about the future of AI, we need to consider the impact of our expanding digital ecosystem on the environment. IoT devices can contribute to energy savings, but the total sum of data centers, home appliances and high-tech consumerism (consider Amazon's patent on algorithmic predictions for anticipatory shipping) will add to the world's energy bill. Experts are divided on whether AI applications spell doom or salvation for the environment, but they agree that we should not wait too long to find out.

Notes

[1] Luciano Floridi, ‘True AI Is Both Logically Possible and Utterly Implausible’ (Aeon, 9 May 2016) <https://aeon.co/essays/true-ai-is-both-logically-possible-and-utterly-implausible> accessed 20 March 2019

[2] Ajay Agrawal, Joshua Gans and Avi Goldfarb, Prediction Machines: The Simple Economics of Artificial Intelligence (Harvard Business Review Press 2018)

[3] Gijsbert Werner, ‘De menselijke geest uniek dat had u gedacht’ NRC Handelsblad (Amsterdam, 13 October 2017)

[4] Eliezer Yudkowsky, ‘Making Sense with Sam Harris #116 - AI: Racing Toward the Brink’ <https://www.youtube.com/watch?v=MPiNkhdiNUI> accessed 22 March 2019

[5] Monetization of data captured through monitoring people's behaviors on- and offline.

[6] Alphabet, Amazon, Apple, Facebook, Microsoft

[7] Allison Chaney, Brandon Stewart and Barbara Engelhardt, ‘How Algorithmic Confounding in Recommendation Systems Increases Homogeneity and Decreases Utility’ (12th ACM Conference on Recommender Systems, Vancouver, October 2018)

[8] José van Dijck, ‘Datafication, Dataism and Dataveillance: Big Data between Scientific Paradigm and Ideology’ (2014) 12(2) Surveillance & Society 197-208; Cathy O'Neil, Weapons of Math Destruction: How Big Data Increases Inequality and Threatens Democracy (Broadway Books 2017); Jamie Bartlett, The People Vs Tech: How the Internet is Killing Democracy and How We Save It (Random House 2018); James Bridle, New Dark Age: Technology, Knowledge and the End of the Future (Verso Books 2018); Edward Tenner, The Efficiency Paradox: What Big Data Can't Do (Knopf 2018); Brett Frischmann and Evan Selinger, Re-engineering Humanity (Cambridge University Press 2018); Douglas Rushkoff, Team Human (W. W. Norton & Company 2019)

[9] Virginia Eubanks, Automating Inequality: How High-Tech Tools Profile, Police, and Punish the Poor (St. Martin’s Press 2018)

[10] Bennie Mols, ‘Robots lijken best knap tot je dieper gaat graven’ NRC Handelsblad (Amsterdam, 20 October 2017)

[11] Pascale Fung, ‘The Mind of the Universe’ (VPRO, July 2017) <https://www.vpro.nl/programmas/the-mind-of-the-universe/kijk/wetenschappers/fung.html> accessed 24 February 2019
