

Automate This



How Algorithms Came To Rule Our World



Christopher Steiner


(LA Review of Books)

'Algorithm' is a word whose time has come.

Algorithms are changing the worlds we inhabit - and they're changing us. They pop up in op-eds on weighty topics like the future of labor in an increasingly automated world. Writing about how new trading algorithms are taking over Wall Street, a dismayed journalist wonders 'which office jobs are next?' - which of us, in other words, will be consigned to the dustbin of irrelevancy? The solution, others gamely counter, may be more algorithms: 'How do you find a job? Ask the algorithm.' Algorithms promise to bring reliability and objectivity to otherwise uncertain procedures. In 2007, a famous billboard for ASK.com happily capitalized on this promise: it announced to San Franciscans that 'the algorithm constantly finds Jesus.' Since then, most of us have adjusted our expectations. Algorithms, we have realized, can be carriers of shady interests and vehicles of corporate guile. And so, as a new batch of commentators urge, we must 'make algorithms accountable.'

Yes, as in the opening of a sci-fi novel, 'the algorithms' have arrived. Do they come in peace or are they here to enslave us? This, I argue, is the wrong question.

When we now imagine our future, we think of algorithms. They evoke mixed images of liberation and alienation. To some commentators, algorithms are sublime problem-solvers that will help us end famine and cheat death. They are so effective that their extreme boosters allege they will usher in the end of the human era and the emergence of artificial superintelligence - a hi-tech version of the theological 'union with God.' For others, the algorithms' promise of power is fraught with ambiguity. In Homo Deus, historian Yuval Harari envisages the rise of 'Dataism,' a new universal faith in the power of algorithms; Dataism will, he contends, help people make sense of a world run by machines just as other religions have helped make sense of earlier dissonant realities. His book is not for readers who think the devil is in the details; it is centered rather on a sweeping Faustian bargain: 'humans agree to give up meaning in exchange for power.' Growing inequality and the eventual rule of machines might well be part of the price we pay.

Harari's book paints the future as a technological dystopia, which is anything but a new genre. Indeed, the rule of machines is a recurring fear in the modern period. In the 1970s, historian of technology Langdon Winner famously called it the 'technics-out-of-control' theme. For him, technology has become largely 'autonomous' because humans have ignored its political dimension, and they've done so at their peril. He pointed to the evolution of the very word 'technology' from the 19th to the mid-20th century as particularly revealing in this regard; he argued that the meaning of the word had morphed from something relatively precise, limited and unimportant to something vague, expansive and highly significant, laden with both utopic and dystopic import. The word had become amorphous in the extreme, a site of semantic confusion - surely a sign, he concluded, that the languages of ordinary life as well as those of the social sciences had failed to keep pace with the reality that needs to be discussed. The political implications included radicalization of the debate around technology, a proliferation of useless dichotomies, and the disappearance of public spaces for discussing technological change in an informed manner.

What has happened to 'algorithm' in the past few years, I'd argue, is remarkably similar: we are experiencing a comparable moment of semantic and political inadequacy. But there is more: the term is trying to capture new processes of technological change. The way we talk about algorithms can be vague and contradictory, but it is also evocative and revealing. Semantic confusion may in fact signal a threshold moment when it behooves us to revise entrenched assumptions about people and machines. In what follows, rather than engaging in a taxonomic exercise to norm the usage of the word 'algorithm,' I'd like to focus on the omnipresent figure of the algorithm as an object that refracts collective expectations and anxieties. Let's consider its flexible, ill-defined, and often inconsistent meanings as a resource: a messy map of our increasingly algorithmic life.

II.

As a historian of science, I have been trained to think of algorithms as sets of instructions for solving certain problems - and so as neither glamorous nor threatening. Insert the correct input, follow the instructions, and voila, the desired output. A typical example would be the mathematical formulas used since antiquity to calculate the position of a celestial body at a given time. In the case of a digital algorithm, the instructions need to be translated into a computer program - they must, in other words, be 'mechanizable.' Understood in this way - as mechanizable instructions - algorithms were around long before the dawn of electronic computers. Not only were they implemented in mechanical calculating devices, they were used by humans who behaved in machine-like fashion. Indeed, in the pre-digital world, the very term 'computer' referred to a human who performed calculations according to precise instructions - like the 200 women trained at the University of Pennsylvania to perform ballistic calculations during World War II. In her classic article 'When Computers Were Women,' historian Jennifer Light recounts their long-forgotten story, which takes place right before those algorithmic procedures were automated by ENIAC, the first electronic general-purpose computer.
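
To make that terse sense of the word concrete, here is a minimal sketch, in Python, of one of the oldest recorded algorithms: Euclid's procedure for finding the greatest common divisor of two numbers. It is simply a short list of mechanizable instructions that a person or a machine can follow.

    def gcd(a: int, b: int) -> int:
        """Euclid's algorithm: repeatedly replace the pair (a, b)
        with (b, a mod b) until the remainder is zero; the last
        nonzero value is the greatest common divisor."""
        while b != 0:
            a, b = b, a % b
        return a

    # Insert the correct input, follow the instructions, and voila:
    print(gcd(252, 105))  # prints 21

Nothing in these lines cares whether they are executed by a clerk with pencil and paper or by a processor; that indifference is what 'mechanizable' means here.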

Terse definitions have now disappeared, however. We rarely use the word 'algorithm' to refer solely to a set of instructions. Rather, the word now usually signifies a program running on a physical machine - as well as its effects on other systems. Algorithms have thus become agents, which is partly why they give rise to so many suggestive metaphors. Algorithms now do things. They determine important aspects of our social reality. They generate new forms of subjectivity and new social relationships. They are how a billion-plus people get where they're going. They free us from sorting through multitudes of irrelevant results. They drive cars. They manufacture goods. They decide whether a client is creditworthy. They buy and sell stocks, thus shaping all-powerful financial markets. They can even be creative; indeed, according to engineer and author Christopher Steiner, they have already composed symphonies 'as moving as those composed by Beethoven.'

Do they, perhaps, do too much? That's certainly the opinion of a slew of popular books on the topic, with titles like Automate This: How Algorithms Took Over Our Markets, Our Jobs, and the World.

Algorithms have captured the scholarly imagination every bit as much as the popular one. Academics variously describe them as a new technology, a particular form of decision-making, the incarnation of a new epistemology, the carriers of a new ideology, and even as a veritable modern myth - a way of saying something, a type of speech that naturalizes beliefs and worldviews. In an article published in 2009 entitled 'Power Through the Algorithm,' sociologist David Beer describes algorithms as expressions of a new rationality and form of social organization. He's onto something fundamental that's worth exploring further: scientific knowledge and machines are never just neutral instruments. They embody, express, and naturalize specific cultures - and shape how we live according to the assumptions and priorities of those cultures.

III.

Historians of science have shown how technological artifacts capture and transform people's imagination, becoming emblematic of certain eras. They are, at once, tools for doing and tools for thinking. The mechanical clock in Newton's time is a prime example. Consider the heuristic power and momentous implications - scientific, philosophical, and cultural - of seeing the universe as an immense piece of machinery, whose parts relate to one another like those of a sophisticated precision clock. A clockwork universe means, for example, that one can expect to discover regular and immutable laws governing phenomena. But the clock is not simply an image. A clock that can measure seconds and fractions of a second inevitably changes our perception of time. It turns time into something that can be broken down into small units, scientifically measured - and accurately priced. The precision clock helped spawn new temporalities as well as oceanic navigation and the industrial revolution. It was the industrial revolution's metronome. At the same time, the clock was taken to be the best representation of the world it was shaping: a mechanistic, quantifiable, and predictable world, made up of simple elementary components and mechanical forces.

Similarly, seeing the workings of the human mind as analogous to the operations of a hefty Cold War electronic computer signals a momentous cognitive and social shift. Historian of science Lorraine Daston describes it as the transition from Enlightenment reason to Cold War rationality, a form of cognition literally black-boxed in shiny-cased machines, such as the inhumanly perfect monolith in Stanley Kubrick's 2001: A Space Odyssey. Many sharp minds of the post-World War II era believed that the machine's algorithmic procedures, free of emotions and bias, could solve all kinds of problems, including the most urgent ones arising from the confrontation between the two superpowers. It did not work out that way. The world was too complicated to be reduced to game theory, and by the late 1960s the most ambitious dreams of automated problem-solving had been dragged into the mud of the Vietnam War.

Clocks and Cold War computers were, I'm suggesting, emblematic artifacts. They shaped how people understood and acted within the world. Clocks and computers also shaped how people understood themselves, and how they imagined their future.

It is in this sense that we now live in the age of the algorithm.

IV.

How accurate is it to say that algorithms do things?

The image of the algorithm-as-doer certainly captures a salient feature of our experience: algorithms can alter reality, changing us and the world around us. The Quantified Self movement, which promotes 'self knowledge through numbers,' is a prime example of how subjectivities can be reshaped algorithmically - in this case, by monitoring vital functions and processing data relative to lifestyles; 'the quantified self' engages in data-driven self-disciplining practices.

Algorithms do not just shape subjectivities. The world of our experience is pervaded by code that runs on machines. In the current, expanded sense of the word, algorithms generate infrastructures - like social media - that shape our social interactions. They don't just select information for us, they also define its degree of relevance, how it can be acquired, and how we can participate in a public discussion about it. As media scholar Ganaele Langlois aptly puts it, algorithms have the power to enable and assign 'levels of meaningfulness' - thus setting the conditions for our participation in social and political life.

The fact that algorithms create the conditions for our encounter with social reality contrasts starkly with their relative invisibility. Once we become habituated to infrastructures, we are likely to take them for granted. They become transparent, as it were. But there is something distinctive about the invisibility of algorithms. To an unprecedented degree, they are embedded in the world we inhabit. This has to do with their liminal, elusive materiality. In sociological parlance, we could say that algorithms are easily black-boxed, a term I used above to describe how Cold War rationality disappeared into computers. To black-box a technology is to turn it into a taken-for-granted component of our life - in other words, to make it seem obvious and unproblematic. The technology is thus shielded from the scrutiny of users and analysts, who cease seeing it as contingent and modifiable, accepting it instead as a natural part of the world. At this point the technology can become the constitutive element of other, more complex technological systems. Historically, black-boxing has been particularly effective when the technology in question depends on a high degree of logical and mathematical knowledge. Granted, black-boxing does not happen because mathematics is obvious to the general public, but because of the widespread assumption that mathematics consists of deductive knowledge that - as such - is merely instrumental. A technical project like that of a freeway system is, by contrast, saturated with interests; no one would argue for its being economically and politically neutral. But manipulating strings of numbers, or code, according to formal rules? What could possibly be social or indeed biased about that? Aren't these operations purely technical, and therefore neutral?

Not quite. Let me offer an example. Think about the algorithms that produce and certify information. In a 2012 programmatic article entitled 'The Relevance of Algorithms,' media scholar Tarleton Gillespie identified the many ways in which these algorithms have public relevance. As he points out, algorithms select information and assess relevance in very specific ways, and users then modify their practices in response to the algorithms' functioning. Indeed, algorithms produce new 'calculated publics' by presenting groups back to themselves. Their deployment is accompanied, observes Gillespie, by 'the promise of objectivity,' whereby 'the algorithm is positioned as an assurance of impartiality.' These algorithms play a role traditionally assigned to expert groups touting or channeling what might be termed a traditional editorial logic. How did the expert judgment of such groups translate into mechanized procedures? Was this translation straightforward?

Hardly, it turns out. The way algorithms manage information is not simply a mechanized version of that older logic. It is a new logic altogether, an algorithmic logic, which, to quote Gillespie again, 'depends on the proceduralized choices of a machine, designed by human operators to automate some proxy of human judgment or unearth patterns across collected social traces.' Endorsing this logic and turning algorithms into trusted information tools is not an obvious or necessary transition: it is a collective choice that has important social implications.

If we want to understand the impact of these algorithms on public discourse, concludes Gillespie, it is not sufficient to know 'how they work.' We need to examine 'why [they] are being looked to as a credible knowledge logic' and which political assumptions condition their dissemination and legitimacy. In other words, we need to be aware of the entanglement of the algorithm with its ecology - with the mechanical and human environment within which that particular set of instructions is interpreted and put to work.

To be sure, understanding and visualizing algorithms as embedded in their ecologies is difficult. We tend to default to imagining them in isolation, as self-contained entities. The figure of the algorithm-as-doer reinforces an image of the algorithm as a tiny machine crunching data and mixing them up to produce the desired result. You can find its visual rendering in a tongue-in-cheek image of EdgeRank, the Facebook algorithm that decides which stories appear in each user's newsfeed; the image portrays a 19th-century set of three cast-iron grinders, one for each of its main 'ingredients': affinity, weight, and time. Such images suggest that algorithms exist independently of their ecology, invariably producing the same effects, wherever and whenever deployed.
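
Purely for illustration, here is what that caricature looks like as code: a toy score, written in Python, that multiplies hypothetical affinity, weight, and time-decay values and adds them up, loosely following the way EdgeRank's three ingredients were publicly described. Every number, including the decay constant, is invented.

    import math

    def edgerank_score(edges, now):
        """Toy news-feed score: sum over a story's interactions of
        affinity * weight * time_decay. All values are made up."""
        score = 0.0
        for edge in edges:
            affinity = edge["affinity"]          # how close viewer and author are
            weight = edge["weight"]              # e.g., a comment counts more than a like
            age_hours = (now - edge["time"]) / 3600.0
            decay = math.exp(-0.1 * age_hours)   # hypothetical decay constant
            score += affinity * weight * decay
        return score

    story = [
        {"affinity": 0.8, "weight": 4.0, "time": 1_000_000},   # a comment
        {"affinity": 0.5, "weight": 1.0, "time": 1_010_000},   # a like
    ]
    print(edgerank_score(story, now=1_020_000))

Read in isolation, such a snippet seems to determine the newsfeed all by itself - which is exactly the illusion at issue.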

That's not what happens with sets of instructions running on physical machines interacting with other systems. But the figure of the algorithm-as-a-tiny-machine is potent. When we look at such an idealized image of a machine, Ludwig Wittgenstein argues in Philosophical Investigations, 'everything else, that is its movements, seems to be already completely determined.' It's a rigid and therefore absolutely reliable and predictable object - the stuff of technological dreams, or nightmares.

Experience tells us that real machines do not behave like that. Rather, the idealized machine is a projection of the alleged rigidity of logical rules and mathematical reasoning. All the possible movements of the 'machine-as-symbol' are predetermined, writes Wittgenstein; they are already in it 'in some mysterious way' - just as, he implies, the correct result of 2 + 2 is already there, as a shadow, when one writes those three symbols.

Summing up, the deterministic view of the algorithm - the figure of the algorithm that does things - certainly helps us understand how, as a technological artifact, it can change the world we live in. In this type of speech, the term 'algorithm' functions as a synecdoche for software and larger sociotechnical systems. The algorithm-as-doer, however, is also misleading precisely because it hides its larger ecological context; it represents the algorithm as a self-contained mechanism, a tiny portable machine whose inner workings are fixed and whose outcomes are determined. By contrast, an empirical study of algorithms suggests that we can understand their functioning — and their meaning — only by considering the sociotechnical ecologies within which they are embedded.

V.

There is another important reason why the algorithm-as-doer is misleading: it conceals the design process of the algorithm, and therefore the human intentions and material conditions that shaped it.

Thus far, I've argued for the significance of the ecology of algorithms, which is primarily a spatial and synchronic notion. It emphasizes the algorithm’s relational properties - how it interacts with machines and human collectives. But we need something more. Consider the example of algorithms that produce and certify information. In exploring their ecology, we can address important questions about authority, trust, and reliability. But what about the logic that shaped their design in the first place? Who decided the criteria to be adopted and their relative weight in the decision-making process? Why were the algorithms designed in one particular way and not another? To answer these questions, we need to see the technical features of an algorithm as the outcome of a process. In other words, we need a historical - indeed genealogical - understanding of the algorithm. The notion of genealogy is rooted in temporality and diachronicity; it calls attention to the historical emergence of the algorithm's properties, their contingency and precarious stability. It invites us to question technical features that would otherwise seem obvious and self-explanatory.

A historical sensibility allows us to situate algorithms within a longer quest for mechanization and automation. Like clocks and the programs of early electronic computers before them, current digital algorithms embody an aspiration to mechanize human thought and action in order to make them more efficient and reliable. This is a familiar and yet also unsettling story, constitutive of our modernity. In the most iconic frame from the 1936 comedy Modern Times, Charlie Chaplin is literally consumed by the cogs of a mechanized factory. As machines become more efficient, the image suggests, they become more deadly. But what does a term like 'efficiency' even mean in this context? Theorizing the antagonistic relationship between labor and machines in 19th-century factories, Karl Marx argued that the new machines not only increased production and reduced costs, but were 'the means of enslaving, exploiting, and impoverishing the laborer.' Mechanization, he argued, was a weapon in class warfare.

Scholars in the Marxist tradition have continued to pay attention to mechanized production. In Labor and Monopoly Capital: The Degradation of Work in the Twentieth Century (1974), for instance, Harry Braverman argued that the kind of automation prevailing in American factories at the time was far from being an obvious imperative. Rather, numerical control machinery had been designed to reshape the relations between management and labor in order to wrest control over the production process from the workshop floor. Other kinds of automation were possible, claimed Braverman, but were not pursued because they would not have had the same social effects. Processes of mechanization and automation, in short, are not simply about productivity and profit. Design features are shaped by social relations. In the 1980s, a wave of social studies built on this intuition to argue that technological artifacts are inherently social - in other words, any technology bears the mark of the specific culture and interests that shaped it. The technical sphere is never truly separated from the social sphere: the technical is the social. Technological change is thus never neutral, let alone natural. To say a mechanized procedure is more 'efficient' than its predecessor is not an adequate historical explanation for its success. The notion of efficiency is always relative to a set of assumptions and goals. Making these assumptions and goals visible is thus a prerequisite for any informed discussion about technological change and its implications.

How exactly do these insights apply to the study of algorithms? Consider the work of sociologist Donald MacKenzie on the assumptions and negotiations that shaped certain algorithms now embedded into software used by financial traders worldwide. Their design could have been different; there is never just one way to automate a given financial transaction and, more generally, there is never just one way to regulate a market. Choices are made, and these choices do not follow a neutral universal logic; they are the outcome of contingent interests and negotiations. In a similar vein, media scholars Elizabeth Van Couvering and Astrid Mager have shown how algorithms behind for-profit search engines are shaped by specific business models, based mainly on user-targeted advertising. They have also shown how these search algorithms stabilize and reinforce the socioeconomic practices they embody.

Precisely because of their unique combination of pervasiveness and invisibility, algorithms can effectively embed and amplify existing social stratifications. The neutrality of algorithms is therefore an illusion. Or, in some cases, a powerful rhetorical tool.

VI.

Digital algorithms should be studied within a long history of mechanization and automation processes. I believe, however, that they also pose new challenges for social scientists. Toward the end of 'The Relevance of Algorithms,' Gillespie concedes that there might be 'something impenetrable about algorithms.' Not only do algorithms work with information on a hard-to-comprehend scale, but they can also be 'deliberately obfuscated.' In fact, certain 'algorithms remain outside our grasp, and they are designed to be.' How can we critically scrutinize algorithms if we cannot even grasp them?

The fact that many socially relevant algorithms lack perspicuity is well known. Choosing to trust them has even been compared to an act of faith - maybe the first step in the rise of Harari's new religion of Dataism. Malte Ziewitz, a science studies scholar, detects a 'strange tension' between power and comprehension in the current debate on algorithms. On the one hand, algorithms are continuously invoked as entities that possess precise powers: they judge, regulate, choose, classify, discipline. On the other hand, they are described as strange, elusive, inscrutable entities. It has even been argued, Ziewitz points out, that they are 'virtually unstudiable.' What if the algorithms that should set us free of bias are in fact simply better at obscuring how they pull our strings?

Social scientists have decried the difficulties inherent in empirically studying algorithms, especially proprietary ones. This problem is normally framed in terms of 'secrecy,' a notion that implies strategic concealment. We need, however, a more general concept, like sociologist Jenna Burrell's 'opacity.'

The reasons why an algorithm is opaque can indeed be multiple and different in kind.

The algorithms considered in these discussions usually use datasets to produce classifications. Burrell's 'opacity' refers to the fact that an output of this sort rarely includes a concrete sense of the original dataset, or of how a given classification was crafted. Opacity can be the outcome of a deliberate choice to hide information. Legal scholar Frank Pasquale, a leading critic of algorithmic secrecy, has advocated for new transparency policies, targeting the kind of opacity designed to maintain and exploit an asymmetric distribution of information that benefits powerful social actors.

Algorithms can also be opaque as a result of technical complexity - not in terms of their code per se, but in terms of the structure of the software system within which they are embedded. This point is clearly stated in 'Auditing Algorithms: Research Methods for Detecting Discrimination on Internet Platforms,' a paper presented in 2014. In this paper, Christian Sandvig and his co-authors point out that 'algorithms differ from earlier processes of harmful discrimination (such as mortgage redlining) in a number of crucial ways.' One of these is that they are 'complicated packages of computer code crafted jointly by a team of engineers.' It follows that 'even given the specific details of an algorithm, at the normal level of complexity at which these systems operate an algorithm cannot be interpreted just by reading it [emphasis mine].' Grasping how an algorithm actually works requires understanding the entire software structure. Making datasets transparent may be relatively straightforward, but making complex algorithmic processes transparent is not. The ultimate functioning of the algorithm is likely to remain inaccessible to the outsider. The technical choices of certain expert groups can thus have an enormous impact on the collective, but it might be that neither the public nor regulators have the expertise to evaluate their responsibilities and actions.

Still, the kinds of algorithmic opacity described thus far are the outcome of deliberate human decisions. This means that, at least in principle, if the necessary resources and expertise were deployed, it should be possible to understand the inner workings of the resultant algorithms.

In certain cases, however, we have to deal with a more profound kind of opacity, one that does not depend on an information deficit, but rather on the limitations of our cognitive structures.

VII.

This unprecedented type of limitation is a consequence of rapid advances in machine learning. Machine learning is an approach to artificial intelligence that focuses on developing programs that can grow and change when exposed to new data. Typically, it is applied to solve practical problems that cannot be effectively tackled through traditional programming - 'by hand,' in programming parlance. Part of the programmer's role is thus handed over to the machine, which adjusts the algorithm on the basis of feedback from data rather than on explicitly formulated rules.

Think of anti-spam software, like the filter built into your email client. It learns to discern what is spam and what is not, and refines this ability through experience - i.e., your feedback. Now, even in the case of such a simple program, the machine can alter the algorithm to the point that its actual functioning becomes opaque. This means that it might be impossible to explain why the machine classified a particular item as spam or non-spam.
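
To give a rough sense of what 'learning from feedback' means here, the following sketch, in Python, implements a toy word-counting filter in the spirit of naive Bayes. It corresponds to no particular product; the point is only that every 'mark as spam' click changes the counts on which the next classification depends.

    from collections import defaultdict
    import math

    class ToySpamFilter:
        """Toy naive-Bayes-style filter: counts how often each word
        appears in messages the user has marked as spam or not-spam
        ('ham'), then scores new messages from those counts."""

        def __init__(self):
            self.word_counts = {"spam": defaultdict(int), "ham": defaultdict(int)}
            self.message_counts = {"spam": 0, "ham": 0}

        def feedback(self, text, label):
            """Called whenever the user marks a message as 'spam' or 'ham'."""
            self.message_counts[label] += 1
            for word in text.lower().split():
                self.word_counts[label][word] += 1

        def is_spam(self, text):
            """Score a new message; True if spam is the more likely label."""
            scores = {}
            total = sum(self.message_counts.values()) or 1
            for label in ("spam", "ham"):
                # log prior plus log likelihoods, with add-one smoothing
                score = math.log((self.message_counts[label] + 1) / (total + 2))
                vocab = len(self.word_counts[label]) + 1
                label_total = sum(self.word_counts[label].values())
                for word in text.lower().split():
                    count = self.word_counts[label].get(word, 0)
                    score += math.log((count + 1) / (label_total + vocab))
                scores[label] = score
            return scores["spam"] > scores["ham"]

    f = ToySpamFilter()
    f.feedback("win a free prize now", "spam")
    f.feedback("lunch at noon tomorrow", "ham")
    print(f.is_spam("claim your free prize"))   # True

Even in this transparent toy, the verdict on a new message emerges from accumulated counts rather than from a rule anyone wrote down; scale that up by several orders of magnitude and the opacity described below comes into view.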

The source of this impossibility is easily located. Human programming follows an overarching logic. For example, the programmer introduces explicit criteria to discriminate between spam and non-spam messages. If the program is modified by a machine, however, the situation changes. First, the perspicuity of the program ceases to be a priority. Its overall structure no longer needs to be surveyable by a human. Second, a machine can increase the scale and the complexity of the decision criteria enormously and - most importantly - it can combine them automatically in ways that do not follow any discernible logic.

Changes in scale and nonlinear interactions between features produce the kind of opacity that computer scientists refer to as the problem of interpretability. There is a trade-off, though. Through machine learning, an algorithm can become more accurate at doing what it is supposed to do than an entirely human-designed algorithm could ever be. Choices need to be made - not only between accuracy and interpretability, but also about what kind and degree of interpretability we should aim for. Like opacity, transparency is not a univocal term: we need to agree on what exposing the 'inner workings' of an algorithm actually means.
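
A minimal way to see the trade-off, assuming the scikit-learn library is available, is to fit a shallow decision tree, whose complete decision logic can be printed and read, alongside a large random forest on the same synthetic data, and compare their accuracy. This is an illustrative sketch, not a benchmark.

    # Illustrative only: a small, readable model vs. a larger, opaque one.
    from sklearn.datasets import make_classification
    from sklearn.model_selection import train_test_split
    from sklearn.tree import DecisionTreeClassifier, export_text
    from sklearn.ensemble import RandomForestClassifier

    X, y = make_classification(n_samples=2000, n_features=20, n_informative=8,
                               random_state=0)
    X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

    # A depth-2 tree: its entire decision logic fits in a few printed lines.
    tree = DecisionTreeClassifier(max_depth=2, random_state=0).fit(X_train, y_train)
    print(export_text(tree))
    print("shallow tree accuracy:", tree.score(X_test, y_test))

    # A 300-tree forest: typically more accurate, but with no
    # human-surveyable rule set to print out.
    forest = RandomForestClassifier(n_estimators=300, random_state=0).fit(X_train, y_train)
    print("random forest accuracy:", forest.score(X_test, y_test))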

The opacity produced by machine learning raises new challenges for those interested in studying how algorithms work and affect our lives. Recent debates on cases of algorithms that have 'learned' to discriminate among users based on their racial or gender profile are a concrete example of the kind of issues that social scientists will have to confront. What constraints should be introduced in such algorithms? What is the optimal trade-off between accuracy and 'fairness'?

VIII.

Our life is increasingly shaped by algorithmic processes, from the fluctuations of financial markets to facial recognition technology. Manichean arguments for or against digital algorithms are hardly relevant. Rather, we need to understand how algorithms embedded in widespread technologies are reshaping our societies. And we should imagine ways to open them up to public scrutiny - thus grounding shared practices of accountability, trust, and transparency.

This is essential for the simple reason that algorithms are not neutral. They are emblematic artifacts that shape our social interactions and social worlds. They open doors on possible futures. We need to understand their concrete effects - for example, the kinds of social stratification they reinforce. We need to imagine how they might work if they were designed and deployed differently, based on different priorities and agendas - and different visions of what our life should be like.

Algorithms are powerful world-makers. Whose world will they make?









