Smarter Together, to What Ends? A Review of AIQ
Education alone won’t dispel fear of artificial intelligence.
In 1979, sociologist Aaron Wildavsky surveyed his era’s technological sophistication and observed that the very technologies that provided freedom from want could spark new fears. “How extraordinary!” he wrote. “The richest, longest-lived, best-protected, most resourceful civilization, with the highest degree of insight into its own technology, is on its way to becoming the most frightened.”
While he was describing the anxiety surrounding nuclear energy and chemicals, those words could just as easily describe how many feel about artificial intelligence (AI) today. The dread has merely shifted from toxic spills and nuclear meltdowns to AI taking jobs and undermining human autonomy.
In their recent book AIQ: How People and Machines Are Smarter Together, data scientists Nick Polson and James Scott hope to provide an antidote to the anxiety around AI. As they put it: “After you’ve read our book, you’ll be able to decide for yourself whether you think these worries are credible.”
But unlike Wildavsky, who analyzed the risks of new technology, Polson and Scott offer education as their remedy for AI anxiety. Indeed, AIQ is an entertaining exploration of some of the core ideas, and a number of the key figures, in the development of AI as a suite of technologies. They begin where any discussion of AI should: by noting that AI today is just the application of relatively simple mathematical models, or algorithms, to big data sets. The bulk of the text details the history of these models’ development.
For example, Polson and Scott recount the story of Abraham Wald, who is best remembered for his work on survivorship bias. During World War II, Wald was part of a team of statisticians that examined how aircraft were damaged in combat. At the time, planners wanted to add protection to the parts that returned with the most damage, yet Wald correctly noted that only aircraft that had survived their missions were being assessed; those that had been shot down couldn’t be checked. His method, in effect an early recommender system, wouldn’t become known to the public until the 1980s, but it pushed the science forward. Wald estimated the vulnerability of aircraft parts he could not observe directly, in much the same way that Netflix’s algorithms recommend movies that people have never seen.
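To make the analogy concrete, here is a minimal sketch (not from the book, with made-up data) of user-based collaborative filtering, the family of techniques behind Netflix-style recommendations: a rating that was never observed is estimated from the ratings of similar users, just as Wald inferred the damage he could not see from the planes he could.

```python
# Illustrative sketch only: estimating unobserved ratings from observed
# ones. All names and numbers below are invented for the example.
import math

# Ratings on a 1-5 scale: user -> {movie: rating}.
ratings = {
    "ann":  {"Alien": 5, "Heat": 2, "Dune": 4},
    "bob":  {"Alien": 4, "Heat": 1, "Dune": 5},
    "cara": {"Alien": 5, "Heat": 1},  # cara has never seen "Dune"
}

def cosine(u, v):
    """Cosine similarity over the movies both users have rated."""
    shared = set(u) & set(v)
    if not shared:
        return 0.0
    dot = sum(u[m] * v[m] for m in shared)
    nu = math.sqrt(sum(u[m] ** 2 for m in shared))
    nv = math.sqrt(sum(v[m] ** 2 for m in shared))
    return dot / (nu * nv)

def predict(user, movie):
    """Similarity-weighted average of other users' ratings for `movie`."""
    num = den = 0.0
    for other, theirs in ratings.items():
        if other == user or movie not in theirs:
            continue
        w = cosine(ratings[user], theirs)
        num += w * theirs[movie]
        den += w
    return num / den if den else None

# Estimate cara's rating for a movie she has never seen.
print(round(predict("cara", "Dune"), 2))
```

Because cara’s observed tastes track ann’s and bob’s, the estimate for the unseen movie lands near their ratings; the unobserved entry is filled in from the observed ones.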
They also discuss Henrietta Leavitt, who discovered how to measure the distance between Earth and other galaxies using pulsating stars. It is through Leavitt’s method of applying a “series of visual concepts, chained together in a five-layer-deep hierarchy, to extract a useful feature from an image,” that the authors are able to illustrate the topic of deep learning. And John Craven’s work to find the USS Scorpion, a submarine that went missing in 1968, is used to explain how autonomous vehicles work.
Yet, the authors are explicit in keeping the most contentious topics off the table, even as they dance around them throughout the book. Near the beginning they ask the questions underlying popular concerns about artificial intelligence: “Will AI create a jobless world? Will machines make important decisions about your life, with zero accountability? Will the people who own the smartest robots end up owning the future?”
The call is met with a response a couple of lines later: “We should let you know up front that you won’t find the answers to these questions in our book, because we don’t know them. Like our students, we are ultimately optimistic about the future of AI, and we hope that by the end of the book you will share that optimism… We can teach you about AI, but we can’t tell you for sure what the future will bring.”
The book implicitly rests on an education-centric model that is endemic in Silicon Valley. A simple version of this folk theory runs thus: a lack of understanding about technology breeds apprehension. To counteract this suspicion, education is needed, as education about technology will lead to understanding and optimism. Armed with both, people will embrace new technologies and worry less about their effects.
Whether AIQ accomplishes the broader goal it sets out depends entirely on the nature of the risk that AI poses. If concerns about AI derive from its mysteriousness, then yes, education about these mathematical techniques can help reduce them.
But anxiety about AI isn’t solely a pedagogical problem that can be mitigated by better understanding. Sure, it helps to know what is going on inside the black box of algorithms, but most people don’t know how televisions work, and yet they don’t seem to view the small screen as an existential threat. Rather, AI concerns strike at the deeper issues of human dignity and technological control, which Polson and Scott address only in passing.
The most extreme conception of the AI revolution is the disappearance of all jobs. While this may seem outlandish, a subtler version envisages machines subsuming worker autonomy and dignity. Both belong to a broader class of worries about the relationship between people and machines, namely that people will increasingly work in the service of machines.
But it is worth asking: Why do these worries have legs when unemployment is historically low? The answer seems to be that the possibility of AI displacing jobs inherently undermines the dignity of work. One of the greatest benefits of work is contributing to something greater than oneself that validates one’s individual worth, yet many sense that AI will reduce people to replaceable parts serving not other people but machines.
Workplace dignity isn’t a new-age version of the Marxian theory of labor alienation. It involves, according to workplace sociologist Andrew Sayer, autonomy, dependence, seriousness, and trust between workers and employers. Whether or not organizations name dignity as such, workplaces that focus on it understand the inherent power dynamics of the workplace and avoid taking advantage of workers while giving them the freedom and trust to accomplish complex tasks.
In one study detailing how a hospital recovered from poor medical care, declining patient numbers, high staff turnover, and severe financial difficulties, researchers found that a shift in worker relations was an important component. Workers were trusted, granted autonomy, and recognized when they contributed to the organization. The emphasis on worker dignity helped build a sense of ownership and pride among employees.
AIQ provides the background knowledge to counter fears of AI while granting some legitimacy to the anxiety. Chapter 4 focuses on the difficulty of modeling spoken language, a field known as natural language processing, ending with the observation that “language models will become personalized; the machines around you will adapt to the way you speak, just as they adapt to your movie-watching preferences.” In other words, the machines will serve humans, and not vice versa. In Chapter 6, the authors examine innovative attempts to use AI in medicine while highlighting the limits of integrating data into the provision of medical services. The final chapter seems to set up a discussion of the relationship between humans and machines as it recounts the failures of Google Flu Trends, the 2010 Flash Crash, and the COMPAS recidivism risk model, which is used in sentencing hearings.
Opportunities abound for the authors to show that AI can be used to make better decisions in the service of human values, not as a replacement for them. Yet the authors don’t push back against techno-supremacy, one of the most worrying trends coming out of Silicon Valley, articulated most clearly in Wired editor Kevin Kelly’s book What Technology Wants. Techno-supremacy ascribes innate wants or goals to technologies and privileges them over human values and dignity.
AIQ tiptoes around this central concern with AI: the value struggle between people and machines. The book provides a solid introduction to these technologies and how they came about, and in doing so it lays the foundation for a useful response to these legitimate concerns. AI has tremendous potential to aid humans, as AIQ shows, but it must always be in service of people.