In Baltimore and Philadelphia, software is already being used to predict which prisoners will reoffend if released. The software draws on a crime database and on variables including geographic location, type of crime previously committed, and the prisoner’s age at the previous offence. According to a report in Wired in January this year, ‘The software aims to replace the judgments parole officers already make based on a parolee’s criminal record.’ Outsourcing this kind of moral judgment, where a person’s liberty is at stake, understandably makes some people uncomfortable. First, we don’t yet know whether the system is more accurate than humans. Second, even if it is more accurate but less than completely accurate, it will inevitably produce false positives, resulting in the continuing incarceration of people who wouldn’t have reoffended. Such false positives undoubtedly occur, too, in the present system of human judgment, but at least we might feel that we can hold those making the decisions responsible. How do you hold an algorithm responsible?
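To see the shape of the judgment being outsourced, consider a deliberately crude sketch of such a system. This is not the Baltimore or Philadelphia software, whose internals are not public; the features, weights and threshold below are invented purely for illustration:

```python
# A toy risk score over the kinds of variables named above. The real
# parole software is proprietary; these features, weights and the
# threshold are invented for the sake of the example.

RISK_WEIGHTS = {
    "violent_prior": 2.0,     # type of crime previously committed
    "high_crime_area": 1.0,   # geographic location
    "young_at_offence": 1.5,  # age of prisoner at previous offence
}

def risk_score(parolee):
    """Sum the weights of whichever risk factors apply to this parolee."""
    return sum(w for factor, w in RISK_WEIGHTS.items() if parolee.get(factor))

def flag_as_likely_reoffender(parolee, threshold=2.5):
    """Scores above the threshold are flagged; the threshold itself is arbitrary."""
    return risk_score(parolee) > threshold

print(flag_as_likely_reoffender({"violent_prior": True, "young_at_offence": True}))  # True
print(flag_as_likely_reoffender({"high_crime_area": True}))                          # False
```

Even in this toy form, the moral problem is visible: somebody chooses the weights and the threshold, and that choice fixes how many false positives the system will produce.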
Still more science-fictional are recent reports claiming that brain scans might be able to predict recidivism by themselves. According to a press release for the research, conducted by the American non-profit organisation the Mind Research Network, ‘inmates with relatively low anterior cingulate activity were twice as likely to reoffend than inmates with high brain activity in this region’. Twice as likely, of course, is not certain. But imagine, for the sake of argument, that eventually a 100 per cent correlation could be determined between certain brain states and future recidivism. Would it then be acceptable to deny people their freedom on such an algorithmic basis? If we answer yes, we are giving our blessing to something even more nebulous than thoughtcrime. Call it ‘unconscious brain-state crime’. In a different context, such algorithm-driven diagnosis could be used positively: according to one recent study at Duke University in North Carolina, there might be a neural signature for psychopathy, which the researchers at the laboratory of neurogenetics suggest could be used to devise better treatments. But to rely on such an algorithm for predicting recidivism is to accept that people should be locked up simply on the basis of facts about their physiology.
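The distance between ‘twice as likely’ and certainty is easy to make concrete. The base rates below are invented; the point is only the arithmetic:

```python
# Invented base rates, for illustration only: suppose 30 per cent of
# high-activity inmates reoffend, and low-activity inmates are twice
# as likely to, i.e. 60 per cent.
high_activity_rate = 0.30
low_activity_rate = 2 * high_activity_rate

group = 100  # low-activity inmates denied release on the scan alone
false_positives = group * (1 - low_activity_rate)

print(f"Of {group} inmates held, {false_positives:.0f} would never have reoffended.")
# Of 100 inmates held, 40 would never have reoffended.
```

On these numbers, four in ten of those denied release would be punished for an offence that exists only as a statistical projection.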
If we erect algorithms as our ultimate judges and arbiters, we face dangers not only in law enforcement but also in culture. In the latter realm, the potential unintended consequences are not as serious as depriving an innocent person of liberty, but they might still be regrettable. For if they become very popular, algorithmic systems could end up destroying what they feed on.
In the early days of Amazon, the company employed a panel of book critics, whose job was to recommend books to customers. When Amazon developed its algorithmic recommendation engine — an automated system based on data about what others had bought — sales shot up. So Amazon sacked the humans. Not many people are likely to weep hot tears over a few unemployed literary critics, but there still seems room to ask whether there is a difference between recommendations that lead to more sales, and recommendations that are better according to some other criterion — expanding readers’ horizons, for example, by introducing them to things they would never otherwise have tried. It goes without saying that, from Amazon’s point of view, ‘better’ is defined as ‘drives more sales’, but we might not all agree.
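Amazon has never published that engine, but the idea the text describes, recommending whatever other buyers of the same book also bought, fits in a few lines; the purchase histories here are invented:

```python
from collections import Counter

# Invented purchase histories: each set is one customer's basket.
purchases = [
    {"Moby-Dick", "White Noise"},
    {"Moby-Dick", "White Noise", "Infinite Jest"},
    {"Moby-Dick", "Infinite Jest"},
]

def recommend(item, k=2):
    """Rank other items by how often they were bought alongside `item`."""
    co_bought = Counter()
    for basket in purchases:
        if item in basket:
            co_bought.update(basket - {item})
    return [other for other, _ in co_bought.most_common(k)]

print(recommend("Moby-Dick"))  # ['White Noise', 'Infinite Jest']
```

Notice what the ranking rewards: resemblance to past sales, and nothing else. A recommendation that broadens a reader’s horizons cannot even be expressed in these terms.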
Algorithmic recommendation engines now exist not only for books, films and music but also for articles on the internet. There is so much out there that even the most popular human ‘curators’ cannot possibly keep on top of all of it. So what’s wrong with letting the bots have a go? Viktor Mayer-Schönberger is professor of internet governance and regulation at Oxford University; Kenneth Cukier is the data editor of The Economist. In their book Big Data (2013) — which also calls for algorithmic auditors — they sing the praises of one Californian company, Prismatic, that, in their description, ‘aggregates and ranks content from across the Web on the basis of text analysis, user preferences, social-network-related popularity, and big-data analytics’. In this way, the authors claim, the company is able to ‘tell the world what it ought to pay attention to better than the editors of The New York Times’. We might happily agree — so long as we concur with the implied judgment that what is most popular on the internet at any given time is what is most worth reading. Aficionados of listicles, spats between technology theorists, and cat-based modes of pageview trolling do not perhaps constitute the entire global reading audience.
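That quoted description maps readily onto a weighted scoring function. What follows is a guess at the shape of such a ranker, not Prismatic’s actual system; the signals and the weights are assumptions:

```python
# Hypothetical signals per article, each already normalised to [0, 1]:
#   shares     - social-network popularity
#   text_match - text-analysis fit with topics the user reads
#   affinity   - stored user preference for the source
articles = [
    {"title": "17 cats who regret everything",
     "shares": 0.9, "text_match": 0.2, "affinity": 0.3},
    {"title": "A long report on prison reform",
     "shares": 0.1, "text_match": 0.8, "affinity": 0.6},
]

WEIGHTS = {"shares": 0.6, "text_match": 0.25, "affinity": 0.15}  # invented

def score(article):
    """Weighted sum of the signals: the ranking is only as good as the weights."""
    return sum(WEIGHTS[s] * article[s] for s in WEIGHTS)

for a in sorted(articles, key=score, reverse=True):
    print(f"{score(a):.2f}  {a['title']}")
# The listicle outranks the report because popularity carries the most weight.
```

Weight popularity that heavily and the listicle wins every time; ‘what the world ought to pay attention to’ turns out to mean whatever the weights say.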
So-called ‘aggregators’ — websites, such as the Huffington Post, that reproduce portions of articles from other media organisations — also deploy algorithms alongside human judgment to determine what to push under the reader’s nose. ‘The data,’ Mayer-Schönberger and Cukier explain admiringly, ‘can reveal what people want to read about better than the instincts of seasoned journalists’. This is true, of course, only if you believe that the job of a journalist is just to give the public what it already thinks it wants to read. Some, such as Cass Sunstein, the political theorist and Harvard professor of law, have long worried about the online ‘echo chamber’ phenomenon, in which people read only that which reinforces their currently held views. Improved algorithms seem destined to amplify such effects.
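The echo-chamber worry is, at bottom, a feedback loop: the reader clicks what already suits her, the aggregator learns from the clicks, and the feed drifts further in the same direction. A minimal simulation, with invented probabilities, shows the drift:

```python
import random

# Invented numbers: the feed starts evenly split between two viewpoints,
# and the reader clicks items from her own side 80 per cent of the time.
weights = {"left": 1.0, "right": 1.0}
own_side = "left"

for _ in range(500):
    shown = random.choices(list(weights), list(weights.values()))[0]
    clicked = random.random() < (0.8 if shown == own_side else 0.2)
    if clicked:
        weights[shown] += 0.1  # the aggregator learns from the click

total = sum(weights.values())
print({side: round(w / total, 2) for side, w in weights.items()})
# Typically prints something like {'left': 0.8, 'right': 0.2}.
```

Nothing in the loop ever penalises sameness, so each click makes the next contrary view a little less likely ever to appear.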