Machine Override: Artificial Intelligence and Human Agency

One concern that many share about training computers to perform certain tasks (like driving) is when, or whether, to allow a human to override the machine's decisions. I have encountered three recent discussions of this that serve as food for thought. First, the example that gets trotted out most frequently is a self-driving car that must decide between crashing itself (putting its human occupants at risk) and risking the lives of, say, the children on a school bus stalled in an intersection. In stark terms, the car is in a sense programmed to sacrifice its occupants in certain scenarios.
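To make those stark terms concrete, here is a deliberately crude, purely hypothetical sketch of the kind of hard-coded trade-off that phrase implies. The function, its inputs, and the naive 'minimise casualties' rule are my own illustrative assumptions and bear no relation to any real vehicle's software:

```python
def choose_maneuver(occupants_at_risk: int, bystanders_at_risk: int) -> str:
    """A crude, hypothetical utilitarian rule: swerve (endangering the
    occupants) whenever doing so would put fewer lives at risk."""
    if bystanders_at_risk > occupants_at_risk:
        return "swerve"  # sacrifice the occupants' safety
    return "brake"       # stay the course, protecting the occupants
```

Even this toy example shows how the ethical trade-off ends up encoded, somewhere, as an explicit rule that someone had to write.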

Second, addressing a related issue, researchers at Oxford and Google DeepMind recently published joint research (the paper 'Safely Interruptible Agents') suggesting ways in which a machine-learning agent could be programmed to allow a human to override it without the agent 'knowing' it, that is, without the agent altering its future behaviour on the basis of the human's intervention. Of course, this became the oft-used headline 'Google Creates AI Kill Switch'. But, theoretically, without this feature an AI could learn how to avoid being 'turned off' or otherwise overridden.
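The technical core of that paper is, roughly, that 'off-policy' learners such as Q-learning can be interrupted without the interruptions biasing what they learn, because the learning update never conditions on whether the human intervened. Here is a minimal sketch of that idea on a toy environment of my own invention; everything here (the environment, the interruption rule, the constants) is an illustrative assumption, not the paper's actual experimental setup:

```python
import random

# Hyperparameters for a tiny Q-learning agent (illustrative values).
ALPHA, GAMMA, EPSILON = 0.1, 0.9, 0.1
ACTIONS = [0, 1]  # 0 = step left, 1 = step right

class ToyChain:
    """Five states 0..4; reaching state 4 pays reward 1 and ends the episode."""
    def __init__(self):
        self.state = 0
    def step(self, action):
        delta = 1 if action == 1 else -1
        self.state = max(0, min(4, self.state + delta))
        done = self.state == 4
        return self.state, (1.0 if done else 0.0), done

Q = {(s, a): 0.0 for s in range(5) for a in ACTIONS}

def greedy(state):
    return max(ACTIONS, key=lambda a: Q[(state, a)])

def human_interrupt(state):
    # The human overseer forces 'left' at state 2, but only with
    # probability 0.5 -- probabilistic interruption, echoing the
    # paper's scheme of interrupting only some of the time.
    if state == 2 and random.random() < 0.5:
        return 0
    return None

for episode in range(500):
    env = ToyChain()
    state, done, steps = env.state, False, 0
    while not done and steps < 100:
        action = random.choice(ACTIONS) if random.random() < EPSILON else greedy(state)
        forced = human_interrupt(state)
        taken = forced if forced is not None else action
        next_state, reward, done = env.step(taken)
        # Off-policy target: max over next actions, regardless of what
        # the overseer forced. Because the update never conditions on
        # whether an interruption occurred, the agent receives no
        # learning signal from which to 'resist' being overridden.
        best_next = 0.0 if done else GAMMA * max(Q[(next_state, a)] for a in ACTIONS)
        Q[(state, taken)] += ALPHA * (reward + best_next - Q[(state, taken)])
        state, steps = next_state, steps + 1

print([greedy(s) for s in range(4)])  # learned policy still heads right: [1, 1, 1, 1]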

Third, Joi Ito, Director of MIT's Media Lab, recently reflected on a meeting between 'technologists, economists and European philosophers and theologians' in which they discussed 'The Future of Work in the Age of Artificial Intelligence' (initial summary here). One interesting example, discussed in a later post, was the possibility of using an AI to make legal decisions, either in place of or alongside a human judge. In that same post, Ito raises what he feels (and I agree) is a crucial issue:

How machines will take input from and be audited and controlled by the public, may be one of the most important areas that need to be developed in order to deploy artificial intelligence in decision making that might save lives and advance justice. This will most likely require making the tools of machine learning available to everyone, have a very open and inclusive dialog and redistribute the power that will come from advances in artificial intelligence, not just figure out ways to train it to appear ethical.

As an aside: as an academic at a university Centre for 'Digital Theology', I am especially interested in the ways in which philosophical and theological discourses are brought into this conversation, and I commend MIT's Media Lab for convening such a diverse panel.
