How to Be Good: Why You Can’t Teach Human Values to Artificial Intelligence

April 2016: Read article in Slate.

Artificially intelligent machines will soon face situations that require values-based decisions. For example, if a self-driving car must choose between protecting its passengers and protecting pedestrians, its programmers must give it a definite answer. The question is not whether machines can be made to obey human values but which humans ought to decide those values. Ultimately, what gives Western, well-off, white male cisgender scientists the right to determine how a machine encodes and develops human values?