Popular media have recently reported a White House initiative asserting that companies have a "moral obligation" to limit the risks of their AI products. True enough, but the issues are far broader. At the core of the debate around AI (will it save us or destroy us?) are questions of values. Can we tell AI how to behave safely toward humans, even if it one day has a "mind of its own"? It is often said that AI algorithms should be "aligned with human values."