I have always wondered whether it is possible for engineers to program morality into robots. By robots, I mean machines defined as “artificial moral agents,” or AMAs: systems that can make explicit ethical judgments and justify them much as an adult human being would.

I had my doubts about such a possibility ever being realized, at least in my lifetime. After all, the concept of morality seemed too fuzzy and context-sensitive to be reduced to mathematical equations. Nevertheless, Oliver Curry, in his article “Morality as Cooperation,” offers a compelling rejoinder to those doubts.

Curry puts forward the idea of using game theory, specifically non-zero-sum games, to implement morality as cooperation in AMAs. This reminds me of a deeply engaging game theory exercise from a negotiation class I took at business school. The exercise paired a recruiter with a job applicant to negotiate a job offer. It was fascinating to see how what felt like emotional intuitions and personal choices translated into mathematical scores. The key take-away was that the pairs who cooperated to maximize mutual interests (win-win) enlarged the pie, and both parties came out ahead with greater financial and non-financial benefits than the pairs who failed to cooperate. Thus, I now see how mathematical game theory could be used to implement morality as cooperation in AMAs.
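To make that take-away concrete, here is a minimal sketch of the negotiation as a non-zero-sum game. The payoff numbers are my own hypothetical values, not from Curry or the class; the point is simply that mutual cooperation enlarges the total pie rather than dividing a fixed one.

```python
# Hypothetical payoffs for a recruiter/applicant negotiation.
# Each entry is (recruiter_payoff, applicant_payoff).
# Mutual cooperation (sharing information, trading across issues)
# creates value, so the joint payoff exceeds any other outcome:
# the game is non-zero-sum.
PAYOFFS = {
    ("cooperate", "cooperate"): (8, 8),   # win-win: the pie grows
    ("cooperate", "compete"):   (2, 6),
    ("compete",   "cooperate"): (6, 2),
    ("compete",   "compete"):   (3, 3),   # both haggle, value is lost
}

def joint_payoff(recruiter_move, applicant_move):
    r, a = PAYOFFS[(recruiter_move, applicant_move)]
    return r + a

for moves in PAYOFFS:
    print(moves, "-> joint payoff:", joint_payoff(*moves))
# (cooperate, cooperate) yields 16, more than any other pairing:
# cooperation literally increases the size of the pie.
```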

For example, Curry’s Periodic Table of Ethics, akin to the periodic table of elements in chemistry, shows that the study of morality is “theory driven and empirically tested [and] is simply another branch of science.” This explains why game theory can help us teach automated machines to be moral: it is “capable of making testable predictions about the nature of morality.” A game theory of morality as cooperation can mathematically classify behaviour that resolves cooperation problems as morally good, and behaviour that undermines cooperation as morally bad.
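One way to see how such a translation might work, sketched here under my own assumptions rather than Curry’s actual formalism, is to score strategies in a repeated cooperation problem such as the iterated Prisoner’s Dilemma. The strategies, payoffs, and the “morally good” threshold below are all illustrative.

```python
# A toy iterated Prisoner's Dilemma: an AMA could, in principle,
# label strategies by how well they resolve the cooperation problem.
# Payoffs, strategies, and the verdict threshold are assumptions.

PAYOFF = {("C", "C"): (3, 3), ("C", "D"): (0, 5),
          ("D", "C"): (5, 0), ("D", "D"): (1, 1)}

def always_cooperate(history):
    return "C"

def always_defect(history):
    return "D"

def tit_for_tat(history):
    return history[-1] if history else "C"  # copy opponent's last move

def play(strategy_a, strategy_b, rounds=50):
    hist_a, hist_b = [], []   # each side's record of the opponent's moves
    total = 0
    for _ in range(rounds):
        a, b = strategy_a(hist_a), strategy_b(hist_b)
        pa, pb = PAYOFF[(a, b)]
        total += pa + pb
        hist_a.append(b)
        hist_b.append(a)
    return total  # joint payoff: a proxy for how well cooperation went

for s in (always_cooperate, tit_for_tat, always_defect):
    joint = play(s, tit_for_tat)
    verdict = "morally good" if joint >= 250 else "morally bad"  # assumed cutoff
    print(f"{s.__name__} vs tit_for_tat: joint={joint} -> {verdict}")
```

With tit_for_tat as the partner, the two cooperative strategies each earn a joint payoff of 300 over 50 rounds, while always_defect earns only 103: the arithmetic itself delivers the moral verdict.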

A hybrid approach to teaching morals to AMAs can provide the blueprint for the mathematical algorithms engineers need, because morality as cooperation is simultaneously universal and diverse. In this regard, Curry’s ideas about the universal and diverse nature of morality as cooperation relate to Wendell Wallach and Colin Allen’s hybrid approach to designing effective AMAs, as set out in their book “Moral Machines.”

In terms of universality, Curry’s theory of morality as cooperation holds that “cooperative behaviours will be considered morally good in every human culture, at all times and in all places.” Here, the top-down approach to programming morality can be used. According to Wallach and Allen, this approach encodes explicit ethical decision-making rules that tell an AMA what is morally good and what is morally bad.
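A minimal sketch of what such a top-down rule check could look like, using rules and predicates I have invented for illustration (this is not Wallach and Allen’s implementation):

```python
# Top-down sketch: morally salient rules are written explicitly by the
# designer, and every candidate action is checked against them before
# the AMA acts. Rules and action fields here are illustrative.

UNIVERSAL_RULES = [
    # (description, predicate over a proposed action)
    ("do not harm others",      lambda a: not a.get("causes_harm", False)),
    ("keep your commitments",   lambda a: not a.get("breaks_promise", False)),
    ("reciprocate cooperation", lambda a: not a.get("exploits_partner", False)),
]

def is_permissible(action):
    """Return (verdict, violated_rules) from an explicit rule check."""
    violated = [name for name, ok in UNIVERSAL_RULES if not ok(action)]
    return (len(violated) == 0, violated)

print(is_permissible({"name": "share resources", "causes_harm": False}))
# -> (True, [])
print(is_permissible({"name": "defect", "exploits_partner": True}))
# -> (False, ['reciprocate cooperation'])
```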

In terms of diversity, Curry’s theory of morality as cooperation holds that “to the extent that different people and different societies face different portfolios of problems, different domains of morality will loom larger—diverse cultures will prioritize different moral values.” Here, the bottom-up approach to programming morality can be used. This approach allows for context-specific, ongoing reinforcement learning that enables AMAs to recognize patterns and build moral categories as they decide how to behave in a morally praiseworthy manner.
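And a minimal bottom-up sketch, in which the moral category is learned from feedback rather than hard-coded. The reward function is an assumption of mine: it simply pays cooperation more on average, so the agent gradually learns to prefer it. A hybrid AMA might then combine this learned preference with the explicit rule check sketched above.

```python
import random

# Bottom-up sketch: instead of hard-coding rules, the AMA learns action
# values from context-specific feedback. The rewards below are assumed;
# cooperation pays more on average, so the agent gradually builds the
# category "cooperate ~ praiseworthy" from experience alone.

ACTIONS = ["cooperate", "defect"]
q_values = {a: 0.0 for a in ACTIONS}   # learned value of each action
alpha, epsilon = 0.1, 0.2              # learning rate, exploration rate

def reward(action):
    # Assumed feedback signal from the social environment.
    return random.gauss(1.0, 0.3) if action == "cooperate" else random.gauss(0.2, 0.3)

for step in range(2000):
    # epsilon-greedy: mostly exploit the current best estimate, sometimes explore
    if random.random() < epsilon:
        action = random.choice(ACTIONS)
    else:
        action = max(q_values, key=q_values.get)
    # incremental update toward the observed reward
    q_values[action] += alpha * (reward(action) - q_values[action])

print(q_values)  # cooperate ends up with the higher learned value
```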

Curry concludes convincingly that it is indeed possible to program morality into robots: “Morality is no mystery. We have a theory. Morality is a collection of biological and cultural solutions to the problems of cooperation and conflict recurrent in human social life; and game theory reveals what those problems and solutions are. Morality as cooperation explains what morality is, where it comes from, how it works, and what it is for.” In other words, morality is not as fuzzy and context-sensitive as I thought it was. Morality, he argues, can be reduced to mathematical equations. Do you agree?