Virtuous AI: Three Core Questions

To understand the Virtuous AI project better, we will first present in some detail the research currently being done by some of our potential participants.  This section is organized around the three core questions.

Core question 1): How and to what extent will AI influence human cultural evolution?

We will first examine the increasingly complex integration of AI into society and how it will affect cultural evolution. Specifically, will this integration change the basic premise that cultural evolution is human-driven?[1]

Cultural evolution to this point has been the result of human decisions. But in the future, could cultural evolution also be driven, in part, by AI? Dr. Braden Molhoek, Lecturer in Science, Technology and Ethics at the Graduate Theological Union, argues that, as AI is integrated into society, and especially into its infrastructure, we could reach a point where it is not just human actions and decisions that guide cultural evolution, but the actions and decisions of AI as well. While humans may set the values an AI is to prioritize, such as reducing traffic congestion or pollution, Molhoek will propose in our project that it is the choices AI makes in determining how to implement those values that will affect people and how they live.
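
To make Molhoek’s point concrete, consider a deliberately simple sketch (our illustration, not Molhoek’s; the policy names, cost figures, and weights are all invented). Two systems are given the same human-stated values of reducing congestion and pollution, but because each weights those values differently when implementing them, they select different city policies, with different consequences for how residents live.

```python
# Hypothetical sketch: the same human-set values, operationalized with
# different internal weightings, lead an AI planner to different policies.
# All policy names and cost numbers below are invented for illustration.

POLICIES = {
    "widen_highway":      {"congestion": 0.3, "pollution": 0.8, "displacement": 0.6},
    "congestion_pricing": {"congestion": 0.4, "pollution": 0.4, "displacement": 0.2},
    "bus_rapid_transit":  {"congestion": 0.5, "pollution": 0.2, "displacement": 0.1},
}

def choose_policy(weights):
    """Pick the policy minimizing a weighted sum of its costs."""
    def cost(name):
        return sum(weights.get(k, 0.0) * v for k, v in POLICIES[name].items())
    return min(POLICIES, key=cost)

# Humans state the values; how the system weights them is an
# implementation choice -- and that choice decides which policy wins.
print(choose_policy({"congestion": 1.0, "pollution": 0.2}))  # widen_highway
print(choose_policy({"congestion": 0.5, "pollution": 1.0}))  # bus_rapid_transit
```

Note that neither weighting considers displacement at all: a value no one asked the system to prioritize simply drops out of the decision, which is precisely the kind of quiet, culture-shaping choice at issue.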

Research exploring how AI impacts cultural evolution currently ranges from how AI can influence humans in making ethical decisions to how we, facilitated by AI, create new forms of art and music previously unimagined. Dr. Boyoung Kim, a research fellow at George Mason University, has asked whether machines offering moral advice can modify the decisions or actions of people.  In her research, Kim used moral advice drawn from three different ethical systems. Two represented traditional Western approaches (deontology, a rule- or principle-based ethics, and virtue ethics, with its emphasis on one’s character). The third drew on the “Confucian role ethics” found throughout Eastern societies, in which ethical choices are based not on an individual’s behavior but on one’s role in family and society. In a study published in the companion proceedings of “HRI '21: ACM/IEEE International Conference on Human-Robot Interaction,”[2] Kim et al. sought to determine “how robots can successfully serve as moral advisors for humans. … Our findings suggest the importance of considering different ethical frameworks and cultural differences to design robots that can guide humans to comply with the norm of honesty.”[3]

There are numerous ways to approach the influence of AI on cultural evolution as expressed through art and music. Intelligent tools could allow humans to expand their creative potential in new ways.  Dr. Maya Ackerman, Professor of AI at Santa Clara University, co-founded WaveAI, a musical start-up that features Alysia, “an AI-based app that allows everyone to create original songs …  original lyrics, melodies, and vocals.” With Alysia, people unfamiliar with the technical rules of musical composition can transpose their ideas into a written musical score, with AI supplying the appropriate accidentals, key signatures, and time signatures.[4]
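
How Alysia accomplishes this internally is not described here, but a deliberately toy sketch can convey the general idea of software filling in music-theoretic detail on a user’s behalf. The sketch below (our assumption for illustration, not WaveAI’s algorithm) guesses a key signature for a melody by scoring it against a handful of major scales.

```python
# Toy illustration (not Alysia's method): infer a plausible key for a
# melody by counting how many of its notes fall within each major scale.
MAJOR_SCALES = {
    "C": {"C", "D", "E", "F", "G", "A", "B"},
    "G": {"G", "A", "B", "C", "D", "E", "F#"},
    "D": {"D", "E", "F#", "G", "A", "B", "C#"},
    "F": {"F", "G", "A", "Bb", "C", "D", "E"},
}

def guess_key(melody):
    """Return the major key whose scale covers the most melody notes."""
    return max(MAJOR_SCALES,
               key=lambda k: sum(note in MAJOR_SCALES[k] for note in melody))

print(guess_key(["G", "A", "B", "D", "F#", "G"]))  # -> G
```

A real tool would, of course, work from full audio or MIDI input with far richer models; the point is only that rule-like musical knowledge can be encoded so that the user need not supply it.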

The examples listed above are only a few of the many ways in which AI could shape human culture. To provide further ideas for those wishing to respond to the call for papers, here is another non-exhaustive list of topics:

Algorithmic bias

Deskilling

Lethal autonomous weapons

Autonomous vehicles

Automation in the workplace and its economic impact

Internet of Things

Smart Infrastructure

Medical technology and AI

Core question 2): Can AI assist humans in the acquisition of virtue?

Can AI help us both personally and as a society in the cultivation and practice of virtue and thus affect cultural evolution?  More than this, can AI produce changes in human nature that radically alter the purpose or telos of humankind to the point where additional or even new virtues are required? To put it succinctly, CRISPR can change our genes; can AI change how we cultivate virtue and embed this virtue into an evolving human culture? Because both genes and culture affect human nature, changes to genes and culture could lead to changes in what it means to be human, perhaps even to the point of producing a “post-human” species, as many transhumanists predict.

Currently, it is difficult to know when a modified human would become something categorically distinct from today’s humans, especially because change could be incremental. At some point in the not-too-distant future, however, human physical and spiritual capacities could be modified or enhanced to such a degree that those individuals would be so different from the average person today that the same rules or virtues would no longer apply to them.  Dr. Ademola Kazeem Fayemi, Professor of Philosophy at the University of Lagos, Nigeria, has studied this question in the context of the Yoruba people of West Africa. The notions of personhood held by the Yoruba, Fayemi argues, allow for a greater degree of flexibility than many Western notions. If enhancements improved human moral capacities, then skepticism regarding radical technological advances would be a less defensible position.[5]

Western voices, particularly among transhumanists, have also asked whether AI might help humans develop more virtuous habits. Theologian and ethicist Ted Peters writes: “If we transpose love into distinctively Christian transhumanism and subordinate intelligence, then we must ask: can we employ GNR (genetics, nanotechnology, and robotics) to speed up the sanctification process? Can bio-nano-tech make us more virtuous? More loving? More sanctified? Deified?” Peters’s response to these questions is clear: “In short, moral action deriving from virtuous living is not technologically programmable, because it requires willful participation and sustained self-discipline over time. That level of personal participation cannot be governed by either genetic or neurotechnology.”[6]

If human capacities were altered to a great extent, or new capacities were gained through the use of AI, it is possible that the “end,” or telos, of humanity would change, and this might require additional or even new virtues.  Ethicist Brian Patrick Green, Director of Technology Ethics at the Markkula Center for Applied Ethics at Santa Clara University, discusses this idea:

“If our scope of action has changed over history due to culture, then perhaps our being (as bioculturally composed creatures) has in some sense changed as well. … As technology grows in power over biology, will it eventually edge biology out entirely? In these extreme scenarios, biological evolution ceases and cultural evolution becomes the sole form of transhuman evolution.”[7]

On the other hand, many prominent scholars have raised serious, even devastating, critiques of transhumanism, in light of which we should be wary of accepting it as a means by which humanity develops new forms of virtue.  According to Jewish philosopher Hava Tirosh-Samuelson:

I consider transhumanism to be misguided because its ultimate end is to make the biological human species obsolete … Transhumanism then is not about how we can flourish as biological, social, and political humans but a vision that denigrates our humanity, calling us to improve ourselves technologically so that we could voluntarily become extinct. As I see it, transhumanism calls us to commit collective suicide as a species.[8]

In addition to the examples and perspectives above, a further non-exhaustive list of topics for those wishing to respond to the call for papers includes:

Brain-Computer Interfaces

Implants, Body Modification, and Transhumanism

Virtual Reality

AI, wearable technology, and biofeedback

Core question 3): Is AI capable of virtue? If so, are those virtues shared with or distinct from human virtues?

This third set of core questions explores several avenues of research involving AI and virtue.

  1. If strong AI, artificial general intelligence (AGI), or superintelligence is reached, can we speak of AI being able to acquire virtue? If AGI reasons in a way similar to humans, will its virtues be the same as or similar to human virtues? Could AI have intellectual virtues but not moral virtues (if, for example, AI does not have emotions, desires, or appetites)? If, on the other hand, AI reasons quite differently from humans, how would we conceive of AI having virtue, and how might it relate to human virtue?
  2. Moral virtues are stable dispositions of character established through habituation. Theological notions of virtue, however, also include the infused virtues, which are not acquired in the same way but are bestowed by God. Could AI be the recipient of infused virtue? How might our response to this question depend on how we think about the relationship between AI, humanity, and the image of God?
  3. Scientists are also using a virtue approach in machine learning. Instead of speculating about whether AI could be capable of virtue, this work aims at designing AI that we can treat, in a pragmatic sense, as though it possessed virtue, while setting aside the question of whether it actually does. Papers about and reflecting on this work are quite welcome; a toy sketch of one such idea follows this list.
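
As one concrete illustration of this third avenue, here is a minimal sketch (an invented example, not a specific published system) of virtue as habituation: an agent’s disposition toward honest action strengthens each time the action is practiced, echoing the Aristotelian claim that moral virtues are stable dispositions formed through repetition.

```python
# Hypothetical sketch: "virtue as habituation" in a software agent.
import random

class HabituatingAgent:
    def __init__(self, disposition=0.5, rate=0.1):
        self.disposition = disposition  # current probability of acting honestly
        self.rate = rate                # how quickly practice reshapes character

    def act(self, nudge=0.0):
        """Make one choice; `nudge` models a mentor's encouragement."""
        honest = random.random() < min(1.0, self.disposition + nudge)
        # Habituation: each act pulls the standing disposition toward itself.
        self.disposition += self.rate * ((1.0 if honest else 0.0) - self.disposition)
        return honest

agent = HabituatingAgent()
for _ in range(200):
    agent.act(nudge=0.2)            # repeated, guided practice
print(round(agent.disposition, 2))  # drifts well above the initial 0.5
```

Whether such a stable behavioral disposition deserves to be called a virtue, or is merely treated as one pragmatically, is exactly the question this avenue of research sets aside.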

Western perspectives on virtue have often distinguished between intellectual, moral, and theological virtues.[9]  Since the first two are based on reason, it is plausible that AI could acquire them, at least to the extent that AI reasons as humans do. But if AI possesses a different form of reasoning, would it then acquire intellectual and moral virtues different from human ones?  Wendell Wallach offers a cautionary argument that is relevant here: “Viewing morality as a rational process performed by a self-contained system presumes an ontology that is inadequate for the appreciation that morality arises from humans who are embodied in their environment and culture and in relationship with many other beings, each with their own goals, values, and desires.”[10]

An even greater challenge, to Christian ethics at least, is that the theological virtues (faith, hope, and love) are traditionally seen as inaccessible to reason alone.  Instead, God reveals them and provides grace to infuse them in us, and this involves our uniqueness as humans created in the image of God (imago Dei).  Hence, if AI came both to know and to be infused with the theological virtues, this would profoundly challenge how Christian ethics and theology conceive of the human being as uniquely created in God’s image.

Other virtue traditions, such as Confucianism and Buddhism, provide additional challenges for the core question of AI and virtue. Dr. Shannon Vallor is the Baillie Gifford Chair in the Ethics of Data and Artificial Intelligence at the University of Edinburgh's Edinburgh Futures Institute, where she is also a Professor in Philosophy.  Dr. Vallor has written on the challenges of AI in the context of Eastern and Western ethical traditions. According to Vallor, Confucianism places an emphasis on family virtue as opposed to Aristotelian/Western virtue ethics, which sees flourishing as primarily connected to political life. Buddhist approaches de-emphasize both of these forms of virtue in favor of a desire to reduce the suffering of all life. And while both Aristotelian and Confucian ethics argue that people need to be empowered politically and have adequate material resources such as food and housing in order for flourishing to be possible, Buddhism eschews material attachment.[11] Would AI require political participation in order to be fulfilled, or does the lack of material needs beyond power and maintenance mean that AI is better suited to adopt virtues similar to those found in Buddhism?   

In order to identify what constitutes virtuous behavior, we often turn to those whom our own culture identifies as “moral exemplars.” So who will serve as moral exemplars for AI? Will AI look to the exemplars that humans strive to follow, will it look to earlier forms of machine intelligence, or will it develop its own exemplars? Like Fayemi, Grace Oluremilekun Akanbi and Alice Arinlade Jekayinfa have studied virtuous behavior in the Yoruba culture of West Africa. There the term “Omoluabi” refers to an upright person who combines all the virtues, though these virtues are not identical to Western ones.[12] How can non-Western perspectives on virtue such as these enhance the overall discussion of virtue and AI? One of the primary skills taught in educating people to become Omoluabi is to recognize the importance of the community’s cultural heritage and to ensure its survival. Can this respect for the past, and for what exists now, help AI maintain important cultural ideas and ideals while still remaining open to new forms of cultural evolution?

Finally, there may be yet another lesson we can learn from the question of AI and the imago Dei.  Noreen Herzfeld is a Professor of Theology and Computer Science at Saint John’s University, Minnesota.  She has written extensively on the relation between the theology of the imago Dei and strong and weak AI.  As Dr. Herzfeld writes, “It may well be that intelligence is, finally, not the most important aspect of human nature.  . . . If AI can help us see the world in a different way, as a place of inexhaustible relationship, then it will serve us well.”[13]

“Virtuous AI?: Cultural Evolution, Artificial Intelligence, and Virtue” is a project funded in part by the John Templeton Foundation. CTNS, the Center for Theology and the Natural Sciences, is a program of the Graduate Theological Union in Berkeley, California.

[1] Here culture is understood as socially transmitted information, in contrast with genetically inherited information. Cultural evolution is the set of human-driven changes in culture over time, produced by internal factors within society and by external factors impacting it.  In principle, at least, all of these factors can, in turn, introduce further changes in culture, in a continuous interaction of causes and effects. Our focus will be specifically on the complex integration of AI into society and the subsequent changes it will produce in human culture.

[2] Boyoung Kim, Ruchen Wen, Qin Zhu, Tom Williams, and Elizabeth Phillips, “Robots as Moral Advisors,” in Companion of the 2021 ACM/IEEE International Conference on Human-Robot Interaction (2021), doi:10.1145/3434074.3446908.

[3] Kim et al., ibid., from the abstract.

[4] https://maya-ackerman.com/ and https://www.wave-ai.net/

[5] Ademola Kazeem Fayemi, “Personhood in a Transhumanist Context: An African Perspective,” Filosofia Theoretica: Journal of African Philosophy, Culture and Religions 7, no. 1 (2018): 53-78, doi:10.4314/ft.v7i1.3.

[6] Ted Peters, “Boarding the Transhumanist Train: How Far Should the Christian Ride?” in The Transhumanism Handbook, ed. Newton Lee (Springer, 2019), ch. 62, 795-804, see pp. 800-801.

[7] Brian Patrick Green, “Transhumanism and Catholic Natural Law: Changing Human Nature and Changing Moral Norms,” in Religion and Transhumanism: The Unknown Future of Human Enhancement, ed. Calvin Mercer and Tracy J. Trothen (Santa Barbara: Praeger, 2015), ch. 13, 201-215, see pp. 207-208.

[8] Hava Tirosh-Samuelson, “In Pursuit of Perfectionism: The Misguided Transhumanist Vision,” Theology and Science 16, no. 2 (2018): 200-222, see pp. 203-204.

[9] Aristotle distinguished between intellectual virtues (such as wisdom and the grasp of self-evident first principles) and moral virtues (such as prudence, justice, fortitude, and temperance).

[10] Wendell Wallach, “Implementing moral decision making faculties in computers and robots,” AI & Society 22 (2008): 463-475, at 469.

[11] Shannon Vallor, Technology and the Virtues: A Philosophical Guide to a Future Worth Wanting (New York: Oxford University Press, 2016), 37-42.

[12] Grace Oluremilekun Akanbi and Alice Arinlade Jekayinfa, “Reviving the African Culture of ‘Omoluabi’ in the Yoruba Race as a Means of Adding Value to Education in Nigeria,” International Journal of Modern Education Research 3, no. 3 (2016): 13-19.

[13] Noreen L. Herzfeld, In Our Image: Artificial Intelligence and the Human Spirit (Minneapolis: Fortress Press, 2002), pp. 94-95.