Can the Relationship Between Empathy and Trust Explain Our Distrust in AI?

Will Kidder received his PhD from the University at Albany, SUNY, and has taught philosophy at Hamilton College and Siena College. He works in moral psychology and aesthetics, focusing on empathy and moral imagination.

A post by Will Kidder

You trust someone to repay a loan, to give you honest and considerate advice, to remain faithful in a relationship, or simply to pick you up at the airport. How do you know whether they will honor your trust or make a fool of you? Do you need to “get inside their head” and imagine their perspective? Do you need to believe they have imagined yours?

In what follows, I would like to briefly outline a role for empathy, which I take to require the imaginative simulation of other perspectives, in the assessment of trustworthiness. I will also explore how this relationship between empathy and trust might explain our deep-seated distrust of AI-based decisions, particularly when those decisions involve an AI’s assessment of a human being’s trustworthiness, as in parole decisions and credit scoring.

I argue that empathy impacts the assessment of trustworthiness in two ways. First, empathy allows us to discover a potential trustee’s motives and assess whether those motives in fact justify trust. Empathy helps gather evidence relevant to trustworthiness. Second, a potential trustee can exhibit trustworthiness by making an effort to empathetically imagine the trustor’s perspective. Empathy can serve as evidence of trustworthiness.

Empathy as Evidence-Gathering Tool and Evidence

According to motivation-based accounts of trust, determining whether someone is trustworthy involves more than merely assessing the bare facts of their competence; we must also understand their motivations. Depending on the account, trustworthiness may involve, for example, motivation to fulfill a commitment (Hawley, 2014), an appreciation that the trustor is counting on us (Jones, 2012), or an interest in maintaining a relationship with the trustor (Hardin, 2002). If trust is motivation-based, then we will need to be able to understand the motivations of those whose trustworthiness we are assessing.

While empathy is not the only means of understanding others’ motivations, it is a particularly powerful means of doing so. This is perhaps best felt in imagining empathy’s absence. Suppose you are deciding whether a potential trustee should be trusted to pay back a loan. Now suppose you are unable to imagine what it is like to be that person. You can’t imagine their attitudes towards you. Trying to simulate these aspects of their perspective leaves you cold. Compare this to a situation in which you empathetically simulate the other’s feelings of respect and good will towards you. In which situation, ceteris paribus, are you more likely to trust? Intuitively, it seems that the added motivational evidence discovered via empathy plays a role in assessing trustworthiness in such a context.

This evidence-gathering role is not the only way empathy impacts trust. The trustee’s empathetic capacity can serve as relevant evidence of their trustworthiness. That is, when we assess whether someone is trustworthy, we consider whether they are in fact able and willing to imagine things from our perspective. We want the trustee to understand the nuances of our perspective, to understand how we value what we are entrusting them to do, and to see what acting appropriately would look like from our point of view.

Again, imagine the absence of this evidence when assessing trustworthiness. Suppose you are considering whether to trust a friend’s advice on an important career or relationship decision, but your friend exhibits no signs of understanding your career or relationship from your perspective. Their advice seems grounded in platitudes or in their own values and experiences, yet the question at hand is what you, with your particular experiences and values, should do. On the other hand, if your friend can empathetically engage with your experiences and values from an other-oriented, rather than self-oriented, perspective (Coplan, 2011), they provide evidence that they have better understood the particulars of the domain of trust. They have provided evidence that they can be trusted to offer the sort of advice you are looking for: advice that is considerate of your values and goals.

Algorithm Aversion and Empathy Deficits

The hypothesis that empathy serves as both an evidence-gathering tool for a trustor and as evidence of trustworthiness for a trustee could help explain a puzzle that has emerged as AI has become more competent and more widely implemented in high-stakes domains: why is it that we tend to distrust AI, even when we have evidence of its competence?

This phenomenon, known as “algorithm aversion” (Dietvorst, Simmons, and Massey, 2015), is often explained in terms of the opacity of the machine-learning algorithms we are asked to trust. We cannot see how these algorithms arrive at decisions, and thus we are hesitant to trust them, even if they have a track record of success. In addition, there is legitimate concern that using historical data to train AI to assess risk in areas such as parole eligibility and creditworthiness ingrains historical biases in these systems. Without a means of understanding exactly how an algorithm’s inputs are valued, we are wary of trusting that it has not absorbed human biases in the valuation of factors such as race (Chander, 2017).
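To make this worry concrete, here is a minimal, purely illustrative sketch of the mechanism. Everything in it is invented for illustration; the feature names (income, zip_code) and numbers come from no study cited here. A model trained on biased historical loan decisions never sees the protected attribute, yet it absorbs the bias through a correlated proxy:

```python
# Hypothetical illustration: a model trained on biased historical decisions
# can absorb that bias through a proxy feature, even though the protected
# attribute itself is never given to the model.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
n = 10_000

group = rng.integers(0, 2, n)                 # protected attribute (hidden from the model)
income = rng.normal(50 + 10 * group, 15, n)   # legitimate feature
zip_code = group + rng.normal(0, 0.3, n)      # proxy correlated with the group

# Historical approvals were biased against group 0 over and above income.
approved = (income + 20 * group + rng.normal(0, 10, n)) > 60

# Train only on income and the proxy -- "blind" to the group on its face.
X = np.column_stack([income, zip_code])
model = LogisticRegression(max_iter=1000).fit(X, approved)

# The proxy carries a large positive weight: the historical bias has been
# absorbed, but nothing in the fitted model announces that fact.
print(dict(zip(["income", "zip_code"], model.coef_[0].round(2))))
```

Even in this deliberately transparent toy case, the bias is visible only because we know where to look; in the deep, non-linear models actually used for credit and parole risk scoring, how the inputs are valued is far harder to inspect, which is precisely the opacity worry.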

An empathy-based account of trust provides an intuitive explanation for this sort of distrust in AI. Empathetically imagining the perspective of other humans makes them less opaque, and thus potentially easier to trust. By contrast, we cannot imaginatively place ourselves in the perspective of an algorithm to understand how and why it does what it does. Machine-learning algorithms remain impenetrable to the empathetic imagination that allows humans to better understand one another’s motivations when assessing trustworthiness.

But opacity is not the only cause of our distrust of algorithms. We also distrust the rigid nature of algorithmic decision making, particularly in morally salient contexts. For example, Jauernig, Uhl, and Walkowitz (2022) find that participants preferred human moral judgments made with discretion over algorithmic judgments that rigidly applied transparent, human-created fairness principles. Distrust of AI arises not only because we lack insight into its decision-making process, but also because we fear AI lacks the empathetic capacity to gain insight into the particulars of our situation, insight that could lead it to exercise discretion when it matters most.

We are thus left with two forms of distrust in AI that correspond to the two roles of empathy in assessing trustworthiness: (1) our aversion to the opacity of algorithms can be explained by our inability to use empathy as an evidence-gathering tool when assessing the trustworthiness of AI, and (2) our aversion to the rigidity of algorithmic decisions can be explained in terms of AI’s inability to present evidence of empathetic capacity, evidence that is often crucial in determining whether we trust that a judgment has appropriately considered the unique aspects of our situation.

This last point, that there is a sense in which we don’t feel empathetically “heard” by AI, is worth emphasizing, particularly as it relates to how we interpret the judgments of those we distrust. As D’Cruz (2019) and others (Jones, 2013; McGeer, 2002; Govier, 1992) have argued, trust and distrust tend to be self-perpetuating and self-confirming. Distrust can blind us to legitimate evidence of trustworthiness. And as D’Cruz argues, if we feel that others distrust us, we will be less likely to trust them in return. So, when an AI judges us to be untrustworthy, for example by assigning a low credit score, we don’t trust this judgment. We take it that the AI has not appreciated how our case is different, that its lack of empathy has led it to treat us as a statistic in a rule-based procedure rather than a being with trust-relevant motivations and experiences that are better understood through empathy.

Conclusion

Of course, this is only a brief sketch of a possible connection between empathy, trust, and algorithm aversion, but I believe pursuing this connection can draw out some interesting questions about how we should approach the calibration of trust.

I do not suggest that more empathy or more trust should always be the goal, or that empathy alone is sufficient to establish well-calibrated trust. A con artist can empathetically gather evidence or signal empathy to manipulate trust. And if empathy calibrates trust, then we ought to worry about our propensity for biased empathy (Bloom, 2016; Prinz, 2011) leading to a biased calibration of trust.

Furthermore, my account suggests that if we want to encourage trust in AI, we ought to program AI to appear more empathetic. This avenue is already being explored, for example, with empathetic chatbots (Liu and Sundar, 2018). But whether we should encourage such trust in AI is a complicated, open question that must consider context and the competence of the system in question.

Ultimately, what I have argued here is that empathy plays a role in calibrating trust. The correspondence between our extreme distrust of AI and AI’s extreme empathy deficits speaks to empathy’s involvement in cultivating trust more generally. Thus, accounts of the appropriate calibration of trust, whether in human beings or in AI, should consider the role that empathy, or the lack thereof, plays in such calibration.


References

Bloom, Paul (2016). Against Empathy: The Case for Rational Compassion. New York, NY: HarperCollins.

Chander, Anupam (2017). “The Racist Algorithm?” Michigan Law Review, 115: 1023-1045.

Coplan, Amy (2011). “Will the Real Empathy Please Stand Up? A Case for a Narrow Conceptualization.” Southern Journal of Philosophy, 49: 40-65.

D’Cruz, Jason (2019). “Humble Trust.” Philosophical Studies, 176(4): 933-953.

Dietvorst, Berkeley J., Simmons, Joseph P., & Massey, Cade (2015). “Algorithm Aversion: People Erroneously Avoid Algorithms After Seeing Them Err.” Journal of Experimental Psychology: General, 144(1): 114-126.

Govier, Trudy (1992). “Distrust as a Practical Problem.” Journal of Social Philosophy, 23(1): 52-63.

Hardin, Russell (2002). Trust and Trustworthiness. New York, NY: Russell Sage Foundation.

Hawley, Katherine (2014). “Trust, Distrust and Commitment.” Noûs, 48(1): 1-20.

Jauernig, Johanna, Uhl, Matthias, & Walkowitz, Gari (2022). “People Prefer Moral Discretion to Algorithms: Algorithm Aversion Beyond Intransparency.” Philosophy & Technology, 35(2).

Jones, Karen (2012). “Trustworthiness.” Ethics, 123(1): 61-85. 

Jones, Karen (2013). “Distrusting the Trustworthy.” In David Archard, Monique Deveaux, Neil Manson, and Daniel Weinstock (eds.), Reading Onora O’Neill. New York: Routledge.

Liu, Bingjie, & Sundar, S. Shyam (2018). “Should Machines Express Sympathy and Empathy? Experiments with a Health Advice Chatbot.” Cyberpsychology, Behavior, and Social Networking, 21(10): 625-636.

McGeer, Victoria (2002). “Developing Trust.” Philosophical Explorations, 5(1): 21-38.

Prinz, Jesse (2011). “Against Empathy.” Southern Journal of Philosophy, 49(s1): 214-233.