Cross-Stakeholder Resource

Technology and Compassion

Enabler or Extinguisher

Technology is not an actor in the moral life of an organization. It is a substrate. The line between technology that enables compassion and technology that extinguishes it does not live inside the silicon. It lives inside the decisions.

Systems and technology must serve compassionate care, not hinder it. While innovations like electronic health records can enhance care by tracking social needs and patient preferences, they must not divert time that clinicians could spend interacting with patients, families, and each other, nor undermine core aspects of interprofessional communication. The use of artificial intelligence is the latest disruption that requires ongoing evaluation to learn which interventions hinder or enhance human connection, empathy, and compassion.
Schwartz Center for Compassionate Healthcare, Schwartz Compassionate Care Model: A Roadmap for Advancing Organization-wide Compassion (2026)

Healthcare leaders are sometimes asked whether they are for or against a particular technology. The question is the wrong shape. Technology is not an actor in the moral life of an organization. It is a substrate. What matters, and what determines whether a given tool ends up enabling compassion or extinguishing it, is the set of structural decisions made about what the tool is asked to do, whose time it is asked to save, what it is allowed to replace, and what conditions of human encounter it is required to protect.

The Schwartz Center has stated the discipline directly. The seven Essential Understandings that follow draw out what that discipline requires.

§ 01

Technology Is a Substrate, Not an Actor

Essential Understanding
Technology is neither pro-compassion nor anti-compassion. Whether a tool enables compassion or extinguishes it is determined by the structural decisions made about its deployment, not by the tool itself.

Whether a given technology preserves or erodes compassion is not a property of the silicon. It is a property of the decisions made about what the tool is asked to do, whose time it is asked to save, what it is allowed to replace, and what conditions of human encounter it is required to protect. The leadership task is not to be for or against technology. The leadership task is to evaluate each deployment against compassion outcomes, alongside whatever efficiency outcomes the deployment is also chasing, and to govern accordingly.

Schwartz Center for Compassionate Healthcare, 2026.

The line is not in the silicon. It is in the decisions.

§ 02

Attention Is the Currency of Compassion

Essential Understanding
Compassion requires attention. Tools that add attention to the encounter enable it. Tools that siphon attention away extinguish it. The principle holds across every example and every technology category.

The 40-second window in which Fogarty and colleagues demonstrated that compassion measurably reduces patient anxiety in oncology disclosure is fundamentally an attention window. Telehealth, translation devices, and decision support are enablers when they widen the attention available for the human encounter. The electronic health record in its current form, in most organizations, is an extinguisher because it has been narrowing that window for two decades. The patient's nervous system does not measure the technology directly. It measures the attention the technology leaves behind.

Fogarty et al., 1999.

The patient registers the attention, not the device.

§ 03

Enabler or Extinguisher: The Operating Question

Essential Understanding
Every technology in healthcare functions as either an enabler of compassion or an extinguisher of it. The category is determined at deployment, not at design.

When technology serves the compassionate encounter, the pattern is consistent. The tool removes a barrier that prevented care from reaching a person, gives the clinician more attention for the human in front of them, or extends presence to a moment when presence would otherwise have been impossible. When technology corrodes the compassionate encounter, the pattern is also consistent. The tool siphons attention away from the encounter, interposes itself between the patient and the clinician's gaze, or asks a person to perform care under conditions that make care difficult to feel. The same tool can do either, depending on the structural conditions of its deployment.

Schwartz Center for Compassionate Healthcare, 2026.

The category is not a property of the tool. It is a property of the choice.

§ 04

What LLMs Actually Do: Emulation, Not Empathy, Not Compassion

Essential Understanding
Large language models can emulate the linguistic surface of empathy with remarkable fidelity. They cannot be empathic, because empathy requires a nervous system that can share an affective state. They cannot be compassionate, because compassion requires sentience and an authentic desire to alleviate suffering. AI has neither.

This is the foundational distinction that has to be stated clearly before any specific deployment is evaluated. LLMs are sophisticated pattern-matching systems trained on enormous corpora of human language. They produce outputs that read as warm, validating, structured, and emotionally attuned, sometimes more reliably than the depleted clinician working between visits. But emulation is not the thing itself.

Empathy, in its formal definition, is the felt sharing of another's affective state, the simulation of another's experience inside one's own nervous system, traceable on functional imaging to the anterior insula and anterior cingulate cortex. An LLM has no nervous system, no affective state to share, no embodied resonance with the suffering described in its input. What it has is a statistical model of how humans tend to write when they are being empathic. It produces text that fits that distribution.

Compassion is even further from what AI can be. Compassion is a virtuous response that seeks to address the suffering and needs of a person through relational understanding and action. It requires three things AI does not have. It requires sentience, the capacity to be a subject of experience. It requires an authentic desire to alleviate suffering, which presupposes that the suffering of the other matters to the agent. And it requires action grounded in caring for the particular person in front of the agent, not pattern completion. When a chatbot produces compassionate-seeming language, it is performing the syntax of compassion without the substrate.

Singer & Klimecki, 2014; Sinclair et al., 2016.

Emulation is not connection. The mirror is not the face.

§ 05

The Four Hazards in AI Specifically

Essential Understanding
When AI is deployed in healthcare without governance, four hazards appear consistently: sycophancy, parasocial substitution, automation of presence, and cognitive debt. Each must be governed against, not assumed away.

Sycophancy.

Large language models are trained to satisfy the user, and one consistent consequence is a tendency to agree, validate, and elaborate on whatever the user has said, including delusional, self-harmful, or clinically dangerous content. Clinical analysis identifies sycophancy as a central mechanism by which chatbots can reinforce and accelerate delusional belief systems, particularly in users with preexisting psychotic vulnerability. The phenomenon has been called a technological folie à deux, a feedback loop in which a vulnerable user's distorted beliefs are returned in amplified form.

Parasocial substitution.

Chatbots are available at three in the morning, do not interrupt, do not have bad days, and do not require the user to risk anything in the way real human relationship requires risk. The structural availability of an apparently empathic interlocutor displaces the harder, slower, riskier work of human connection on which actual recovery depends. The chatbot can simulate the words. It cannot bear the weight.

Automation of presence.

AI can draft the patient communication, but the communication is part of the relationship, not separable from it. AI can summarize the encounter, but the act of dictating one's own thinking is part of how clinicians remain in relationship with what they have just witnessed. The cumulative effect of letting AI handle every relational task it can handle is a workforce trained to perform care without the embodied experience of giving it.

Cognitive debt.

A clinician who never has to formulate the patient communication will lose, over time, the capacity to formulate it. The technology becomes load-bearing in a way that is invisible until the technology fails or the clinician encounters a situation outside the training distribution. At that point the clinician needs the skill she did not develop.

Dohnany et al., 2025.

The right tool deployed in the wrong way is the wrong tool.

§ 06

The Three Questions for Every Deployment

Essential Understanding
Three structural questions distinguish AI deployments that enable compassion from those that extinguish it. They apply to any new technology and are most consequential for AI.

1. The Displacement Question.

Does the tool displace administrative or cognitive overhead so the clinician has more time and attention for the human encounter, or does it displace parts of the encounter itself? Ambient scribes that absorb documentation and free the clinician for eye contact are on the enabler side. Chatbots that absorb the patient's question and remove the encounter altogether may, depending on the question and the user, be on the extinguisher side.

2. The Oversight Question.

Is a competent human in the loop, with the authority and the time to override the tool when the tool is wrong, or has the tool been deployed in a way that makes oversight nominal? AI-drafted patient messages reviewed and signed by a clinician with time to read them carefully are on the enabler side. Rubber-stamped messages under throughput pressure are on the extinguisher side, and the rubber-stamping is itself a structural failure no AI can fix.

3. The Population Question.

Who will actually use the tool, and what will it do to the most vulnerable people who reach it? For the user who is well-resourced, mentally healthy, and embedded in a capable human support system, an AI chatbot is a useful adjunct. For the user who is isolated, in crisis, or vulnerable to delusional thinking, the same chatbot is a substantial hazard. The structural decision to deploy a single tool across both populations without distinguishing them is a decision to accept the harm that will accrue to the second in order to capture the benefit available to the first.

Ask the three questions before approval. Ask them again after deployment. Ask them every time the conditions change.

§ 07

The Decision Belongs to the People Who Choose the Conditions

Essential Understanding
Leaders cannot offload the moral weight of technology decisions onto the vendor or the clinician. The structural accountability sits with the people who choose the conditions. They are the enablers of compassion at the organizational level, or its extinguishers.

The vendor will optimize for the metrics the contract specifies. The clinician will absorb whatever the system hands her until she is depleted enough to leave the field, at which point she will be replaced by a younger clinician who will absorb the same conditions until she also leaves. The structural accountability sits with the people who choose the conditions. The good news is that the same structural decisions that have eroded compassion can also restore it. The EHR can be redesigned. The documentation burden can be redistributed. The AI can be deployed to liberate the encounter rather than to replace it.

Schwartz Center for Compassionate Healthcare, 2026.

The line we are choosing runs through every implementation. The people who choose the conditions own which side it falls on.

Care differently, not less.

May you be safe. May you be healthy. May you be happy. May you live with ease.

References

  1. Dohnany, S., Kurth-Nelson, Z., Spens, E., Luettgau, L., Reid, A., Gabriel, I., Summerfield, C., Shanahan, M., & Nour, M. M. (2025). Technological folie à deux: Feedback loops between AI chatbots and mental illness. arXiv. https://doi.org/10.48550/arXiv.2507.19218
  2. Fogarty, L. A., Curbow, B. A., Wingard, J. R., McDonnell, K., & Somerfield, M. R. (1999). Can 40 seconds of compassion reduce patient anxiety? Journal of Clinical Oncology, 17(1), 371-379. https://doi.org/10.1200/JCO.1999.17.1.371
  3. Schwartz Center for Compassionate Healthcare. (2026). Schwartz Compassionate Care Model: A roadmap for advancing organization-wide compassion. https://www.theschwartzcenter.org
  4. Sinclair, S., McClement, S., Raffin-Bouchal, S., Hack, T. F., Hagen, N. A., McConnell, S., & Chochinov, H. M. (2016). Compassion in health care: An empirical model. Journal of Pain and Symptom Management, 51(2), 193-203. https://doi.org/10.1016/j.jpainsymman.2015.10.009
  5. Singer, T., & Klimecki, O. M. (2014). Empathy and compassion. Current Biology, 24(18), R875-R878. https://doi.org/10.1016/j.cub.2014.06.054