Technology and Compassion
Enabler or Extinguisher
Technology is not an actor in the moral life of an organization. It is a substrate. The line between technology that enables compassion and technology that extinguishes it does not live inside the silicon. It lives inside the decisions.
Systems and technology must serve compassionate care, not hinder it. While innovations like electronic health records can enhance care by tracking social needs and patient preferences, they must not divert time that clinicians could spend interacting with patients, families, and each other, nor undermine core aspects of interprofessional communication. The use of artificial intelligence is the latest disruption that requires ongoing evaluation to learn which interventions hinder or enhance human connection, empathy, and compassion.
Healthcare leaders are sometimes asked whether they are for or against a particular technology. The question is the wrong shape. Technology is not an actor in the moral life of an organization. It is a substrate. What matters, and what determines whether a given tool ends up enabling compassion or extinguishing it, is the set of structural decisions made about what the tool is asked to do, whose time it is asked to save, what it is allowed to replace, and what conditions of human encounter it is required to protect.
The Schwartz Center has stated the discipline directly. The seven Essential Understandings that follow draw out what that discipline requires.
Technology Is a Substrate, Not an Actor
Whether a given technology preserves or erodes compassion is not a property of the silicon. It is a property of the decisions made about what the tool is asked to do, whose time it is asked to save, what it is allowed to replace, and what conditions of human encounter it is required to protect. The leadership task is not to be for or against technology. The leadership task is to evaluate each deployment against compassion outcomes, alongside whatever efficiency outcomes the deployment is also chasing, and to govern accordingly.
Schwartz Center for Compassionate Healthcare, 2026.
The line is not in the silicon. It is in the decisions.
Attention Is the Currency of Compassion
The 40-second window in which Fogarty and colleagues demonstrated that compassion measurably reduces patient anxiety in oncology disclosure is fundamentally an attention window. Telehealth, translation devices, and decision support are enablers when they widen the attention available for the human encounter. The electronic health record in its current form, in most organizations, is an extinguisher because it has been narrowing that window for two decades. The patient's nervous system does not measure the technology directly. It measures the attention the technology leaves behind.
Fogarty et al., 1999.
The patient registers the attention, not the device.
Enabler or Extinguisher: The Operating Question
When technology serves the compassionate encounter, the pattern is consistent. The tool removes a barrier that prevented care from reaching a person, gives the clinician more attention for the human in front of them, or extends presence to a moment when presence would otherwise have been impossible. When technology corrodes the compassionate encounter, the pattern is also consistent. The tool siphons attention away from the encounter, interposes itself between the patient and the clinician's gaze, or asks a person to perform care under conditions that make care difficult to feel. The same tool can do either, depending on the structural conditions of its deployment.
Schwartz Center for Compassionate Healthcare, 2026.
The category is not a property of the tool. It is a property of the choice.
What LLMs Actually Do: Emulation, Not Empathy, Not Compassion
This is the foundational distinction that has to be stated clearly before any specific deployment is evaluated. LLMs are sophisticated pattern-matching systems trained on enormous corpora of human language. They produce outputs that read as warm, validating, structured, and emotionally attuned, sometimes more reliably than the depleted clinician working between visits. But emulation is not the thing itself.
Empathy, in its formal definition, is the felt sharing of another's affective state, the simulation of another's experience inside one's own nervous system, traceable on functional imaging to the anterior insula and anterior cingulate cortex. An LLM has no nervous system, no affective state to share, no embodied resonance with the suffering described in its input. What it has is a statistical model of how humans tend to write when they are being empathic. It produces text that fits that distribution.
Compassion is even further from what AI can be. Compassion is a virtuous response that seeks to address the suffering and needs of a person through relational understanding and action. It requires three things AI does not have. It requires sentience, the capacity to be a subject of experience. It requires an authentic desire to alleviate suffering, which presupposes that the suffering of the other matters to the agent. And it requires action grounded in caring for the particular person in front of the agent, not pattern completion. When a chatbot produces compassionate-seeming language, it is performing the syntax of compassion without the substrate.
Singer & Klimecki, 2014; Sinclair et al., 2016.
Emulation is not connection. The mirror is not the face.
The Four Hazards in AI Specifically
Sycophancy.
Large language models are trained to satisfy the user, and one consistent consequence is a tendency to agree, validate, and elaborate on whatever the user has said, including delusional, self-harmful, or clinically dangerous content. Clinical analysis identifies sycophancy as a central mechanism by which chatbots can reinforce and accelerate delusional belief systems, particularly in users with preexisting psychotic vulnerability. The phenomenon has been called a technological folie à deux, a feedback loop in which a vulnerable user's distorted beliefs are returned in amplified form.
Parasocial substitution.
Chatbots are available at three in the morning, do not interrupt, do not have bad days, and do not require the user to risk anything in the way real human relationship requires risk. The structural availability of an apparently empathic interlocutor displaces the harder, slower, riskier work of human connection on which actual recovery depends. The chatbot can simulate the words. It cannot bear the weight.
Automation of presence.
AI can draft the patient communication, but the communication is part of the relationship, not separable from it. AI can summarize the encounter, but the act of dictating one's own thinking is part of how clinicians remain in relationship with what they have just witnessed. The cumulative effect of letting AI handle every relational task it can handle is a workforce trained to perform care without the embodied experience of giving it.
Cognitive debt.
A clinician who never has to formulate the patient communication will lose, over time, the capacity to formulate it. The technology becomes load-bearing in a way that is invisible until the technology fails or the clinician encounters a situation outside the training distribution. At that point the clinician needs the skill she did not develop.
Dohnany et al., 2025.
The right tool deployed in the wrong way is the wrong tool.
The Three Questions for Every Deployment
1. The Displacement Question.
Does the tool displace administrative or cognitive overhead so the clinician has more time and attention for the human encounter, or does it displace parts of the encounter itself? Ambient scribes that absorb documentation and free the clinician for eye contact are on the enabler side. Chatbots that absorb the patient's question and remove the encounter altogether may, depending on the question and the user, be on the extinguisher side.
2. The Oversight Question.
Is a competent human in the loop, with the authority and the time to override the tool when the tool is wrong, or has the tool been deployed in a way that makes oversight nominal? AI-drafted patient messages reviewed and signed by a clinician with time to read them carefully are on the enabler side. Rubber-stamped messages under throughput pressure are on the extinguisher side, and the rubber-stamping is itself a structural failure no AI can fix.
3. The Population Question.
For the user who is well-resourced, mentally healthy, and embedded in a capable human support system, an AI chatbot is a useful adjunct. For the user who is isolated, in crisis, or vulnerable to delusional thinking, the same chatbot is a substantial hazard. The structural decision to deploy a single tool across both populations without distinguishing them is a decision to accept the harm that will accrue to the second in order to capture the benefit available to the first.
Ask the three questions before approval. Ask them again after deployment. Ask them every time the conditions change.
The Decision Belongs to the People Who Choose the Conditions
The vendor will optimize for the metrics the contract specifies. The clinician will absorb whatever the system hands her until she is depleted enough to leave the field, at which point she will be replaced by a younger clinician who will absorb the same conditions until she also leaves. The structural accountability sits with the people who choose the conditions. The good news is that the same structural decisions that have eroded compassion can also restore it. The EHR can be redesigned. The documentation burden can be redistributed. The AI can be deployed to liberate the encounter rather than to replace it.
Schwartz Center for Compassionate Healthcare, 2026.
The line we are choosing runs through every implementation. The people who choose the conditions own which side it falls on.
Apply this in your context
Technology and compassion in organizational decisions
How leaders deploy, govern, and audit technology so that the human encounter is preserved. The structural cost of the EHR, the strongest current evidence on AI scribes, and the failure modes that turn efficiency into erosion.
Technology and compassion in healthcare education
Why explicit compassion training has become more important, not less. The generational layer, AI literacy as a clinical competency, cognitive debt, and what educators owe their students about all of it.
Further Reading
Essays on technology, compassion, and the structural decisions that connect them.
The Front Desk Is the First Dose of Medicine
Why Ferrazzi was right, and why healthcare is trading compassion for convenience.
The Schwartz Compassionate Care Model: A Roadmap for Organization-Wide Compassion
After two decades of research and field experience, the Schwartz Center for Compassionate Healthcare has released a comprehensive framework for embedding compassion into organizational culture. The six-domain model moves compassion from individual aspiration to institutional architecture.
Building a Culture of Compassion
What healthcare organizations can do to support sustainable caring. The structural companion to internal practice.
Cultivating Compassion Within
The internal work: neuroscience and the practices that sustain sustainable caring.
Watch: Technology and Compassion
Talks that frame the relationship between technology and compassionate care, from the foundational counter-thesis to the optimistic case for AI as a tool to reclaim the relationship.

A Doctor's Touch
Abraham Verghese, MD

Can AI Catch What Doctors Miss?
Eric Topol, MD

Artificial Intelligence in Healthcare: The Need for Ethics
Varoon Mathur

How COVID-19 Transformed the Future of Medicine
Daniel Kraft, MD

How The Human Connection Improves Healthcare
Anthony Orsini, DO

AI and the Human Side of Healthcare
Trevor Tessier
Care differently, not less.
May you be safe. May you be healthy. May you be happy. May you live with ease.
References
- Dohnany, S., Kurth-Nelson, Z., Spens, E., Luettgau, L., Reid, A., Gabriel, I., Summerfield, C., Shanahan, M., & Nour, M. M. (2025). Technological folie à deux: Feedback loops between AI chatbots and mental illness. arXiv. https://doi.org/10.48550/arXiv.2507.19218
- Fogarty, L. A., Curbow, B. A., Wingard, J. R., McDonnell, K., & Somerfield, M. R. (1999). Can 40 seconds of compassion reduce patient anxiety? Journal of Clinical Oncology, 17(1), 371-379. https://doi.org/10.1200/JCO.1999.17.1.371
- Schwartz Center for Compassionate Healthcare. (2026). Schwartz Compassionate Care Model: A roadmap for advancing organization-wide compassion. https://www.theschwartzcenter.org
- Sinclair, S., McClement, S., Raffin-Bouchal, S., Hack, T. F., Hagen, N. A., McConnell, S., & Chochinov, H. M. (2016). Compassion in health care: An empirical model. Journal of Pain and Symptom Management, 51(2), 193-203. https://doi.org/10.1016/j.jpainsymman.2015.10.009
- Singer, T., & Klimecki, O. M. (2014). Empathy and compassion. Current Biology, 24(18), R875-R878. https://doi.org/10.1016/j.cub.2014.06.054