Technology and Compassion in Healthcare Systems
What leaders owe the deployment decision
This page applies the operating frame established in Technology and Compassion: Enabler or Extinguisher to the decisions leaders actually make about EHRs, ambient AI scribes, patient communication tools, and the deployments that have not yet been named.
New here? Start with the foundation: Technology and Compassion: Enabler or Extinguisher
The hub establishes that technology is a substrate, not an actor, and that the line between enabler and extinguisher runs through the decisions, not through the silicon. The hub also establishes the three structural questions (displacement, oversight, and population) that every deployment must answer. This page applies that frame to the technology decisions leaders are making right now.
The EHR Has Been Closing the Forty-Second Window
The phrase "pajama time" did not enter clinical vocabulary by accident. It entered because the work it names became universal. A systematic review of EHR-related burnout covering 2016 through 2021 found that EHR documentation hours drive a sense of lost autonomy, cognitive fatigue, and degraded relationships with colleagues. A clinician who is documenting cannot be present, and a clinician who is preoccupied with the documentation she will have to do later is not fully present even when her hands are on the patient.
Sinsky et al., 2016; Kruse et al., 2022.
The forty-second window that Fogarty showed mattered is the same window the documentation burden has been quietly closing for two decades.
Ambient AI Scribes: The Strongest Current Evidence on the Enabler Side
A study of an ambient AI scribe across The Permanente Medical Group, published in NEJM Catalyst Innovations in Care Delivery, found that 3,442 physicians used the technology across more than 303,000 patient encounters in the first ten weeks, with reported reductions in documentation time and improvements in physician-patient interaction. A subsequent editorial review in JMIR Medical Informatics concluded that the most consistently reported benefit across studies is improvement in the patient-physician interaction, with clinicians describing greater felt presence in the encounter, alongside persistent concerns about note accuracy and the need for vigilant clinician oversight.
The mechanism is straightforward. The tool absorbs the documentation. The clinician gets attention back. That returned attention is exactly what the compassionate encounter requires. Note quality and oversight remain real engineering and governance problems, not solved problems, but the directional evidence is consistent.
Tierney et al., 2024; Leung et al., 2025.
Displace the overhead, not the encounter.
The Population Question Is the One Most Often Skipped
For the user who is well-resourced, mentally healthy, and embedded in a capable human support system, an AI chatbot or AI-drafted communication is a useful adjunct. For the user who is isolated, in crisis, or vulnerable to delusional thinking, the same tool is a substantial hazard. The clinical literature on "AI psychosis" and parasocial chatbot use has matured to the point where this distinction is no longer hypothetical. It is a matter of standing governance.
The implication for leaders is that AI deployments touching patient communication or emotional content require population stratification at the design level, not at the disclaimer level. A consent paragraph at the bottom of a chatbot interface does not constitute population governance. Neither does a clinical override that the high-volume clinician does not have time to perform.
Dohnany et al., 2025.
Different patients, different tools. Or at minimum, different governance for the same tool.
Throughput Optimization Is Compassion Erosion in Slow Motion
Time saved is necessary but not sufficient. A scheduling system that uses the time saved by an ambient scribe to add two more visits per session does not return attention to the encounter; it converts a relational gain into a productivity gain that the next quarter's patient experience scores will measure as a loss. The displacement question in the operating frame is not satisfied by saving time. It is satisfied by saving time and then ensuring the saved time goes to the patient.
This is a structural decision. It belongs to the leaders who set panel sizes, productivity expectations, and scheduling templates, not to the clinicians who are handed those parameters as conditions of employment.
Sinsky et al., 2016.
Saved time is not returned time. Returning it is a separate decision, and it is the decision that determines which side of the line the deployment falls on.
Vendor Optimization Will Not Save You
The vendor accountability problem is not that vendors are adversarial. It is that vendors are responsive to the metrics they are paid against. A leadership team that includes compassion outcomes in vendor selection criteria, in implementation milestones, and in renewal evaluations is buying a different deployment from a leadership team that includes only efficiency outcomes, even if the underlying technology is identical.
This applies most acutely to AI deployments. AI vendors are currently selling against documentation time, message volume, and clinician hours. Healthcare leaders who want AI deployments that preserve presence have to add presence-relevant metrics to the contract: clinician-reported attention during visits, patient-reported sense of being heard, and audit-ready oversight rates on AI-generated content.
Schwartz Center for Compassionate Healthcare, 2026.
What you measure is what you get. What the vendor measures is what the vendor optimizes.
Adding the Technology Dimension to the Compassion Audit
The For Healthcare Systems page already lists five compassion audit domains: culture, structure, leadership, routines, and communication. Technology belongs as a sixth. The questions to ask are the operating questions established in the hub.
For each significant technology in active use, the audit asks whether the displacement is overhead or encounter, whether the oversight is real or nominal, and whether the population the tool is deployed across has been stratified or assumed homogeneous. The audit also asks whether the saved time, if any, has been returned to the encounter or absorbed into throughput, and whether the procurement included compassion outcomes alongside efficiency outcomes.
The output of the audit is not a score. It is a map of where the organization currently sits on the enabler-or-extinguisher line for each major technology, and of what would be required to move any deployments currently on the wrong side to the right side. Most organizations doing this exercise for the first time will find that some deployments are clean enablers, some are clean extinguishers, and many are mixed. The mixed ones are the ones whose side the structural decisions of the next twelve months will determine.
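The audit questions above can be expressed as a simple per-deployment checklist. The sketch below is hypothetical: the field names and the three-way classification are illustrative conveniences, not a published scoring instrument, and the point stands that the output is a map rather than a score.

```python
from dataclasses import dataclass

@dataclass
class TechAuditEntry:
    """One deployment's answers to the audit questions (illustrative fields)."""
    name: str
    displaces_overhead: bool       # displacement targets overhead, not the encounter
    oversight_real: bool           # clinician oversight is real, not nominal
    population_stratified: bool    # population stratified, not assumed homogeneous
    time_returned: bool            # saved time returned to the encounter, not absorbed
    compassion_in_procurement: bool  # compassion outcomes in vendor criteria

    def classify(self) -> str:
        checks = [
            self.displaces_overhead,
            self.oversight_real,
            self.population_stratified,
            self.time_returned,
            self.compassion_in_procurement,
        ]
        if all(checks):
            return "enabler"
        if not any(checks):
            return "extinguisher"
        return "mixed"

# Example: a scribe that absorbs documentation but whose saved time
# was converted into throughput, with no population stratification.
scribe = TechAuditEntry("ambient AI scribe", True, True, False, False, True)
print(scribe.classify())  # mixed
```

In practice the interesting output is not the label but the list of failing checks for each "mixed" deployment, since those name the structural decisions still open.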
Schwartz Center for Compassionate Healthcare, 2026.
Audit what you deploy. Govern what you audit. The line is yours to walk.
Systems and technology must serve compassionate care, not hinder it.
Return to the operating frame
The conceptual foundation that establishes what AI can and cannot be, and the three structural questions that govern every deployment.
Read the foundation
Continue with the systems stakeholder page
The full systems leadership view: organizational compassion, the cost of inaction, frameworks, interventions that work, and the structural elements of sustainable cultivation.
Continue to /for-systems
Further Reading
Essays on technology, compassion, and the structural decisions that connect them.
The Front Desk Is the First Dose of Medicine
Why Ferrazzi was right, and why healthcare is trading compassion for convenience.
The Schwartz Compassionate Care Model: A Roadmap for Organization-Wide Compassion
After two decades of research and field experience, the Schwartz Center for Compassionate Healthcare has released a comprehensive framework for embedding compassion into organizational culture. The six-domain model moves compassion from individual aspiration to institutional architecture.
Building a Culture of Compassion
What healthcare organizations can do to support sustainable caring. The structural companion to internal practice.
Cultivating Compassion Within
The internal work: the neuroscience and the practices that make caring sustainable.
Care differently, not less.
References
- Dohnany, S., Kurth-Nelson, Z., Spens, E., Luettgau, L., Reid, A., Gabriel, I., Summerfield, C., Shanahan, M., & Nour, M. M. (2025). Technological folie à deux: Feedback loops between AI chatbots and mental illness. arXiv. https://doi.org/10.48550/arXiv.2507.19218
- Kruse, C. S., Mileski, M., Dray, G., Johnson, Z., Shaw, C., & Shirodkar, H. (2022). Physician burnout and the electronic health record leading up to and during the first year of COVID-19: Systematic review. Journal of Medical Internet Research, 24(3), e36200. https://doi.org/10.2196/36200
- Leung, T. I., Coristine, A. J., & Benis, A. (2025). AI scribes in health care: Balancing transformative potential with responsible integration. JMIR Medical Informatics, 13, e80898. https://doi.org/10.2196/80898
- Schwartz Center for Compassionate Healthcare. (2026). Schwartz Compassionate Care Model: A roadmap for advancing organization-wide compassion. https://www.theschwartzcenter.org
- Sinsky, C., Colligan, L., Li, L., Prgomet, M., Reynolds, S., Goeders, L., Westbrook, J., Tutty, M., & Blike, G. (2016). Allocation of physician time in ambulatory practice: A time and motion study in 4 specialties. Annals of Internal Medicine, 165(11), 753-760. https://doi.org/10.7326/M16-0961
- Tierney, A. A., Gayre, G., Hoberman, B., Mattern, B., Ballesca, M., Kipnis, P., Liu, V., & Lee, K. (2024). Ambient artificial intelligence scribes to alleviate the burden of clinical documentation. NEJM Catalyst Innovations in Care Delivery, 5(3). https://doi.org/10.1056/CAT.23.0404