
Technology and Compassion in Healthcare Systems

What leaders owe the deployment decision

This page applies the operating frame established in Technology and Compassion: Enabler or Extinguisher to the decisions leaders actually make about EHRs, ambient AI scribes, patient communication tools, and the deployments that have not yet been named.

New here? Start with the foundation: Technology and Compassion: Enabler or Extinguisher

The hub establishes that technology is a substrate, not an actor, and that the line between enabler and extinguisher runs through the decisions, not through the silicon. The hub also establishes the three structural questions (displacement, oversight, and population) that every deployment must answer. This page applies that frame to the technology decisions leaders are making right now.

§ 01

The EHR Has Been Closing the Forty-Second Window

Essential Understanding
Time-and-motion data show physicians spend 27 percent of their day on direct face-to-face patient care and 49 percent on EHR and desk work, with one to two additional hours of after-hours documentation. The relational consequence is structural, not personal.

The phrase "pajama time" did not enter clinical vocabulary by accident. It entered because the work it names became universal. A systematic review of EHR-related burnout from 2016 through 2021 found that the documentation hours imposed by the EHR contribute to a loss of autonomy, cognitive fatigue, and degraded relationships with colleagues. A clinician who is documenting cannot be present, and a clinician who is preoccupied with the documentation she will have to do later is not fully present even when her hands are on the patient.

Sinsky et al., 2016; Kruse et al., 2022.

The forty-second window that Fogarty showed mattered is the same window the documentation burden has been quietly closing for two decades.

§ 02

Ambient AI Scribes: The Strongest Current Evidence on the Enabler Side

Essential Understanding
Ambient AI scribes that draft documentation for clinician review consistently produce reductions in documentation time, after-hours work, and clinician-reported attention deficits during visits. Among current AI deployments in healthcare, this is the cleanest example of AI used as an enabler rather than an extinguisher.

A study of an ambient AI scribe across The Permanente Medical Group, published in NEJM Catalyst Innovations in Care Delivery, found that 3,442 physicians used the technology across more than 303,000 patient encounters in the first ten weeks, with reported reductions in documentation time and improvements in physician-patient interaction. A subsequent editorial review in JMIR Medical Informatics concluded that the consistently reported benefit across studies is improvement in the patient-physician interaction, with clinicians describing greater felt presence in the encounter, alongside persistent concerns about note accuracy and the need for vigilant clinician oversight.

The mechanism is straightforward. The tool absorbs the documentation. The clinician gets attention back. That returned attention is exactly what the compassionate encounter requires. Note quality and oversight remain real engineering and governance problems, not solved problems, but the directional evidence is consistent.

Tierney et al., 2024; Leung et al., 2025.

Displace the overhead, not the encounter.

§ 03

The Population Question Is the One Most Often Skipped

Essential Understanding
Of the three structural questions (displacement, oversight, and population), the population question is the one most often omitted. Deploying a single AI tool across heterogeneous patient populations without distinguishing them is a decision to accept the harm that will accrue to the most vulnerable users in order to capture the benefit available to the rest.

For the user who is well-resourced, mentally healthy, and embedded in a capable human support system, an AI chatbot or AI-drafted communication is a useful adjunct. For the user who is isolated, in crisis, or vulnerable to delusional thinking, the same tool is a substantial hazard. The clinical literature on AI psychosis and parasocial chatbot use has matured to the point where this distinction is no longer hypothetical. It is a matter of standing governance.

The implication for leaders is that AI deployments touching patient communication or emotional content require population stratification at the design level, not at the disclaimer level. A consent paragraph at the bottom of a chatbot interface does not constitute population governance. Neither does a clinical override that the high-volume clinician does not have time to perform.

Dohnany et al., 2025.

Different patients, different tools. Or at minimum, different governance for the same tool.

§ 04

Throughput Optimization Is Compassion Erosion in Slow Motion

Essential Understanding
When time saved by technology is reinvested in additional throughput rather than returned to the encounter, the deployment is structurally an extinguisher even when the immediate metrics suggest otherwise.

Time saved is necessary but not sufficient. A scheduling system that uses the time saved by an ambient scribe to add two more visits per session does not return attention to the encounter; it converts a relational gain into a productivity gain that the next quarter's patient experience scores will measure as a loss. The displacement question in the operating frame is not satisfied by saving time. It is satisfied by saving time and then ensuring the saved time goes to the patient.

This is a structural decision. It belongs to the leaders who set panel sizes, productivity expectations, and scheduling templates, not to the clinicians who are handed those parameters as conditions of employment.

Sinsky et al., 2016.

Saved time is not returned time. Returning it is a separate decision, and it is the decision that determines which side of the line the deployment falls on.

§ 05

Vendor Optimization Will Not Save You

Essential Understanding
Technology vendors optimize for the metrics the contract specifies. If the contract specifies efficiency, the vendor will deliver efficiency. If the contract is silent on compassion outcomes, the deployment will be silent on them too. Compassion outcomes have to be specified in procurement, not added afterward.

The vendor accountability problem is not that vendors are adversarial. It is that vendors are responsive to the metrics they are paid against. A leadership team that includes compassion outcomes in vendor selection criteria, in implementation milestones, and in renewal evaluations is buying a different deployment from a leadership team that includes only efficiency outcomes, even if the underlying technology is identical.

This applies most acutely to AI deployments. AI vendors are currently selling against documentation time, message volume, and clinician hours. Healthcare leaders who want AI deployments that preserve presence have to add presence-relevant metrics to the contract: clinician-reported attention during visits, patient-reported sense of being heard, and audit-ready oversight rates on AI-generated content.

Schwartz Center for Compassionate Healthcare, 2026.

What you measure is what you get. What the vendor measures is what the vendor optimizes.

§ 06

Adding the Technology Dimension to the Compassion Audit

Essential Understanding
The compassion audit, applied at the organizational level, must include a technology dimension alongside culture, structure, leadership, routines, and communication. Technology decisions are now structural enough to require their own audit category.

The For Healthcare Systems page already lists five compassion audit domains: culture, structure, leadership, routines, and communication. Technology belongs as a sixth. The questions to ask are the operating questions established in the hub.

For each significant technology in active use, the audit asks whether the displacement is overhead or encounter, whether the oversight is real or nominal, and whether the population the tool is deployed across has been stratified or assumed homogeneous. The audit also asks whether the saved time, if any, has been returned to the encounter or absorbed into throughput, and whether the procurement included compassion outcomes alongside efficiency outcomes.

The output of the audit is not a score. It is a map of where the organization currently sits on the enabler-or-extinguisher line for each major technology, and what would be required to move any deployments currently on the wrong side to the right side. Most organizations doing this exercise for the first time will find that some deployments are clean enablers, some are clean extinguishers, and many are mixed. The mixed ones are the ones whose fate the structural decisions of the next twelve months will determine.
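For teams that track audit results in software, the per-deployment questions above can be sketched as a simple record. This is an illustrative sketch only: the field names, the class name, and the rule that maps answers onto the enabler / extinguisher / mixed output are assumptions for demonstration, not part of any published audit instrument.

```python
from dataclasses import dataclass

@dataclass
class TechnologyAuditRow:
    """One row of the technology dimension of a compassion audit (illustrative)."""
    deployment: str
    displaces_overhead: bool      # displacement: overhead (True) or the encounter itself (False)
    oversight_real: bool          # oversight: real clinician review, not nominal sign-off
    population_stratified: bool   # population: stratified, not assumed homogeneous
    time_returned: bool           # saved time returned to the encounter, not absorbed into throughput
    compassion_in_contract: bool  # compassion outcomes specified in procurement

    def side_of_line(self) -> str:
        """Map the five answers onto the audit's output categories (assumed heuristic)."""
        answers = [
            self.displaces_overhead,
            self.oversight_real,
            self.population_stratified,
            self.time_returned,
            self.compassion_in_contract,
        ]
        if all(answers):
            return "enabler"
        if not any(answers):
            return "extinguisher"
        return "mixed"

# Example: an ambient scribe whose saved time was absorbed into throughput
scribe = TechnologyAuditRow("ambient AI scribe", True, True, False, False, True)
print(scribe.side_of_line())  # prints "mixed"
```

The point of the sketch is the shape of the output: not a score, but a per-deployment position on the line, with the mixed rows flagging where the next structural decisions matter most.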

Schwartz Center for Compassionate Healthcare, 2026.

Audit what you deploy. Govern what you audit. The line is yours to walk.

Systems and technology must serve compassionate care, not hinder it.
Schwartz Center for Compassionate Healthcare, 2026

Care differently, not less.

References

  1. Dohnany, S., Kurth-Nelson, Z., Spens, E., Luettgau, L., Reid, A., Gabriel, I., Summerfield, C., Shanahan, M., & Nour, M. M. (2025). Technological folie à deux: Feedback loops between AI chatbots and mental illness. arXiv. https://doi.org/10.48550/arXiv.2507.19218
  2. Kruse, C. S., Mileski, M., Dray, G., Johnson, Z., Shaw, C., & Shirodkar, H. (2022). Physician burnout and the electronic health record leading up to and during the first year of COVID-19: Systematic review. Journal of Medical Internet Research, 24(3), e36200. https://doi.org/10.2196/36200
  3. Leung, T. I., Coristine, A. J., & Benis, A. (2025). AI scribes in health care: Balancing transformative potential with responsible integration. JMIR Medical Informatics, 13, e80898. https://doi.org/10.2196/80898
  4. Schwartz Center for Compassionate Healthcare. (2026). Schwartz Compassionate Care Model: A roadmap for advancing organization-wide compassion. https://www.theschwartzcenter.org
  5. Sinsky, C., Colligan, L., Li, L., Prgomet, M., Reynolds, S., Goeders, L., Westbrook, J., Tutty, M., & Blike, G. (2016). Allocation of physician time in ambulatory practice: A time and motion study in 4 specialties. Annals of Internal Medicine, 165(11), 753-760. https://doi.org/10.7326/M16-0961
  6. Tierney, A. A., Gayre, G., Hoberman, B., Mattern, B., Ballesca, M., Kipnis, P., Liu, V., & Lee, K. (2024). Ambient artificial intelligence scribes to alleviate the burden of clinical documentation. NEJM Catalyst Innovations in Care Delivery, 5(3). https://doi.org/10.1056/CAT.23.0404