By Allison DeLaney
What is a hospice chaplain’s scope of practice? I thought I had a clear answer for myself, having spent eight years working in a small, rural, nonprofit hospice house, but then I read an article about a wild invention: artificial intelligence to predict mortality in patients.
The idea was born when Dr. Stephanie Harman, Stanford’s founding medical director of palliative care services, partnered with Nigam Shah in the biomedical informatics department. They used electronic health records of 2 million patients to predict mortality in the subsequent three to 12 months. The result is an all-cause mortality prediction model, which is being piloted by researchers and physicians.
To be clear, they are using statistical data from many different types of illnesses to predict the likelihood that one will die. The argument for it goes like this: We assume that palliative care teams are an important resource near the end of one’s life. Too often palliative care recommendations are absent or delayed. If doctors had the help of AI to predict death, then they would be more likely to refer these patients to palliative care. The logic is captivating: earlier intervention, better patient care.
After learning about this project, I was both shocked and curious. As a hospice chaplain, I have witnessed diverse ways that patients and their families find meaning during health crises and come to terms with vulnerability and mortality. How might this technology affect patients and families? How might it change the relationship between healthcare staff and patients? I felt compelled to reflect on this type of data, how it is used and interpreted, and what it might mean for my patients and professional practice.
The introduction to the Ethical and Religious Directives provides a healthy foundation: “The dialogue between medical science and Christian faith has for its primary purpose the common good of all human persons. It presupposes that science and faith do not contradict each other. Both are grounded in respect for truth and freedom. As new knowledge and new technologies expand, each person must form a correct conscience based on the moral norms for proper health care.” Perhaps the Stanford researchers and hospice chaplains are interested in a common goal: to relieve suffering as we near the end of life. In this case, AI could enhance what physicians are already trying to do: guess when we are going to die. To gain access to the hospice benefit, a doctor must certify that a patient has six months or less to live. Not surprisingly, the accuracy of those predictions differs based on the relationship between the physician and the patient. As the length of the patient-physician relationship increases, so does the probability that the physician will be overly optimistic about the patient’s life expectancy. Would AI help physicians be more aware of their patient’s mortality? If Stanford’s AI were proved accurate (this is still to be determined), what repercussions would this have for the spiritual care of patients, families, and staff?
I hope this is where we can ask the questions that won’t be pondered otherwise. Chaplains have extremely valuable data points about coping with mortality and providing competent palliative and hospice care. The knowledge that one might die sooner rather than later starts a chain reaction of emotions and questions. Will the knowledge help improve the quality of time I have left? What resources will I have to help me face existential concerns? Will I feel anger or abandonment? Will it spur despair or hopelessness? Grief? Guilt? Will I feel isolated or choose isolation? Will I experience spiritual and/or religious struggle? All these concerns tend to follow the initial prognosis of death.
We know that palliative care is a valuable resource that can enhance quality of life precisely because it embraces the constellation of questions that come with mortality and does not separate the knowledge of mortality from its repercussions. My concern about Stanford’s mortality prediction algorithm is the way this information could harm patient-provider relationships and access to care. Perhaps these statistics could be used for cost savings to justify withholding treatment from the most vulnerable. I also fear the doctor who trusts the algorithm more than my signs, symptoms, and story. If the computers predict my death, throwing my life off course, and the prediction turns out to be a mistake, there may be no one to hold accountable. The list of fears is long because the weight of knowing that death is near is dangerous — if divorced from compassionate and sensitive communication and trusted relationships.
This technology, used in a moral and ethical context, might do what the Stanford AI team hopes: provide more timely palliative care resources targeted at the people who need it most. Perhaps it will give insurance companies and policymakers the evidence needed to fund more palliative care and further improve access. All this benefit, too, is possible.
Why is this a concern for us chaplains? Because to be silent on technological advances is to promote technological monism, the idea that the most meaningful problems and solutions depend primarily on technology. The answer to human suffering and mortality is not only an accurate prediction of my date of death.
As human beings, consumers of healthcare, and people trying to develop systems of care that help us live well until our death, we chaplains must argue for addressing the innate emotional and spiritual needs that arise when we struggle with mortality. I believe it is our professional duty to illuminate the data, qualitative and quantitative, that arise from our patient-based experience and to evaluate the context in which these statistics will play out.
My hunch is that the more accurately we can predict death, the more our physician and healthcare colleagues will need support in how to “break the bad news” to even more patients. In a time-crunched, resource-limited environment, creative interdisciplinary initiatives are addressing this need for sensitivity training, communication skills, self-reflection, and spiritual awareness. Let us gather the wisdom and lessons learned from our experiences working at the edges of life, illness, and death and make the much-needed case for presence, story, and meaning-making. Improved end-of-life care includes good access to quality spiritual care, which is more possible with better chaplain-patient and chaplain-staff ratios. Hopeful initiatives that directly improve palliative care through building healthcare workers’ empathy include the practice of narrative medicine initiated by Dr. Rita Charon of Columbia University and interdisciplinary reflection rounds led by Dr. Christina Puchalski at the George Washington Institute for Spirituality and Health.
In the end, technologies such as the Stanford AI algorithm that aspire to predict our probability of dying offer radical new possibilities for approaching palliative and end-of-life care. Beyond “When will I die?” the more compelling question is, “How does this change the quality of support that I receive?” The Stanford researchers acknowledge that their pilot needs to study the effect of AI on physician behavior and the extent to which the resulting care aligns with patient wishes. This is a chaplain’s wheelhouse, and it demands that we engage with strange and new technologies so we can fulfill our professional duty to provide appropriate spiritual care. Our experience, our data points, are needed if this technology (or any new technology) is to benefit rather than harm patients who are already vulnerable.
Allison DeLaney, BCC, MA, MPH, PT is pediatric and women’s health chaplain at Virginia Commonwealth University and is part of the first cohort of chaplains to earn a master’s degree in public health through the Transforming Chaplaincy initiative.