Should a Robot Tell You You’re Going to Die?
Doctors using telepresence robots can help distant patients while improving quality of care, but is that the real motivation?
There’s no denying that change can be hard. Perhaps change in the medical industry is particularly fraught, dealing as it does in life and death. Digital communications technologies (conference calls, FaceTime, and so on) are not new to most of us, but a recent rollout in a medical context was bungled, resulting in outrage over a “robot” that told a California man his death was imminent. This event raises important questions about technological change in the medical field and whose opinions on it count.
First, the facts behind the headline: Ernest Quintana, 78 years old and with pre-existing medical conditions, arrived at a Kaiser Permanente emergency department in dire straits. He was admitted to the ICU, accompanied by several members of his family, who were told by a nurse that the doctor would soon make rounds to see them.
The doctor who came around did not enter the room physically; instead, he interacted with Mr. Quintana via video on a “telepresence robot” (think: a tablet on wheels). Unfortunately, the prognosis was grim: Mr. Quintana’s lungs were failing, with no more treatment options on the table. He died a few days later, and his family’s dissatisfaction with this meeting got picked up by the local news. Hence the “Terminal patient learns he’s going to die from a robot doctor” headlines.
In theory, biomedical ethics rests on four foundational principles. 1) Autonomy requires respect for patient agency through informed consent. 2) Justice requires the equitable distribution of healthcare resources across patients. 3) Beneficence requires healthcare providers to act with the intent of improving patient health. 4) Non-maleficence requires healthcare providers, when providing genuine health benefit is difficult or impossible, to at the very least “do no harm” (in practice, this means no harm that isn’t outweighed, in the patient’s judgment, by the likelihood of benefit).
These four principles can help focus our attention on aspects of the problem at hand. But applying the seemingly tidy principles is rarely straightforward in practice, and they can cut in different directions. For instance, patients often want expensive, marginally beneficial treatments that soak healthcare systems for resources that could be used more effectively by others.
In this case, the principles do not easily resolve the telepresence robot issue. Autonomy applies to who gets to do what to patients’ bodies, but does it extend to patient preferences about how their doctors operate their offices? Justice could go either way: do digital health innovations widen or close the gap between low-income and high-income medical consumers? Well, that depends on how they’re used. Some technologies genuinely reduce the costs of the transaction, while others just shift those costs around. Doctors used to make housecalls, but they mostly stopped. (Though the direct primary care movement is trying to revive the practice.) Why?
Like so many problems in America’s mess of a healthcare system, the decline of housecalls is closely related to the insertion of third parties (like insurance companies and government programs) into the healthcare market. As both healthcare consumers and providers lost much of their power to choose the terms of the transaction, it became less possible for consumers to pay a little more for the convenience of a housecall (which, all things considered, might actually save the consumer money on transportation, foregone earnings, and the like).
For better or for worse, these same third parties still participate in the American healthcare system, more heavily than ever. So there is something morally suspect about the shift to telepresence robots under these conditions. In a truly free healthcare market, the widespread emergence of telepresence technologies would indicate that customers freely chose them (or, at a minimum, dislike the technologies less than they dislike the alternatives: higher prices or longer wait times).
Instead, providers are now experimenting with telepresence technologies as a way of containing costs, especially since other choices are not legally on the table. Obamacare limited insurers’ ability to raise rates, reduce coverage, or restrict the insurance pool. So the emergence of these technologies is neither completely spontaneous nor freely chosen.
Yet we shouldn’t dismiss telepresence robots out of hand just because they aren’t unanimously accepted in their current, early form. Over time, can healthcare consumers come to accept technological changes like telepresence robots? The answer is definitely yes. Similar robots have already helped seriously ill children attend school virtually once again, with some amazing results.
Even residents of senior assisted-living facilities who had initial reservations come to value new telepresence devices for making family more accessible (though norms around resident privacy haven’t settled yet). These populations are motivated to try the devices to improve daily life, which is quite different from the emergency scenario Mr. Quintana faced. Maybe the answer is to get more robots involved in medical care sooner, even when they aren’t strictly necessary, instead of reserving them solely for exceptional situations.
Plus, the particulars of the recent case in California suggest that much of the family’s reaction was due to poor implementation of the telepresence robot, not anything inherent to its use. Since Mr. Quintana was hard of hearing, his family had to repeat to him what the doctor had said via the robot.
But technology is supposed to enhance accessibility, not jeopardize it! Something as simple as a set of headphones, closed captioning, or real-time text-based messaging with the doctor could have gone a long way toward fixing that part of the problem. Stationing a nurse in the room during the telepresence robot’s visit might mitigate the sterile, “dehumanized” feel of the interaction at a manageable cost.
The costs of technological change to those with real, contrary preferences are not dismissible out of hand. Simply labeling skeptics of the technology “Luddites” doesn’t resolve anything; it only obscures and inflames genuine disagreement. These individuals do and should get their consumer’s vote: the option to choose where they put their dollars. But they shouldn’t get more of a say than that, neither legislatively nor via regulatory bodies, neither by fiat nor by lawsuit. These latter remedies only calcify old preferences and further lock consumers into institutions that are already excessively hard to change.
A victim in this telepresence robot case, one of Mr. Quintana’s relatives, told her local news station, “I don’t want this to happen to anyone else… it just shouldn’t happen.” Sympathetic though her story sounds, it’s simply not her choice whether continued medical innovation along these lines is the right thing to pursue.
Strongly negative reactions to telepresence robots can be partially addressed now, through better communication with patients and more judicious implementation of telepresence services. And the problem will partially solve itself over time, as cultural mores continue to shift. We shouldn’t let the specific, visible objectors to technological change outweigh the faceless, unseen victims of forestalled technological change: families who wait many hours to see a doctor at all, or whose loved one dies in the meantime.