We often respond more openly to computers and robots than we do to our fellow-humans. Yet some ethicists worry that relying too much on A.I. could be dangerous.
A few years ago, Timothy Bickmore, a computer scientist at Northeastern University, developed an artificial-intelligence program to help low-income patients at Boston Medical Center prepare for their return home from the hospital. The virtual nurse, alternately called Louise or Elizabeth, was embodied as an animated figure on a screen. It began by asking patients whether they were Red Sox fans, then walked them through what they should do after they were discharged. (“Your doctor has prescribed Pantoprazole. This medication is for your stomach. You will take one pill in the morning.”) Bickmore has since created a slew of these programs—an A.I. couples counsellor, an exercise coach, a palliative-care consultant—all aimed at disadvantaged clients. “It’s where we think we can have the most impact,” he told me recently. “Hopefully, the A.I. is better than nothing.”
It sounds like a classic techno-dystopia—human warmth displaced by a cold computer, one made somehow worse by the patronizing nod to local-sports fandom. But this was not the same old story of the relentless drive for efficiency spawning a dehumanizing tool. There was a surprise buried in Bickmore’s experiment: seventy-four per cent of his subjects preferred Louise and Elizabeth to their real-life counterparts. Human health-care providers spend an average of seven minutes with patients at discharge, Bickmore told me, but low-literacy patients need more like an hour. With the virtual nurse, his subjects could proceed at their own pace, digesting the information without the embarrassment of doing so too slowly. As one patient remarked, “Doctors are always in a hurry.”
Most contemporary writing about A.I. fixates on the vital concerns of job disruption, privacy, and algorithmic bias. But there is an equally important conversation to be had about shame and vulnerability. We often respond more frankly to computers and robots than we do to our fellow-humans. In online surveys, for example, people admit to financial stress and illegal or unethical acts more readily than they do over the phone, and potential blood donors report riskier behaviors. When a virtual interviewer is asking the questions, children are more candid about bullying and adults show sadness more intensely. Part of this openness stems from the presumed anonymity of telling something to a machine: computers seem private because of their very facelessness. Their anonymity can feel like a license to let go, even if the daily drumbeat of hacks, leaks, and other breaches reminds us that data are identifiable, indelible, and obtainable.
Ethicists note the dangers of these findings. “If it turns out that humans are reliably more truthful with robots than they are with other humans, it will only be a matter of time before robots will interrogate humans,” Matthias Scheutz, a philosopher and computer scientist at Tufts University, warned in 2011, just as engineers funded by the Department of Homeland Security were developing an avatar kiosk for use in border screenings. The kiosk, which has already been field-tested at “low-risk” sites, asks questions such as “If we searched your bag, would we find anything you haven’t declared?” and relies on sensors measuring voice, pupil dilation, and pulse to detect deception.
Yet, in caring work, what could be wrong with allowing people to feel less embarrassment or humiliation? Neeta Gautam, a physician with Stanford Primary Care, in Santa Clara, California, told me that breaking down these emotions is a crucial part of her practice. You can’t get patients to do what they need to do, she said—from making incremental changes in diet and life style to taking their medication—unless they trust you enough to be honest about their failings. Gautam said that she tries to make sure her exam room is “a safe environment to talk about things like ‘I can’t afford’ or ‘I don’t like this’ or ‘I don’t know how to cook it’ or ‘I don’t have time to do it.’ ” Shame can stifle patients, causing them to keep their incompetence and unhealthy behaviors hidden. For some populations, including veterans, who often see a stigma in therapy, it can prevent them from seeking treatment in the first place. In these cases, A.I. is not just “better than nothing” but, indeed, better than humans.
Still, some health practitioners believe that vulnerability has its uses. “Treatment is not about the simple act of telling secrets,” Sherry Turkle, a clinical psychologist at the Massachusetts Institute of Technology, writes in her book “Alone Together”; rather, it is about the patient speaking to someone who can “push back.” Turkle argues that “when we talk to robots, we share thoughts with machines that can offer no such resistance. Our stories fall, literally, on deaf ears.” When I spoke with Andreas Paepcke, a senior research scientist and data analyst at Stanford, he made a similar point about teaching. Humans offer “an audience that matters,” he said, and it could well be impossible to “project enough humanness onto a robot that you want to make it proud of you.” Shame might be too important to eliminate, Paepcke mused, because the relief from it is so profound that it leads to “the understanding that here is a person who did not trample on me in my vulnerability”—an understanding that can lead, in turn, to personal growth.
By skirting shame entirely, apps and bots may offer only the thinnest version of care. Several of the therapists and teachers I spoke with suggested that delivering care through automation or A.I. would be like relying on a “cloth monkey”—a reference to a cruel experiment, carried out in 1959, in which infant monkeys were given a choice between two surrogate mothers, one made from welded wire, the other from terry cloth. (The infants preferred the cloth mother, even when only the wire mother gave them milk.) A.I.-driven care was a sorry version of the real thing, they argued. The only reason that people might think they preferred it was that the care they normally received was full of judgment without support—in other words, full of unrelieved shame.
Paepcke’s point echoes what Bickmore, the computer scientist, found in the reaction to Laura, an A.I. exercise coach that his lab created. One of the testers wrote, “Laura was very repetitive, so it was actually more motivating in the beginning to talk to her than later on. She would go through the same routine every single time. As a result, I didn’t feel obligated, I didn’t feel like I had to impress her in any way.” The downside of freedom from shame, it seems, is freedom from caring at all.