
Building Bots With Empathy Requires Finding The Right Balance

I’ve been talking with the cognitive behavioral therapist chatbot known as Woebot a lot recently, and it’s a little like talking to my mother. Not because of the whimsical anecdotes and GIFs the bot dispenses—that’s not her style—but because of all the practical reminders to get out of my own way. Like the sensible woman who raised me, Woebot isn’t big on protracted whinging or solicitations for pity, and its dogged optimism occasionally rubs me the wrong way. But they are both mostly right, my mom and the bot. More importantly, where a dose of practicality would do a person good, they are mostly helpful.

Woebot is one of a phalanx of AI tools designed to solicit and respond to sensitive information for the benefit of their users’ health and wellbeing. According to Woebot Labs’ Chief Clinical Officer, Athena Robinson, it is also joining a wave of products that specifically target employers, who may be able to harness these tools to promote health and safety in the workplace. These products, which can be deployed at scale and at relatively low cost, serve different purposes: Spot and Botler AI handle reports of harassment and discrimination, while Woebot and Tess aim to directly improve users’ mental health. (All but Tess are also available for use by individuals at no cost.)

For these products to work, each needs to pose and respond to questions with enough sensitivity that employees feel comfortable divulging difficult information. And while much has been made of the quest to build empathetic AI, these products have taken radically divergent approaches to the challenge of exhibiting empathy and seeming “human.”

Jessica Collier, who cofounded the AI venture All Turtles in October of 2016 and took over as CEO of Spot earlier this year, recalls that creating the AI’s tone of voice was the trickiest part of the product’s first year of development. Designed to enhance accurate recall and to collect the details surrounding harassment or discrimination at work, the tool uses a question-and-answer protocol originally established to increase the efficacy of police interviews. Spot’s developers argue that performing the protocol via the AI interface eliminates bias and elicits descriptive details in response to intelligent follow-up questions. “What you’re getting is a much more thorough account [of an episode of harassment or discrimination],” Collier says. “Your intake process is a lot more efficient.”

While the Spot bot never offers explicit sympathy—users may be told “Take a deep breath” or “Thank you for telling me about that,” but not “I’m sorry that happened to you”—it should handle users’ difficult disclosures with care, Collier says. “The nuances of phrasing, the ability to be very precise and very clear without…being sort of cold: those are very delicate and difficult-to-mine skills,” she notes, stressing that talented UX writers are key to these products’ success. (Collier herself came to AI by way of UX writing, as well as a PhD in English.) According to Collier, the Spot team has “leveled up and down” the bot’s personality and warmth since the product launched in early 2018, arriving at a genderless, personality-neutral interviewer.

“I am very anti heavy doses of personalities in bots or machine applications,” Collier stresses. “I think we skew way too much towards a weird hybrid of delight and faux humanity.” In an effort to make AI tools more human, Collier argues, many applications are missing the point, opting to “harness all the pieces of human personality that are quirky or delightful or fun, and not any of the facets of human personality that are very deliberate or functional or thoughtful or precise.”

She notes that Spot users tend to feel better after using the reporting platform simply because the bot listens without judgment, even though Spot offers no therapeutic support.

“Humanity,” as a property of AI, may itself be more of a liability than an asset. Research indicates that people may be more likely to disclose sensitive information anonymously to an AI interface than to a human-operated virtual therapy platform. Tools like Spot could effectively eliminate reporting bias and the discomfort of telling another person (of another gender, race, or background) about something unfortunate that happened to you. And although these platforms were designed to enhance the experiences of people in the workplace, they can all be used at home, at one’s own convenience, whenever moments of psychological stress arise.

Woebot’s approach, Robinson acknowledges, is decidedly not human. The highly stylized chatbot offers an individualized crash course in cognitive behavioral therapy, prompting users to become cognizant of negative beliefs or patterns of thinking and then helping them re-write those scripts. The bot’s cheery persona is “a cross between a Kermit the Frog and a Spock kind of character,” Robinson says: “not human, but openly and acceptingly not human” and “very curious about humans.” After a few sessions with the bot, it offhandedly mentions a pet seagull, “Zed.” One morning, it prompts me to begin a session with a GIF of a baby hedgehog getting a belly rub. I respond with the most apt of the three preselected replies, a resigned “Oh Woebot.”

When I suggest that Woebot users may not disclose the same sensitive information they would discuss with a traditional (human) therapist, Robinson pushes back. “I do think that you can tell Woebot what you would tell your human therapist, but the way in which the interaction will therefore flow, subsequently, will of course be different,” she says. “We are not a human. We are not designed to be a human. We are purposely a bot.”

Robinson, who holds a PhD in clinical psychology, took to the Woebot Blog last summer to argue for greater awareness of and treatment for employees’ mental health issues. As she explains, an AI tool like Woebot fits within the “scaffolding” businesses created to support their employees, supplementing traditional therapy wherever possible and serving as a free 24-hour resource for anyone who desires help.

“I think because so many of us seek care through our employers, that it’s a great opportunity there to educate the space,” Robinson posits, “to support the space, to provide a solution to the space.” There’s no be-all-end-all solution to a complex set of problems, but an approachable AI tool can help.

Collier agrees. “I think if we can get people accustomed to a higher standard for healthy workplace culture, then that starts to bleed out into how we behave outside of work,” the CEO says. “It starts to bleed out into our daily lives.”

