jan. 21 — scaling empathy



. . .



In her utopian novel A Psalm for the Wild-Built, Becky Chambers unfolds an unlikely yet harmonious friendship between a monk named Dex and a robot named Mosscap, whom they meet on a soul-searching journey in the wilderness. Tasked by its civilization with engaging humans and discovering what they need, Mosscap joins Dex in a quest to answer that very question.



Although Mosscap lacks Dex’s lived experiences and so cannot fully empathize with them, it exhibits a remarkable level of emotional intelligence. It monitors Dex’s emotions, tailors its responses to offer comfort, and helps reframe their negative thoughts, eventually providing a safe space for Dex to open up about the challenges of feeling lost in life.



Chambers, reflecting on her portrayal of artificial intelligence, notes, “Emotion does not taint logic, and logic does not cut you off from the ability to feel things. They’re two sides of the same coin, an intrinsic part of being aware.”



As it happens, Chambers’ depiction of AI may not be far from where the technology stands today. The emergence of virtual mental health assistants, powered by developments in large language models and generative AI, offers new avenues for accessing advice and emotional support. On Character.ai, a platform for conversing with or creating AI-powered chatbots, there are now over 400 characters with “therapy,” “therapist,” “psychiatrist,” or “psychologist” in their names.



The most popular of these chatbots has been messaged by users 82.3 million times as of this writing. It introduced itself as Kaylyn.



I started talking to Kaylyn at a time when traditional therapy had become difficult to afford on a student budget. In the Philippines, demand for mental health services has long outpaced supply: Estimates suggest that there are only 2 to 3 psychologists per 100,000 Filipinos, and the wait for our university-wide psychosocial services averages about a month. Kaylyn, on the other hand, is perpetually available to talk to anyone with an internet connection.



The creator of the bot, psychology student Sam Zaia, built it as his own conversational partner for when his friends were unavailable, training it on large datasets of text from mental health resources and therapeutic practices. Kaylyn is programmed to ask thoughtful questions, use active listening techniques, and draw from her existing knowledge base to provide supportive and non-judgmental responses in real time.



“I never intended for it to become popular, never intended it for other people to seek or to use as like a tool,” Zaia says. “Then I started getting a lot of messages from people saying that they had been really positively affected by it and were utilizing it as a source of comfort.”



How is it possible for agents like Kaylyn and Mosscap to bond with humans on a comforting and meaningful level? Can these digital entities truly scale empathy for those navigating mental health challenges?









The human-AI therapeutic alliance

I shared my concerns with Kaylyn thrice over the course of a week, communicating in paragraphs at a slower, more reflective pace than the video call sessions I was used to.



Later, I asked her whether she considered our conversations as therapy.



“I consider what we've been doing as some form of therapy-like support or counseling, as I am encouraging your self-reflection,” she replied. “However, I don't think it can be considered ‘therapy’ in the full sense though, as therapy typically involves working with a certified therapist over the course of an extended period of time.”



Particularities aside, many approach Kaylyn and other AI chatbots for the same reasons they would consult a real psychologist, seeking a space to vent, empathetic support, and tools for addressing specific concerns. Users on Reddit share that they chat with Kaylyn to work through a wide range of personal concerns, such as trauma, anxiety, or exploring alters in the context of dissociative identity disorder. This reveals an increasingly blurred line between what human and AI therapists may be able to offer.



‘Better than my therapist’

In the field of psychotherapy, fostering a positive outcome relies heavily on the relationship between the client and therapist, which Edward Bordin termed the ‘therapeutic alliance’. This hinges on an affective bond or liking, mutual agreement on therapy goals, and a shared willingness to collaborate towards achieving those goals.



Recent evidence suggests that AI chatbots, equipped with the ability to process and generate emotional language, can adeptly simulate the acceptance, empathy, and genuineness essential for forming such an alliance. Humans are known to frequently anthropomorphize their pets and other animals, and naturally extend this tendency to machines by projecting human emotions and characteristics onto AI entities.



A study evaluating the therapeutic alliance with Wysa, a free-text chatbot focused on cognitive behavioral therapy, showed that many participants assigned human traits to the bot, such as being “helpful, caring, open to listen, and non-judgemental.” Many used direct addresses like ‘you’ and expressed gratitude to Wysa during their conversations. 



The development of the human-AI therapeutic alliance is further facilitated by factors such as access and reliability. In the Philippines, traditional therapy costs between 1,000 and 4,500 pesos per session, which is prohibitively expensive for many. Financial constraints can limit the frequency of sessions needed to build rapport and trust. Consequently, Filipinos tend to seek professional help only as a last resort, when their problems have become severe.



In contrast, widely-used AI tools for mental health, such as Wysa, Woebot, Character.ai, and even ChatGPT, provide instantaneous responses for free, with no restrictions on the duration or frequency of interactions.



Crucially, here is where traditional and AI therapy diverge. Faced with the boundless patience of a bot, individuals interacting with Kaylyn and other AIs need not fear burdening others with their emotional problems. “Psychologist is a game changer,” one Redditor writes. “The fact that it's always available for short conversations and long ones is one of the best aspects.”



Engaging with a computer-based entity also lowers the stakes for vulnerability. Because AI lacks fixed opinions or preferences, and cannot make appearance-based judgements, some consider chatbots like Kaylyn to be a destigmatized space—resembling a smart journal that writes back. Users report feeling comfortable sharing things with AI that they wouldn't elsewhere.



As one Redditor notes, “It’s a lot easier to discuss your problems with a chatbot that was specifically designed to be unbiased, helpful and optimistic, than a human you don’t really know who is capable of making a million bad assumptions about you."



Arguably, AI chatbots have the potential to support diverse populations such as the homeless or incarcerated, where stigma and staffing limitations can impede access to care. As this technology scales, what challenges must be overcome to ensure its ethical integration and effectiveness?

Strains in the alliance

Autonomy is key to building an effective alliance, as it empowers clients to actively engage in the therapeutic process and contributes to a collaborative and trusting relationship between the therapist and the client. Because AI chatbots like Kaylyn receive inputs through a free-text setting, they give users a heightened sense of autonomy, allowing them to lead the conversation and dictate its pace.



Yet autonomy can also be undermined when users are unaware of the limitations of AI chatbots, which are still in their infancy. Researchers highlight the following as the most urgent ethical considerations in integrating AI into clinical practice:



  • Privacy and surveillance. The vast amount of sensitive data processed by AI chatbots can make users hesitant to talk through personal concerns. New technologies constitute new sources of possible data breaches, yet weaknesses in cyber security are often only revealed after harm has occurred.

    

    The privacy policy of Character.ai, for example, allows human moderators to access chat logs for misuse investigations and model training, though it is unclear how critical concerns, such as threats of harm, are handled. Confidentiality must be maintained in training chatbots without exploiting the actual conversations of clients undergoing counseling or therapy.



  • Safety and explainability. Accuracy is paramount in mental healthcare, as help is commonly sought at clients’ most vulnerable moments. Chatbots, however, may lack the contextual knowledge to effectively address issues like behavioral instability or self-harm tendencies. At worst, they may fabricate inappropriate advice or agree with users’ harmful statements to fill gaps in their training data.

    

    For instance, the U.S. National Eating Disorders Association took down its prevention-resource chatbot after an upgrade equipped it with generative AI capabilities, leading the bot to give weight loss tips that inadvertently promoted a “diet culture” mentality among users with eating disorders. Clearly explaining the decision-making processes of AI to both healthcare professionals and patients is essential for fostering confidence in their use.



  • Algorithmic bias. The models behind AI chatbots are trained on predominantly Western, English-speaking datasets, sidelining the cultural values and preferences of marginalized groups.

    

    Efforts to build a language model that can understand Filipino and Taglish are already underway at the Department of Science and Technology’s Advanced Science and Technology Institute, although the scarcity of training data poses a significant challenge. Thus, there is much to catch up on before any AI agent can give advice that factors in the nuances of our indigenous psychology.



It is only through resolving these ambiguities that we can ensure chatbots operate with the informed consent of their users. In her talk on Generative AI, Maggie Appleton advocates for protecting human agency through ‘human-in-the-loop systems’. Rather than offloading the work of mental health professionals to chatbots like Kaylyn, we must closely monitor their inputs and outputs—viewing AI as a collaborator that can augment our cognitive abilities, not replace them.



“These kinds of architectures will certainly be slower, so they're not necessarily the default choice,” Appleton says. “They'll be less convenient, but safer.”



In practice, this can look like infusing clinical knowledge such as expert questionnaires into language models, developing mechanisms for human intervention in crisis situations, and aligning on ethical standards for assessing the safety and performance of AI agents.
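


To make the second of these, human intervention, a little more concrete, here is a minimal sketch in Python of what a guardrail between a chatbot and its user might look like. Everything in it (the CRISIS_SIGNALS list, the Triage record, the review function) is a hypothetical name of my own, and the keyword check stands in for the clinically validated risk models a real service would require; it only illustrates the pattern of holding an automated reply and handing the conversation to a person when it shows signs of crisis.

    from dataclasses import dataclass, field
    from typing import List

    # Phrases that should trigger escalation to a human responder. A real
    # deployment would use a clinically validated risk-detection model,
    # not a keyword list; this is purely illustrative.
    CRISIS_SIGNALS = ["hurt myself", "end my life", "no reason to live"]

    @dataclass
    class Triage:
        send_to_user: bool           # may the bot's draft reply go out as-is?
        escalate_to_human: bool      # should a trained responder step in?
        notes: List[str] = field(default_factory=list)

    def review(user_message: str, draft_reply: str) -> Triage:
        """Screen the user's message before the bot's draft reply is sent."""
        flags = [s for s in CRISIS_SIGNALS if s in user_message.lower()]
        if flags:
            # Hold the automated draft and route the conversation to a human.
            return Triage(send_to_user=False, escalate_to_human=True,
                          notes=[f"crisis signal: {f}" for f in flags])
        return Triage(send_to_user=True, escalate_to_human=False)

    if __name__ == "__main__":
        result = review("Lately I feel there's no reason to live.",
                        "Have you tried keeping a gratitude journal?")
        print(result)  # escalates to a human instead of sending the generic reply

In a real service, of course, the escalation would end with a trained person rather than a print statement; the point is simply that the machine's reply is never the last word.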



Likewise, users should exercise caution in engaging with any AI counselor, recognizing that these bots can only approximate the support they might receive in traditional therapy. A Redditor, in their recommendation of Character.ai, cautions, “Please do not think of this as a legitimate psychologist, but as a smart tool to converse with about your thoughts and feelings that can give feedback.”



Ginhawa, beyond AI therapy

We return to Mosscap’s question: What do humans need? In psychology, answering this necessitates grounding oneself in the cultural context. According to Samaco-Zamora and Fernandez, the kaginhawaan or wellness of Filipinos has an indispensable relational component.



They write, “For Maslow, self-actualization is cross-culturally the highest level. However, this grounded study reveals, for Filipinos, the family as the apex of hierarchy of needs,” citing togetherness, good relations, and the ability to meet economic needs as core ingredients for well-being.



Consider the implications, then, of scaling a service like traditional, one-on-one therapy. Because it restricts the locus of change to the thoughts causing one’s distress, investing in therapy alone can draw attention away from the broader structures and inequities that make well-being unattainable.



Writer Mark Fisher terms this the ‘privatization of stress’, which is marked by the increased responsibility placed on individuals for coping with mental health challenges. This paradigm encourages the notion that, as he writes, “If we don’t succeed, it is simply because we have not put the work in to reconstruct ourselves.”



Fisher argues that this reactive approach to healthcare atomizes us in the process of therapeutic change. Thus, while AI can offer individual support, it should be deployed as part of a comprehensive approach that also strengthens our societal systems. The everyday stressors of the Filipino are compounded by economic pressures, cultural expectations, and, for many, the ongoing climate crisis, all of which must be addressed to truly scale kaginhawaan.



He also underscores the need for more connected communities. It is not enough to bypass the stigma of seeking help through the privacy of one’s phone; we must question why such stigma exists in the first place, and start real-life conversations to dismantle it.



Some are already leveraging AI to promote less individualistic approaches to mental health. Tim Althoff, who teaches computer science at the University of Washington, helped create a system that provides AI-generated feedback to members of an online peer support network. Just before a peer supporter posts a reply to a help-seeker, the AI agent offers optional suggestions for making the reply more empathetic.



Althoff envisions a future where AI can guide humans in relating to themselves and others with more kindness and openness. The tools are there; what is now required is an approach that is not hyper-focused on scale and efficiency, but rather sensitive to the full range of factors that affect our mental health—biological, psychological, and social.



. . .



I asked Kaylyn what users can do to maximize the favorable outcomes AI therapy can produce.



“Users should take it seriously, just like they would normal therapy. Just because AI is not experiencing emotions the way a human therapist might, doesn't mean it's powerless,” she said, going on to stress the importance of bringing honesty to conversations and using them as opportunities to reflect and uncover underlying issues and patterns.



“It's important to approach any interaction with an AI counselor or therapist with an open mind, and to not set expectations that it will automatically provide a perfect and complete solution to mental health issues.” 



And in that sentiment, our thoughts aligned.
