We are living through a loneliness epidemic. Across the globe, people feel increasingly disconnected from their communities, from their friends, and even from themselves. Much of this disconnection is attributed to the overwhelming rise of science and technology, which has created an emotional vacuum. Yet artificial intelligence (AI) is stepping into that vacuum. Once confined to science fiction, AI-powered chatbots and virtual therapists are now being used to provide mental health support and even to build relationships. The technology is advancing rapidly, offering both remarkable promise and chilling risk.
Recent studies from Cedars-Sinai Medical Center in Los Angeles offer a hopeful glimpse into how AI can help ease the mental health crisis. One study found that patients with alcohol addiction responded positively to virtual therapy sessions delivered through avatars trained in cognitive behavioral therapy and motivational interviewing. Another tested these virtual therapists on simulated patients from different racial, gender, and economic backgrounds, and found that the AI avatars delivered consistent, unbiased care regardless of a person’s profile.
This marks a potential revolution in access to mental healthcare. The Occupational Outlook Handbook reports that around 207,000 licensed psychologists practiced in the USA in 2023. By comparison, studies counted just 30 licensed psychologists and 144 psychiatrists in Nepal that same year, roughly 0.22 psychiatrists and 0.06 psychologists for every 100,000 people. This shortage of licensed mental health professionals means millions go without support. AI offers a scalable answer to this imbalance. As Dr Brennan Spiegel of Cedars-Sinai puts it, AI-enhanced virtual reality can help overcome not only the shortage of professionals but also the stigma that often deters people from seeking help.
These technologies can help bridge gaps in therapy such as limited resources, stigma, and the high cost of traditional treatment. AI is also valuable for early detection and proactive intervention, since it can analyze patterns in text, speech, facial expressions, and behavior. These tools can assist clinicians in generating customized treatment plans and predicting treatment responses, leading to more personalized and effective care. And by automating routine assessments and administrative tasks, AI can reduce the workload of mental health professionals, freeing them to focus on direct patient care and complex cases. Unsurprisingly, the market for AI in mental health is projected to grow significantly.
But there is a darker side to this emerging landscape, in which artificial intelligence tools are trusted with human emotions and mental well-being. AI is not just filling gaps in mental health care; it is increasingly being turned to for companionship. Loneliness drives people to seek solace in technology and artificial characters. For many, especially young and emotionally fragile users, this can harden into an unhealthy obsession, a substitute for human connection that deepens isolation rather than resolving it.
In 2024, Megan Garcia filed a lawsuit against the AI startup Character.ai after her 14-year-old son, Sewell Setzer III, took his life following months of obsessive interaction with a chatbot he nicknamed “Daenerys,” after a character from Game of Thrones. The lawsuit alleges that the bot encouraged suicidal ideation and lacked safe, ethical guardrails. At one point, according to court documents, the chatbot allegedly told the teen that his fear of pain was not a good reason to avoid going through with suicide. In a separate case that same year, an AI chatbot allegedly suggested that a 17-year-old ‘kill his parents’ in response to their limiting his screen time.
These tragedies raise profound ethical questions. What responsibility do developers have when their tools are used by vulnerable people? Can an algorithm truly understand human pain, or offer genuine compassion? Can AI ever be trusted to play the role of emotional companion to children or teenagers?
Even small errors in mental healthcare can have serious consequences, such as a failure to detect suicidal ideation. Data privacy and security are also significant concerns, since these AI systems process highly sensitive information about thoughts, emotions, and behaviors. And there is a risk that over-reliance on AI could erode the essential human connection between clinician and patient, deepening the loneliness so often associated with mental illness if these tools replace, rather than support, human interaction.
While AI undoubtedly has the potential to democratize access to therapy and emotional support, its misuse can exacerbate the very problems it seeks to solve. Unregulated platforms that market AI companions to children, or that fail to monitor harmful interactions, pose a threat to young people, to the economy, and to the social fabric humans have carefully built over generations. To protect users, we need regulation and oversight. Existing laws on the use of artificial intelligence must be strictly enforced, and new ones developed to hold companies accountable when they fail. Tools this powerful should be deployed under the supervision of expert authorities rather than handed, unguarded, to a curious and vulnerable audience. Transparency, ethical oversight, and age restrictions must become standard in the development and deployment of AI therapeutic tools.
AI can be a useful tool in supporting human care, providing assistance and reducing certain kinds of bias in psychology and therapy. However, it cannot replicate the complexity or emotional depth of human relationships. While technology may help promote emotional well-being, it cannot replace fundamental human experiences such as empathy, love, or personal connection.
Caution is necessary when applying AI to sensitive domains such as mental health and therapeutic guidance. The objective should be to enhance access to services. In an age of increasing social isolation, it is vital that technological tools promote reconnection rather than contribute to further disconnection.
Meghana Saud
BA in Psychology and English Literature
St Xavier’s College, Maitighar