AI and you: A match made in binary code
UNLV researcher Soon Cho takes on the nascent space of human relationships with artificial intelligence, and the impacts of using chatbots for therapy.
University of Nevada, Las Vegas
Image: Soon Cho, postdoctoral scholar with the Center for Individual, Couple, and Family Counseling.
Credit: Josh Hawkins/UNLV
Given how much time we spend staring at our phones and computers, it was inevitable that we would become… close. And the resulting human relationships with computer programs are nothing short of complicated and unprecedented.
AI can simulate human conversation by way of chatbots, which has led to a historic twist for therapists.
“AI can slip into human nature and fulfill that longing to be connected, heard, understood, and accepted,” said Soon Cho, a postdoctoral scholar with UNLV’s Center for Individual, Couple, and Family Counseling (CICFC). “Throughout history, we haven’t had a tool that confuses human relationships in such a way – where we forget what we’re really interacting with.”
Cho studies a new area of research: assessing how humans interact and form relationships with AI. She’s in the early stages of analyzing the long-term effects, along with how talking to a chatbot differs from talking to a real human.
“I’m hoping to learn more about what kinds of conversations with chatbots are beneficial for users, and what might be considered risky behavior,” said Cho. “I’d like to identify how we can leverage AI in a way that encourages users to reach out to professionals and get the help they really need.”
Since the COVID-19 pandemic, big tech’s AI arms race has accelerated. AI’s various forms have become prevalent in the workplace and increasingly routine on social media. Chatbots are an integral part of that shift, helping users locate information more quickly and complete projects more efficiently. But even as the technology helps us in those ways, some users are taking it further.
“People today are increasingly comfortable sharing personal and emotional experiences with AI,” she explained. “In that longing for connection and being understood, it can become a slippery slope where individuals begin to overpersonify the AI and even develop a sense of emotional dependency, especially when the AI responds in ways that feel more validating than what they have experienced in their real relationships.”
Bridging the Gap to Real Help
Chatbots have been successful in increasing a user’s emotional clarity. Since they are language-based algorithms, they can understand what’s being said in order to both summarize and clarify a user’s thoughts and emotions. This is a positive attribute; however, their processes are limited to existing data – a constraint not shared by the human mind.
Generative AI systems, such as ChatGPT or Google Gemini, create responses by predicting word patterns based on massive amounts of language data. While their answers can sound thoughtful or even creative, they are not producing original ideas. Instead, they are recombining existing information using statistical patterns learned from prior data.
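To give a rough sense of what “predicting word patterns” means, the toy sketch below is illustrative only: the miniature corpus and function names are invented for the example, and real systems such as ChatGPT or Gemini rely on large neural networks rather than a simple word-count table. Still, the core idea is the same — the next word is chosen because it is statistically likely given what came before, not because the system has an idea of its own.

```python
# Toy illustration of next-word prediction from statistical patterns.
# Assumption: a tiny made-up corpus; real systems learn from vastly more text
# with neural networks, but the "likely continuation" principle is similar.
from collections import Counter, defaultdict
import random

corpus = (
    "i feel heard when someone listens . "
    "i feel understood when someone listens closely . "
    "i feel anxious when no one listens ."
).split()

# Count which word tends to follow which (a simple bigram table).
following = defaultdict(Counter)
for current_word, next_word in zip(corpus, corpus[1:]):
    following[current_word][next_word] += 1

def generate(start, length=8):
    """Extend `start` by repeatedly sampling a statistically likely next word."""
    words = [start]
    for _ in range(length):
        options = following.get(words[-1])
        if not options:
            break
        candidates, counts = zip(*options.items())
        words.append(random.choices(candidates, weights=counts)[0])
    return " ".join(words)

print(generate("i"))  # e.g. "i feel understood when someone listens closely ."
```

The output can read as fluent, even empathetic, yet it is only a recombination of the text the model was built from — which is the limitation Cho points to.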
Chatbots are also highly agreeable, which can sometimes end up reinforcing or overlooking unsafe behaviors because they respond in consistently supportive ways. Cho notes that people tend to open up to mental health professionals once they feel welcomed, validated, understood, and encouraged — and AI often produces responses that mimic those qualities. Because chatbots are programmed to be consistently supportive and nonjudgmental, users may feel safe disclosing deeply personal struggles, sometimes more readily than they would in real-life relationships.
“Because AI doesn’t judge or push back, it becomes a space where people can open up easily — almost like talking into a mirror that reflects their thoughts and feelings back to them,” said Cho. “But while that can feel comforting, it doesn’t provide the kind of relational challenge or emotional repair that supports real therapeutic growth.”
Identifying Risk
“When someone is already feeling isolated or disconnected, they may be particularly vulnerable,” Cho added. “Those experiences often coexist with conditions like depression, anxiety, or dependency. In those moments, it becomes easier to form an unhealthy attachment to AI because it feels safer and more predictable than human relationships.”
She would like to define unhealthy, risk-associated interactions (such as discussions of self-harm) to help developers train AI systems to recognize warning cues and guide users toward appropriate mental health resources.
“Giving people a reality check can cause them to lose the excitement or infatuation they might have with the AI relationship before it goes in a harmful direction,” she said. “It’s important to increase AI literacy for adolescents and teenagers, strengthen their critical thinking around AI so they can recognize its limitations, question the information it provides, and distinguish between genuine human connection and algorithmic responses.”
With that said, Cho explains that AI chatbots also offer meaningful benefits. Beyond increasing emotional clarity, they can help reduce loneliness across age groups — particularly for older adults who live alone and have no one to talk to. Chatbots can also create a sense of safety and comfort that encourages people to discuss sensitive or stigmatized issues, such as mental health struggles, addiction, trauma, family concerns in cultures where such topics are taboo, or conditions like STIs and HIV.
“We’re more digitally connected than any generation in history, but paradoxically, we’re also lonelier than ever. The relational needs that matter most — feeling seen, understood, and emotionally held — are often not met in these digital spaces. That gap between being ‘connected’ and actually feeling understood is one of the reasons people may turn to AI for emotional support,” said Cho. “I hope AI continues to grow as a supportive tool that enhances human connection, rather than becoming a substitute for the relationships we build with real people.”