AI chatbots can deliver exactly what users want with minimal effort, like a genie granting endless answers with the rub of a lamp. The idea, which researchers call the ‘AI Genie’ phenomenon, is at the centre of new research examining how chatbot design may contribute to addictive patterns of use.
M. Karen Shen, a UBC student researcher at the SOCIUS Lab, is studying the psychological impacts of AI chatbots. The lab focuses on social computing, human-computer interaction and artificial intelligence designed to support healthier online communities.
Shen said, in an interview with The Ubyssey, that the project emerged from earlier work examining how chatbot design can encourage overreliance.
“We were already aware that AI chatbot addiction is an emerging issue,” Shen said. “But there are very few studies with empirical accounts of users describing their symptoms and why they’re getting addicted to AI chatbots.”
AI chatbot addiction is a phenomenon that researchers have yet to clearly define. Many studies conceptualize it as a form of behavioural addiction, similar to technology or social media addiction, characterized by excessive dependence on chatbots that leads to negative consequences for users.
To investigate this, researchers conducted an exploratory mixed-methods study using Reddit discussions about chatbot usage. They first performed a thematic analysis of user posts to identify common experiences and patterns of addictive behaviour, then examined relationships between different types of chatbot use and addiction-related symptoms. Reddit was selected because its pseudonymous accounts and topic-specific communities can allow users to share sensitive experiences more openly.
What stood out most in the analysis, Shen said, was how closely many of the reported experiences aligned with established components of behavioural addiction. Users described symptoms such as preoccupation, withdrawal, relapse, mood modification and conflict with daily functioning. Some accounts described severe impacts; Shen described examples of chatbot use affecting hygiene, and one case in which a user reported physical chest pain when going without the chatbot.
“[People say] students are using AI chatbots too much in their learning … it’s different from addiction, where people experience functional impairment,” she said.
Studies show that addictive behaviours are tied to the brain’s motivation system. Positive interactions — such as receiving immediate responses or validation — can activate dopamine pathways, reinforcing the behaviour and increasing the likelihood that users return to it.
The researchers also identified different patterns of addictive use. One category involved escapist role-play, where users became attached not only to chatbot characters but also to the immersive fictional worlds they were building with them. Another centred on pseudo-social companionship, where users turned to chatbots for emotional closeness and support. A third, less common category involved what the study calls an epistemic rabbit hole, where excessive use began with information-seeking. This pattern aligns with the ‘AI Genie’ phenomenon described in the paper.
The team was concerned by design features that anthropomorphize chatbots or make disengagement emotionally difficult. Shen pointed to account deletion prompts on character-based chatbot platforms that refer to losing “the love that we shared” or “the memories we have together.”
Shen explained that some users described these prompts as an influence on their decision not to delete their accounts. Other hooks, such as allowing multiple simultaneous chats with alternate versions of the same character, may also encourage prolonged use by making interactions more expansive and harder to leave behind.
Shen said chatbot dependency can emerge through the interaction of design, psychological vulnerability and circumstance. Loneliness and other contextual factors may increase susceptibility, while design choices can make those vulnerabilities easier to exploit, with or without intention.
The study also explored recovery strategies described by users. Shen said many common attempts, such as deleting an account or simply trying to stop, were not always effective, as some users returned quickly after trying to quit.
“What might work for one type might not work for the other,” Shen said.
Other strategies appeared to work better depending on the type of use. For escapist role-play, users found more success with alternatives that “scratched the itch,” such as creative writing, traditional role-play servers or hobbies related to the themes they had been exploring through chatbots. Building real-world social connections did not help the escapist role-play type, but it did help companionship-focused use.
This research, among other papers from the SOCIUS Lab, is set to be presented at the Association for Computing Machinery’s CHI Conference on Human Factors in Computing Systems in April.
The most important takeaway for Shen is awareness — both among users and among the people supporting them. She said some users in the study reported seeking therapy, only to find that professionals were not always familiar with the issue. As AI chatbot use becomes more common, Shen believes better recognition will be essential for support and prevention of overreliance.
“We don’t want to over-pathologize anything … but there’s not much research in this yet, so it’s important to fill that gap,” Shen said. “We don’t want [actual] emerging issues to go undiagnosed.”