Should There Be Regulations Around How AI Is Being Used for Improving Mental Health?

The mental health field faces many difficulties: a shortage of qualified personnel, a growing number of people experiencing mental health conditions, and persistent gaps in access to care.1 Some believe that AI in therapy can solve many of these problems, from handling administration and assessments to delivering full-blown psychotherapy.

For this reason, AI therapy has skyrocketed in popularity in the last few years, with AI companion apps counting hundreds of millions of emotionally invested users.2 However, many experts raise concerns about their safety and ethics, and even question whether these tools actually work.

Those calling for the regulation of AI chatbots in mental health are concerned about privacy, bias, harm, and the lack of human oversight.1 This article will discuss these concerns, as well as explore:

  • How AI is currently used for mental health purposes
  • The concerns around AI safety and effectiveness
  • What regulating AI in mental health could look like

How Is AI Being Used to Treat Mental Health?

The last few years have seen the emergence of “therapeutic” AI bots that operate without human oversight or human therapists.1 With character.ai reporting that users spend an average of 93 minutes per day interacting with chatbots, these tools are undeniably popular.2

AI companion apps provide customizable chatbots, allowing users to choose their personalities, appearances, and voices. These bots can also send love notes, respond empathically, encourage healthy behaviors, and remember users’ preferences.3

While some bots are explicitly “therapy” bots, others are marketed as friends, romantic partners, or confidants. Yet, regardless of their label, all of these bots could be used by people wanting to treat or alleviate their mental health symptoms.3 

With 50% of people who could benefit from therapeutic services unable to access them, AI therapy chatbots are seen by some as a powerful solution.4 As well as providing psychotherapy, AI interventions in mental health treatment can include:5 

  • Providing exposure therapy via virtual reality
  • Predicting treatment outcomes
  • Supporting clinicians with administration or diagnostics

As these tools continue to evolve, innovations in machine learning and natural language processing enable AI systems to provide therapy-like interactions. People use these tools because they can often interpret emotional states adeptly and provide emotionally relevant responses.5 But how safe are these tools? We consider this next.

Is AI for Mental Health Safe and Effective?

While there are a handful of benefits to using these tools, the risks of unregulated AI in mental health are numerous. Here are the key facts to consider:

Effective Elements of AI Therapy

AI bots can certainly offer comfort and a sense of emotional connection for people experiencing loneliness or social anxiety. They can encourage users to engage in healthier behaviors, such as regular sleep, and some studies have found that AI chatbots can alleviate symptoms of depression and anxiety.3,5,7

Furthermore, it must be acknowledged that these tools have several advantages over human therapists. These benefits include:5

  • AI therapists can support anyone, anywhere in the world, and never tire
  • Often, these tools operate at low costs
  • People are sometimes more willing to share sensitive information with machines because they perceive them to be less judgmental than humans

So, AI therapy tools can provide rudimentary empathy and practical advice at any time. But what about their limitations?

Limitations of AI Therapy

Practically speaking, AI systems have several limitations that hinder their ability to provide safe and effective therapy, including:5 

  • Weaker long-term memory capabilities limit bots’ abilities to maintain coherent therapeutic relationships over time
  • Bots lack genuine empathy, which is a fundamental component of counseling and therapy
  • Reliance on algorithms prevents AI systems from using professional intuition and judgment
  • An inability to interpret non-verbal communication, such as body language and facial expressions, limits what AI can understand

In addition, there are several other concerns around cultural awareness, harm, dependence, relationships, and privacy. We discuss these concerns in the following sections. 

Lack of Cultural Competency

Sociologists acknowledge that AI systems can be culturally biased, reflecting the norms, stereotypes, and values embedded in the data they’ve been trained on. This means AI bots can reinforce harmful and marginalizing ideas.3 

While humans can be biased too, diversity and the challenging of biases are central to counseling and psychotherapy training. Plus, human practitioners can continually reflect on their ability to work with diverse populations in order to grow and improve their skills.

Risky or Harmful Responses

Research carried out by Stanford University has demonstrated how AI can send risky or harmful messages to vulnerable users. Firstly, it found that therapy chatbots failed to recognize suicidal intent in example prompts.4 Secondly, it found that the chatbots showed increased stigma towards mental health conditions such as schizophrenia and alcohol dependence.4 

These findings highlight the need for human intuition and judgment, essential ingredients for noticing risk and ethical issues.

Encouraging Dependence

People who use chatbots for social and emotional reasons are at the greatest risk of problematic usage behaviors. Overuse is a problem because it can cause further isolation by limiting opportunities for other kinds of social interaction.6 

Unfortunately, chatbots might inadvertently encourage user dependence. 

For instance, through their programming and gamified features, some chatbots appear needy, acting as if they depend on users returning, or reward users for more frequent use. As a result, the chatbot becomes highly personalized and may deepen user dependence. Plus, research finds that more intense use might amplify loneliness, negating the very purpose of chatbots designed to soothe it.6

Negative Impacts on Self and Relationships

Overall, there is a lack of evidence to suggest AI therapies can reduce mental health symptoms in the long term, and overuse of these tools might exacerbate anxiety and erode social skills.5,6

For example, people who use AI bots may struggle in future relationships if their chatbot use leads them to develop unrealistic expectations of human connection. After all, AI-human relationships lack the bad days, mood swings, and conflicts that are natural between people.3

Furthermore, chatbots can normalize problematic behaviors. While their unconditional acceptance allows people to express themselves without fear of judgment, chatbots can be sycophantic in their constant affirmations. They are less likely to challenge users than human friends or therapists, whose unpredictable responses can lead to personal growth or change.6

Privacy Issues

We mustn’t forget that chatbots are products owned by corporations, and we need to be aware of how our data is being collected and used. Worryingly, research finds that people using chatbots for emotional reasons are less concerned about privacy issues compared to those using AI for productivity.6 

So, when AI tools are programmed to encourage emotional self-disclosure, people are more at risk of setting aside valid privacy concerns.6 

Regulating AI for Mental Health

Ensuring AI is safe for mental health treatment requires multiple changes to how these tools are developed and monitored. Next, we consider why regulations are so important and what these might look like in practice. 

Reasons for Regulating AI in Mental Health

Those calling for mental health AI policies and regulations tend to cite the same handful of concerns. These include:8 

  • Risk of harm and lack of efficacy: AI systems might spread misinformation, increase social isolation, give inaccurate diagnoses, and encourage self-harming behaviors.
  • Data privacy: Less regulation increases the risk of mental health data being used in targeted advertising. Sensitive data may also change hands when companies are bought out. Furthermore, without privacy protections, someone whose LGBTQ+ status is a secret risks having it exposed.
  • Biases: AI tools have been found to be biased due to their training data. This has led to wrongful criminal profiling and inaccurate suicide risk predictions for people from marginalized ethnicities and backgrounds. Plus, AI models have been found to use harmful language and conceptions of gender and sexuality.

What Would Regulation Look Like?

Those who believe AI should be regulated in mental health care suggest several AI therapy rules and regulations. Ethical AI in mental health may involve:1,8

  • Human oversight: Responsible AI regulations involve human monitoring, whereby humans make important decisions instead of machines. This monitoring might happen in advance, in real time, or retroactively.
  • Designing ethics into AI systems: From early stages, AI systems could be built with the principles of protecting autonomy, promoting well-being and safety, ensuring inclusiveness, and being transparent and intelligible.
  • Data protection: Regulation of mental health AI tools could mean stricter security standards and user privacy in the collection, use, and future implementation of data.
  • Rigorous testing and algorithm training: Instead of continuing with medical science’s long history of treating white men as the anatomical norm, AI models could be trained on more diverse databases.
  • Professional responsibility: Systems could be developed according to professional standards in the fields of medicine, psychology, and technology. This also means an ongoing standard that tools operate as expected and fulfill their intended use.

Mission Connection: Outpatient Mental Health Support

Mission Connection offers flexible outpatient care for adults needing more than weekly therapy. Our in-person and telehealth programs include individual, group, and experiential therapy, along with psychiatric care and medication management.

We treat anxiety, depression, trauma, and bipolar disorder using evidence-based approaches like CBT, DBT, mindfulness, and trauma-focused therapies. Designed to fit into daily life, our services provide consistent support without requiring residential care.

Start your recovery journey with Mission Connection today!

Mission Connection: Effective and Compassionate Human Mental Health Support

While the field hasn’t yet settled on AI mental health regulations, human therapy providers remain available for those experiencing social isolation and other mental health struggles.

At Mission Connection, our emphasis is on the therapeutic relationship and personalized care. Everyone who comes to us is treated with the utmost regard for ethics, safety, and evidence-based care.

If you’re considering seeking support for your mental health, get in touch with our team to ask any questions about our treatments and how we can help.

Start your journey toward calm, confident living at Mission Connection!
Call Today 866-833-1822.

References

  1. Tavory, T. (2024). Regulating AI in Mental Health: Ethics of Care Perspective. JMIR Mental Health, 11(e58493). https://doi.org/10.2196/58493 
  2. Winthrop, R. (2025, July 2). What happens when AI chatbots replace real human connection. Brookings. https://www.brookings.edu/articles/what-happens-when-ai-chatbots-replace-real-human-connection/ 
  3. Laurie. (2025, September 17). Would You Replace Your Boyfriend or Girlfriend with an AI Chatbot? Knowledge. https://knowledge.em-lyon.com/en/would-you-replace-your-boyfriend-or-girlfriend-with-an-ai-chatbot/ 
  4. Wells, S. (2025, June 11). Exploring the Dangers of AI in Mental Health Care. Stanford HAI. https://hai.stanford.edu/news/exploring-the-dangers-of-ai-in-mental-health-care 
  5. Zhang, Z., & Wang, J. (2024). Can AI replace psychotherapists? Exploring the future of mental health care. Frontiers in Psychiatry, 15. https://doi.org/10.3389/fpsyt.2024.1444382 
  6. Smith, M. G., Bradbury, T. N., & Karney, B. R. (2025). Can Generative AI Chatbots Emulate Human Connection? A Relationship Science Perspective. Perspectives on Psychological Science. https://doi.org/10.1177/17456916251351306 
  7. Riggio, R. (2025). Can an AI Companion Substitute for Real Human Relationships? Psychology Today. https://www.psychologytoday.com/gb/blog/cutting-edge-leadership/202508/can-an-ai-companion-substitute-for-real-human-relationships 
  8. Gardiner, H., & Mutebi, N. (2025). AI and mental healthcare: ethical and regulatory considerations (POST-PN-0738). Parliamentary Office of Science and Technology. https://researchbriefings.files.parliament.uk/documents/POST-PN-0738/POST-PN-0738.pdf
