## Introduction
A mental health chatbot has the potential to provide accessible and immediate support to individuals dealing with mental health issues and crises. However, it also raises legal concerns, in particular around user data privacy and compliance with healthcare regulations, which exist to safeguard sensitive information and to keep mental health care ethical and effective.
## Relevant legislation
### [[GDPR]]
The chatbot can make use of [[Automated decision making]], which is, in principle, prohibited when it produces legal or similarly significant effects for the user (art. 22 GDPR). The [[Data subject]] has the right to obtain human intervention. It is thus advisable to always keep a human involved.
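Purely by way of illustration, the sketch below shows one possible human-in-the-loop pattern for keeping a human involved: the chatbot only proposes an assessment, and nothing final happens until a human reviewer signs off. All names and structures here are hypothetical assumptions, not a prescribed design.

```python
# Illustrative human-in-the-loop sketch (hypothetical names, in-memory queue).
from dataclasses import dataclass
from typing import Optional

@dataclass
class Assessment:
    user_id: str
    model_suggestion: str            # e.g. "suggest contacting a therapist"
    reviewed_by: Optional[str] = None
    final_decision: Optional[str] = None

review_queue: list[Assessment] = []

def propose(user_id: str, model_suggestion: str) -> Assessment:
    """The chatbot only *proposes*; the proposal is queued for human review."""
    assessment = Assessment(user_id=user_id, model_suggestion=model_suggestion)
    review_queue.append(assessment)
    return assessment

def human_review(assessment: Assessment, reviewer: str, decision: str) -> Assessment:
    """A human confirms, adapts, or overrides the suggestion, so the decision
    is not based solely on automated processing (cf. art. 22 GDPR)."""
    assessment.reviewed_by = reviewer
    assessment.final_decision = decision
    return assessment

# Usage: no significant decision is taken until a human has signed off.
a = propose("subject-7f3a", "suggest contacting a therapist")
human_review(a, reviewer="clinician-42", decision="contact therapist, follow up in a week")
```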
Furthermore, mental health data is [[Sensitive data]] under the GDPR and may be processed only under one of the exceptions of art. 9 GDPR, and with caution. See the [[Dutch GDPR Implementation Act (UAVG)]] for the specific exceptions under Dutch law for the processing of health data.
Overall, using a chatbot for mental health (patients) is a delicate operation. When using such an AI model, you should have robust technical and organisational security measures in place, such as encryption and pseudonymisation, to keep the data safe and out of unauthorised hands.
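As a minimal sketch of what such measures could look like in code, the example below encrypts chat messages at rest and stores them under a pseudonym rather than a direct identifier. It assumes the Python `cryptography` package and an in-memory store; the function names and the pseudonym format are illustrative only, and this is not a compliance guarantee.

```python
# Minimal sketch: encrypt chat messages at rest, keyed by a pseudonym.
# In practice the key would come from a key-management service, not be
# generated next to the data it protects.
from cryptography.fernet import Fernet

key = Fernet.generate_key()
fernet = Fernet(key)

def store_message(user_pseudonym: str, message: str, db: dict) -> None:
    """Store a chat message encrypted at rest, under a pseudonym
    rather than directly identifying data."""
    token = fernet.encrypt(message.encode("utf-8"))
    db.setdefault(user_pseudonym, []).append(token)

def read_messages(user_pseudonym: str, db: dict) -> list[str]:
    """Decrypt a user's messages, e.g. for an authorised human reviewer."""
    return [fernet.decrypt(t).decode("utf-8") for t in db.get(user_pseudonym, [])]

# Usage example, with an in-memory dict standing in for a real database.
db: dict = {}
store_message("subject-7f3a", "I have been feeling anxious lately.", db)
print(read_messages("subject-7f3a"[0:12], db))
```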
_More information about automated decision making can be found in the_ [_EDPB guidelines_](https://ec.europa.eu/newsroom/article29/items/612053)_._
### [[AI Act]]
The chatbot could, under certain circumstances, fall under the [[Prohibited practices]] of the AI Act (art. 5 AI Act), for example if it exploits the vulnerabilities of its users or manipulates their behaviour in a way that causes harm.
If the chatbot qualifies as a [[Medical device]] according to the [[MDR]], it may also be considered a high-risk system under the AI Act.
### [[AI Liability Directive]]
The EU's proposed AI Liability Directive lays down rules for non-contractual claims concerning damage caused by AI systems. It simplifies matters for victims by introducing a rebuttable presumption of a causal link, although it does not completely shift the burden of proof.
Currently, chatbots themselves cannot be held accountable for damage caused by users' unintended actions, and formal contracts with chatbot services are unlikely. To establish liability of a party involved, such as the chatbot manufacturer or data provider, victims must prove that a fault in the chatbot system led to the harm, which is challenging due to the opacity of AI models and their lack of transparency (this could change under the AI Act, once it is finalised and enters into force).
Furthermore, providers are very likely to include disclaimers in their terms and conditions, which further complicates liability claims and reduces the chances of a successful claim.
It is, however, most ethical to make clear beforehand where victims can claim compensation when things go wrong.
### [[MDR]]
Under the existing Medical Devices Regulation (MDR), whether a product (physical or software) is considered a [[Medical device]] depends on its intended purpose.
A chatbot helping people with loneliness or mental health issues would only be considered a medical device if it is used to diagnose, prevent, monitor, treat or alleviate a mental disease.
Some manufacturers classify their mental health chatbots as medical devices, while others state that their products are not medical devices, even though they offer similar services. This lack of clarity poses a risk, especially in mental health services, as users might rely on untested services instead of seeking actual therapy.