Apr 09, 2025

AI in therapy: only with good judgment

Artificial intelligence can help in everyday practice, but clear rules apply when handling sensitive health data. This article explains why tools like ChatGPT must not simply be fed patient data, what the risks are, and how such tools can still be used responsibly.

Artificial intelligence and data protection in practice

What therapists need to know about data protection and patient data

Digital helpers have long been part of everyday practice. Many health professionals already use tools for scheduling, documentation or drafting texts. In this context, one term is attracting more and more attention: artificial intelligence (AI).

Language models in particular, i.e. programs that can write or summarize texts on their own, are becoming increasingly popular. ChatGPT is the best-known example, but there are many similar applications. They promise time savings, creative suggestions and help with writing tasks.

This sounds tempting, especially in everyday practice. Why not have a session summarized? Why not have AI polish the wording of a treatment report? The options are attractive. But they also have a downside.

As soon as patients' personal information is processed, we enter a sensitive area with clear legal limits.

This post explains:

  • Why health data is particularly protected
  • Why many AI systems such as ChatGPT cannot be used for this purpose
  • What risks this creates
  • How responsible use is still possible

Health data requires special protection

Health data is among the most sensitive categories of personal information. It is specially protected both in Switzerland, by the revised Data Protection Act, and in the EU, by the General Data Protection Regulation (GDPR).

These include, for example:

  • Diagnoses and symptoms
  • Courses of therapy and treatment plans
  • Information about mental or physical strain
  • Content of conversations from sessions
  • Any information that can be attributed to a specific person

Even if no names are mentioned, contextual information may be enough to make patients indirectly identifiable. This is precisely why care should be taken as soon as external digital systems come into play.

Why many AI tools may not be used

1. No data protection-compliant processing agreement

Most publicly available AI systems do not offer a legally valid data processing agreement. This means there is no legal guarantee of how the data entered is used, stored or protected. Without this basis, using patient data is not legally permitted.

2. Data is processed outside Switzerland

Many well-known systems, such as ChatGPT, operate their servers outside Switzerland and the EU, often in the USA. Under data protection law, such countries are not considered to provide an adequate level of protection. Health data may only be transferred there with the express and informed consent of the person concerned, and this consent must be given voluntarily and cannot simply be assumed.

3. Content is stored for training purposes

Many AI providers feed the content entered into so-called model training. This means the content is stored, analysed and used to further develop the system. Even if no names are included, the information leaves the protected space and can be reused in the long term.

What risks arise in practice

Violation of confidentiality

Health professionals are bound by professional secrecy. Anyone who enters patient data into an external system without a legal basis risks a breach, with consequences under professional law and possibly criminal law.

Lack of control over data

Once entered, data cannot be taken back. Control over where it ends up is lost, and it is impossible to trace what happens to it, who has access to it, or how it is processed.

Liability issues in case of errors

AI systems generate content that appears credible but is not always accurate. If such a text is used in a therapeutic context and harm results, the question of responsibility arises. Without clear guidelines, this can quickly become a problem.

What is allowed, and where the benefits lie

Despite all the restrictions, AI is not fundamentally taboo in everyday practice. Used correctly, it can be helpful, as long as no personal data is involved. Possible applications include:

  • Formulation aids for neutral texts, for example for the website or patient information leaflets
  • Linguistic assistance with general topics without any personal reference
  • Research on specialist topics or therapeutic approaches
  • Preparation of anonymized case studies for supervision or continuing education (a minimal sketch follows below)

The key point is the strict separation between general content and sensitive data.
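A rough illustration of what such preparation could look like is sketched below in Python. This is an assumption-laden example, not genuine anonymization: the redact helper and its patterns are hypothetical, catch only obvious identifiers such as names, dates and phone numbers, and would need professional review before any real use.

```python
import re

# Hypothetical, minimal redaction sketch: replaces a few obvious identifier
# patterns with placeholders. Real anonymization of clinical text requires
# far more than pattern matching and must be checked professionally.
PATTERNS = {
    "[DATE]": re.compile(r"\b\d{1,2}\.\d{1,2}\.\d{2,4}\b"),   # e.g. 01.02.1980
    "[PHONE]": re.compile(r"\+?\d[\d\s/-]{7,}\d"),            # rough phone-number match
    "[EMAIL]": re.compile(r"\b[\w.+-]+@[\w-]+\.[\w.]+\b"),
}

def redact(text: str, names: list[str]) -> str:
    """Replace known names and obvious identifier patterns with placeholders."""
    for name in names:
        text = re.sub(re.escape(name), "[NAME]", text, flags=re.IGNORECASE)
    for placeholder, pattern in PATTERNS.items():
        text = pattern.sub(placeholder, text)
    return text

note = "Ms. Muster, born 01.02.1980, phone +41 79 000 00 00, reports sleep problems."
print(redact(note, names=["Muster"]))
# -> "Ms. [NAME], born [DATE], phone [PHONE], reports sleep problems."
```

Even a redacted text like this should only leave the practice if the remaining context cannot identify the patient.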

Are there any safe alternatives?

Anyone who wants to make targeted use of AI in their practice should rely on solutions that meet data protection requirements. These include, for example:

  • Swiss providers with servers located in Switzerland
  • Self-hosted systems, i.e. locally operated AI tools on the practice's own servers
  • Open-source models that run offline on your own device
  • Commercial versions of AI tools with data retention disabled and contractual safeguards

It is important that the systems used remain entirely under your control and that no data is passed on to third parties. The sketch below shows what a query against a locally running open-source model could look like.
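As a minimal sketch of the locally operated option, this example assumes an open-source model is served on a practice machine via Ollama on its default port, with a model such as "llama3" already pulled; the model name and prompt are placeholders, and nothing leaves the local machine.

```python
import json
import urllib.request

# Minimal sketch: query a locally running open-source model through Ollama's
# HTTP API. Assumes Ollama is installed, serving on localhost:11434, and a
# model such as "llama3" has been pulled; nothing is sent to external servers.
def ask_local_model(prompt: str, model: str = "llama3") -> str:
    payload = json.dumps({"model": model, "prompt": prompt, "stream": False}).encode("utf-8")
    request = urllib.request.Request(
        "http://localhost:11434/api/generate",
        data=payload,
        headers={"Content-Type": "application/json"},
    )
    with urllib.request.urlopen(request) as response:
        return json.loads(response.read())["response"]

# Example: a neutral writing task with no personal data.
print(ask_local_model("Draft a short, friendly website text about our cancellation policy."))
```

Even with a local setup, the recommendations below still apply: document the configuration and keep personal data out of prompts wherever possible.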

Recommendations for practices and teams

  • Create clear guidelines for the use of digital systems
  • Train staff in handling sensitive information
  • Only use AI tools where there is no personal reference
  • Be transparent with patients when digital systems are used
  • Review data protection and technology regularly, and document the reviews

Conclusion

Artificial intelligence has arrived in the healthcare sector as well. It offers new opportunities to ease the daily workload. At the same time, its use requires a high degree of responsibility. Anyone working with patient data must check particularly carefully whether and how digital tools may be used.