
How well can we treat ourselves with AI?

That’s the question Bastien von Wyss, Jeannet François, and Myriam Semaani tried to answer by submitting three scenarios raising health issues to four chatbots. I had the pleasure of taking part in this “blind” evaluation with my colleagues Jean Gabriel Jeannot, Cédric Yves Bandelier, and Dominique Bünzli.

Further information: https://www.rts.ch/info/sante/2025/article/ia-et-sante-un-test-revele-les-forces-et-limites-des-chatbots-medicaux-28878512.html


1. Response variability: accuracy and reliability

The chatbots provided answers that varied widely in quality and reliability.

  • Some answers were generic and succinct.
  • Others were more detailed, with specific recommendations for symptom management, such as for fever or sleep disorders.

This variability can create confusion, making it difficult for users to assess the relevance of the information they receive. In addition, chatbots are not trained on the same data sources, which can lead to recommendations based on unreliable material.

Example: some advice on fruit consumption for people with type 2 diabetes was based on popular claims from a best-selling book that are not widely accepted by health professionals.

It is essential to train these AIs on reliable data, or to ensure that their answers come from recognized sources validated by healthcare professionals.

2. Lack of contextual clues

Chatbots can overlook crucial contextual elements (medical history, lifestyle, epidemic context, etc.), which can lead to inappropriate advice. AI can sometimes present extreme scenarios, inducing unnecessary worry in the patient, or underestimate warning signals, as in the pediatric emergency example.

The more specific and contextualized the information transmitted to the chatbot, the more precise the answers can be expected to be.

3. Bias and hallucinations

Chatbots can sometimes fail to grasp subtleties of language, invent references, or give answers with biases of their own. This carries a risk, particularly in the case of inappropriate medical advice or incorrect diagnoses.

It is important not to rely solely on the first answer from a general-purpose AI, but to consult a healthcare professional.

4. Privacy and confidentiality

AIs collect a great deal of personal data, which raises confidentiality issues, especially for sensitive medical information.

It is crucial that patients are informed about how their data is protected.

5. Liability for inappropriate advice

One of the major challenges is to determine who is liable in the event of inadequate advice provided by the AI, especially if this delays care or leads to complications.

The absence of clear accountability can represent a risk to patient safety.

AI has real potential for delivering medical information to patients, but it also carries risks. It is essential that patients understand the limitations of these tools. Consultations with healthcare professionals remain indispensable for appropriate, personalized care.
