How to use AI to get healthcare information – and what not to ask

Using AI tools to check symptoms can be risky. Find out what precautions to take, mistakes to avoid and how to use these tools safely.

Have you ever felt an odd pain, opened ChatGPT and started typing in your symptoms? You’re not alone. As happened with 'Dr. Google', artificial intelligence tools have become the go-to source of medical advice for millions of people. But how much can we trust them when it comes to our health?

In this article, we reveal the most common mistakes people make when checking health concerns with LLMs, and how to use the technology as an ally without putting your wellbeing at risk.

LLMs: what are they and how do they work? 

LLMs (Large Language Models) are artificial intelligence systems trained on large volumes of text available on the internet, in books and other public sources. 

ChatGPT is one of the best-known examples, but there are other similar models, such as Gemini, developed by Google and integrated into its services, which apply the same principles. These tools can answer questions in a natural tone, explain complex concepts and hold fluid conversations, adapting to the context they are given.

The goal of LLMs is to generate coherent and plausible answers based on language patterns – not to diagnose, make clinical assessments or replace healthcare professionals. This means that, even when an answer appears safe, “human” or detailed, there is no individual medical assessment behind it, and no access to your medical history, tests or personal context.
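To see why a fluent answer carries no clinical judgment, here is a deliberately simplified sketch in Python of the core idea: predicting the next word from patterns seen in text. Real LLMs use neural networks with billions of parameters rather than simple counts, and the tiny training snippet below is invented for illustration, but the principle is the same – pattern matching, not medical reasoning.

# Deliberately simplified sketch: choose the next word based on how
# often it followed the previous two words in the training text.
from collections import Counter

# Tiny invented "training corpus", for illustration only
training_text = (
    "chest pain can be serious chest pain can be muscular "
    "chest pain can be serious and needs assessment"
)

words = training_text.split()

# Count which word tends to follow each pair of words
following = {}
for a, b, nxt in zip(words, words[1:], words[2:]):
    following.setdefault((a, b), Counter())[nxt] += 1

def predict_next(a, b):
    """Return the statistically most likely next word – nothing more."""
    counts = following.get((a, b))
    return counts.most_common(1)[0][0] if counts else None

# The model completes the phrase with whatever pattern was most
# frequent in its data – it has not examined any patient.
print(predict_next("chest", "pain"))  # -> "can"
print(predict_next("can", "be"))      # -> "serious"

A real model is vastly more capable, but it still produces the most plausible continuation of your words – which is why its answers can sound confident without being clinically verified.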

Using artificial intelligence tools in healthcare shouldn’t only be viewed as a risk. When applied properly, these tools can improve healthcare literacy, help you better understand existing diagnoses and prepare questions for a medical consultation. The challenge lies less in the tool itself and more in how it is used.

Artificial intelligence and healthcare literacy: what are the benefits? 

Language models can play a positive role in increasing healthcare literacy – that is, the ability to understand medical information and make more informed decisions. The main advantages include:

Understanding complex medical terms

These tools can translate technical language into simple explanations, making it easier to understand medical reports, tests or existing diagnoses.

Better preparation for a consultation

By clarifying concepts beforehand, you can arrive at a consultation with clearer and more specific questions, making the appointment more productive.

Quicker access to general information

These tools can provide a structured explanation of what a disease is, what the risk factors are, or how a particular test is performed.

Greater patient engagement

When people are better informed, they tend to participate more actively in healthcare-related decisions. 

When used as a supplement – and not a substitute – for medical supervision, LLMs can be allies in healthcare education.

How to research healthcare more safely

Despite their limitations, LLMs can be helpful when used in an informed and responsible manner. Discover some good practices that help reduce risks and make better use of these tools.

  • Do not use AI tools to obtain a diagnosis

  • Do not interpret the information as being personalised

  • Do not make decisions about treatments or medication

  • Use LLMs for general information

  • Always verify information with credible sources

  • Use the information to assist in the medical consultation

  • Maintain critical thinking

  • Seek a medical assessment to ensure the right care

What to ask (and what not to ask) LLMs about healthcare 

One of the main differences between using an LLM and searching on Google lies in how questions are phrased. The quality of the answer depends largely on how the question is worded. Discover examples of “good” and “bad” questions to ask artificial intelligence tools.

Examples of “good” questions 

These questions can help improve understanding, without replacing a medical assessment:

•    “What does it mean to have high LDL cholesterol?”
•    “How does magnetic resonance imaging work?”
•    “What are the general risk factors for type 2 diabetes?”
•    “What does a cardiology consultation usually involve?”
•    “What are the differences between the flu and a cold?”

In these cases, the goal is to obtain general information or explanations, not a personalised medical diagnosis.

Examples of “bad” questions 

The phrasing of these questions can lead to incorrect interpretations or unsafe decisions:

•    “I’m experiencing chest pains. What is wrong with me?”
•    “Should I stop taking this medication?”
•    “Could these symptoms mean cancer?”
•    “What treatment should I have for this situation?”
•    “Based on these test results, am I sick?”

These questions require an individual clinical assessment, medical tests and integration of the medical history, which LLMs are unable to do.

Health checks on LLMs: frequently asked questions

Below, we answer some frequently asked questions about health checks on LLMs.

  • Can ChatGPT replace a medical consultation? No. It can explain concepts and provide general information, but it has no access to your medical history, tests or personal context, and cannot make a clinical assessment.

  • Is it safe to check symptoms on LLMs? Only for general information. Interpreting symptoms or seeking a diagnosis requires a healthcare professional, so always verify what you read with credible sources.

  • Can I use ChatGPT to decide treatments or medication? No. Decisions about treatments or medication require an individual clinical assessment and should always be made with a doctor.

Healthcare questions: let Joaquim Chaves Saúde help you get answers

Technology is increasingly involved in how we obtain information. Used in a critical manner, it can be an ally. However, when it comes to medical diagnosis, treatment or decisions, a clinical assessment is still essential.

At Joaquim Chaves Saúde, our multidisciplinary teams are ready to explain, diagnose and monitor each situation on a personal basis, drawing on scientific evidence and your medical history. Schedule your consultation or medical tests, and discover a team of specialists at your disposal to provide safe and thorough support.
