Have you ever felt an odd pain, opened ChatGPT and started typing in your symptoms? You’re not alone. As with 'Dr. Google' before it, artificial intelligence tools have become the go-to source of medical advice for millions of people. But how much can we trust them when it comes to our health?
In this article, we look at the most common mistakes people make when consulting LLMs about their health, and how to use the technology as an ally without putting your wellbeing at risk.
LLMs: what are they and how do they work?
LLMs (Large Language Models) are artificial intelligence systems trained on large volumes of text from the internet, books and other public sources.
ChatGPT is the best-known example, but similar models, such as Gemini, developed by Google and integrated into its services, work on the same principles. These tools can answer questions in a natural tone, explain complex concepts and hold seamless conversations, adapting to the context of the exchange.
The goal of LLMs is to generate coherent, plausible answers based on language patterns – not to diagnose, make clinical assessments or replace healthcare professionals. This means that, even when an answer appears safe, “human” or detailed, there is no individual medical assessment behind it and no access to your medical history, tests or personal context.
Using artificial intelligence tools in healthcare shouldn’t be viewed only as a risk. When used properly, they can improve healthcare literacy, help people better understand existing diagnoses and prepare questions for a medical consultation. The challenge lies less in the tool itself than in how it is used.
Artificial intelligence and healthcare literacy: what are the benefits?
Language models can play a positive role in increasing healthcare literacy, helping people understand medical information and make more informed decisions. The main advantages include:
Understanding complex medical terms
These tools can translate technical language into simple explanations, making it easier to understand medical reports, tests or existing diagnoses.
Better preparation for a consultation
By clarifying concepts beforehand, you can arrive at a consultation with clearer and more specific questions, making the appointment more productive.
Quicker access to general information
These tools can provide a structured explanation of what a disease is, what the risk factors are, or how a particular test is performed.
Greater patient engagement
When people are better informed, they tend to participate more actively in healthcare-related decisions.
When used as a supplement to – not a substitute for – medical supervision, LLMs can be allies in healthcare education.