Are there any dangers in using AI?
It’s important to remember that LLMs are trained on information taken from the internet, and the internet is full of misinformation.
LLMs are also prone to hallucinations, a term for when the AI generates its own made-up answer rather than pulling from a real source. These hallucinations can be nonsensical, misleading, or just plain wrong.
Before taking any health advice from an AI, ask it for credible sources to back up the information it gives you, and check that those sources actually exist and say what the AI claims they do, since LLMs can fabricate citations as well.
Remember, LLMs make great research and planning tools, but they are not a substitute for the advice and support of trained health professionals.