Google has removed some of its AI-generated health summaries after a Guardian investigation found that false and misleading information was putting people at risk.
The company describes its AI Overviews—which use generative AI to provide quick summaries on a topic or question—as “helpful” and “reliable.” However, some of these summaries, which appear at the top of search results, have provided inaccurate health information that could harm users.
In one case experts called “dangerous” and “alarming,” Google gave incorrect information about key liver function tests. This could lead people with serious liver disease to mistakenly believe they are healthy. When searching “what is the normal range for liver blood tests,” the Guardian found that Google presented a list of numbers with little context and no consideration for factors like nationality, sex, ethnicity, or age.
Experts warned that what Google’s AI Overview labeled as normal could differ drastically from actual medical standards. This might cause seriously ill patients to assume their test results were fine and skip important follow-up appointments.
Following the investigation, Google removed AI Overviews for the search terms “what is the normal range for liver blood tests” and “what is the normal range for liver function tests.”
A Google spokesperson said, “We do not comment on individual removals within Search. In cases where AI Overviews miss some context, we work to make broad improvements, and we also take action under our policies where appropriate.”
Vanessa Hebditch, director of communications and policy at the British Liver Trust, welcomed the removal but expressed ongoing concern. “This is excellent news, and we’re pleased to see the removal of the Google AI Overviews in these instances,” she said. “However, if the question is asked in a different way, a potentially misleading AI Overview may still appear, and we remain concerned that other AI-produced health information can be inaccurate and confusing.”
The Guardian found that even slight variations of the original search—such as “lft reference range” or “lft test reference range”—still triggered AI Overviews. Hebditch said this was worrying. “A liver function test is a collection of different blood tests. Understanding the results involves much more than comparing a set of numbers,” she explained. “The AI Overviews present a list in bold, making it easy for readers to miss that these numbers might not even be relevant to their test. They also fail to warn that someone can have normal results even with serious liver disease, requiring further care. This false reassurance could be very harmful.”
Google, which holds a 91% share of the global search market, said it is reviewing the new examples provided by the Guardian.
Hebditch added, “Our bigger concern is that this addresses a single search result, and Google can simply turn off the AI Overview for that query, but it doesn’t tackle the larger issue of AI Overviews for health information.”
Sue Farrington, chair of the Patient Information Forum, also welcomed the removal but shared similar concerns. “This is a good result, but it is only the very first step in maintaining trust in Google’s health-related search results. There are still too many examples of Google AI Overviews giving people inaccurate health information.”
Farrington noted that millions of adults worldwide already struggle to access trusted health information. “That’s why it is so important that Google directs people to robust, researched health information.”

AI Overviews continue to appear for other examples originally highlighted by the Guardian to Google. These include summaries about cancer and mental health that experts have called “completely wrong” and “really dangerous.”
When asked why these AI Overviews had not been removed, Google stated that they link to well-known and reputable sources and advise users when it is important to seek expert advice.
A spokesperson said: “Our internal team of clinicians reviewed the examples shared with us and found that in many cases, the information was not inaccurate and was supported by high-quality websites.”
Victor Tangermann, a senior editor at Futurism, said the Guardian’s investigation shows Google still has work to do “to ensure that its AI tool isn’t dispensing dangerous health misinformation.”
Google stated that AI Overviews only appear for queries where it has high confidence in the quality of the responses. The company added that it constantly measures and reviews the quality of its summaries across many categories of information.
In an article for Search Engine Journal, senior writer Matt Southern noted: “AI Overviews appear above ranked search results. When the topic is health, errors carry more weight.”
Frequently Asked Questions
Beginner-Level Questions
1. What happened with Google’s AI search summaries?
Google recently removed some of the AI-generated answers in its search results after users shared examples where the advice was wrong and potentially harmful, especially for health-related questions.
2. What is an AI Overview in Google Search?
It’s a feature in which Google uses its Gemini AI to generate a concise summary answer at the top of search results, pulling information from websites so you don’t have to click through multiple links.
3. What kind of dangerous answers were given?
Some widely shared examples included the AI suggesting that users add glue to pizza sauce to make it stickier, or that eating rocks is beneficial. On the latter, it even specified how many rocks to eat per day, which is obviously unsafe advice.
4. Why did the AI give such bizarre answers?
AI models can sometimes misinterpret low-quality, satirical, or joke content from the web as factual information. They don’t have the real-world understanding or common sense to recognize that the advice is dangerous.
5. Has Google fixed the problem?
Google has removed the specific incorrect AI Overviews that were reported and says it’s making technical improvements to prevent these kinds of errors. However, the core challenge of AI “hallucination” remains an ongoing issue.
Advanced / Practical Questions
6. What does this mean for the future of AI in search?
It highlights a major hurdle for AI-powered search: balancing speed and convenience with accuracy and safety. This incident will likely slow the full rollout and force more human oversight, stricter guardrails, and better source verification.
7. How can I tell if an AI Overview is reliable?
Check the sources: click the small arrow next to the summary to see which websites it pulled information from, and ask whether they are reputable.
Use common sense: if an answer seems odd, shocking, or too simple for a complex topic, it probably is.
Don’t stop at the summary: always consult the linked sources or do further research, especially for health, financial, or legal information.
8. Can I turn off AI Overviews in my search results?
Currently, there is no universal on/off switch for AI Overviews. You can, however, select the “Web” filter below the search bar to see traditional link-only results for a given query.