Dr. Google’s Rx? Take with a grain of salt

Punxsutawney Phil, the time-honored groundhog meteorologist, is more accurate at predicting spring's arrival than online symptom checkers are at making correct diagnoses, but that doesn't stop people from seeking Dr. Google's advice.

A new study published online in the Medical Journal of Australia found that online or mobile app-based symptom checkers (SCs) listed the correct diagnosis first only 36% of the time and included it among the top three results only 52% of the time. As for Punxsutawney Phil, he's correct about 40% of the time, without relying on the World Wide Web.

Each day, nearly 7% of all Google searches are health-related queries, about a billion searches worldwide. That accessibility is why nearly one in three Americans heads online for answers to health questions, but misinformation is rampant.

While SCs use algorithms to provide potential diagnoses and triage advice, they can cause unwarranted anxiety and prompt users to seek unnecessary care or, alternatively, downplay red flags and provide a false sense of security. But how much trust do SCs warrant, given their accuracy?

Per the study, researchers from Edith Cowan University identified 36 common SCs providing medical diagnosis or triage advice on popular search engines, including Google, Yahoo and Bing, and in health apps available on Google Play and Apple's App Store. To assess their performance, each SC was presented with 30 clinical vignettes, ranging from appendicitis and conjunctivitis to heart attack and deep vein thrombosis, and the SC's diagnosis and triage advice were compared with two general practitioners' assessments.

Of the 27 SCs providing diagnostic-only advice, 36% listed the correct diagnosis first while 52% included it among the top three results and 58% included it among the top 10 results. Moreover, the proportions of correct results listed first differed by condition:

  • Emergency: 27%.
  • Urgent: 45%.
  • Nonurgent: 39%.
  • Self-care: 32%.

Additionally, the eight SCs relying on artificial intelligence (AI) algorithms fared only moderately better than the other 19 diagnostic SCs: 46% listed the correct diagnosis first, vs. 32%.

Of the 19 SCs providing triage advice, 49% correctly identified the severity level, with higher accuracy for emergent care (63%) and urgent care (56%) than for nonurgent care (30%) and self-care (56%). However, 40% of the advice for nonurgent and self-care vignettes was to seek urgent or emergent care, while 10% of the advice for emergency vignettes recommended nonurgent or self-care.

"Triage advice, especially for less serious case vignettes, tended to be risk-averse, although there were also notable instances of the opposite; the former can place unnecessary burdens on health care systems but the latter can be dangerous," the study notes.

"To diminish the burden on health care systems, particularly emergency care, it is vital that online tools promptly direct people to appropriate care. Accordingly, online programs should be backed by quality sources and provide diagnostic or triage advice that is as accurate as possible."

AOA Ethics Forum: Dr. Google

The Australian study provides clear evidence for the risks associated with uninformed reliance on Dr. Google, the subject of an AOA Ethics Forum case study available from the AOA's Ethics and Values Committee (EVC).

"The Ethics Forum case study for optometrists addresses issues that arise in practice and the discussion provides doctors of optometry with useful guidelines to consider when determining how to effectively communicate with patients who bring a second opinion they've extracted via Dr. Google," says Hilary Hawthorne, O.D., AOA EVC member and co-author of the case study.

Patients' resistance to professional medical advice based on oft-unreliable web sources is an increasingly common situation in today's practice, the case study notes. While being mindful and respectful of the doctor-patient relationship, it's important to address patients' concerns through a conversation about the reliability of information found online.

Here are some tips to discuss with patients who want to get the most from their health-related internet searches:

  • Recommend specific resources for learning more about health issues.
  • Remind patients prone to fact-checking that they can double-check information with a second, reliable source and consult with their care provider.
  • Encourage patients to contact their eye doctor by phone or email, or to schedule an in-person appointment, to discuss any concerns.
  • Encourage patients to prioritize their concerns before arriving at the appointment.
  • Caution patients that information from nonmedical sources or websites should be scrutinized.

"As a general rule, patients shouldn't expect information gained on a website to replace their eye doctor," Dr. Hawthorne says. "Nothing can replace a well-performed history and physical exam in an office setting with access to diagnostic instrumentation. Seeking professional eye health care can help immensely in arriving at a correct diagnosis and treatment plan."

Access the full Dr. Google case study.

Have ethical questions? Get answers

The Ethics Forum provides an opportunity to review a hypothetical case study concerning ethical challenges and includes suggestions on how doctors might handle the situation based on the AOA Standards of Professional Conduct and Code of Ethics. If you have any questions on ethics, please submit them to ethicsquestions@aoa.org, and the EVC will respond to your questions as soon as possible.

Have an ethical challenge or situation you wish to share? Submit a case description to ethics@aoa.org. The case description will be reviewed by the EVC and may be featured in future Ethics Forum discussions.

May 21, 2020