In my recent article, I questioned the validity of online patient reviews as measures of the clinical expertise of medical providers. The piece ignited a heated debate on social media, so I want to clarify a few of my arguments.
Patients have a crucial role in their medical care
By consulting patients and involving them in the decision-making process, physicians can make better decisions and patients will experience better care outcomes. With the growth of information technology in the healthcare sector, this has become easier than ever. Patients can access their medical records through portals, read about their condition online, and share their experiences with other patients on specialized social networks. All of these advances enable patients to be much better informed than before and to engage in a meaningful discussion with physicians about their treatment options. This will not only result in better outcomes for the patients themselves, but may also lead to advances in medical science that benefit many others.
However, not all patients who rate doctors online are equally knowledgeable, and thus their feedback should not be equally weighted. Some patients may be experts in their own medical conditions, but how can this expertise be verified? And how should reviews from expert patients stand out from the majority of reviews provided by patients who lack such expertise? More importantly, how should one compare the qualifications of a physician with those of an expert patient? If an expert cancer patient disagrees with a board-certified oncologist who has years of medical education and experience, whose opinion should be given more weight?
Online reviews are not valid measures of clinical expertise
An instrument is valid when it measures what it was intended to measure. For example, your car’s fuel gauge is a valid instrument for measuring the amount of gas in your tank; it is not a valid measure of your car’s speed. Online reviews are limited to (or at least heavily biased by) certain criteria such as bedside manner, waiting times, scheduling flexibility, and staff demeanor. While all of these are important factors that affect a patient’s experience in the health care system, none of them measures clinical expertise or reflects medical outcomes. One can use online reviews to find the doctors with the most reassuring smiles and the most polite staff. However, using online reviews to find the best doctor is about as effective as reading your car’s gas gauge to learn how fast you are driving.
A few researchers have investigated the association between clinical outcomes, quality of care, and patient satisfaction at the hospital level (outcome measures at the physician level are not publicly available). Among UK hospitals, the correlation between patients’ online ratings and measures of clinical outcome (such as mortality rates and readmission ratios) ranged from 0 to -0.31 in one study and from -0.21 to -0.31 in another. The correlations between Yelp ratings and clinical outcomes among American hospitals are almost identical to what was reported in the UK: no correlation at all, up to at most a weak correlation of -0.31. It gets even worse. According to this JAMA Surgery article, patient satisfaction is correlated with neither quality of care nor hospital safety. These studies provide additional support for my argument that online reviews are measuring something other than clinical outcomes.
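To see how little predictive power a correlation in that range carries, note that even at the strongest reported value of r = -0.31, ratings would explain only about r² ≈ 10% of the variance in outcomes. The sketch below illustrates this with entirely hypothetical, synthetic data (not the data from the cited studies); the rating scale, readmission figures, and the weak linear relationship are all assumptions chosen for illustration.

```python
# Illustrative sketch only: synthetic hospital data with a weak link between
# online ratings and readmission rates, in the spirit of the studies discussed.
import numpy as np

rng = np.random.default_rng(0)
n = 200  # hypothetical number of hospitals

ratings = rng.uniform(1, 5, n)                  # hypothetical 1-5 star ratings
noise = rng.normal(0, 4, n)
# Readmission rate (%) only weakly and negatively tied to ratings (assumed).
readmission = 15 - 0.5 * ratings + noise

r = np.corrcoef(ratings, readmission)[0, 1]     # Pearson correlation
print(f"Pearson r: {r:.2f}")
print(f"Share of outcome variance explained: {100 * r**2:.1f}%")
```

Even when the sample correlation comes out near the studies' reported maximum, the explained-variance figure stays in the single digits, which is why such ratings are poor guides to clinical quality.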
Patients’ involvement in their medical care is the best thing that could happen to our severely sick health care system. Patients should have access to reliable and valid data to help them choose their medical provider. They should have the capacity to shop around and visit multiple providers. Healthcare is the most important service we obtain in our lives, and being able to choose who provides it is, in my opinion, a fundamental patient right. Currently available online patient reviews, however, are not the right measure to rely on when making such a decision.