Remarks for a Health Datapalooza 2016 panel
May 15, 2016
I participated in a panel about rating doctors and hospitals at the recent Health Datapalooza conference. Below is a slightly edited version of my prepared remarks.
Our mission at DocSpot is to help people make better healthcare decisions. When people use our service to find doctors, we see our job in two parts: the first part is to find information about doctors that's out there and to help users navigate that information just like other search engines, and the second part is to help consumers make sense of that data. Philosophically, we don't rank doctors on our site, in large part because we think that doctors are multi-dimensional people like the rest of us, and that quality means different things to different people. Instead, we support a variety of filters to allow users to express what's important to them.
As you can imagine, DocSpot's service is very data-hungry. We'd like to incorporate as much data as we can to help patients make informed decisions. Some of this data might just be basic information as found in the National Provider Identifier database. Some of this data will be affiliation information and self-reported clinical interests found in hospital directories. Other information might be board action data from the state licensing boards, and still other information will be online patient reviews. We spend a lot of our time indexing and understanding this data so that users can use one unified interface to search across several hundred different sources.
It's exciting that CMS has recently released volumes of data about doctors, whether that be Open Payments, the Provider Utilization and Payment data, or the prescription data. CMS has also recently mandated that insurance companies publish provider networks. Our aspiration is to incorporate all of the data sets that are likely to be useful to prospective patients and help them meaningfully navigate that information, rather than just providing a data dump. One example of this is how we're helping patients understand the Provider Utilization and Payment data set by crafting consumer-friendly labels instead of showing procedure codes. These consumer-friendly translations allow patients to search for physicians who have performed specific procedures. In addition to these labels, we've also grouped the procedures into a hierarchy so that users can refine their queries. For example, if users search for "hip surgery", we might ask if they mean "total hip replacement" or "hip arthroscopy" or a variety of other refinements.
Since the data set includes some pricing information, you can start to imagine the beginnings of a healthcare marketplace. The problem is that there's almost no clinical data that patients can use to evaluate the quality of the service they might be getting. If we were to ask the medical establishment how to select a physician, I think we would get a polite non-answer. I believe someone from a physician organization at one point said that any properly licensed physician should be a qualified option, and some providers even think that it's literally impossible to measure quality. Where some performance data is actually collected, it's usually kept hidden from patient view. For example, the National Practitioner Data Bank accumulates information about malpractice payments and adverse events, but that database is purposely kept confidential within the provider community. All of this is to say that the deck is stacked against patients in terms of figuring out where they should go. Imagine making a $50,000 purchasing decision that might be a life-or-death decision without having any hard data on which options are better than others; that's what's being asked of many patients today.
It's in this context that we should view the debate about ratings. For example, there's an eagerness for online patient reviews. We all recognize there are a number of potential problems with online patient reviews, starting with not knowing whether actual patients left the reviews. But it's important to remember the alternative: patients have virtually no other information to help make that decision. So, if online patient reviews were going to lead to a subset of doctors that have been approved by the medical community anyway, it's hard to see the harm in that. In reality, I do think it's likely that there are meaningful signals in many of the online textual comments. I see ProPublica's Surgeon Scorecard and Consumers' Checkbook's Surgeon Ratings in the same light. Marshall Allen would probably be the first to admit that the Surgeon Scorecard isn't perfect and that the underlying data has issues, but I think the effort should be recognized for what it is: a huge step forward in helping patients assess their options. I'd take criticisms from the provider community more seriously if they actually put forth a plan for how they were going to collect and disseminate meaningful performance data for individual providers.
As far as I can tell, the medical establishment has by and large abdicated leadership in this arena. It might be that they're simply too busy, or they aren't paid to do this, or just that they don't want to be rated. Whatever the reasons, they have left a meaningful void where it would be natural for them to have assumed leadership. In that void, others can create metrics that are better than nothing. Now that patients turn to an outside perspective like ProPublica's, providers complain that the perspective is not perfect without offering a credible alternative. Similarly, CMS itself has started to define quality metrics, and providers have voiced some criticism. If providers continue to avoid defining and promulgating practical standards, our best hope might be for government agencies like CMS and all-payer claims databases to release increasingly large volumes of data. We can then hope that there will be thoughtful people who are willing to work on deriving meaningful signals to help patients make these potentially life-or-death decisions. As society continues to push an increasing share of medical costs onto patients, it's only reasonable and even positive that patients are demanding better visibility into their options. We should not let the perfect be the enemy of progress.