Critique of transparency efforts
June 05, 2016
At DocSpot, our mission is to connect people with the right health care by helping them navigate publicly available information. We believe the first step of that mission is to help connect people with an appropriate medical provider, and we look forward to helping people navigate other aspects of their care as the opportunities arise. We are just at the start of that mission, so we hope you will come back often to see how things are developing.
An underlying philosophy of our work is that right care means different things to different people. We also recognize that doctors are multidimensional people. So, instead of trying to determine which doctors are "better" than others, we offer a variety of filter options that individuals can apply to more quickly discover providers that fit their needs.
June 05, 2016
Dr. Ashish Jha posted an interesting critique of the Hospital Compare website run by the Centers for Medicare and Medicaid Services (CMS). His criticisms are that there are too many measures, that the measures aren't differentiated by importance, and that the statistical methods don't differentiate among enough hospitals (only 0.5% of hospitals were rated as below average on one metric).
Having worked with this data before, I agree with the latter two criticisms. The first criticism may be true, but it would be addressed by a solution to the second -- if consumers knew which metrics to pay attention to, the sheer volume of other metrics would be less overwhelming. The additional metrics might still be useful to those who want to dig into the details and need not always be presented. Ideally, those in the medical community would step forward and identify which metrics are the most important to watch. For the metrics to be useful, they would indeed need to differentiate among more providers instead of lumping the vast majority of them into an "average" category.
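To illustrate the statistical point, here is a toy simulation (not CMS's actual methodology; the rates, volumes, and thresholds below are invented): when each hospital's measured rate comes with a wide confidence interval, very few hospitals can be distinguished from the national average, so nearly all of them land in the "average" bucket.

```python
# Toy simulation of why interval-based comparisons label most hospitals "average".
# This is NOT the CMS methodology; it only illustrates how wide confidence
# intervals from modest case volumes blur real differences between hospitals.
import random
import math

random.seed(0)
NATIONAL_RATE = 0.15          # hypothetical national complication rate
N_HOSPITALS = 1000
CASES_PER_HOSPITAL = 200      # modest volumes -> wide intervals

counts = {"below average": 0, "average": 0, "above average": 0}
for _ in range(N_HOSPITALS):
    true_rate = random.gauss(NATIONAL_RATE, 0.02)  # real variation across hospitals
    events = sum(random.random() < true_rate for _ in range(CASES_PER_HOSPITAL))
    observed = events / CASES_PER_HOSPITAL
    half_width = 1.96 * math.sqrt(observed * (1 - observed) / CASES_PER_HOSPITAL)
    if observed - half_width > NATIONAL_RATE:
        counts["below average"] += 1   # significantly worse (higher complication rate)
    elif observed + half_width < NATIONAL_RATE:
        counts["above average"] += 1   # significantly better (lower complication rate)
    else:
        counts["average"] += 1         # interval overlaps the national rate

print(counts)  # typically the vast majority fall into "average"
```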
Getting transparency correct in healthcare is hard. Fortunately, Dr. Jha points out some benefits of transparency, as demonstrated by New York State's cardiac surgery reporting.
May 28, 2016
Over the last few years, Medicare has declared its intention to move towards value-based payments (mostly with regard to procedures). Recently, there has also been an increasing outcry over rising prescription drug costs. Along those lines, Medicare has proposed using reference pricing to extract better value from prescription drugs. The idea is that for many conditions, there are multiple ways to arrive at the same endpoint; in this case, there may be competing drugs that will each work. Instead of agreeing to pay the full cost of any of those drug treatments, reference pricing would allow Medicare to specify a maximum amount to be paid (known as the reference price), based on the more cost-effective drugs. If a drug costs more than the reference price, then the patient would owe the difference. This scheme offers patients the flexibility to opt for more expensive drugs if they deem the difference worthwhile, while introducing some cost considerations into the decision-making process.
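To make the arithmetic concrete, here is a minimal sketch of how a patient's extra share would be computed under reference pricing. The reference price and drug prices below are hypothetical and do not come from the proposal.

```python
# Hypothetical illustration of reference pricing: the payer covers costs up to
# the reference price, and the patient owes any amount above it.
# All figures below are made up for illustration.

REFERENCE_PRICE = 100.00  # hypothetical maximum the payer will cover

drug_prices = {
    "drug_a": 90.00,   # below the reference price: no extra amount owed
    "drug_b": 100.00,  # at the reference price: no extra amount owed
    "drug_c": 160.00,  # above the reference price: patient owes the difference
}

def patient_share(price: float, reference_price: float = REFERENCE_PRICE) -> float:
    """Amount the patient owes beyond any usual cost-sharing."""
    return max(0.0, price - reference_price)

for name, price in drug_prices.items():
    print(f"{name}: list price ${price:.2f}, patient owes ${patient_share(price):.2f}")
```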
Understandably, the pharmaceutical industry is not happy about this proposal. Interestingly, Medicare is prohibited by law from negotiating discounts with pharmaceutical companies. The proposed scheme technically complies with that requirement while creating some downward pricing pressure. The referenced article discussed the success of reference pricing for some procedures in California, where providers who had been asking for prices above the reference price eventually lowered their prices.
May 21, 2016
Bloomberg published a fascinating piece that details why some pharmaceutical companies seem eager to offset patient costs through certain charities. The basic idea outlined in the article is that drug companies may raise prices aggressively, but that could deter patients from using the drug because the co-payment would be too high. Drug manufacturers are prohibited by law from directly offsetting the co-payment if the patient is on Medicare; instead, companies can achieve the same effect by contributing to independent charities that offset patient co-payments. In practice, these charities set up disease-specific funds that companies can donate to, and the charities appear to prioritize paying the co-payments for drugs that the donor companies sell. The donor companies themselves then benefit because Medicare ends up paying the rest of the much higher prices that those companies recently set.
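As a rough, purely hypothetical illustration of why the donation can pay for itself (the numbers below are invented and ignore rebates, coinsurance structure, and other real-world complications):

```python
# Hypothetical numbers illustrating the incentive described above.
# A manufacturer raises a drug's price and donates enough to a charity to
# cover the patient's co-payment; Medicare pays the remainder.

price = 10_000.00         # hypothetical monthly price after an increase
patient_copay = 2_000.00  # hypothetical co-payment the patient would owe
donation = patient_copay  # charity contribution that covers the co-payment

medicare_pays = price - patient_copay        # amount the insurer covers
manufacturer_net = medicare_pays - donation  # revenue net of the donation

print(f"Medicare pays:     ${medicare_pays:,.2f}")
print(f"Manufacturer nets: ${manufacturer_net:,.2f} after the ${donation:,.2f} donation")
# Without the donation, a patient deterred by the co-payment might not fill the
# prescription at all, and the manufacturer would receive nothing.
```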
This loophole seems very much related to the third-party payer problem: when the decision-maker is not the one bearing the financial consequences of the choice, cost tends not to be considered. There remains a very valid question of what should happen when a drug company raises prices such that a life-saving drug is financially out of reach for many patients.
May 15, 2016
I participated in a panel about rating doctors and hospitals at the recent Health Datapalooza conference. Below is a slightly edited version of my prepared remarks.
Our mission at DocSpot is to help people make better healthcare decisions. When people use our service to find doctors, we see our job in two parts: the first part is to find information about doctors that's out there and to help users navigate that information just like other search engines, and the second part is to help consumers make sense of that data. Philosophically, we don't rank doctors on our site, in large part because we think that doctors are multi-dimensional people like the rest of us, and that quality means different things to different people. Instead, we support a variety of filters to allow users to express what's important to them.
As you can imagine, DocSpot's service is very data-hungry. We'd like to incorporate as much data as we can to help patients make informed decisions. Some of this data might just be basic information as found in the National Provider Identifier database. Some of this data will be affiliation information and self-reported clinical interests found in hospital directories. Other information might be board action data from the state licensing boards, and still other information will be online patient reviews. We spend a lot of our time indexing and understanding this data so that users can use one unified interface to search across several hundred different sources.
It's exciting that CMS has recently released volumes of data about doctors, whether that's Open Payments, the Provider Utilization and Payment data, or the prescription data. CMS has also recently mandated that insurance companies publish provider networks. Our aspiration is to incorporate all of the data sets that are likely to be useful to prospective patients and to help them navigate those data sets meaningfully, rather than just providing a data dump. One example is how we're helping patients understand the Provider Utilization and Payment data set by crafting consumer-friendly labels instead of showing procedure codes. These consumer-friendly translations allow patients to search for physicians who have performed specific procedures. In addition to these labels, we've also grouped the procedures into a hierarchy so that users can refine their queries. For example, if users search for "hip surgery", we might ask whether they mean "total hip replacement" or "hip arthroscopy" or a variety of other refinements.
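As a sketch of the idea (the codes, labels, and hierarchy below are invented for illustration and are not DocSpot's actual data or implementation), one could map procedure codes to consumer-friendly labels and group them under a broader term so a search can be refined:

```python
# Hypothetical sketch: map procedure codes to consumer-friendly labels and
# group the labels into a hierarchy used to offer query refinements.
# The codes and groupings below are illustrative only.

CODE_TO_LABEL = {
    "27130": "total hip replacement",
    "29861": "hip arthroscopy",
}

HIERARCHY = {
    "hip surgery": ["total hip replacement", "hip arthroscopy"],
}

def refinements(query: str) -> list[str]:
    """Return narrower consumer-friendly terms for a broad query, if any."""
    return HIERARCHY.get(query.lower(), [])

def codes_for(label: str) -> list[str]:
    """Return the underlying procedure codes behind a consumer-friendly label."""
    return [code for code, name in CODE_TO_LABEL.items() if name == label]

print(refinements("hip surgery"))          # ['total hip replacement', 'hip arthroscopy']
print(codes_for("total hip replacement"))  # ['27130']
```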
Since the data set includes some pricing information, you can start to imagine the beginnings of a healthcare marketplace. The problem is that there's almost no clinical data that patients can use to evaluate the quality of the service they might be getting. If we were to ask the medical establishment how to select a physician, I think we would get a polite non-answer. I believe someone from a physician organization at one point said that any properly licensed physician should be a qualified option, and some providers even think that it's literally impossible to measure quality. Where some performance data is actually collected, it's usually kept hidden from patient view. For example, the National Practitioner Data Bank accumulates information about malpractice payments and adverse events, but that database is purposely kept confidential within the provider community. All of this is to say that the deck is stacked against patients in terms of figuring out where they should go. Imagine making a $50,000 purchase that might be a life-or-death decision without any hard data on which options are better than others -- that's what is being asked of many patients today.
It's in this context that we should view the debate about ratings. For example, there's considerable enthusiasm for online patient reviews. We all recognize there are a number of potential problems with online patient reviews, starting with not knowing whether actual patients left the reviews. But it's important to remember the alternative: patients have virtually no other information to help make that decision. So, even if online patient reviews were only going to lead patients to a subset of doctors that the medical community would have approved of anyway, it's hard to see the harm in that. In reality, I do think it's likely that there are meaningful signals in many of the online textual comments. I see ProPublica's Surgeon Scorecard and Consumers' Checkbook's Surgeon Ratings in the same light. Marshall Allen would probably be the first to admit that the Surgeon Scorecard isn't perfect and that the underlying data has issues, but I think the effort should be recognized for what it is: a huge step forward in helping patients assess their options. I'd take criticisms from the provider community more seriously if they actually put forth a plan for how they were going to collect and disseminate meaningful performance data for individual providers.
As far as I can tell, the medical establishment has by and large abdicated leadership in this arena. It might be that they're simply too busy, or that they aren't paid to do this, or just that they don't want to be rated. Whatever the reasons, they have left a meaningful void where it would have been natural for them to assume leadership. In that void, others can create metrics that are better than nothing. Now that patients turn to an outside perspective like ProPublica's, providers complain that the perspective is not perfect without offering a credible alternative. Similarly, CMS itself has started to define quality metrics, and providers have voiced some criticism. If providers continue to avoid defining and promulgating practical standards, our best hope might be for government agencies like CMS and all-payer claims databases to release increasingly large volumes of data. We can then hope that thoughtful people will be willing to work on deriving meaningful signals to help patients make these potentially life-or-death decisions. As society continues to push an increasing share of medical costs onto patients, it's only reasonable, and even positive, that patients are demanding better visibility into their options. We should not let the perfect be the enemy of progress.
May 07, 2016
I'm a big fan of healthcare transparency. When patients have better knowledge about both cost and quality and can make better decisions about their medical care, the marketplace for healthcare services can function more like a... marketplace. So it was surprising to read that one study found that widespread access to pricing information actually led to a slight increase in spending compared to a control group.
I've only read the abstract, but one detail that jumps out is that only 10% of the employees who had access to the pricing tools actually used them in the first year. That raises questions as to whether the rest of the employees were sufficiently notified about the tools and whether they had reason to care (in the form of deductibles and co-insurance). Another interesting detail is that the average spending for the employees who had access started out slightly higher than for those who did not. I'm not sure if that's a meaningful detail, but it does raise the question of whether the two companies that offered access had been seeing larger increases in prior years. Either way, it would be interesting to see a similar study comparing a high-engagement group (where people were actually using the tool) with a control group.