Despite some notable improvements to the healthcare system over the last few years, American families and workers still struggle with annual cost increases that, on average, outpace both wage growth and medical inflation. The COVID-19 pandemic and the ongoing economic crisis have only exacerbated these concerns. Policymakers at the state and federal levels are rightly focused on bringing down costs, not only to help patients but also to rein in the budgets of various government health programs.
Many of these cost-cutting efforts have focused on drug pricing, which isn’t surprising or inherently problematic. However, most policymakers and government offices lack the capacity to make meaningful top-down decisions about the value of a drug or treatment. So they often seek advice from organizations – like the Institute for Clinical and Economic Review (ICER) – that claim the ability to discern the true value of new medicines and how much they should cost. More and more often, decision-makers take those assessments at face value when making pricing and access decisions.
And that is a problem.
To be clear, there really is no single, definitive value-based price for any individual drug. It is, of course, possible to negotiate a contractually acceptable price that reflects the interests of patients, insurers, and manufacturers. However, such decisions should be based on sound science and real-world evidence. As we’ve repeatedly pointed out, that is precisely what is lacking in many third-party value assessments, especially those that rely on the Quality-Adjusted Life Year (QALY). Instead of testable hypotheses and conclusions based on hard data, these valuations – particularly those produced by ICER – tend to rely on arbitrary standards and speculative criteria, many of which have long been considered outmoded and discredited.
Recent controversies involving New York’s Medicaid program and Vertex Pharmaceuticals present a near-perfect case study of these problems. The saga began in spring 2018, when ICER produced a draft evidence report on three new treatments for cystic fibrosis (CF). The treatments – all developed by Vertex – were the first designed to treat the underlying genetic causes of CF rather than just its symptoms. For that reason, these drugs were highly valued by patients and families in the CF community.
ICER, using its standard and fundamentally flawed valuation methodology, concluded that all three treatments were severely overpriced. To align the cost of the treatments with the benefits provided, ICER declared that the prices should be reduced by at least 71 percent. Like many of ICER’s conclusions on drug pricing, these recommendations were clearly nonsense.
The New York Medicaid Drug Utilization Review Board – apparently lacking the forensic skills to evaluate the ICER methods – took these results at face value. Utilizing new authorities granted under state law to cut health costs, the board determined that Orkambi – one of the three Vertex drugs evaluated in the report – was not worth the price and demanded significant discounts for the state’s Medicaid program. Not surprisingly, the size of the demanded discount closely resembled ICER’s arbitrary price recommendations.
While Vertex and New York ultimately negotiated a confidential rebate agreement, this episode clearly illustrates why patients – and, yes, manufacturers – are right to be concerned about ICER’s expanding influence on government health agencies. As more state and federal programs defer to ICER’s unsupported recommendations on drug pricing, more patients are going to be denied access to new and innovative therapies based on arbitrary budget standards and value determinations that are, for lack of a better word, imaginary.
So, what was wrong with ICER’s analysis of the new CF treatments?
For one thing, its conclusions were based on bad science. Using a model developed by the University of Minnesota to represent a hypothetical patient population, ICER built a lifetime cost-utility framework for CF patients that assigned utility scores to disease stages defined by ranges of ppFEV1 (percent predicted forced expiratory volume in one second), an accepted lung-function measure used to gauge CF severity. Using these utility scores, ICER constructed a definition of a QALY – a theoretical year in perfect health – for CF patients in each disease state. It then summed the QALYs over the lifetime of a hypothetical CF patient and discounted them – with adjustments for CF life expectancy and assumed medical costs – to divine a cost-per-QALY value for new and existing treatments.
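In rough terms – and using generic notation, since these are not ICER’s published parameter names – the arithmetic behind this kind of model is the standard cost-utility calculation:

\[
\text{QALYs} \;=\; \sum_{t=0}^{T} \frac{u_{s(t)}}{(1+r)^{t}},
\qquad
\text{cost per QALY gained} \;=\; \frac{\displaystyle\sum_{t=0}^{T} \frac{\Delta C_{t}}{(1+r)^{t}}}{\displaystyle\sum_{t=0}^{T} \frac{\Delta u_{s(t)}}{(1+r)^{t}}},
\]

where \(u_{s(t)}\) is the utility score assigned to the disease stage a hypothetical patient occupies in year \(t\), \(\Delta C_{t}\) and \(\Delta u_{s(t)}\) are the modeled differences in costs and utilities between the new treatment and its comparator, \(r\) is the discount rate, and \(T\) is the modeled life expectancy.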
Setting aside the fact that the QALY itself is a largely arbitrary standard with no fixed meaning, this model, like most of ICER’s QALY-based models, rests on calculations that are not mathematically valid. As we’ve noted many times in the past, the utility scores used in ICER’s framework are ordinal, meaning they can only be used to rank points on the scale against one another. Unlike a ratio scale, ICER’s utility framework assigns no numerical meaning to the distance between any two points. This is an important distinction because it is not mathematically legitimate to multiply time spent in a disease state by an ordinal score, which is exactly what ICER’s QALY models claim to do. Long story short, the data and conclusions ICER produces with this methodology are ultimately nonsense.
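A purely illustrative example – the numbers here are invented, not drawn from ICER’s report – shows why this matters. Suppose a model assigns a utility of 0.8 to a mild disease stage and 0.4 to a severe one, and we compare three years spent in the mild stage with five years spent in the severe stage:

\[
3 \times 0.8 = 2.4 \;>\; 5 \times 0.4 = 2.0 .
\]

Now apply a transformation that preserves the ranking of the two health states – say \(u \mapsto \sqrt{u}\), so the mild stage still scores higher than the severe one:

\[
3 \times 0.89 \approx 2.68 \;<\; 5 \times 0.63 \approx 3.16 .
\]

The ordering of the health states has not changed, but the QALY comparison has flipped. If the only defensible information in the utility scores is their rank order, multiplying them by time produces answers that depend entirely on an arbitrary choice of scale.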
In addition, this modeling process produces results that cannot be replicated or objectively validated. ICER’s valuations are not the result of the collection of hard data showing the drugs’ real-world impact. Instead, they are based on arbitrary assumptions and hypothetical situations that cannot be evaluated under any clear or objective standard of reality. Any competing organization could start with a different set of equally defensible assumptions and reach an entirely different result.
In short, ICER’s methodology fails the most basic standards of scientific inquiry and research. At the very least, New York Medicaid should have asked for competing models to support any number of competing pricing recommendations before making pricing demands or threatening to limit patient access to new treatments.
On top of the bad science, the evidence report for the Vertex CF drugs did not incorporate the perspectives and needs of patients. Generally speaking, any valuation rooted in a generic utility measure will inherently undervalue the importance of drugs that treat chronic or incurable illnesses to their target patient population. For a CF patient, no available medicine can restore perfect health for a prolonged period. Yet treatments that can significantly extend a patient’s life and meaningfully improve their condition are, from the patient’s perspective, extremely valuable. ICER’s model ignored this, and New York Medicaid never acknowledged this clear failing.
Case in point: in response to ICER’s call for comments on its draft report, several CF patients and their families submitted letters detailing their experiences with the Vertex drugs. One patient who had been treated with Orkambi said the following:
Within one month of being given access, I saw my lung function increase by 5 percent. While this may not seem like a lot to some and when evaluated against the cost of this drug probably seems insignificant and not a cost effective method to treat cf. But when your lung function is at 28 percent, that 5 percent is the difference between being able to carry your kids up the stairs to bed at night or carrying them around when they are tired, and not being able to do so. It is the difference of being able to perform normal responsibilities of being a father and a husband.
In addition, the frequency in which I have anxiety and panic attacks in the normal flow of daily activities has been reduced leading to increased exercise, more playing with my kids and a desire to do the activities that I have hesitated doing for the past several years.
Obviously, these types of patient-centered outcomes are difficult to measure – but measuring them is not impossible. While no such instrument has yet been developed specifically for CF, there is a range of disease-specific instruments for other conditions that meet the standards of good science in measuring patients’ quality of life and the fulfillment of their needs.
If the stated goal of a value assessment is to quantify and assess a drug’s impact on patients’ quality of life, these perspectives cannot be discounted. Yet, with its reliance on QALY-based models and inadequate measurements, the vast majority of ICER’s valuations do just that. Once again, New York Medicaid appeared willing to set patient and caregiver needs aside when relying on ICER’s assessment, making significant policy decisions – choices that could affect patients’ lives for years – without any apparent reservations. The state also failed to consider the real value new drugs have for patients with chronic or life-threatening conditions like CF. Medicines that extend a patient’s life with reasonable function and quality of life may give them time to take advantage of treatments that become available later. In such cases, even if currently available drugs cannot promise a return to perfect health – or even a dramatic improvement over the patient’s current condition – there is significant value for the patient in the possibility that future treatments will offer those benefits.
These were all glaring shortcomings in ICER’s value assessment of Vertex’s CF treatments, an assessment New York Medicaid accepted. And New York is not alone. In recent years, more and more private insurers have opted to rely on ICER value assessments when making coverage decisions. Other state and federal agencies – including the U.S. Department of Veterans Affairs – have done the same.
Ultimately, determining an acceptable price for any drug – one that accounts for the sometimes competing interests of those involved – is a complicated matter. If decision-makers lack the skills or capacity to properly evaluate pricing arguments, negative and improper outcomes will continue to be the norm. The continued acceptance of QALY-based pricing models is emblematic of this failure. Going forward, policymakers must prioritize building capacity for this kind of forensic analysis and incorporating competing views and assessments before making these kinds of pricing decisions.