There are many reasons for patients and advocates to be concerned about organizations – including the Institute for Clinical and Economic Review (ICER) – that use the Quality Adjusted Life Year (QALY), and their growing influence over insurance coverage for new drugs and treatments. As more patients are denied access to essential medicines because of QALY-based cost evaluations, such concerns will presumably become more widespread. However, in most cases, patients likely won’t get much of an explanation as to why their insurers won’t cover the newest treatments.
This is a problem that can only be combatted by educating patients and shining a light on ICER’s role in the healthcare system.
A couple of weeks back, we explained why ICER – given its funding and formation – seems designed specifically to benefit insurance companies at the expense of patients. Yet even if one assumed that ICER is, at a fundamental level, a disinterested party performing unbiased health and economic research, its conclusions would still be problematic because its methodologies are flawed.
And that’s putting it lightly.
Following the lead of some European healthcare systems, ICER uses a “Quality Adjusted Life Year” (QALY) standard to make its cost determinations. Put simply, this approach assigns a monetary value to the quality and duration of a patient’s life in order to make assessments about a drug’s value and effectiveness. For a patient hoping to get access to a particular drug or therapy, the QALY model attempts to discern how long the treatment would prolong their life and how much it would improve their quality of life. One QALY equals one full year of life in perfect health. For ICER, a QALY is valued at around $100,000 to $150,000.
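To make the arithmetic concrete, here is a minimal sketch of how a cost-per-QALY figure is typically derived. Every number in it is made up for illustration; none of it comes from an actual ICER report.

```python
# Illustrative only: every number here is made up, not taken from any ICER report.
# A QALY combines length of life with a quality-of-life weight between 0 and 1,
# where 1.0 represents a year lived in perfect health.

years_gained = 2.0        # hypothetical extra survival from the treatment
quality_weight = 0.7      # hypothetical quality-of-life weight (0.0 to 1.0)
treatment_cost = 210_000  # hypothetical total cost of the treatment

qalys_gained = years_gained * quality_weight   # 2.0 * 0.7 = 1.4 QALYs
cost_per_qaly = treatment_cost / qalys_gained  # 210,000 / 1.4 = $150,000 per QALY

# Against a threshold of roughly $100,000 to $150,000 per QALY, this hypothetical
# drug would land right at the edge of what a QALY-based analysis calls "good value."
print(f"QALYs gained: {qalys_gained:.1f}")
print(f"Cost per QALY: ${cost_per_qaly:,.0f}")
```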
Not surprisingly, QALY-based value assessments have raised a number of ethical questions. While it is not uncommon for economists to assign a numeric or monetary value to a human life, such determinations often seem arbitrary, especially when they’re being used to deny access to medical treatments. In addition, the “year in perfect health” aspect of the QALY standard automatically suggests that drugs used to treat elderly, disabled, or chronically ill patients are less valuable because there can be no reasonable expectation that these patients will experience “perfect health” under any circumstances. Few would likely argue that these types of value judgments should be left to economists.
Of course, if ICER’s use of the QALY standard produced results that were defensibly accurate in economic terms, some of these ethical questions might be considered ancillary. Indeed, if the model reflected a reasonable attempt to account for these kinds of concerns to reach fair conclusions, it might be broadly justifiable, even if there were some disagreements on the margins.
And, there’s the rub: In addition to being unethical and potentially discriminatory, the QALY standard produces results that do not reflect reality. All moral questions aside, ICER’s QALY is based on bad science.
A baseline element of credible scientific inquiry is the presence of empirically testable theories and hypotheses. Children are taught this basic concept – the scientific method – starting in elementary school. The use of theories that can be objectively tested and falsified is one of the major dividing lines between science and pseudoscience.
When developing a new drug or therapy, pharmaceutical companies are subject to strict testing and evidence standards at every phase. Those standards govern both the determination of whether a drug is safe and effective and any claims companies make when marketing the drug to doctors and patients. Yet once a new drug is on the market, organizations like ICER – not to mention the insurance companies that rely heavily on ICER’s research – are free to abandon even the pretense of objective standards when making claims about a treatment’s cost effectiveness.
It’s patently absurd.
In making a value determination on a new drug or therapy, ICER’s goal is to divine a cost-per-QALY estimate and compare it against a baseline “willingness to pay” threshold. In making its estimate, ICER doesn’t rigorously gather patient data and test it against a verifiable theory or hypothesis. Instead, it constructs artificial simulations to track the progress of hypothetical patient populations. The simulations are based on assumptions about how patients will progress through various stages of a disease using either the drug being evaluated or a chosen alternative.
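To give a sense of what such a simulation can look like, here is a minimal sketch of a generic cohort model of the kind described above. Every disease state, transition probability, and utility weight below is a hypothetical assumption invented for illustration, not anything drawn from an ICER analysis.

```python
# A toy sketch of the kind of cohort simulation described above.
# All states, probabilities, and weights are hypothetical assumptions.

# Assumed annual probabilities of moving between disease states for a
# hypothetical cohort taking the drug being evaluated.
transition_probs = {
    "mild":   {"mild": 0.80, "severe": 0.15, "dead": 0.05},
    "severe": {"mild": 0.10, "severe": 0.75, "dead": 0.15},
    "dead":   {"mild": 0.00, "severe": 0.00, "dead": 1.00},
}

# Assumed utility weights for each state, of the sort pulled from literature.
utilities = {"mild": 0.8, "severe": 0.5, "dead": 0.0}

cohort = {"mild": 1000.0, "severe": 0.0, "dead": 0.0}  # hypothetical starting cohort
total_qalys = 0.0

for year in range(10):  # simulate ten years
    # Credit each surviving patient with the utility of their current state.
    total_qalys += sum(cohort[state] * utilities[state] for state in cohort)
    # Move the cohort through the assumed transitions to next year's states.
    new_cohort = {state: 0.0 for state in cohort}
    for state, count in cohort.items():
        for next_state, p in transition_probs[state].items():
            new_cohort[next_state] += count * p
    cohort = new_cohort

print(f"Projected QALYs for the hypothetical cohort: {total_qalys:,.0f}")
```

The output depends entirely on the assumed transition probabilities and utility weights; change either set of assumptions and the projected QALY total changes with it.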
This progress is measured by utility scores that are usually extracted – sometimes without appropriate context – from the published literature. These utility scores are entirely ordinal in nature, which essentially means they can only register whether patients’ quality of life improved; they provide no information about the magnitude of any improvement. For example, if the simulation showed that a drug increased a patient’s quality of life from an initial level (“A”) to a preferable one (“B”), that progress would be reflected in ICER’s QALY determination, but the model wouldn’t account for the distance between the two levels. Regardless of whether B was one percent or 100 percent better than A, ICER’s ordinal scoring model would treat the progress the same. As a result, the utility scores convey very little useful information about the value or efficacy of new treatments.
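A toy example makes the concern easier to see. The levels, scores, and “drugs” below are invented purely for illustration and are not taken from any ICER model.

```python
# A toy illustration of the ordinal-scoring concern described above.
# The levels, scores, and "drugs" here are invented purely for illustration.

# Suppose quality of life is bucketed into ranked levels, and each level maps
# to a single fixed utility score regardless of how much a patient actually improves.
level_scores = {"A": 0.50, "B": 0.75}
baseline_level = "A"

# Two hypothetical drugs: one barely nudges patients over the line into level B,
# the other produces a dramatic improvement that also lands in level B.
modest_drug_level = "B"
dramatic_drug_level = "B"

# Both are credited with exactly the same utility gain over baseline,
# so the scoring cannot distinguish a marginal benefit from a major one.
modest_gain = level_scores[modest_drug_level] - level_scores[baseline_level]
dramatic_gain = level_scores[dramatic_drug_level] - level_scores[baseline_level]
print(modest_gain, dramatic_gain)  # 0.25 0.25
```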
Long story short, ICER’s QALY-based cost assessments are made with very little hard scientific data. The approximate values derived by ICER’s construct cannot be evaluated, replicated, or falsified. Another set of researchers – if they made a different set of untestable assumptions or used different literature – could arrive at completely different results. For all intents and purposes, ICER’s cost-per-QALY and value determinations are imaginary.
This wouldn’t be a major concern if ICER’s cost assessments were meant to be viewed merely as qualitative or academic observations. However, when insurers and government policymakers take these conclusions at face value and make potentially life-altering coverage and access decisions based on such subjective and unscientific research, it should be cause for alarm.