➢ Evidence-based medicine has had an important impact on the changing landscape of health care and policy in the United States.
➢ The principles of evidence-based medicine continue to grow within the field of orthopaedics as a means of decreasing variation in orthopaedic health-care delivery and improving patient outcomes.
➢ The quality of orthopaedic clinical research continues to improve, with a growing proportion of studies achieving higher levels of evidence.
➢ Orthopaedic surgeons have an important opportunity to develop and direct the implementation of evidence-based principles into clinical practice and national health policy.
On March 23, 2010, the Patient Protection and Affordable Care Act (PPACA) was signed into law, introducing sweeping changes in access to the U.S. health-care system1. Designed to address the presumed unsustainable growth in federal spending, with an unfunded liability for Medicare alone of $23.8 billion in 2012 and estimated depletion of the Medicare trust fund by 2026, the legislation introduced various policy initiatives in an attempt to “bend the cost curve” by slowing the increase in annual health-care expenditures2. PPACA assumes that there is currently widespread provision of health-care services that are not based on evidence, that do not improve health and outcomes, and that, if not mitigated, will increase health-care costs and reduce quality. The legislation proffers timely provision of appropriate evidence-based care that is expected not only to improve quality but also to decrease overall spending. PPACA also promotes physician adoption of information technology such as electronic health records, and it assumes that physicians, hospitals, and insurance companies will be able to create new care-delivery mechanisms and find ways to distribute global payment equitably (e.g., bundled payments, accountable care organizations). These assumptions are based on a proliferation of published literature regarding the evidence basis of medicine delivered in the United States.
Evidence-based medicine has been defined as “the integration of best research evidence with clinical expertise and patient values.”3 Advocates of evidence-based medicine promote its use to combat the confirmation bias that can exist among health-care professionals, who frequently base decisions on custom and practice. The development of evidence-based medicine has coincided with observations of variations in care practices across the United States4. The propagation of evidence-based medicine coupled with the desire to reduce these variations in care delivery has had an important impact on health-care practice and policy. This review will address the history and impact of evidence-based medicine in the United States and will provide additional consideration for the future direction of evidence-based care.
History of Evidence-Based Medicine
The history of evidence-based medicine reflects the history of clinical trials. James Lind, a Scottish physician and pioneer of naval hygiene in the Royal Navy, conducted the first orthopaedically relevant clinical trial when he demonstrated that citrus fruits cured scurvy5. Notably, slow adoption of evidence was prevalent even in that era: it took the Royal Navy more than fifty years to adopt and implement the findings reported by Lind. The origin of modern evidence-based medicine dates to the early 1990s6. Prior to evidence-based medicine, the quality of care was measured tautologically by the local “customary and standard” practice delivered by health-care providers. Evidence-based medicine advanced beyond the more traditional methodology of the time because of the improved statistical tools developed during this period7. As the relatively young field of clinical epidemiology evolved, the sophisticated statistical processes that we now know as meta-analysis were developed. Meta-analysis made it possible to evaluate and compare like studies on a particular diagnostic or treatment issue, and the ability to perform these comparisons rigorously represented an improvement over the narrative reviews that had been used previously.
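The inverse-variance pooling at the heart of a fixed-effect meta-analysis can be illustrated with a minimal sketch. The effect estimates and standard errors below are hypothetical, chosen only to show how more precise studies receive greater weight when like studies are combined.

```python
# Minimal fixed-effect (inverse-variance) meta-analysis sketch.
# The (effect estimate, standard error) pairs below are hypothetical.
import math

studies = [(0.40, 0.20), (0.25, 0.15), (0.55, 0.30)]

# Inverse-variance weights: more precise studies count more.
weights = [1.0 / se**2 for _, se in studies]

# Weighted pooled effect and its standard error.
pooled = sum(w * e for (e, _), w in zip(studies, weights)) / sum(weights)
pooled_se = math.sqrt(1.0 / sum(weights))

ci = (pooled - 1.96 * pooled_se, pooled + 1.96 * pooled_se)
print(f"pooled effect = {pooled:.3f}, 95% CI = ({ci[0]:.3f}, {ci[1]:.3f})")
```

Note that the pooled standard error is smaller than that of any single study, which is the statistical gain that made meta-analysis an improvement over narrative review.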
Conceptually, evidence-based medicine can be viewed as the evolution of clinical research from retrospective, observational studies to prospective, and preferably controlled, clinical trials. Randomized trials minimize the effects of bias (which may cause the observed effect to be unrepresentative of the true or accurate effect) and confounding (which may lead to the conclusion of a spurious relationship between the independent and dependent variables). While a comparison of observational and randomized controlled trials (RCTs) in the cardiology literature revealed a correlation of 0.848, the concerns about relying on observational studies are severalfold. First, it is more difficult in observational studies to ensure appropriate patient comparisons and balance of both known and unknown patient-related factors that may affect outcomes. Issues of patient selection bias and the consecutiveness of patient enrollment are harder to evaluate in those studies. Observational studies also are more difficult to evaluate with regard to cause and effect because of the problem of both identifiable and unknowable patient-related factors; reported differences may be real but also could be related to regression to the mean. Observational studies are useful for studying harm (a purpose for which RCTs are unethical), for studying rare diseases, and for documenting “proof of concept,” but higher-level clinical trials are preferable for investigating a particular treatment that is being considered for widespread adoption. Conversely, prospective clinical trials have greater internal validity, which mitigates the potential sources of bias associated with observational studies: internal controls limit threats to validity such as confounding, selection bias, and regression to the mean, making comparisons more accurate.
For many surgical procedures, however, the lack of clinical equipoise makes it impossible to conduct conventional RCTs. Expertise-based RCTs, in which patients are randomized to surgeons with expertise in the intervention being investigated, have been furthered as an alternative in surgical research9.
To date, the RCT has been considered the so-called gold standard in research methodology because of its ability to minimize confounding between patient groups. However, there are many concerns regarding the use of RCTs in a vacuum. As mentioned earlier, it is not possible to perform RCTs to investigate many clinical questions because of the lack of clinical equipoise. RCTs are also expensive and time-consuming to perform. These costs have created a situation in which most RCTs are now funded by industry, raising concerns about potential external sources of bias10,11. The quality of RCTs varies substantially, with many trials still having methodological faults. The applicability of RCTs to routine clinical practice also has been questioned: the tight internal validity of these trials does not necessarily meet the external validity needs of clinical practice, making the highly selected populations of RCTs less applicable to real-life treatment decisions. RCTs are the pinnacle of efficacy research, characterized by strong internal validity and expert delivery of care. Critics of efficacy-based evidence-based medicine highlight the need to translate and measure these trials in more typical and common practice settings (Fig. 1).
As part of this process, the level-of-evidence hierarchy was developed as a way to evaluate the strength of clinical research evidence12. The hierarchy downgrades the value of expert opinion and poor-quality observational research while rewarding well-done, prospective clinical research. The primary orthopaedic research publications utilize this methodology in accepting and publishing orthopaedic clinical research. The overall quality of orthopaedic clinical research has continued to improve with time in many journals13-18. Despite the challenges of performing blinded, prospective, randomized clinical trials for many surgical interventions because of issues related to clinical equipoise, surgeon preferences, blinding difficulties, and patient recruitment, the overall increase in the number of prospective studies is a positive sign for evidence-based orthopaedic clinical research. Pediatric orthopaedics presents special challenges for two reasons: (1) mismatch between patients and parents with regard to interest in study participation and (2) the longitudinal nature of many pediatric orthopaedic maladies19,20.
The use of evidence-based medicine and its methodologies accelerated from 1990 to 2005, but the definitions and methodologies of what constituted evidence-based medicine lagged behind the concept. Many organizations, including professional medical societies like the American Academy of Orthopaedic Surgeons (AAOS), were obliged to develop and disseminate evidence-based medicine products, such as Clinical Practice Guidelines, to their members. As the quality of prospective literature was limited, many of these products were consensus-based and were not truly adherent to the principles of evidence-based medicine. Although the promise of evidence-based medicine rested in diminishing bias, some of these so-called evidence-based medicine products did little to mitigate bias in their development. The consumers of health care, including both patients and payers, remained skeptical about evidence-based medicine and its application to clinical practice.
Because of the proliferation of the many and varied forms of organized evidence-based medicine and Clinical Practice Guidelines, adherents of evidence-based medicine called for standardization of evidence-based medicine methodology and products. In 2011, the Institute of Medicine (IOM), in a publication entitled Clinical Practice Guidelines We Can Trust, proposed eight standards for the development of Clinical Practice Guidelines21. The standards are (1) establishing transparency, (2) managing conflicts of interest, (3) guideline development group composition, (4) Clinical Practice Guideline-systematic review intersection, (5) establishing evidence foundations for and rating the strength of recommendation, (6) articulation of recommendations, (7) external review, and (8) updating.
Over the ensuing years, as the core concepts of evidence-based medicine proliferated, the AAOS adopted the current principles and methodology for evidence analysis and for developing evidence-based Clinical Practice Guidelines. Its first evidence-based Clinical Practice Guideline was released in May 2007. Since that time, fourteen Clinical Practice Guidelines have been published, with an additional three guidelines currently in development22. The current AAOS Clinical Practice Guideline program meets or exceeds all of the IOM standards. Minimization of bias is evident in the AAOS process, which has adopted recommendations that do not support several common practices, including vertebroplasty and viscosupplementation.
Evidence-based-medicine methods often illustrate the paucity of evidence that is detailed enough to apply to the full range of patients seen in everyday clinical orthopaedic practice. Despite this limitation, physicians must make decisions on the basis of clinical indications about when to use a diagnostic modality or to perform a therapeutic intervention. Accordingly, the AAOS has adopted the RAND/UCLA Appropriateness Method (RAM) to combine available scientific literature with the collective judgment of experts to yield statements about the appropriateness of performing a procedure23. This methodology attempts to stimulate the implementation of evidence-based medicine. In 2013, the AAOS approved its first Appropriate Use Criteria to determine the appropriateness of select orthopaedic procedures24. An “appropriate” procedure is one for which the expected health benefits exceed the expected negative consequences by a sufficiently wide margin25-27. Evidence-based information in conjunction with the clinical expertise of physicians from multiple medical specialties is used to develop AAOS Appropriate Use Criteria in order to improve patient care, reduce variation, and obtain the best outcomes while considering the subtleties and distinctions necessary for making clinical decisions.
Evidence-Based Medicine and Health Policy
Wennberg et al. initially described variation in health-care delivery in a report on the regional variations in prostatectomy28. That report spurred greater emphasis on exploring the factors associated with these unexplained regional variations. Factors commonly cited to explain these regional variations include residency and/or mentor training, local competitive pressures, lack of outcomes assessment, and lack of evidence-based “best practices.”29 The Dartmouth Atlas of Health Care regularly measures and reports the current variations that occur in health-care delivery30. The Dartmouth Atlas has highlighted the following key issues in variation: Medicare spending, supply-sensitive care, preference-sensitive care, effective care, physician workforce, end-of-life care, and racial and income disparities.
Concomitantly, health care and clinical practice policy began changing to meet the quality and financial questions being posed in the field of medicine. The regional variations in health-care delivery also were associated with large variations in costs by region, and the costs did not correlate with observed outcomes and quality measures. The rapid escalation in health-care costs led to the rise of managed care, whose advocates touted its ability to slow the growth of health-care spending while also improving the quality and performance of the care delivered to its members. The drive to develop and implement evidence-based practice intensified with these additional pressures.
In 2003, McGlynn et al. reported that 6712 patients received 54.9% of basic “recommended” medical care (with 98,649 indicators for care being met in 179,689 opportunities)31. The study included an analysis of 110 patients with hip fractures, who received a rather dismal 22.8% of recommended care (with 167 indicators of care being met in 732 opportunities). Health-care policy-makers took note of that report, and the government established a more prominent role in mandating the delivery of evidence-based best-care practices to patients. If evidence-based best clinical practice can be defined, it can and should be measured. Such measurement is now possible with the recent development of quality measures and “pay-for-performance” (P4P) reimbursement schemas32-35. A logical extension of evidence-based medicine to policy, P4P programs like the Surgical Care Improvement Project (SCIP) and Physician Quality Reporting System (PQRS) endorse specific quality indicators and directly link process adherence to physician and hospital reimbursement36,37.
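The adherence rates above are simple proportions of care indicators met to opportunities for care; a quick check using the figures reported in the study confirms the stated percentages.

```python
# Adherence rate = indicators of recommended care met / opportunities for care.
# Figures are those reported by McGlynn et al. for the overall cohort and for
# the hip-fracture subgroup.
met_overall, opportunities_overall = 98_649, 179_689
met_hip, opportunities_hip = 167, 732

overall_rate = met_overall / opportunities_overall
hip_rate = met_hip / opportunities_hip

print(f"overall adherence: {overall_rate:.1%}")   # 54.9%
print(f"hip-fracture adherence: {hip_rate:.1%}")  # 22.8%
```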
Measuring the impact of evidence-based medicine on orthopaedic health-care delivery is challenging because of the many simultaneous quality efforts that have been implemented over the past fifteen years. One important impact has been the set of policies developed to address conflicts of interest in orthopaedics. While The Journal of Bone & Joint Surgery instituted its first conflict-of-interest policy in 198538, more rigorous methodology to limit bias has been introduced over the past decade. The AAOS Committee on Outside Interests was announced in 2010 to formalize the definitions and exclusions related to conflicts of interest39. All of these efforts were aimed at reducing a potential source of bias, including inappropriate industry-related publication bias. The AAOS Clinical Practice Guideline program maintains a conflict-of-interest standard that is more stringent than IOM recommendations to reduce additional sources of bias in Clinical Practice Guideline development.
In 2002, Moseley et al. reported the results of a Level-I randomized, controlled study in which patients with knee osteoarthritis were randomized to arthroscopic debridement, arthroscopic lavage, or placebo surgery40. Postoperatively, the authors reported no differences between the groups in terms of self-reported pain scores. That study illustrated a high level of research evidence and showed that the results of arthroscopic surgery (current practice) were no better than those of sham surgery. The results of that study were subsequently supported by the study by Kirkley et al. involving a different population of patients with knee osteoarthritis41. To measure the impact of this landmark evidence, Holmes et al. reviewed the Florida State Ambulatory Surgery Database to evaluate the rates of surgery before and after the study40 was published42. The review revealed a 49% decrease in knee arthroscopy for the treatment of knee osteoarthritis. While the authors appropriately noted that the Centers for Medicare & Medicaid Services (CMS) had instituted a coverage decision against the use of arthroscopy for patients with a primary diagnosis of osteoarthritis of the knee, the combined effect of evidence and policy was a marked reduction in non-efficacious knee arthroscopy in this population. The review by Holmes et al. supported the policy and practice-altering potential of evidence-based medicine. Those findings recently were further supported by Sihvonen et al.43 in an RCT in which the results of arthroscopic partial meniscectomy were compared with those of arthroscopic sham surgery in patients with degenerative meniscal tears. That well-done trial showed no clinical difference in outcome between the two groups.
The AAOS evidence-based guideline entitled “Prevention of Symptomatic Pulmonary Embolism in Patients Undergoing Total Hip or Knee Arthroplasty” was published in 2007 in response to the sixth edition of the American College of Chest Physicians (ACCP) guidelines entitled “The Sixth (2000) ACCP Guidelines for Antithrombotic Therapy for Prevention and Treatment of Thrombosis”44,45. The AAOS guideline addressed some of the evidentiary flaws of the ACCP effort. The primary controversy was over the use of a surrogate outcome, venographically confirmed deep venous thrombosis, instead of pulmonary embolism. The release of the eighth edition of the ACCP guidelines also failed to adequately address the magnitude of morbidity and mortality associated with bleeding events from the recommended antithrombotic protocols46. The discord between the AAOS and ACCP guidelines was increased when prophylaxis against venous thromboembolism was adopted as an SCIP measure. With consistent evidence-based pressure from the AAOS and other organizations, the ninth edition of the ACCP guidelines changed recommendations by including outcomes and decisions that were patient-centered47. In 2014, SCIP will reportedly include the use of aspirin for prophylaxis against venous thromboembolism for patients undergoing joint replacement48. The AAOS evidence-based approach to highlighting the flaws of the ACCP guidelines necessitated changes that were important to the care of orthopaedic patients.
The Future of Evidence-Based Medicine in Practice Policy
An important barrier to the adoption of evidence-based medicine has been the relative lack of physician training and education in evidence-based medicine methodologies and principles. A survey of medical schools in the United Kingdom revealed that evidence-based medicine training was either lacking or informal, with no specifically defined evidence-based medicine curriculum49. Although the younger generation of physicians appears to be more receptive to the tenets of evidence-based-medicine practice, specific evidence-based-medicine training is still needed to enhance receptiveness. A lesser-discussed requisite is the education of patients on the concepts and importance of evidence-based medicine. Like physicians, patients are concerned about the potential of a “one-size-fits-all” strategy that minimizes or ignores individual patient care50. This misperception needs to be corrected, as evidence-based medicine necessitates the integration of patient values and preferences, clinical setting, and clinical expertise with high-quality evidence.
A limiting factor of traditional evidence-based medicine is the relatively rigid and limited hierarchical nature of evidence appraisal and grading. Current level-of-evidence hierarchies emphasize efficacy research such as RCTs, yet the difficulties of blinding, recruitment, and surgeon buy-in make such trials challenging to perform in surgical settings. Future approaches need to address practical effectiveness research, improving external validity through larger, more varied patient populations in more typical clinical settings. New approaches, such as pragmatic trials in addition to explanatory trials, broader study inclusion criteria, active treatment for all patients, non-academic (i.e., real-world) study conditions, and an emphasis on patient-centered outcomes, will be critical to the long-term applicability of evidence-based medicine. Explanatory RCTs test efficacy with tight internal validity in a research setting with highly selected participants and under highly controlled conditions51. Conversely, pragmatic RCTs test effectiveness, with external validity, in everyday practice with relatively unselected participants and under flexible conditions. In orthopaedics, the integration of clinical registry data into the evidence hierarchy will be essential for defining future evidence-based best practices. Efforts to standardize registry reporting to meet this goal have already begun.
A key change in the future could be the impact of information technology on the entire evidence-based-medicine delivery framework. A goal of meaningful use is to create a dissemination and implementation network that is accessible to practicing physicians. The traditional model of dissemination and implementation has been a push system, in which developers of evidence-based medicine products such as systematic reviews and Clinical Practice Guidelines push their products to physicians; the results have been spotty at best. A pull strategy instead involves a system within an information technology framework that can pull evidence-based medicine data from the electronic health record, administrative databases, registries, and traditional sources of evidence-based medicine best practices52. The proposed value of this methodology is its seamless, unobtrusive integration into the workflow of the practicing physician.
National or health-system policy-makers are likely to base future clinical importance on the ability to determine value across a population of patients. Discrete or binary measures of clinical importance are typically easier to understand than continuous measures. “Clinical importance” means that the effect size of a particular intervention is large enough to be important to patients. What constitutes “clinical importance” can be influenced by the views of various parties interested in health care. Recent health policy has been directed at these patient-centered outcomes. The Patient-Centered Outcomes Research Institute (PCORI) is authorized by Congress under the PPACA, and its purpose “is to assist patients, clinicians, purchasers, and policy-makers in making informed health decisions by advancing the quality and relevance of evidence.”1 The research performed at PCORI is intended to give patients a better understanding of the prevention, treatment, and care options available and the science that supports those options. Patient-reported outcomes (PROs) and health-related quality-of-life (HRQoL) assessments are used to evaluate conditions and outcomes from the perspective of the patient. Some examples commonly used in orthopaedics include the Knee injury and Osteoarthritis Outcome Score (KOOS) and the Hip disability and Osteoarthritis Outcome Score (HOOS) as well as global health tools like the Patient Reported Outcomes Measurement Information System (PROMIS) and the Short Form-36 (SF-36) Health Survey53-56. While many patient-reported outcomes exist within orthopaedic specialties, orthopaedic policy will need to be directed toward agreement on which tools practicing orthopaedists need to integrate into their practices.
Shared decision-making has been defined as “the process of interacting with patients who wish to be involved in arriving at an informed, values-based choice among two or more medically reasonable alternatives.”57 Shared decision-making differentiates effective care, which has a strong evidence base and a positive benefit-risk ratio dictating that all patients should receive treatment, from preference-sensitive care. Preference-sensitive care involves situations in which the evidence supports multiple alternatives, treatment involves important trade-offs, or the personal preferences, values, and life situation of the patient are important factors in driving decisions. Shared decision-making can be viewed as a natural offshoot of evidence-based medicine, and the two are indelibly intertwined: shared decision-making cannot succeed without accurate determination of effective treatment alternatives, and evidence-based medicine depends on shared decision-making to deliver evidence at the patient-physician level. Considerable effort to develop and implement such programs, scalable to any orthopaedic practice, is critical to future evidence-based medicine implementation and adherence. A so-called decision board is one example of an aid that is applicable to this process58. PCORI funding offers orthopaedic surgeons the opportunity to marry the potential of shared decision-making and patient-reported outcomes/HRQoL assessments into an integrated, patient-centered delivery and measurement plan.
Comparative effectiveness has been defined by the Agency for Healthcare Research and Quality (AHRQ) as “a type of health care research that compares the results of one approach for managing a disease to the results of other approaches. Comparative effectiveness usually compares two or more types of treatment, such as different drugs, for the same disease.”59 When combined with cost-effectiveness or cost-benefit analysis, comparative effectiveness could become the final manifestation of the impact of evidence-based medicine on health policy. Many of these analyses have been performed with modeling, but there is a defined need to measure cost and comparative effectiveness in real-world populations60. Better tools to define these characteristics in orthopaedic patients offer an opportunity for orthopaedic evidence-based medicine utilization in the future.
Government and private-payer evidence-based medicine mandates are likely to increase in the future. Data from the SCIP program demonstrate improved adherence to evidence-based process measures, even if some of those measures did not improve patient outcomes. Physician representation in the development of these evidence-based quality and performance measures is critical, as physicians have unique perspectives on patient care. Recognizing the lengthy process that is currently required for the endorsement of quality measures, CMS has proposed an expedited pathway for the integration of performance measures. The AAOS is adopting a plan to develop performance measures for the care of orthopaedic patients.
Last, and perhaps most important, the future of evidence-based medicine requires the development of best-practice applications that address the individual needs of and risks to each patient. Although evidence-based medicine should be directed at limiting the variation of health-care delivery across physicians and organizations, particular emphasis on recognizing and treating the individual needs of a given patient must be preserved. Furthering and adopting some of the above future strategies must be done in a manner that preserves the uniqueness of each patient and maximizes the importance of the patient-physician relationship. The preservation of this relationship is a critical challenge for policy-makers as they attempt to leverage evidence-based medicine across broad patient populations. The patient-physician relationship provides a critical opportunity for orthopaedic surgeons to be in the lead of this transformational opportunity to improve and individualize patient care. Focus should be placed on integrating the vast array of clinical data with evidence-based medicine-driven best practices. Modeling tools like that created by Eddy may provide a pathway for a development strategy in orthopaedic research61.
In conclusion, the absence of a centrally controlled U.S. health-care system creates a potentially large research laboratory promoting study opportunities to investigate the delivery of high-quality, evidence-based care. With the PPACA, a large stimulus has been provided to experiment and drive innovation in health-care delivery. Despite this promise, the current U.S. system fails far too many people in terms of access, quality, and cost of care. The opportunity for orthopaedists, and their responsibility as the ultimate advocates for their patients, is to take a leading role in shaping the future of evidence-based health-care delivery so that it provides better care for the patient, is usable by the physician, and is done at a cost that our nation can afford.
Source of Funding: No external funds were received in support of this study.
Investigation performed at Intermountain Healthcare, St. George, Utah
Disclosure: The author received no payments or services, either directly or indirectly (i.e., via his institution), from a third party in support of any aspect of this work. Neither the author nor his institution has had any financial relationship, in the thirty-six months prior to submission of this work, with any entity in the biomedical arena that could be perceived to influence or have the potential to influence what is written in this work. Also, the author has not had any other relationships, or engaged in any other activities, that could be perceived to influence or have the potential to influence what is written in this work. The complete Disclosures of Potential Conflicts of Interest submitted by authors are always provided with the online version of the article.
- Copyright © 2014 by The Journal of Bone and Joint Surgery, Incorporated