1578185 - McGraw-Hill Professional ©
CHAPTER 7 Principles of Evidence-Based Medicine
and Quality of Evidence
Daniel I. Steinberg, MD
INTRODUCTION

A BRIEF HISTORY
The March 1, 1981 issue of the Canadian Medical Association Journal included a landmark article titled “How to read clinical journals: I. Why to read them and how to start reading them critically.” Written by David Sackett, MD (1934–2015) of McMaster University, it introduced a series of articles that highlighted the importance of critical appraisal of the literature. Starting in 1993, a set of articles in the Journal of the American Medical Association titled “Users’ guides to the medical literature” reprised and expanded on the earlier series. These works, and other efforts by their authors, made critical appraisal of the literature accessible to the masses and laid the groundwork for evidence-based medicine (EBM).
Gordon Guyatt, MD, coined the term “evidence-based medicine” in the early 1990s, while he served as the internal medicine residency program director at McMaster University. Dr. Guyatt and colleagues had incorporated critical appraisal of the literature into the residency program curriculum, and Dr. Guyatt wanted a term to describe and advertise their efforts.
EBM caught on quickly over subsequent years as practicing physicians and training programs embraced and taught its methods, with dissemination greatly fueled by the rise of the Internet.
ROLE OF CLINICAL JUDGMENT AND PATIENT PREFERENCES IN EBM
An early criticism of EBM, which some still harbor, was that it did not properly acknowledge the importance of clinical judgment or patient preferences. In an updated framework for evidence-based practice by R. Brian Haynes, P.J. Devereaux, and Gordon Guyatt in 2002, evidence-based decisions are based on four cardinal elements: (1) the research evidence, (2) the patient’s clinical state and circumstances, (3) the patient’s preferences, and (4) the clinician’s judgment and expertise.
PRACTICE POINT
Clinical judgment and expertise are essential to the practice of EBM. These skills facilitate optimal decision making by allowing the clinician to properly weigh the research evidence in the context of the patient’s individual clinical circumstances and preferences. Decisions should never be based on the evidence alone.
Practicing EBM may appear to be a straightforward affair with its methodical approaches to clinical question construction and to searching and critically appraising the literature. However, hospitalists should not confuse process with content, and they will often find that EBM tends to highlight clinical uncertainty and gaps in the medical literature. High-quality evidence does not exist to guide all clinical decisions, and extrapolation from lower quality evidence is often necessary. Bayesian diagnostic decision making often relies on clinical judgment to formulate pretest probabilities or to deal with the uncertainty that accompanies inconclusive post-test probabilities. Learning to deal with uncertainty is a core competency of EBM, which draws heavily on clinical judgment and experience.
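The Bayesian step described above, converting a clinician's pretest probability into a post-test probability using a test's likelihood ratio, can be sketched in a few lines. The pretest probability of 30% and likelihood ratio of 6 below are illustrative values, not figures from this chapter:

```python
def post_test_probability(pretest_prob: float, likelihood_ratio: float) -> float:
    """Convert a pretest probability to a post-test probability
    using a test's likelihood ratio (Bayes' theorem in odds form)."""
    pretest_odds = pretest_prob / (1 - pretest_prob)
    post_test_odds = pretest_odds * likelihood_ratio
    return post_test_odds / (1 + post_test_odds)

# Illustrative: clinician judges pretest probability of disease at 30%;
# a positive test with LR+ of 6 raises it considerably.
prob = post_test_probability(0.30, 6)
print(round(prob, 2))  # 0.72
```

Note how the clinical judgment the text emphasizes enters the calculation directly: the pretest probability is an estimate the clinician must supply.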
STAYING UP TO DATE WITH THE LITERATURE

PUSH INFORMATION RESOURCES
Few clinicians have the time to consistently read medical journals, identify relevant new research, and critically appraise new studies to determine whether the findings should be incorporated into their practice. “Push” information resources send content out to their users on a regular basis. “Pull” information resources are databases that clinicians search in order to answer a clinical question. Pull resources are discussed later in this chapter. Table 7-1 lists selected high-quality push and pull information resources.
TABLE 7-1 High-Quality Push and Pull Resources
Push Resources (resources that automatically send new, high-quality evidence to users via e-mail or RSS feed aggregator):
- ACP JournalWise: http://journalwise.acponline.org
- BMJ Clinical Evidence: http://clinicalevidence.bmj.com
- DynaMed: https://www.dynamed.com
- Evidence Updates: https://plus.mcmaster.ca/evidenceupdates
- NEJM Journal Watch: http://www.jwatch.org
- PubMed (using a My NCBI account and search strategies created by the user; see text): http://www.ncbi.nlm.nih.gov/pubmed

Pull Resources (databases that are searched as needed to answer clinical questions):
- ACP Journal Club: http://annals.org/journalclub.aspx
- BMJ Clinical Evidence: http://clinicalevidence.bmj.com
- Cochrane Collaboration: http://www.cochrane.org
- DynaMed: https://www.dynamed.com
- NEJM Journal Watch: http://www.jwatch.org
- Practice guidelines from professional societies, eg, AHRQ National Guideline Clearinghouse: http://www.guideline.gov
- PubMed: http://www.ncbi.nlm.nih.gov/pubmed
- Trip Database: https://www.tripdatabase.com
McMaster PLUS (Premium Literature Service) continuously searches over 120 medical journals and selects evidence for critical appraisal. Articles that pass the critical appraisal process and are also rated as clinically relevant and newsworthy by their team of reviewers are then transferred to the PLUS database. The PLUS database contributes content to evidence-based summary resources such as EvidenceUpdates, ACP JournalWise, DynaMed, and ClinicalEvidence. These resources all offer e-mail alerts to users. ACP Journal Club and NEJM Journal Watch critically appraise and produce synopses of high-quality evidence accompanied by expert commentary. PubMed, through its free account service “My NCBI,” allows users to receive the results of literature search strategies they either design or select (via the “Clinical Queries” feature) by e-mail on a regular basis.
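As one illustration of programmatic PubMed searching, the sketch below builds a query URL for the ESearch endpoint of NCBI's public E-utilities API. The saved-search e-mail alerts described above are configured through the My NCBI web interface, not through this API, and the example query is hypothetical:

```python
from urllib.parse import urlencode

EUTILS_ESEARCH = "https://eutils.ncbi.nlm.nih.gov/entrez/eutils/esearch.fcgi"

def build_pubmed_search_url(query: str, max_results: int = 20) -> str:
    """Build an ESearch URL that returns PubMed IDs matching a query.

    db, term, retmax, and retmode are documented E-utilities
    parameters; the query string below is a hypothetical example.
    """
    params = {"db": "pubmed", "term": query,
              "retmax": max_results, "retmode": "json"}
    return f"{EUTILS_ESEARCH}?{urlencode(params)}"

# Hypothetical saved-search-style query:
url = build_pubmed_search_url('rifaximin AND "hepatic encephalopathy"')
```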
PRACTICE POINT
Hospitalists should strongly consider using an e-mail-based alerting service or RSS (Rich Site Summary) feed aggregator from a high-quality evidence-based summary push resource to effectively stay up to date on the literature. Hospitalists can pair a virtual file cabinet with these resources to form an effective information management system that will make evidence readily available at the point of care.
KEEPING INFORMATION AT HAND: THE VIRTUAL FILE CABINET
Although the traditional way of storing articles for later reference is to use a physical file cabinet, this approach has a number of disadvantages. File cabinets are not mobile, they
cannot be quickly searched or updated, and determining how to best file something for easy retrieval can be confusing. A clinician might ask himself or herself in frustration: “Did I file that great article on pulmonary manifestations of HIV under ‘HIV,’ or ‘infectious disease,’ or ‘pulmonary’?” Physical file cabinets also offer no way to add or share content electronically, making them incompatible with modern communication methods such as e-mail.
The virtual file cabinet (VFC) is an Internet cloud-based electronic document storage system that synchronizes across multiple electronic devices (eg, smartphone, tablet, laptop computer). A VFC is an effective way for hospitalists to electronically file articles they receive from a push information resource as described above for easy retrieval at the point of care. Box, Dropbox, Evernote, and Google Drive are some examples of the commercial products that currently exist that can be used as a virtual file cabinet. Products such as these also offer easy options for electronically sharing content with others.
THE EBM PROCESS: ASKING AND ANSWERING CLINICAL QUESTIONS

Practicing EBM often involves asking and answering questions that arise during the care of patients. There are four steps in this process: (1) asking a focused clinical question, (2) searching the literature for the best available evidence, (3) critically appraising the literature, and (4) applying the literature to an individual patient. This chapter explores the basic principles of EBM as they relate to these four steps.
STEP 1: ASKING A FOCUSED CLINICAL QUESTION
Clinical questions fall into two general groups: background or foreground questions. Background questions ask about general knowledge, pathophysiology, epidemiology, and broad aspects of diagnosis and treatment. “What are the treatments for epilepsy?” is an example of a background question. Junior learners often ask background questions, and answers can often be found in textbooks. Foreground questions are more focused, address specific clinical situations, and facilitate the delivery of the most up-to-date, evidence-based care. Experienced clinicians ask foreground questions, with answers residing more in the medical literature. Hospitalists should always aim to construct focused foreground questions. These are further discussed below.
Most hospitalists would recognize the question, “Should patients with heart disease receive regular vaccinations?” as one that is overly broad. Not all heart diseases are the same, nor are all vaccinations, and the specific benefits patients might reap from vaccination are not specified by the question. Clinical questions need to be focused in order to be answerable. In addition to clinical questions about therapy, clinicians can ask focused clinical questions about diagnostic tests, about the harm an intervention might cause, about prognosis, or about differential diagnosis.
Clinical questions should be constructed using the “P-I-C-O” format. “P” stands for “population” and describes the patient the question is about in proper detail. “I” stands for “intervention” and refers to the therapy or diagnostic test in question. “C” stands for “comparison” and describes either an alternative treatment or standard of care (for questions about therapy) or the gold standard test (for questions about diagnostic tests). “O” stands for “outcome,” which should be clinically important and patient-centered. Surrogate markers of clinically important outcomes are acceptable.
An example of a well-built clinical question about therapy is: “In patients admitted to the hospital with non-ST elevation myocardial infarction (P), what is the effect of influenza vaccination at discharge (I) as compared to no vaccination (C) on recurrent acute coronary syndrome or mortality (O)?”
An example of a properly designed clinical question about a diagnostic test is: “In patients presenting to the emergency department with suspected infection (P), how accurate is a history of shaking chills (I), as compared to a gold standard of blood cultures (C), in diagnosing bacteremia (O)?”
When clinical questions do not perfectly fit into the P-I-C-O format, clinicians should follow as many of the above principles as possible.
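For clinicians who track their questions electronically, the P-I-C-O elements can be kept explicit with a small structure like the following sketch; the class and method names are illustrative, not any standard:

```python
from dataclasses import dataclass

@dataclass
class PICOQuestion:
    """Holds the four P-I-C-O elements of a focused clinical question."""
    population: str
    intervention: str
    comparison: str
    outcome: str

    def as_question(self) -> str:
        """Render the elements as a therapy-style question sentence."""
        return (f"In {self.population}, what is the effect of "
                f"{self.intervention} as compared to {self.comparison} "
                f"on {self.outcome}?")

# The therapy example from the text, restated:
q = PICOQuestion(
    population="patients admitted with non-ST elevation myocardial infarction",
    intervention="influenza vaccination at discharge",
    comparison="no vaccination",
    outcome="recurrent acute coronary syndrome or mortality",
)
```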
PRACTICE POINT
Clinical questions must be focused to be answerable. Hospitalists should use the widely accepted “Population–Intervention–Comparison–Outcome” (P-I-C-O) format to construct focused clinical questions.
STEP 2: SEARCHING THE LITERATURE
Pull information resources
With a properly constructed clinical question in hand, the hospitalist can now search the literature to find the answer. The first step is to select an information resource that is appropriate for the clinical question and the amount of time available. Databases that are searched in an on-demand way in order to answer a clinical question are called pull resources.
In many cases, and especially when time is limited, one should first consult a high-quality summary pull resource. Summary resources that are frequently updated, that assess the quality of the evidence presented, and that are user-friendly are preferable. Examples include BMJ Clinical Evidence, ACP Journal Club, DynaMed, the Cochrane Collaboration, NEJM Journal Watch, UpToDate, and practice guidelines from professional societies. All are highly useful, and each has its strengths and weaknesses. UpToDate is fast to use, comprehensive, and provides expert guidance in an easy-to-digest narrative format, but it is not as rigorously constructed as the others. ACP Journal Club provides excellent summaries of highly selected literature deemed valid and relevant to clinical practice, but as a result its database is not comprehensive. DynaMed is rigorously constructed and presents a large amount of primary data from clinical trials, often in an outline format. The Cochrane Collaboration produces high-quality systematic reviews of the evidence. Practice guidelines are excellent resources that offer clear recommendations, but their quality can vary, update intervals can be long, and users must pay close attention to the level of evidence and strength of recommendations in published practice guidelines.
If one has more time, or if a deeper dive is needed after consulting a summary resource, the primary literature can be searched via PubMed (preferably using the “Clinical Queries” option for clinical questions) or the Trip Database (preferably using the “PICO search” option for clinical questions). For certain questions, a content-specific resource can be best. JAMAEvidence catalogs evidence on the accuracy of history and physical exam
findings. The Cochrane Collaboration focuses on systematic reviews. No single resource is perfect and clinicians should adopt a “toolbox” approach by becoming familiar with a few resources.
PRACTICE POINT
Pull resources are databases that are searched in an on-demand way to answer a clinical question. Pull resources have different and often complementary roles. None are perfect, and hospitalists should adopt a “toolbox” approach in which they become familiar with a few resources. The type of question and the amount of time available to answer the question should help determine which resource the hospitalist consults.
THE HIERARCHY OF EVIDENCE
Hospitalists should know which types of clinical trials will best answer different types of clinical questions, and which study designs will provide the most powerful results. The randomized controlled trial (RCT) is the gold standard for determining the effect of a therapeutic intervention.
Determining the accuracy of a diagnostic test requires a prospective design in which the test is studied in the same clinical setting it will be used, and is compared against an acceptable gold standard. The effect of a diagnostic test on clinical outcomes can be determined by a randomized controlled trial, in which the test in question is treated as the intervention and another diagnostic approach (preferably a gold standard if available) is considered the comparison.
A systematic review is a summary of the evidence on a topic in which the literature search and selection of evidence has been performed in a rigorous, transparent, and reproducible way. The most valuable systematic reviews will also include a meta-analysis. In a meta-analysis, the results of multiple similar types of studies (RCTs, observational studies, or studies of diagnostic tests) are statistically combined to offer more powerful results. What a meta-analysis gains in power, it can sometimes lose in applicability and focus if too much clinical heterogeneity exists among the patients included from individual studies. With that caveat, a high-quality systematic review that includes a meta-analysis is considered to be the highest level of evidence. Table 7-2 describes the hierarchy of evidence for different types of clinical questions.
TABLE 7-2 Hierarchy of Evidence for Different Types of Clinical Questions
Type of Clinical Question: Best Types of Articles (Listed in Decreasing Level of Evidence)

Therapy or harm:
1. Systematic review/meta-analysis of randomized controlled trials
2. Randomized controlled trial
3. Cohort study
4. Case-control study
5. Case series
6. Case reports
7. Expert opinion

Accuracy of a diagnostic test:
1. Systematic review/meta-analysis
2. Prospective comparison against a gold standard, conducted in the setting in which the diagnostic test will be used in practice

Prognosis:
1. Systematic review/meta-analysis
2. Prospective cohort study of a representative, homogeneous patient group with appropriate follow-up and objective outcomes
3. Retrospective case-control study

Differential diagnosis of a condition:
1. Systematic review/meta-analysis
2. Prospective evaluation of a representative sample that includes definitive diagnostic evaluation, performed in a setting similar to actual practice
STEP 3: CRITICALLY APPRAISING THE LITERATURE
Although summary resources that appraise the medical literature have risen in quality and are an essential resource for clinicians, they will not always provide the answer to a clinical question. In addition, hospitalists may participate in discussions around particular studies, attend “journal club” conferences, or teach junior learners about evidence-based medicine. Hospitalists must have solid critical appraisal skills. The Users’ Guides to the Medical Literature (McGraw-Hill, 2014) is the benchmark textbook for learning how to practice EBM. It proposes an effective method for critical appraisal that has been widely adopted. The principles and approach it endorses are discussed further in this chapter.
In appraising any type of study, three broad questions must be answered:
1. Are the results valid? 2. What are the results? 3. How can I apply the results to patient care?
The critical appraisal process asks these three questions of each type of study, including those about therapy, diagnosis, harm, prognosis, and systematic reviews. Each of the three major questions is answered through a subset of critical appraisal questions that are specific to each study type. The critical appraisal questions help determine if a study used proper methods to prevent bias, if the results are large enough to be meaningful, and whether the results can be applied to a particular patient or population.
PRACTICE POINT
Critical appraisal focuses on answering three broad questions: Are the results valid? What are the results? How can I apply the results to patient care? The Users’ Guides to the Medical Literature offers a methodical approach to answering these questions for studies about therapy, diagnosis, harm, and prognosis, and for systematic reviews.
In recent years, a new type of evidence, the results of quality improvement studies, has risen in prominence. As hospitalists often are involved in quality improvement efforts, they should have a working knowledge of how to critically appraise this type of evidence. The Users’ Guides to the Medical Literature offers further instruction in this area.
This chapter will illustrate the critical appraisal process through analysis of an article about therapy, as randomized controlled trials and prospective cohort studies are among the most common types of evidence encountered in practice. Table 7-3 outlines the critical appraisal questions, which are discussed in detail below. Clinicians should refer to the Users’ Guides to the Medical Literature for a complete list of critical appraisal questions for different types of research studies.
TABLE 7-3 Critical Appraisal Questions for an Article About Therapy
Main Question: Supplemental Questions

1. Is the study valid?
a. Were patients randomized?
b. Was group allocation concealed?
c. Were patients in the study groups similar with respect to prognostic variables?
d. To what extent was the study blinded?
e. Was follow-up complete?
f. Were patients analyzed in the groups to which they were first assigned (ie, intention to treat)?
g. Was the trial stopped early?

2. What are the results?
a. How large was the treatment effect? (What were the RRR and the ARR?)
b. How precise were the results? (What were the confidence intervals?)

3. How can I apply the results to patient care?
a. Were the study patients similar to my patients?
b. Were all clinically important outcomes considered?
c. Are the likely treatment benefits worth the potential harm and costs? (eg, What is the number needed to treat? What is the number needed to harm?)

Adapted from Guyatt G, et al., eds. Users’ Guides to the Medical Literature: A Manual for Evidence-Based Clinical Practice, 3rd ed. New York, NY: McGraw-Hill Education; 2014.
CRITICAL APPRAISAL OF AN ARTICLE ABOUT THERAPY
To critically appraise an article about therapy, the clinician should answer the following set of questions.
Are the results valid?
Were patients randomized? Randomization will best ensure that the intervention and control groups are equal at the start of the trial, except for the intervention being tested. In observational studies, investigators must take special steps to ensure experimental and comparison cohorts are evenly matched. Randomization does much of this automatically.
Was group allocation concealed? When allocation concealment is present, those enrolling patients into the study during randomization are blinded to the group (ie, intervention or control) to which patients are being assigned. Without allocation concealment, for example, a patient who is viewed as likely to have a bad outcome might be steered into the comparison group, potentially improving the apparent results in the intervention group.

Were patients in the study groups similar with respect to known prognostic variables? This is necessary to isolate the effect of the intervention and minimize confounding. Proper randomization will ensure this. In the absence of randomization, clinicians should look to see that the intervention and comparison groups were carefully matched so as to be equal for all possible confounders. This is often difficult to do, which is why randomization is preferred.

To what extent was the study blinded? The term “double-blind” does not capture all parties that should be blinded in an RCT. For maximum validity, multiple groups should be blinded, including those selecting patients for randomization (ie, allocation concealment), the patients, those administering the intervention, the data collectors/analysts, and the outcome assessors. When patients or those administering the intervention cannot be blinded (as in trials of certain surgeries or procedures), allocation concealment, as well as blinding of data analysts and outcome assessors, is essential.

Was follow-up complete? Studies should track the outcomes of all participants. Patients may be lost to follow-up because they suffer a negative outcome or find the intervention too difficult to comply with; both reasons are highly relevant to the results of a study. Follow-up must also be of an appropriate length for the outcome measured. For example, 3 days might be an appropriate follow-up period for an intervention to reduce acute pain, but it would likely be too short for an intervention designed to reduce LDL cholesterol or to improve functional status.

Were patients analyzed in the groups to which they were first assigned (ie, intention to treat)? The principle of “intention to treat” highlights that in a clinical trial, the offering of an intervention to participants is being tested as much as the other effects of the intervention. If, for instance, participants do not like the taste of a pill or find a study protocol too hard to comply with, and drop out of the trial or ask to be switched to the other arm as a result, these consequences must be recorded as part of the results of the study. Outcomes must be attributed to the group to which participants were initially assigned. A trial that follows the intention-to-treat principle will give the best estimate of what will happen if a therapy is offered to a population. In a “per protocol” analysis, by contrast, the results represent only what happened to those who actually accepted the intervention and completed the trial. This type of analysis can inform what effect a therapy would have if taken properly by a highly compliant patient.

Was the trial stopped early? Randomized controlled trials that are stopped early because of benefit may overestimate the effect of an intervention. A large benefit observed early in a trial may be due to chance, and may be greater than what would be observed if the trial were allowed to run to completion.
What are the results?
How large was the treatment effect (ie, what were the relative risk reduction and absolute risk reduction)? Clinicians should consider the results of a study using the absolute risk reduction (ARR), where ARR% = event rate in comparison group − event rate in experimental group. The relative risk reduction (RRR) is calculated as RRR% = (event rate in comparison group − event rate in experimental group)/event rate in comparison group. The RRR allows one to estimate the effect of a therapy on an individual patient according to their baseline risk.
Consider the study by Sharma et al., published in the American Journal of Gastroenterology in 2013, that randomized 120 hospitalized patients with cirrhosis and overt hepatic encephalopathy to rifaximin versus placebo. In-hospital death occurred in 24% of the rifaximin group and in 49% of the placebo group. Here the ARR is 25% (49% − 24%) and the RRR is 51% ((49% − 24%)/49%).
Clinicians can use the RRR to estimate the effect a therapy will have on individual patients they treat who may be more or less sick than the average patient in a study. For example, if a patient is estimated to have a baseline risk of dying of 60%, rifaximin will reduce this patient’s risk of dying by 30.6% (60% × 0.51), to an absolute risk of 29.4%. In this case the ARR is 60% − 29.4% = 30.6%, which is higher than what the rifaximin group as a whole experienced in the trial. In a similar way, a lower baseline risk will result in a lower absolute risk reduction.
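The ARR and RRR arithmetic above can be captured in a short sketch using the Sharma et al. event rates; the 60% baseline risk is the illustrative figure from the text:

```python
def arr(control_rate: float, treated_rate: float) -> float:
    """Absolute risk reduction: control event rate minus treated event rate."""
    return control_rate - treated_rate

def rrr(control_rate: float, treated_rate: float) -> float:
    """Relative risk reduction: the ARR divided by the control event rate."""
    return (control_rate - treated_rate) / control_rate

def individualized_arr(baseline_risk: float, relative_risk_reduction: float) -> float:
    """Apply a trial's RRR to an individual patient's baseline risk."""
    return baseline_risk * relative_risk_reduction

# Sharma et al.: death in 49% of placebo vs 24% of rifaximin patients
print(round(arr(0.49, 0.24), 2))               # 0.25 -> ARR 25%
print(round(rrr(0.49, 0.24), 2))               # 0.51 -> RRR 51%
# Sicker patient with an estimated 60% baseline risk of dying:
print(round(individualized_arr(0.60, 0.51), 3))  # 0.306 -> ARR 30.6%
```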
PRACTICE POINT
A randomized controlled trial describes the average effect of an intervention across the group of patients studied. The effect an intervention will have on any individual patient can be determined by combining that patient’s baseline risk with the relative risk reduction (RRR) reported in the trial. Clinicians can estimate their patient’s baseline risk by comparing them to the clinical characteristics and comorbidities of patients in a trial, and by using their clinical judgment and expertise.
HOW PRECISE WAS THE ESTIMATE OF THE TREATMENT EFFECT? (WHAT WERE THE CONFIDENCE INTERVALS?)
Confidence intervals provide more information than P-values alone, giving an estimate of the range of possible results. Some high-quality evidence-based summary resources, such as ACP Journal Club, emphasize confidence intervals and the helpful picture they paint of the results.
In the study of rifaximin described above, the RRR = 51% (95% CI, 20-71). In “plain English,” this 95% confidence interval tells us that rifaximin most likely reduces in-hospital death by 51% (the “point estimate”), but it may reduce death by as little as 20%, or by as much as 71%. There is a 95% chance that the true effect is between 20% and 71%, a 2.5% chance the true effect is below 20%, and a 2.5% chance it is above 71%.
In order to determine whether a trial has found two therapies to be equivalent, clinicians should examine the upper and lower limits of the 95% confidence interval. If either limit would represent a clinically significant difference if true, the two therapies studied cannot be called equivalent, and further research is needed. A 2014 study by Regimbeau et al. published in the Journal of the American Medical Association found that in patients undergoing cholecystectomy for acute calculous cholecystitis, postoperative antibiotics reduced
infection by an absolute risk reduction of 1.9% (95% CI, −9.0% to 5.1%; P > 0.05). The confidence interval indicates that antibiotics most likely reduce infection by 1.9%, but may reduce infection by as much as 5.1% (in which case most clinicians would prescribe them), or may increase infection by as much as 9% (in which case most clinicians would not prescribe them). In this study, the true effect of antibiotics on postoperative infection could not be determined, as they could be either beneficial or harmful, and further study is needed. A common misinterpretation of these results, which could occur if the confidence interval is not noted, would be: “the P-value is greater than 0.05, so there is no difference between antibiotics and placebo and the two are equivalent.”
Two factors affect the width of a confidence interval: the number of patients and the frequency of the outcome in a study. In our example of the Regimbeau trial, further studies that enroll larger numbers of patients or measure more postoperative infections could result in a narrower confidence interval as well as a different point estimate.
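To make the relationship between sample size, event rates, and interval width concrete, the sketch below computes a normal-approximation confidence interval for an ARR. The event counts are hypothetical, chosen only to roughly match the Sharma trial's event rates, and are not data from either study discussed above:

```python
import math

def arr_confidence_interval(events_control: int, n_control: int,
                            events_treated: int, n_treated: int,
                            z: float = 1.96) -> tuple:
    """Confidence interval for the absolute risk reduction, using the
    normal approximation to the risk difference (z = 1.96 for 95%)."""
    p_control = events_control / n_control
    p_treated = events_treated / n_treated
    arr = p_control - p_treated
    # Standard error of a difference between two independent proportions
    se = math.sqrt(p_control * (1 - p_control) / n_control
                   + p_treated * (1 - p_treated) / n_treated)
    return arr - z * se, arr + z * se

# Hypothetical counts (60 patients per arm, rates near the Sharma trial):
low, high = arr_confidence_interval(29, 60, 14, 60)
```

Doubling the number of patients in each arm while keeping the same event rates shrinks the standard error, and hence the interval, by a factor of roughly the square root of two.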
PRACTICE POINT
Confidence intervals are preferable to P-values when considering the results of a clinical trial, as they give more information about the range of possible results, including the best and worst case scenarios.
How can I apply the results to patient care?
Were the study patients similar to my patients? The more a patient meets the inclusion criteria, and the less they meet the exclusion criteria, the more confidently the results of a study can be applied to them. Clinicians should also consider the setting of a study, as well as whether those who administered the intervention had specialized expertise that is not available locally.

Were all clinically important outcomes considered? The “grandmother test” can help determine if an outcome is clinically relevant. Outcomes that would be valued by the average person (eg, someone’s grandmother) are clinically important; outcomes that would not be valued are not clinically relevant and should not be the focus of clinical trials. For example, outcomes such as a reduction in pain, an increase in survival, or a reduction in hospital admissions are likely to be meaningful to patients, while purely biochemical, laboratory, or hemodynamic outcomes are not. An exception is when nonclinical outcomes are established surrogate markers for clinically important outcomes. Composite outcomes of clinical endpoints are valid, but where possible studies should make clear how much each individual endpoint is driving the composite result.
PRACTICE POINT
Hospitalists should value studies that measure clinically important endpoints (or surrogate markers of these) over those that measure physiologic or biochemical endpoints.
ARE THE LIKELY TREATMENT BENEFITS WORTH THE POTENTIAL HARM AND COSTS? (WHAT IS THE NUMBER NEEDED TO TREAT? WHAT IS THE NUMBER NEEDED TO HARM?)
The number needed to treat (NNT) describes how many patients must be treated with an intervention to produce one positive outcome or prevent one negative outcome. The NNT allows clinicians to compare the effects of different therapies, and is calculated as NNT = 100/ARR%. In the study by Sharma et al. discussed above, in-hospital death occurred in 24% of the rifaximin group and in 49% of the placebo group. Here the ARR is 25% (49% – 24%) and the NNT is 4 (100/25). In other words, we need to give four patients rifaximin to prevent one patient from dying in the hospital.
In order to best inform risk/benefit discussions about a therapy, studies should measure important adverse effects. The number needed to harm (NNH) describes how many patients must be treated for one to experience a particular adverse effect. These two numbers can be compared for an intervention and a particular adverse effect to determine the net benefit or harm. In addition to the likelihood of adverse events and their morbidity, the level of concern a patient has about particular side effects must be considered. Many studies do not assess cost, and those that do often determine cost-effectiveness at the population level, which is less relevant to the individual patient. The extent to which a therapy is covered by insurance is highly relevant to patients and should always be considered.
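The NNT and NNH calculations are simple enough to sketch directly; the 5% absolute risk increase used in the NNH example is hypothetical, not a figure from the Sharma trial:

```python
def nnt(arr_percent: float) -> float:
    """Number needed to treat: 100 divided by the ARR expressed in percent."""
    return 100 / arr_percent

def nnh(ari_percent: float) -> float:
    """Number needed to harm: 100 divided by the absolute risk
    increase (in percent) for a particular adverse effect."""
    return 100 / ari_percent

# Sharma et al.: ARR of 25% for in-hospital death
print(nnt(25))  # 4.0 -> treat 4 patients to prevent one death
# Hypothetical adverse effect with a 5% absolute risk increase:
print(nnh(5))   # 20.0 -> one extra adverse event per 20 patients treated
```

Because the NNH here (20) is far larger than the NNT (4), this hypothetical therapy would produce many more prevented deaths than adverse events, illustrating the net-benefit comparison described above.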
STEP 4: APPLYING THE LITERATURE TO AN INDIVIDUAL PATIENT
For the findings of a study to be useful in clinical care, the critical appraisal process must yield a satisfactory answer to each of the three broad questions discussed above: a study must be valid, it must report important results, and it must be applicable to the patient at hand. If any of these three elements is missing, the study findings may not be appropriate for implementation into practice.
When a study has used valid methods, has reported highly important results, and has enrolled patients clearly similar to the patient in question, the hospitalist can confidently apply its findings. But conducting clinical studies is often difficult work, and few studies are perfect in every way. Clinicians need to learn which validity or applicability issues represent fatal flaws, and which ones still allow the results of a study to be considered. This is a skill that comes with experience.
The hospitalist must remember that best evidence-based decisions incorporate not only the evidence, but also the individual clinical circumstances and preferences of patients. In most cases, patient values and preferences are more important than the other factors.
CONCLUSION

This chapter has focused on skills such as the construction of focused clinical questions and how to search and critically appraise the literature. These skills are necessary but not sufficient for the practice of EBM. The hospitalist’s knowledge of the patient is at the heart of evidence-based practice. The right clinical questions cannot be asked unless the hospitalist first has a clear understanding of the patient’s clinical issues, and the literature cannot be applied to a patient without knowledge of their values and preferences. Communication skills, history and physical examination skills, illness scripts, problem re