The movement to systematize, even automate, the assessment of bodies of evidence has some beneficial features that have attracted influential proponents: ensuring comprehensive inclusion of relevant studies; seeking to evaluate studies solely on their methodologic quality rather than subjective impressions; and providing detailed descriptions of the process used to draw conclusions so that it can be replicated by others. But there has been substantial collateral damage to informed interpretation of evidence and application to policy, particularly when algorithms such as the GRADE and ROBINS approaches are imposed:
- Arbitrary hierarchies of data quality are assigned without consideration of their applicability to a particular topic, with some study designs predetermined to be inherently superior to others (e.g., randomized trials over observational studies, cohort studies over case-control studies);
- Individual studies are scored in isolation, and their contributions are considered separately rather than integrated across lines of evidence to address compensating strengths and limitations, an approach referred to as “triangulation”;
- The benefit of deep subject matter expertise is set aside in favour of simplistic algorithms in an attempt to be “objective,” which often leaves the assessment uninformed about the nuances needed for interpretation;
- The application of algorithmic evidence evaluations to policy fails to recognize that policy decisions reflect an informed judgment, one that calls for a fully informed consideration of the research and is impeded by the constraints of an algorithm.
The list of ill-informed and likely harmful decisions reached in large part through the application of GRADE and other similar approaches continues to grow. When an arbitrary algorithm leads to judgments that differ from the collective wisdom of the full range of subject matter experts, it is possible that the consensus is wrong, but it is more likely that the algorithm has failed to capture insights that the experts are able to provide. Science, policy, and public health require thoughtful evaluation of evidence, including informed debate among experts, to make progress and determine the best use of the information available at a given point in time. Nothing less makes the grade.
We are a network of more than 100 researchers in epidemiology and related disciplines. The DE-grading Epidemiology (DEEP) Network has been formed to encourage open debate about the use of algorithms to assess epidemiological studies, and to maintain contacts between those of us who are working on these issues. We will pursue strategies to improve our ability to methodically and accurately integrate and interpret epidemiologic evidence.
- Neil Pearce
- David Savitz
- Kurt Straif
For more information, or to join the DEEP network:
The DE-grading Epidemiology (DEEP) Network has been formed to:
- Provide a network to encourage open debate about these issues, and maintain contacts between those of us who are working on them
- Provide a repository for publications in this area
- Prepare a series of multi-author publications which explore these issues in depth and attempt to summarise the current scientific consensus on the strengths and limitations of the algorithmic approach
- Pursue strategies to improve our ability to methodically and accurately integrate and interpret epidemiologic evidence.
- Guyatt GH, Oxman AD, Vist G, Kunz R, Brozek J, Alonso-Coello P, Montori V, Akl EA, Djulbegovic B, Falck-Ytter Y, Norris SL, Williams JW, Atkins D, Meerpohl J, Schunemann HJ. GRADE guidelines: 4. Rating the quality of evidence-study limitations (risk of bias). Journal of Clinical Epidemiology 2011;64(4):407-15.
- Lawlor DA, Tilling K, Davey Smith G. Triangulation in aetiological epidemiology. International Journal of Epidemiology 2016;45:1866-86.
- Pearce N, Blair A, Vineis P, Ahrens W, Andersen A, Anto JM, Armstrong BK, Baccarelli AA, Beland FA, Berrington A, Bertazzi PA, Birnbaum LS, Brownson RC, Bucher JR, Cantor KP, Cardis E, Cherrie JW, Christiani DC, Cocco P, Coggon D, Comba P, Demers PA, Dement JM, Douwes J, Eisen EA, Engel LS, Fenske RA, Fleming LE, Fletcher T, Fontham E, Forastiere F, Frentzel-Beyme R, Fritschi L, Gerin M, Goldberg M, Grandjean P, Grimsrud TK, Gustavsson P, Haines A, Hartge P, Hansen J, Hauptmann M, Heederik D, Hemminki K, Hemon D, Hertz-Picciotto I, Hoppin JA, Huff J, Jarvholm B, Kang D, Karagas MR, Kjaerheim K, Kjuus H, Kogevinas M, Kriebel D, Kristensen P, Kromhout H, Laden F, Lebailly P, LeMasters G, Lubin JH, Lynch CF, Lynge E, Mannetje A, McMichael AJ, McLaughlin JR, Marrett L, Martuzzi M, Merchant JA, Merler E, Merletti F, Miller A, Mirer FE, Monson R, Nordby KC, Olshan AF, Parent ME, Perera FP, Perry MJ, Pesatori AC, Pirastu R, Porta M, Pukkala E, Rice C, Richardson DB, Ritter L, Ritz B, Ronckers CM, Rushton L, Rusiecki JA, Rusyn I, Samet JM, Sandler DP, de Sanjose S, Schernhammer E, Costantini AS, Seixas N, Shy C, Siemiatycki J, Silverman DT, Simonato L, Smith AH, Smith MT, Spinelli JJ, Spitz MR, Stallones L, Stayner LT, Steenland K, Stenzel M, Stewart BW, Stewart PA, Symanski E, Terracini B, Tolbert PE, Vainio H, Vena J, Vermeulen R, Victora CG, Ward EM, Weinberg CR, Weisenburger D, Wesseling C, Weiderpass E, Zahm SH. IARC Monographs: 40 Years of Evaluating Carcinogenic Hazards to Humans. Environmental Health Perspectives 2015;123(6):507-14.
- Pearce N, Vandenbroucke J, Lawlor D. Causal inference in environmental epidemiology: old and new approaches. Epidemiology 2019; 30: 311-316.
- Savitz DA, Wellenius GA, Trikalinos TE. The problem with mechanistic risk of bias assessments in evidence synthesis of observational studies and a practical alternative: assess the impact of specific sources of potential bias. American Journal of Epidemiology 2019;188:1581-85.
- Steenland K, Schubauer-Berigan MK, Vermeulen R, Lunn R, Straif K, Zahm SH, Stenzel T, Arroyave WD, Mehta S, Pearce N. Risk of bias assessments for evidence syntheses of observational epidemiologic studies of environmental and occupational exposures: strengths and limitations. Submitted for publication.
ISEE Symposium August 2020
The International Society of Environmental Epidemiology (ISEE) meeting in August 2020 includes a mini-symposium session on: How GREAT is GRADE? Taking stock of GRADE for evidence review in environmental epidemiology. Speakers include Hanna Boogaard (HEI), David Savitz (Brown University), Tracey Woodruff (UC San Francisco), Francesco Forastiere (King's College London), and Neil Pearce (LSHTM). We will provide a link if and when one becomes available.
Health Effects Institute (HEI) Webinar May 2020
The Health Effects Institute (HEI) recently held a webinar on The Big Deal About Big Data, Causal Inference, and Accountability Research, which included presentations from Jill Baumgartner (McGill University), Douglas Dockery (Harvard University), Neil Pearce (LSHTM), and Roger Peng (Johns Hopkins University).
Watch the webinar recording.