Common Mistakes in Systematic Reviews (And How to Avoid Them)

Peer reviewers of systematic reviews see the same mistakes repeatedly, and the pattern has held for a decade: the methodological problems that sink reviews are rarely novel. That is frustrating, but it is also good news, because it means they are preventable. This article catalogs the most common mistakes I see in journal review and on dissertation committees, with concrete fixes.

Mistake 1: Starting before the protocol is written

The single most expensive mistake is starting the search before locking the protocol. Teams drift, criteria change mid-stream, and the resulting review is unreplicable.

Fix: Write the protocol, get team sign-off, register on PROSPERO (see our PROSPERO guide), then search. The protocol must specify: question, criteria, databases, screening method, extraction plan, synthesis plan, risk of bias tool.

Mistake 2: Vague or over-broad question

"Does exercise help depression?" is not answerable. Which exercise? How much? In whom? Compared with what? Measured how?

Fix: Frame in PICO. Run a feasibility search. If the PICO is so narrow that no studies exist, widen one element. If it is so broad that 50,000 studies exist, narrow it. See our search strategy guide.

Mistake 3: Searching only PubMed

Single-database searches miss 20–50% of relevant records depending on the topic. PubMed alone is never sufficient for a systematic review with health outcomes.

Fix: Search at least three databases — typically PubMed/MEDLINE, Embase, and a field-specific database (CINAHL for nursing, PsycINFO for psychology, ERIC for education). Plus grey literature. See our grey literature post.

Mistake 4: Weak Boolean logic

Missing parentheses, inappropriate NOT, over-truncation — these structural search errors are the most common PRESS findings.

Fix: Use explicit parentheses; avoid NOT; peer-review the search using PRESS (McGowan et al., 2016). See our Boolean operators post.
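As an illustration, a two-concept search with explicit, internally OR'd blocks joined by AND might look like this in PubMed syntax (the terms are placeholders, not a validated strategy):

```
("depressive disorder"[mh] OR depression[tiab] OR "depressive symptoms"[tiab])
AND
(exercise[mh] OR "physical activity"[tiab] OR "aerobic training"[tiab])
```

Each concept lives in its own parenthesized block, so there is no ambiguity about operator precedence, and no NOT appears anywhere.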

Mistake 5: No second reviewer

Solo screening and solo extraction violate Cochrane methodology and are increasingly rejected by peer reviewers as inadequate.

Fix: Recruit a second reviewer — a co-author, librarian, or collaborator. For small reviews, a second reviewer for 20% of records is the minimum. For systematic reviews claiming rigor, 100% dual screening and extraction is expected.

Mistake 6: Drifting criteria

Criteria that change mid-screening invalidate the review. Yet teams routinely realize at record 2,000 that their criteria do not work.

Fix: Pilot the criteria on 100 records before formal screening begins. Calculate inter-rater kappa. Revise, re-pilot, and lock the criteria before full screening starts. If they must change later, document the amendment under PRISMA 2020 item 24c and explain the rationale.
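If you want to sanity-check agreement yourself before reaching for dedicated software, Cohen's kappa is simple enough to compute by hand. This is a minimal sketch with hypothetical pilot data; real reviews typically use a statistics package for this.

```python
def cohens_kappa(rater_a, rater_b):
    """Cohen's kappa for two equal-length lists of screening decisions."""
    assert len(rater_a) == len(rater_b)
    n = len(rater_a)
    labels = set(rater_a) | set(rater_b)
    # Observed agreement: proportion of records where the raters match.
    p_o = sum(a == b for a, b in zip(rater_a, rater_b)) / n
    # Expected agreement by chance, from each rater's marginal proportions.
    p_e = sum((rater_a.count(l) / n) * (rater_b.count(l) / n) for l in labels)
    return (p_o - p_e) / (1 - p_e)

# Hypothetical pilot decisions on 10 records from two reviewers.
a = ["include", "exclude", "exclude", "include", "exclude",
     "exclude", "include", "exclude", "exclude", "exclude"]
b = ["include", "exclude", "include", "include", "exclude",
     "exclude", "include", "exclude", "exclude", "exclude"]
print(round(cohens_kappa(a, b), 2))  # → 0.78
```

A kappa around 0.6–0.8 is usually read as substantial agreement; below that, revise the criteria and re-pilot before locking.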

Mistake 7: Skipping risk of bias

Some reviews — especially by first-time reviewers — skip risk of bias entirely or use a homegrown tool.

Fix: Use a published, validated tool for your design — Cochrane RoB 2 for RCTs, ROBINS-I for non-randomized comparative studies, QUADAS-2 for diagnostic accuracy, JBI for qualitative, AXIS for cross-sectional. Apply it in duplicate.

Mistake 8: Meta-analyzing heterogeneous studies

Pooling studies with incompatible populations, interventions, or outcomes produces a meaningless number.

Fix: Assess clinical and methodological homogeneity before statistical heterogeneity. If I² > 75% or studies are clinically distinct, use narrative synthesis with SWiM guidance. See our meta-analysis post.
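The I² statistic behind that threshold is just Cochran's Q rescaled. The sketch below uses invented effect sizes and a fixed-effect pooled estimate purely for illustration; it is not a substitute for a meta-analysis package such as metafor.

```python
def i_squared(effects, variances):
    """Return (Q, I2) for per-study effect estimates and their variances."""
    weights = [1 / v for v in variances]
    # Inverse-variance (fixed-effect) pooled estimate.
    pooled = sum(w * e for w, e in zip(weights, effects)) / sum(weights)
    # Cochran's Q: weighted squared deviations from the pooled estimate.
    q = sum(w * (e - pooled) ** 2 for w, e in zip(weights, effects))
    df = len(effects) - 1
    i2 = max(0.0, (q - df) / q) * 100 if q > 0 else 0.0
    return q, i2

# Hypothetical standardized mean differences and variances from 4 trials.
effects = [0.2, 0.5, 0.9, 1.4]
variances = [0.04, 0.05, 0.06, 0.05]
q, i2 = i_squared(effects, variances)
print(f"Q = {q:.1f}, I2 = {i2:.0f}%")  # → Q = 17.5, I2 = 83%
```

At 83%, these four (invented) trials would fail the I² > 75% check, pointing toward narrative synthesis rather than pooling.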

Mistake 9: Weak reporting against PRISMA 2020

Missing flow diagram counts, under-reported searches, no certainty of evidence, no exclusion reasons at full-text.

Fix: Complete the PRISMA 2020 checklist with page/line mappings before submission. Include the full checklist as supplementary material. See our PRISMA 2020 post.

Mistake 10: Not logging protocol deviations

Every review deviates from its protocol eventually. Not logging deviations looks like cherry-picking.

Fix: Keep a deviation log from day one. Deviations are normal; failing to disclose them is a credibility problem. Report each in the methods section and link to the updated PROSPERO record.

Mistake 11: Inadequate de-duplication

Screening 5,000 records that are actually 3,200 unique records wastes screening time and distorts the PRISMA flow diagram.

Fix: Use Bramer's method in EndNote, or a validated de-duplication workflow in Covidence, Rayyan, or a custom R/Python script. De-duplicate before screening.
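The core idea behind any of these workflows is matching records on a normalized key. Here is a deliberately naive sketch, keying on DOI with a normalized-title fallback; validated workflows handle near-duplicates and metadata variants far more robustly, so treat this as illustration only.

```python
import re

def dedup_key(record):
    """Prefer the DOI; otherwise normalize the title to lowercase alphanumerics."""
    doi = (record.get("doi") or "").strip().lower()
    if doi:
        return ("doi", doi)
    title = re.sub(r"[^a-z0-9]", "", (record.get("title") or "").lower())
    return ("title", title)

def deduplicate(records):
    seen, unique = set(), []
    for rec in records:
        key = dedup_key(rec)
        if key not in seen:
            seen.add(key)
            unique.append(rec)
    return unique

# Hypothetical export: the first two records are the same paper.
records = [
    {"title": "Exercise for Depression: A Trial", "doi": "10.1000/x1"},
    {"title": "Exercise for depression: a trial.", "doi": "10.1000/X1"},
    {"title": "Yoga for anxiety in adults", "doi": ""},
]
print(len(deduplicate(records)))  # → 2
```

Whatever tool you use, record the before and after counts: the PRISMA flow diagram needs the number of records removed as duplicates.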

Mistake 12: Not contacting authors for missing data

Meta-analyses with missing variance data, sample sizes, or subgroup estimates can often be rescued by emailing the original authors.

Fix: Email authors systematically. Build two weeks into the review timeline for this step. Document every attempt.

Mistake 13: Conflating included studies with reports

A single study may generate 3–5 papers (trial results, subgroup analysis, long-term follow-up, economic evaluation). Counting each as a separate study inflates your sample and misleads readers.

Fix: Group reports by study. PRISMA 2020's flow diagram explicitly distinguishes records, reports, and studies.
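The bookkeeping is trivial once each report carries a study identifier, which is why it is worth assigning one during extraction. A small sketch with hypothetical registry IDs:

```python
from collections import defaultdict

# Hypothetical extraction table: four reports from two underlying studies.
reports = [
    {"study": "NCT0001", "report": "Main trial results, 2018"},
    {"study": "NCT0001", "report": "Long-term follow-up, 2021"},
    {"study": "NCT0001", "report": "Economic evaluation, 2022"},
    {"study": "NCT0002", "report": "Main trial results, 2020"},
]

# Group reports under their parent study for the flow diagram counts.
studies = defaultdict(list)
for r in reports:
    studies[r["study"]].append(r["report"])

print(len(reports), "reports;", len(studies), "studies")  # → 4 reports; 2 studies
```

The review's "n included" should be the study count (2 here), with the report count noted separately, exactly as the PRISMA 2020 diagram asks.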

Mistake 14: No interpretation

Some reviews describe every study and stop. Readers cannot tell what the review concluded or what it changes about the field.

Fix: The discussion section must synthesize — name the patterns, grade the evidence (GRADE), describe limitations, and draw implications for practice, policy, and research. A review without interpretation is a bibliography.

Mistake 15: Ignoring the library

Librarians specialize in systematic review search strategies. Teams that skip library consultation produce weaker searches.

Fix: Budget one hour with your health sciences librarian during search development, and one hour before submission for PRESS review. Libraries routinely offer this for free.

A pre-submission sanity checklist

  • [ ] Protocol registered and cited
  • [ ] At least three databases searched, documented per PRISMA-S
  • [ ] Search peer-reviewed (PRESS)
  • [ ] Dual screening and extraction throughout
  • [ ] Risk of bias tool applied in duplicate
  • [ ] PRISMA 2020 checklist mapped
  • [ ] Flow diagram uses 2020 template
  • [ ] Exclusion reasons at full-text stage reported
  • [ ] GRADE (or equivalent) per outcome
  • [ ] Deviations from protocol disclosed
  • [ ] Author contact attempts documented
  • [ ] Limitations section names specific threats to validity

If you can tick every box, you are ahead of most manuscripts I review.

Related posts