A systematic literature review (SLR) is one of the most rigorous forms of research synthesis. Unlike a traditional literature review where you summarise sources that feel relevant, a systematic review follows a predefined, transparent methodology to identify, evaluate, and synthesise all available evidence on a specific research question. For PhD scholars, Master’s students, and anyone preparing a thesis or journal manuscript, mastering this method is not optional — it is essential.
Universities worldwide now expect systematic reviews as standalone chapters or even as independent publications. Journals indexed in Scopus and Web of Science increasingly favour systematic reviews because they minimise bias and produce reproducible findings. If you are an international student navigating this process for the first time, this guide walks you through every step, aligned with the PRISMA 2020 guidelines.
What Is a Systematic Literature Review?
A systematic literature review is a structured approach to gathering and analysing research studies that address a clearly defined question. It differs from a narrative or traditional review in three critical ways:
- Predefined protocol: You document your search strategy, inclusion criteria, and analysis plan before you begin searching. This prevents cherry-picking studies that support your hypothesis.
- Comprehensive search: You search multiple databases systematically using carefully constructed search strings, rather than relying on a few familiar sources.
- Transparent reporting: Every decision — which studies were included, which were excluded, and why — is documented so that another researcher could replicate your review.
The gold standard for reporting systematic reviews is the PRISMA (Preferred Reporting Items for Systematic Reviews and Meta-Analyses) statement. PRISMA 2020 provides a 27-item checklist and a flow diagram that guides you from identification through screening to final inclusion.
Step 1: Define Your Research Question
Every systematic review begins with a focused, answerable question. Vague questions produce unmanageable results. Use one of these established frameworks to sharpen your question:
- PICO (Population, Intervention, Comparison, Outcome) — best for clinical and health sciences research.
- PEO (Population, Exposure, Outcome) — suited for observational studies.
- SPIDER (Sample, Phenomenon of Interest, Design, Evaluation, Research type) — ideal for qualitative and mixed-methods reviews.
Example using PICO: “Among undergraduate students (P), does flipped classroom pedagogy (I) compared to traditional lectures (C) improve academic performance (O)?”
A well-framed question determines your search terms, your eligibility criteria, and the scope of your entire review. Spend time getting this right before moving forward.
Step 2: Develop a Review Protocol
Before searching a single database, write a formal protocol. A protocol is a detailed plan that describes exactly how you will conduct the review. It should include:
- Your research question and objectives
- Eligibility criteria (inclusion and exclusion)
- Databases you will search
- Search strategy with keywords and Boolean operators
- Screening process and who will screen
- Data extraction form or template
- Quality assessment tool you will use
- Method of synthesis (narrative, thematic, or meta-analysis)
Many researchers register their protocols on platforms like PROSPERO (for health-related reviews) or OSF Registries (for all disciplines). Registration adds credibility and prevents duplication. Even if registration is not mandatory for your programme, writing a protocol forces you to think through the entire process and exposes gaps in your plan early.
Step 3: Construct Your Search Strategy
The search strategy is the backbone of your systematic review. A poorly designed search either misses relevant studies or floods you with irrelevant results. Follow these principles:
Identify key concepts: Break your research question into two or three core concepts. For the flipped classroom example: Concept A = “flipped classroom”, Concept B = “undergraduate students”, Concept C = “academic performance”.
List synonyms and related terms: For each concept, brainstorm synonyms, abbreviations, and alternative spellings. “Flipped classroom” might also appear as “inverted classroom”, “flipped learning”, or “reverse instruction”.
Use Boolean operators:
- OR connects synonyms within a concept: (“flipped classroom” OR “inverted classroom” OR “flipped learning”)
- AND connects different concepts: (Concept A) AND (Concept B) AND (Concept C)
- NOT excludes irrelevant terms (use sparingly as it can accidentally remove relevant studies)
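The OR-within-concepts, AND-between-concepts pattern is mechanical enough to script. Here is a minimal Python sketch that assembles a search string from synonym lists (the `or_group` and `build_query` helpers and the synonym lists are illustrative, not part of any database's syntax — always check each database's own query rules before pasting a string in):

```python
def or_group(terms):
    """Join synonyms for one concept with OR, quoting multi-word phrases."""
    quoted = [f'"{t}"' if " " in t else t for t in terms]
    return "(" + " OR ".join(quoted) + ")"

def build_query(concepts):
    """AND together the OR-group for each concept."""
    return " AND ".join(or_group(terms) for terms in concepts)

# Illustrative concepts from the flipped-classroom example
concepts = [
    ["flipped classroom", "inverted classroom", "flipped learning"],
    ["undergraduate", "university students"],
    ["academic performance", "learning outcomes"],
]

print(build_query(concepts))
```

Keeping the synonym lists in one place like this also makes it easy to rerun and re-document the identical search across several databases.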
Choose your databases: Search at least two or three databases relevant to your field. Common options include PubMed and MEDLINE for health sciences, Scopus and Web of Science for multidisciplinary research, ERIC for education, PsycINFO for psychology, and IEEE Xplore for engineering and technology. Also search Google Scholar for grey literature, conference proceedings, and dissertations that may not appear in subscription databases.
Document every search you run — the database name, the exact search string, the date, and the number of results. This becomes an appendix in your review and is required by PRISMA.
Step 4: Screen and Select Studies
Screening happens in two rounds. First, you review titles and abstracts. Then, you read the full texts of shortlisted studies. At each stage, apply your predefined eligibility criteria consistently.
Title and abstract screening: Export all results from your database searches into a reference manager such as Zotero, Mendeley, or EndNote. Remove duplicates first. Then read each title and abstract and mark it as “include”, “exclude”, or “maybe”. Be generous at this stage — if you are unsure, keep the study for full-text review.
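Reference managers handle deduplication for you, but the underlying idea is simple: normalise each title and keep the first record per normalised key. A minimal Python sketch (the record dictionaries are invented examples; real tools also match on DOI and author-year, which is more reliable than title alone):

```python
import re

def normalise(title):
    """Lowercase, strip punctuation, and collapse whitespace for matching."""
    return re.sub(r"\s+", " ", re.sub(r"[^\w\s]", "", title.lower())).strip()

def deduplicate(records):
    """Keep the first record seen for each normalised title."""
    seen, unique = set(), []
    for rec in records:
        key = normalise(rec["title"])
        if key not in seen:
            seen.add(key)
            unique.append(rec)
    return unique

# The same study exported from two databases with slightly different titles
records = [
    {"title": "Flipped Classrooms and Exam Performance", "source": "Scopus"},
    {"title": "Flipped classrooms and exam performance.", "source": "ERIC"},
]
print(len(deduplicate(records)))  # 1
```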
Full-text screening: Obtain the full text of every study that passed the first round. Read each paper against your eligibility criteria. Record your reason for excluding each study. PRISMA requires you to report exclusion reasons in a flow diagram.
If two or more reviewers are involved (which is recommended for reducing bias), use inter-rater reliability measures such as Cohen’s Kappa to demonstrate agreement. Tools like Rayyan, Covidence, or even a shared spreadsheet can streamline collaborative screening.
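Cohen's kappa is straightforward to compute from two reviewers' decision lists: it is the observed agreement corrected for the agreement expected by chance from each reviewer's marginal proportions. A small self-contained Python sketch (the decision lists are invented; screening tools like Rayyan report this for you):

```python
def cohens_kappa(rater_a, rater_b):
    """Cohen's kappa for two raters' categorical decisions."""
    assert len(rater_a) == len(rater_b)
    n = len(rater_a)
    categories = set(rater_a) | set(rater_b)
    # Observed agreement: proportion of items both raters coded the same
    p_o = sum(a == b for a, b in zip(rater_a, rater_b)) / n
    # Chance agreement: product of each rater's marginal proportions per category
    p_e = sum(
        (rater_a.count(c) / n) * (rater_b.count(c) / n) for c in categories
    )
    return (p_o - p_e) / (1 - p_e)

a = ["include", "include", "exclude", "exclude", "include", "exclude"]
b = ["include", "exclude", "exclude", "exclude", "include", "exclude"]
print(round(cohens_kappa(a, b), 2))  # 0.67
```

Values above roughly 0.6 are commonly read as substantial agreement; if kappa is low, revisit your eligibility criteria together before continuing.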
Step 5: Assess the Quality of Included Studies
Not all studies are created equal. A systematic review must evaluate the methodological quality (also called risk of bias) of every included study. The tool you use depends on your study designs:
- Cochrane Risk of Bias tool (RoB 2): For randomised controlled trials
- Newcastle-Ottawa Scale (NOS): For observational studies (cohort, case-control)
- CASP checklists: For qualitative studies
- JBI Critical Appraisal tools: For multiple study designs, including cross-sectional, prevalence, and qualitative studies
- MMAT (Mixed Methods Appraisal Tool): For mixed-methods reviews
Quality assessment does not mean you automatically exclude low-quality studies. Instead, you note their limitations and consider how quality affects your confidence in the overall findings. Some reviews conduct sensitivity analyses, removing low-quality studies to see whether the conclusions change.
Step 6: Extract Data Systematically
Create a standardised data extraction form before you begin reading. This ensures you capture the same information from every study. A typical form includes:
- Study identifiers (author, year, country, journal)
- Study design and sample size
- Population characteristics
- Intervention or exposure details
- Outcome measures and results
- Key findings and conclusions
- Quality assessment rating
Pilot your extraction form on three to five studies first. You will almost certainly discover fields that need adding or rewording. Once finalised, extract data from all included studies. If your review includes quantitative studies and you plan a meta-analysis, also extract effect sizes, confidence intervals, and sample sizes for each outcome.
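If you prefer a spreadsheet-free workflow, the extraction form can live in code as a fixed list of field names, which guarantees every study is recorded against the same columns. A Python sketch using the standard `csv` module (the field names mirror the list above; the sample study row is entirely invented):

```python
import csv
import io

# Field names mirror the extraction form items listed above
FIELDS = [
    "author", "year", "country", "journal",
    "design", "sample_size", "population",
    "intervention", "outcomes", "key_findings", "quality_rating",
]

def write_extraction_table(rows, stream):
    """Write one CSV row per included study, with a fixed header."""
    writer = csv.DictWriter(stream, fieldnames=FIELDS)
    writer.writeheader()
    for row in rows:
        writer.writerow(row)

buf = io.StringIO()
write_extraction_table(
    [{"author": "Lee", "year": 2021, "country": "UK", "journal": "Higher Ed.",
      "design": "RCT", "sample_size": 120, "population": "undergraduates",
      "intervention": "flipped classroom", "outcomes": "exam score",
      "key_findings": "+0.4 SD", "quality_rating": "low risk"}],
    buf,
)
print(buf.getvalue().splitlines()[0])  # prints the header row
```

Because `DictWriter` raises an error on unexpected keys, typos in field names surface immediately instead of silently creating stray columns.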
Step 7: Synthesise the Evidence
Synthesis is where your review delivers value. You have two broad options:
Narrative synthesis: Organise your findings thematically, by outcome, by population, or by methodology. Describe patterns, agreements, and contradictions across studies. Use summary tables to present key characteristics and findings at a glance. Narrative synthesis is appropriate when studies are too heterogeneous for statistical pooling.
Meta-analysis: If your included studies are sufficiently similar in design, population, intervention, and outcome measurement, you can statistically combine their results. Meta-analysis produces a pooled effect size and a forest plot that visually represents each study’s contribution. Software options include RevMan, Stata, R (metafor package), or Comprehensive Meta-Analysis (CMA). Assess heterogeneity using the I² statistic — values above roughly 75% are conventionally interpreted as considerable heterogeneity and may require subgroup analysis.
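To make the statistics concrete, here is a minimal fixed-effect sketch in Python of what the software computes: an inverse-variance pooled estimate, Cochran's Q, and I². The effect sizes and variances below are invented, and real packages such as metafor or RevMan typically fit random-effects models as well — treat this purely as an illustration of the arithmetic:

```python
def pooled_effect_and_i2(effects, variances):
    """Inverse-variance fixed-effect pooled estimate, Cochran's Q, and I²."""
    weights = [1 / v for v in variances]           # weight = 1 / variance
    pooled = sum(w * y for w, y in zip(weights, effects)) / sum(weights)
    # Cochran's Q: weighted squared deviations from the pooled estimate
    q = sum(w * (y - pooled) ** 2 for w, y in zip(weights, effects))
    df = len(effects) - 1
    # I²: share of variability beyond chance, floored at zero
    i2 = max(0.0, (q - df) / q) * 100 if q > 0 else 0.0
    return pooled, q, i2

# Hypothetical standardised mean differences and variances from five studies
effects = [0.80, 0.10, 0.60, -0.05, 0.45]
variances = [0.04, 0.06, 0.05, 0.03, 0.08]
pooled, q, i2 = pooled_effect_and_i2(effects, variances)
print(f"pooled d = {pooled:.2f}, Q = {q:.2f}, I² = {i2:.0f}%")
# pooled d = 0.35, Q = 12.81, I² = 69%
```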
Whichever approach you choose, address publication bias. Studies with positive or significant results are more likely to be published, which can skew your review’s findings. Funnel plots and Egger’s test help detect publication bias in meta-analyses.
Step 8: Report Using the PRISMA Flow Diagram
PRISMA 2020 requires a flow diagram that tracks the number of records at each stage (in the 2020 template, full-text eligibility assessment is reported within the screening phase, but the underlying stages of work remain the same):
- Identification: Total records found across all databases, plus any additional records from other sources (reference lists, grey literature).
- Screening: Records remaining after duplicate removal, records screened by title and abstract, and records excluded at this stage.
- Eligibility: Full-text articles assessed, with the number excluded and reasons for exclusion.
- Included: Final number of studies included in your qualitative synthesis and, if applicable, your meta-analysis.
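Before drawing the diagram, it is worth checking that your numbers actually add up — a mismatched flow diagram is one of the first things reviewers catch. A tiny Python sanity check (the counts and key names are invented placeholders for your own figures):

```python
# Hypothetical counts for a PRISMA flow diagram; each downstream number
# must be derivable from the one above it.
flow = {
    "records_identified": 1248,
    "duplicates_removed": 312,
    "records_screened": 936,    # 1248 - 312
    "records_excluded": 851,
    "full_texts_assessed": 85,  # 936 - 851
    "full_texts_excluded": 62,
    "studies_included": 23,     # 85 - 62
}

def check_flow(f):
    """Verify the arithmetic of the flow diagram before drawing it."""
    assert f["records_screened"] == f["records_identified"] - f["duplicates_removed"]
    assert f["full_texts_assessed"] == f["records_screened"] - f["records_excluded"]
    assert f["studies_included"] == f["full_texts_assessed"] - f["full_texts_excluded"]
    return True

print(check_flow(flow))  # True
```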
The PRISMA flow diagram is not optional. Reviewers and journal editors will look for it immediately. You can generate one using free online tools or drawing software such as draw.io.
Common Mistakes International Students Make
Having guided hundreds of PhD scholars and Master’s students through their systematic reviews, we consistently see these pitfalls:
- Starting without a protocol: Jumping straight into database searches without documenting your plan leads to inconsistent screening and wasted time when your supervisor asks you to justify your decisions.
- Searching only one database: A review based solely on Google Scholar or a single subscription database will miss relevant studies and fail peer review.
- Not recording exclusion reasons: You must document why each full-text study was excluded. “Not relevant” is not a valid reason — be specific.
- Confusing systematic reviews with literature reviews: A traditional literature review summarises existing knowledge. A systematic review follows a reproducible, protocol-driven methodology. Many students submit narrative reviews labelled as systematic reviews, and examiners reject them.
- Skipping quality assessment: Omitting risk-of-bias evaluation undermines your entire review. It is one of the defining features that separates a systematic review from other review types.
- Ignoring PRISMA: Even if your university does not explicitly require PRISMA, following it signals methodological rigour and makes your review publishable in high-impact journals.
Tools That Make the Process Easier
You do not need to do everything manually. These tools can save weeks of effort:
- Rayyan: Free web-based tool for collaborative screening of titles and abstracts. Supports blinding between reviewers.
- Covidence: End-to-end systematic review platform (paid, but many universities provide access). Handles screening, extraction, and quality assessment.
- Zotero or Mendeley: Reference managers for organising, deduplicating, and citing your sources.
- RevMan or R (metafor): For conducting meta-analyses and generating forest plots.
- PRISMA Flow Diagram Generator: Free tools that create publication-ready flow diagrams from your numbers.
How Long Does a Systematic Review Take?
A well-conducted systematic review typically takes three to six months when done alongside other research activities. The timeline depends on the breadth of your question, the number of databases searched, and whether you are working alone or with a team. Protocol development and search strategy design take one to two weeks. Searching and exporting results take another week. Screening can take two to six weeks depending on volume. Data extraction and quality assessment take two to four weeks. Synthesis and writing take another two to four weeks.
If you are working under a tight deadline or need expert guidance on any stage of this process, our PhD Thesis & Synopsis Writing service includes systematic review support — from protocol development and search strategy design through to PRISMA-compliant reporting and manuscript preparation.
Final Checklist Before Submission
Before you submit your systematic review to your supervisor or a journal, verify the following:
- Your research question is clearly stated using PICO, PEO, or SPIDER
- Your protocol is documented (and ideally registered)
- You searched at least two databases with documented search strings
- Your PRISMA flow diagram is complete with numbers at each stage
- Inclusion and exclusion criteria are explicit and consistently applied
- Quality assessment was conducted using an appropriate tool
- Synthesis addresses patterns, contradictions, and gaps in the evidence
- Limitations of your review are honestly discussed
- All references are correctly formatted and complete
A systematic literature review is demanding, but it is also one of the most valuable skills you can develop as a researcher. It teaches you to think critically, evaluate evidence objectively, and synthesise knowledge in a way that contributes meaningfully to your field. Follow the steps in this guide, adhere to PRISMA, and you will produce a review that stands up to scrutiny.