12. Effect measures
Specify for each outcome the effect measure(s) (such as risk ratio, mean difference) used in the synthesis or presentation of results.
Essential elements
- Specify for each outcome or type of outcome (such as binary, continuous) the effect measure(s) (such as risk ratio, mean difference) used in the synthesis or presentation of results.
- State any thresholds or ranges used to interpret the size of effect (such as minimally important difference; ranges for no/trivial, small, moderate, and large effects) and the rationale for these thresholds.
- If synthesised results were re-expressed to a different effect measure, report the methods used to re-express results (such as meta-analysing risk ratios and computing an absolute risk reduction based on an assumed comparator risk).
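As an illustration of the re-expression mentioned above, the following sketch computes an absolute risk reduction from a meta-analysed risk ratio and an assumed comparator risk. The pooled risk ratio and comparator risk used here are hypothetical values chosen only for demonstration, not taken from any study.

```python
def absolute_risk_reduction(rr: float, assumed_comparator_risk: float) -> float:
    """ARR = assumed comparator risk minus the implied intervention risk.

    The implied intervention risk is the comparator risk multiplied by the
    pooled risk ratio (RR).
    """
    intervention_risk = assumed_comparator_risk * rr
    return assumed_comparator_risk - intervention_risk

# Hypothetical example: pooled RR = 0.75, assumed comparator risk = 20%
arr = absolute_risk_reduction(rr=0.75, assumed_comparator_risk=0.20)
print(f"ARR = {arr:.3f}")  # 0.20 - 0.15 = 0.050, i.e. 5 fewer events per 100
```

In practice the assumed comparator risk should itself be justified (for example, taken from a representative included study or an external source), as the resulting absolute effect depends directly on it.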
Additional elements
- Consider providing justification for the choice of effect measure. For example, a standardised mean difference may have been chosen because multiple instruments or scales were used across studies to measure the same outcome domain (such as different instruments to assess depression).
Explanation
To interpret a synthesised or study result, users need to know what effect measure was used. Effect measures are statistical constructs that compare outcome data between two groups. For example, a risk ratio might be used for dichotomous outcomes.1 The chosen effect measure affects interpretation of the findings and may influence the meta-analysis results (such as heterogeneity2). Authors might use one effect measure to synthesise results and then re-express the synthesised results using another. For example, for meta-analyses of standardised mean differences, authors might re-express the combined results in units of a well-known measurement scale, and for meta-analyses of risk ratios or odds ratios, they might re-express results in absolute terms (such as risk difference).3 Furthermore, authors need to interpret effect estimates in relation to whether the effect is important to decision makers. For a particular outcome and effect measure, this requires specifying the thresholds (or ranges) used to interpret the size of effect (such as a minimally important difference; ranges for no/trivial, small, moderate, and large effects).3
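The re-expression of a standardised mean difference described above can be sketched as a single multiplication: the pooled SMD times an assumed standard deviation of a familiar measurement scale gives a mean difference in that scale's units. The SMD and standard deviation below are hypothetical values for illustration only.

```python
def smd_to_mean_difference(smd: float, scale_sd: float) -> float:
    """Re-express a pooled SMD as a mean difference in scale units.

    scale_sd is an assumed standard deviation for the familiar scale,
    e.g. estimated from a representative included study.
    """
    return smd * scale_sd

# Hypothetical example: pooled SMD = -0.40, assumed scale SD = 7.5 points
md = smd_to_mean_difference(smd=-0.40, scale_sd=7.5)
print(f"Re-expressed mean difference: {md:.1f} scale points")  # -3.0
```

The result should then be compared against the stated interpretation thresholds (such as a minimally important difference on that scale) to judge whether the effect matters to decision makers.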
Example
“We planned to analyse dichotomous outcomes by calculating the risk ratio (RR) of a successful outcome (i.e. improvement in relevant variables) for each trial…Because the included resilience‐training studies used different measurement scales to assess resilience and related constructs, we used standardised mean difference (SMD) effect sizes (Cohen's d) and their 95% confidence intervals (CIs) for continuous data in pair‐wise meta‐analyses.”4
Training
The UK EQUATOR Centre runs training on how to write using reporting guidelines.
Discuss this item
Visit this item’s discussion page to ask questions and give feedback.