5. Blinding/Masking

What to write

Describe who was aware of the group allocation at the different stages of the experiment (during the allocation, the conduct of the experiment, the outcome assessment, and the data analysis).

Explanation

Researchers often expect a particular outcome and can unintentionally influence the experiment or interpret the data in such a way as to support their preferred hypothesis. Blinding is a strategy used to minimise these subjective biases.

Although there is primary evidence of the impact of blinding in the clinical literature, where blinded and nonblinded assessments of outcomes have been compared directly, such empirical evidence is limited in animal research. There are, however, compelling data from systematic reviews showing that nonblinded outcome assessment leads to treatment effects being overestimated, and that the lack of bias-reducing measures such as randomisation and blinding can contribute to as much as 30%–45% inflation of effect sizes.

Ideally, investigators should be unaware of the treatment(s) animals have received or will be receiving, from the start of the experiment until the data have been analysed. If this is not possible for every stage of an experiment (see the stages described below), it should always be possible to conduct at least some of the stages blind. This has implications for the organisation of the experiment and may require help from additional personnel, for example a surgeon to perform interventions, a technician to code the treatment syringes for each animal, or a colleague to code the treatment groups for the analysis. Online resources are available to facilitate allocation concealment and blinding.

Specify whether blinding was used or not for each step of the experimental process (see the stages described below) and indicate what particular treatment or condition the investigators were blinded to, or aware of.

If blinding was not used at any of these steps, explicitly state this and provide the reason why blinding was not possible or not considered.

Blinding during different stages of an experiment

During allocation

Allocation concealment refers to concealing the treatment to be allocated to each individual animal from those assigning the animals to groups, until the time of assignment. Together with randomisation, allocation concealment helps minimise selection bias, which can introduce systematic differences between treatment groups.
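
To make this concrete, the sketch below (illustrative only, not part of the ARRIVE guidance) shows how a person not otherwise involved in the experiment might generate a randomised, concealed allocation list in Python; the animal IDs, group names, codes, and file names are all hypothetical.

```python
# Illustrative sketch: balanced, randomised allocation with a concealed key.
# A person not involved in running the experiment keeps allocation_key.csv;
# everyone else works only from the coded list in allocation_blinded.csv.
import csv
import random

animal_ids = [f"mouse_{i:02d}" for i in range(1, 21)]  # hypothetical animal IDs
treatments = ["vehicle", "drug"]                       # hypothetical groups
codes = {"vehicle": "A", "drug": "B"}                  # arbitrary concealment codes

# Balanced allocation: 10 animals per group, order randomised.
allocation = treatments * (len(animal_ids) // len(treatments))
random.shuffle(allocation)

# Full key, kept by the independent person until unblinding.
with open("allocation_key.csv", "w", newline="") as f:
    writer = csv.writer(f)
    writer.writerow(["animal_id", "code", "treatment"])
    for animal, treatment in zip(animal_ids, allocation):
        writer.writerow([animal, codes[treatment], treatment])

# Blinded list shared with care staff and assessors: no treatment column.
with open("allocation_blinded.csv", "w", newline="") as f:
    writer = csv.writer(f)
    writer.writerow(["animal_id", "code"])
    for animal, treatment in zip(animal_ids, allocation):
        writer.writerow([animal, codes[treatment]])
```

The key file stays with the independent person until unblinding; everyone else works only from the coded list.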

During the conduct of the experiment

When possible, animal care staff and those who administer treatments should be unaware of allocation groups to ensure that all animals in the experiment are handled, monitored, and treated in the same way. Treating different groups differently based on the treatment they have received could alter animal behaviour and physiology and produce confounds.

Welfare or safety reasons may prevent blinding of animal care staff, but in most cases, blinding is possible. For example, if hazardous microorganisms are used, control animals can be considered as dangerous as infected animals. If a welfare issue would only be tolerated for a short time in treated but not control animals, a harm-benefit analysis is needed to decide whether blinding should be used.

During the outcome assessment

The person collecting experimental measurements or conducting assessments should not know which treatment each sample/animal received and which samples/animals are grouped together. Blinding is especially important during outcome assessment, particularly if there is a subjective element (e.g., when assessing behavioural changes or reading histological slides). Randomising the order of examination can also reduce bias.
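
As a purely illustrative sketch (the IDs are hypothetical, and this is not prescribed by the guideline), the order of examination can be randomised in a few lines of Python:

```python
# Illustrative sketch: randomise the order in which a blinded assessor
# examines the animals, so that assessment order cannot track group membership.
import random

animal_ids = [f"mouse_{i:02d}" for i in range(1, 13)]  # hypothetical IDs
assessment_order = animal_ids.copy()
random.shuffle(assessment_order)

for position, animal in enumerate(assessment_order, start=1):
    print(f"{position:2d}. assess {animal}")
```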

If the person assessing the outcome cannot be blinded to the group allocation (e.g., obvious phenotypic or behavioural differences between groups), some, but not all, of the sources of bias could be mitigated by sending data for analysis to a third party who has no vested interest in the experiment and does not know whether a treatment is expected to improve or worsen the outcome.

During the data analysis

The person analysing the data should know which data are grouped together to enable group comparisons but should not be aware of which specific treatment each group received. This type of blinding is often neglected but is important, as the analyst makes many semisubjective decisions, such as transforming outcome measures, choosing methods for handling missing data, and deciding how to handle outliers. How these decisions will be made should be specified a priori.

Data can be coded prior to analysis so that the treatment group cannot be identified before analysis is completed.
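
A minimal, hypothetical sketch of what such coding might look like in Python: treatment labels are replaced with arbitrary codes so the analyst can compare groups without knowing which treatment each group received. The column names, codes, and file names are assumptions for illustration.

```python
# Illustrative sketch: replace treatment labels with arbitrary codes
# before handing the data to the analyst; the key is withheld until
# the analysis is complete.
import pandas as pd

raw = pd.DataFrame({
    "animal_id": ["m01", "m02", "m03", "m04"],           # hypothetical data
    "group":     ["vehicle", "drug", "vehicle", "drug"],
    "outcome":   [12.1, 15.3, 11.8, 16.0],
})

key = {"vehicle": "group_1", "drug": "group_2"}           # arbitrary codes
coded = raw.assign(group=raw["group"].map(key))

coded.to_csv("data_for_analysis.csv", index=False)        # given to the analyst
pd.Series(key, name="coded_as").to_csv("unblinding_key.csv")  # withheld until the analysis is final
```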

Examples

‘For each animal, four different investigators were involved as follows: a first investigator (RB) administered the treatment based on the randomization table. This investigator was the only person aware of the treatment group allocation. A second investigator (SC) was responsible for the anaesthetic procedure, whereas a third investigator (MS, PG, IT) performed the surgical procedure. Finally, a fourth investigator (MAD) (also unaware of treatment) assessed GCPS and NRS, mechanical nociceptive threshold (MNT), and sedation NRS scores’.

‘… due to overt behavioral seizure activity the experimenter could not be blinded to whether the animal was injected with pilocarpine or with saline’.

‘Investigators could not be blinded to the mouse strain due to the difference in coat colors, but the three-chamber sociability test was performed with ANY-maze video tracking software (Stoelting, Wood Dale, IL, USA) using an overhead video camera system to automate behavioral testing and provide unbiased data analyses. The one-chamber social interaction test requires manual scoring and was analyzed by an individual with no knowledge of the questions’.

Training

The UK EQUATOR Centre runs training on how to write using reporting guidelines.

Discuss this item

Visit this item’s discussion page to ask questions and give feedback.

References

1. Nuzzo R. How scientists fool themselves – and how they can stop. Nature. 2015;526(7572):182-185. doi:10.1038/526182a
2. Hróbjartsson A, Thomsen ASS, Emanuelsson F, et al. Observer bias in randomised clinical trials with binary outcomes: systematic review of trials with both blinded and non-blinded outcome assessors. BMJ. 2012;344:e1119. doi:10.1136/bmj.e1119
3. Rosenthal R, Fode KL. The effect of experimenter bias on the performance of the albino rat. Behavioral Science. 1963;8(3):183-189. doi:10.1002/bs.3830080302
4. Rosenthal R, Lawson R. A longitudinal study of the effects of experimenter bias on the operant learning of laboratory rats. Journal of Psychiatric Research. 1964;2(2):61-72. doi:10.1016/0022-3956(64)90003-2
5. Hirst JA, Howick J, Aronson JK, et al. The need for randomization in animal trials: an overview of systematic reviews. PLoS ONE. 2014;9(6):e98856. doi:10.1371/journal.pone.0098856
6. Vesterinen HM, Sena ES, ffrench-Constant C, Williams A, Chandran S, Macleod MR. Improving the translational hit of experimental treatments in multiple sclerosis. Multiple Sclerosis Journal. 2010;16(9):1044-1055. doi:10.1177/1352458510379612
7. Macleod MR, van der Worp HB, Sena ES, Howells DW, Dirnagl U, Donnan GA. Evidence for the efficacy of NXY-059 in experimental focal cerebral ischaemia is confounded by study quality. Stroke. 2008;39(10):2824-2829. doi:10.1161/strokeaha.108.515957
8. Percie du Sert N, Bamsey I, Bate ST, et al. The experimental design assistant. PLOS Biology. 2017;15(9):e2003779. doi:10.1371/journal.pbio.2003779
9. Bustamante R, Daza MA, Canfrán S, et al. Comparison of the postoperative analgesic effects of cimicoxib, buprenorphine and their combination in healthy dogs undergoing ovariohysterectomy. Veterinary Anaesthesia and Analgesia. 2018;45(4):545-556. doi:10.1016/j.vaa.2018.01.003
10. Neumann AM, Abele J, Kirschstein T, et al. Mycophenolate mofetil prevents the delayed T cell response after pilocarpine-induced status epilepticus in mice. PLOS ONE. 2017;12(11):e0187330. doi:10.1371/journal.pone.0187330
11. Hsieh LS, Wen JH, Miyares L, Lombroso PJ, Bordey A. Outbred CD1 mice are as suitable as inbred C57BL/6J mice in performing social tasks. Neuroscience Letters. 2017;637:142-147. doi:10.1016/j.neulet.2016.11.035

Citation

For attribution, please cite this work as:
Percie du Sert N, Hurst V, Ahluwalia A, et al. The ARRIVE reporting guideline for writing animal research articles. The EQUATOR Network guideline dissemination platform. doi:10.1234/equator/1010101

Reporting Guidelines are recommendations to help describe your work clearly

Your research will be used by people from different disciplines and backgrounds for decades to come. Reporting guidelines list the information you should describe so that everyone can understand, replicate, and synthesise your work.

Reporting guidelines do not prescribe how research should be designed or conducted. Rather, they help authors transparently describe what they did, why they did it, and what they found.

Reporting guidelines make writing research easier, and transparent research leads to better patient outcomes.

Easier writing

Following guidance makes writing easier and quicker.

Smoother publishing

Many journals require completed reporting checklists at submission.

Maximum impact

From Nobel Prizes to null results, articles have more impact when everyone can use them.

Who reads research?

Your work will be read by different people, for different reasons, around the world, and for decades to come. Reporting guidelines help you consider all of your potential audiences. For example, your research may be read by researchers from different fields, by clinicians, patients, evidence synthesisers, peer reviewers, or editors. Your readers will need information to understand, replicate, apply, appraise, synthesise, and use your work.

Cohort studies

A cohort study is an observational study in which a group of people with a particular exposure (e.g. a putative risk factor or protective factor) and a group of people without this exposure are followed over time. The outcomes of the people in the exposed group are compared to the outcomes of the people in the unexposed group to see if the exposure is associated with particular outcomes (e.g. getting cancer or length of life).

Source.

Case-control studies

A case-control study is a research method used in healthcare to investigate potential risk factors for a specific disease. It involves comparing individuals who have been diagnosed with the disease (cases) to those who have not (controls). By analysing the differences between the two groups, researchers can identify factors that may contribute to the development of the disease.

An example would be when researchers conducted a case-control study examining whether exposure to diesel exhaust particles increases the risk of respiratory disease in underground miners. Cases included miners diagnosed with respiratory disease, while controls were miners without respiratory disease. Participants' past occupational exposures to diesel exhaust particles were evaluated to compare exposure rates between cases and controls.

Source.

Cross-sectional studies

A cross-sectional study (also sometimes called a "cross-sectional survey") is an observational design in which researchers capture data from a group of participants at a single point in time. This approach provides a 'snapshot': a brief glimpse into the characteristics or outcomes prevalent within a designated population at that precise moment. The primary aim is not to track changes or developments over an extended period but to assess and quantify the current situation regarding specific variables or conditions. Such a methodology is instrumental in identifying patterns or correlations among various factors within the population, providing a basis for further, more detailed investigation.

Source

Systematic reviews

A systematic review is a comprehensive approach designed to identify, evaluate, and synthesise all available evidence relevant to a specific research question. In essence, it collects all possible studies related to a given topic and design, and reviews and analyses their results.

The process involves a highly sensitive search strategy to ensure that as much pertinent information as possible is gathered. Once collected, this evidence is often critically appraised to assess its quality and relevance, ensuring that conclusions drawn are based on robust data. Systematic reviews often involve defining inclusion and exclusion criteria, which help to focus the analysis on the most relevant studies, ultimately synthesising the findings into a coherent narrative or statistical synthesis. Some systematic reviews will include a meta-analysis.

Source

Systematic review protocols

TODO

Meta-analyses of Observational Studies

TODO

Randomised Trials

A randomised controlled trial (RCT) is a trial in which participants are randomly assigned to one of two or more groups: the experimental group or groups receive the intervention or interventions being tested; the comparison group (control group) receive usual care or no treatment or a placebo. The groups are then followed up to see if there are any differences between the results. This helps in assessing the effectiveness of the intervention.

Source

Randomised Trial Protocols

TODO

Qualitative research

Research that aims to gather and analyse non-numerical (descriptive) data in order to gain an understanding of individuals' social reality, including understanding their attitudes, beliefs, and motivation. This type of research typically involves in-depth interviews, focus groups, or field observations in order to collect data that is rich in detail and context. Qualitative research is often used to explore complex phenomena or to gain insight into people's experiences and perspectives on a particular topic. It is particularly useful when researchers want to understand the meaning that people attach to their experiences or when they want to uncover the underlying reasons for people's behavior. Qualitative methods include ethnography, grounded theory, discourse analysis, and interpretative phenomenological analysis.

Source

Case Reports

TODO

Diagnostic Test Accuracy Studies

Diagnostic accuracy studies focus on estimating the ability of the test(s) to correctly identify subjects with a predefined target condition, or the condition of interest (sensitivity), as well as to correctly identify those without the condition (specificity).

Prediction Models

Prediction model research is used to test the accuracy of a model or test in estimating an outcome value or risk. Most models estimate the probability of the presence of a particular health condition (diagnostic) or whether a particular outcome will occur in the future (prognostic). Prediction models are used to support clinical decision making, such as whether to refer patients for further testing, monitor disease deterioration or treatment effects, or initiate treatment or lifestyle changes. Examples of well-known prediction models include EuroSCORE II for cardiac surgery, the Gail model for breast cancer, the Framingham risk score for cardiovascular disease, IMPACT for traumatic brain injury, and FRAX for osteoporotic and hip fractures.

Source

Animal Research

TODO

Quality Improvement in Healthcare

Quality improvement research is about finding out how to improve and make changes in the most effective way. It is about systematically and rigorously exploring "what works" to improve quality in healthcare and the best ways to measure and disseminate this to ensure positive change. Most quality improvement effectiveness research is conducted in hospital settings, is focused on multiple quality improvement interventions, and uses process measures as outcomes. There is a great deal of variation in the research designs used to examine quality improvement effectiveness.

Source

Economic Evaluations in Healthcare

TODO

Meta-analyses

A meta-analysis is a statistical technique that amalgamates data from multiple studies to yield a single estimate of the effect size. This approach enhances precision and offers a more comprehensive understanding by integrating quantitative findings. Central to a meta-analysis is the evaluation of heterogeneity, which examines variations in study outcomes to ensure that differences in populations, interventions, or methodologies do not skew results. Techniques such as meta-regression or subgroup analysis are frequently employed to explore how various factors might influence the outcomes. This method is particularly effective when aiming to quantify the effect size, odds ratio, or risk ratio, providing a clearer numerical estimate that can significantly inform clinical or policy decisions.
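
For illustration, one standard way such a single pooled estimate is computed is the fixed-effect inverse-variance method, shown below as a textbook formula rather than a recommendation specific to this glossary:

```latex
% Fixed-effect inverse-variance pooling of k study estimates \hat{\theta}_i
\hat{\theta}_{\mathrm{pooled}} = \frac{\sum_{i=1}^{k} w_i \,\hat{\theta}_i}{\sum_{i=1}^{k} w_i},
\qquad w_i = \frac{1}{\mathrm{SE}(\hat{\theta}_i)^2}
```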

How Meta-analyses and Systematic Reviews Work Together

Systematic reviews and meta-analyses function together, each complementing the other to provide a more robust understanding of research evidence. A systematic review meticulously gathers and evaluates all pertinent studies, establishing a solid foundation of qualitative and quantitative data. Within this framework, if the collected data exhibit sufficient homogeneity, a meta-analysis can be performed. This statistical synthesis allows for the integration of quantitative results from individual studies, producing a unified estimate of effect size. Techniques such as meta-regression or subgroup analysis may further refine these findings, elucidating how different variables impact the overall outcome. By combining these methodologies, researchers can achieve both a comprehensive narrative synthesis and a precise quantitative measure, enhancing the reliability and applicability of their conclusions. This integrated approach ensures that the findings are not only well-rounded but also statistically robust, providing greater confidence in the evidence base.

Why Don't All Systematic Reviews Use a Meta-Analysis?

Systematic reviews do not always include a meta-analysis, due to variations in the data. For a meta-analysis to be viable, the data from different studies must be sufficiently similar, or homogeneous, in terms of design, population, and interventions. When the data show significant heterogeneity, meaning there are considerable differences among the studies, combining them could lead to skewed or misleading conclusions. Furthermore, the quality of the included studies is critical; if the studies are of low methodological quality, merging their results could obscure true effects rather than reveal them.

Protocol

A plan or set of steps that defines how something will be done. Before carrying out a research study, for example, the research protocol sets out what question is to be answered and how information will be collected and analysed.

Source

Animal research

When ARRIVE refers to animal research it is referring to in vivo animal research: the use of non-human animals, sometimes known as model organisms, in experiments that seek to control the variables that affect the behaviour or biological system under study. This approach can be contrasted with field studies, in which animals are observed in their natural environments or habitats. Animal research varies on a continuum from pure research, focusing on developing fundamental knowledge of an organism, to applied research, which may focus on answering questions of great practical importance, such as finding a cure for a disease.

Source

The ARRIVE guidelines apply to all areas of bioscience research involving living animals. That includes mammalian species as well as model organisms such as Drosophila or Caenorhabditis elegans. Each item is equally relevant to manuscripts centred on a single animal study and to broader-scope manuscripts describing in vivo observations alongside other types of experiments. The exact detail to report may vary between species and experimental setups; this is acknowledged in the guidance provided for each item.

Source

Bias

The over- or underestimation of the true effect of an intervention. Bias is caused by inadequacies in the design, conduct, or analysis of an experiment, resulting in the introduction of error.

Source

Descriptive and inferential statistics

Descriptive statistics are used to summarise the data. They generally include a measure of central tendency (e.g., mean or median) and a measure of spread (e.g., standard deviation or range). Inferential statistics are used to make generalisations about the population from which the samples are drawn. Hypothesis tests such as ANOVA, Mann-Whitney, or t tests are examples of inferential statistics.

Source
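
A small, hypothetical Python example (with made-up numbers) illustrating the distinction: the means and standard deviations summarise the samples themselves (descriptive), while the t test draws an inference about the populations the samples came from (inferential).

```python
# Illustrative example: descriptive summaries versus an inferential test,
# using made-up data for two hypothetical groups.
import numpy as np
from scipy import stats

control   = np.array([10.2, 11.5, 9.8, 10.9, 11.1])
treatment = np.array([12.4, 13.0, 11.9, 12.8, 13.3])

# Descriptive statistics: summarise the samples themselves.
for name, values in [("control", control), ("treatment", treatment)]:
    print(f"{name}: mean = {values.mean():.2f}, sd = {values.std(ddof=1):.2f}")

# Inferential statistics: test whether the population means differ.
t_stat, p_value = stats.ttest_ind(control, treatment)
print(f"t = {t_stat:.2f}, p = {p_value:.4f}")
```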

Effect size

Quantitative measure of differences between groups, or strength of relationships between variables.

Source
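
For example, one widely used effect size for the difference between two group means is Cohen's d, given here as a standard textbook formula purely for illustration:

```latex
% Cohen's d: standardised difference between two group means
d = \frac{\bar{x}_1 - \bar{x}_2}{s_{\mathrm{pooled}}},
\qquad
s_{\mathrm{pooled}} = \sqrt{\frac{(n_1 - 1)s_1^2 + (n_2 - 1)s_2^2}{n_1 + n_2 - 2}}
```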

Experimental unit

Biological entity subjected to an intervention independently of all other units, such that it is possible to assign any two experimental units to different treatment groups. Sometimes known as unit of randomisation.

Source

External validity

Extent to which the results of a given study enable application or generalisation to other studies, study conditions, animal strains/species, or humans.

Source

False negative

Statistically nonsignificant result obtained when the alternative hypothesis (H₁) is true. In statistics, it is known as the type II error.

Source

False positive

Statistically significant result obtained when the null hypothesis (H₀) is true. In statistics, it is known as the type I error.

Source

Independent variable

Variable that either the researcher manipulates (treatment, condition, time) or is a property of the sample (sex) or a technical feature (batch, cage, sample collection) that can potentially affect the outcome measure. Independent variables can be scientifically interesting, or nuisance variables. Also known as predictor variable.

Source

Internal validity

Extent to which the results of a given study can be attributed to the effects of the experimental intervention, rather than some other, unknown factor(s) (e.g., inadequacies in the design, conduct, or analysis of the study introducing bias).

Source

Nuisance variable

Variables that are not of primary interest but should be considered in the experimental design or the analysis because they may affect the outcome measure and add variability. They become confounders if, in addition, they are correlated with an independent variable of interest, as this introduces bias. Nuisance variables should be considered in the design of the experiment (to prevent them from becoming confounders) and in the analysis (to account for the variability and sometimes to reduce bias). For example, nuisance variables can be used as blocking factors or covariates.

Source

Null and alternative hypotheses

The null hypothesis (H₀) is that there is no effect, such as a difference between groups or an association between variables. The alternative hypothesis (H₁) postulates that an effect exists.

Source

Outcome measure

Any variable recorded during a study to assess the effects of a treatment or experimental intervention. Also known as dependent variable, response variable.

Source

Power

For a predefined, biologically meaningful effect size, the probability that the statistical test will detect the effect if it exists (i.e., the null hypothesis is rejected correctly).

Source

Sample size

Number of experimental units per group, also referred to as n.

Source
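
To illustrate how sample size relates to power and effect size, the sketch below (with arbitrary numbers, and assuming the statsmodels package is available) estimates the number of experimental units per group needed for a two-sample t test:

```python
# Illustrative sketch: number of experimental units per group needed to detect
# a standardised effect size of 0.8 with 80% power at alpha = 0.05,
# using a two-sample t test with equal group sizes.
from statsmodels.stats.power import TTestIndPower

analysis = TTestIndPower()
n_per_group = analysis.solve_power(effect_size=0.8,
                                   alpha=0.05,
                                   power=0.80,
                                   alternative="two-sided")
print(f"Approximately {n_per_group:.1f} experimental units per group")
```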