4  What influences researchers when using reporting guidelines? Part 2: Describing the questions asked in quantitative surveys

4.1 Introduction

In the previous chapter I described my systematic search and thematic synthesis of qualitative research exploring authors’ experiences of using reporting guidelines, identifying influences that may affect whether an author adheres to them. Many of the included studies used mixed-methods surveys, and my search also identified purely quantitative survey studies. Although I did not include these quantitative data in my qualitative synthesis, I decided the quantitative questions were important to investigate for three reasons.

My primary reason was to continue pursuing the objective of my previous chapter: to identify influences that may affect whether authors adhere to reporting guidelines. As many authors of these studies were themselves users or developers of reporting guidelines, the questions they asked may reflect real influences they have experienced, witnessed, or are trying to avoid or achieve.

Secondly, I wanted to know how many reporting guidelines had undergone any user testing, qualitative or quantitative. My qualitative synthesis found few reporting guidelines had undergone qualitative evaluation. Quantitative surveys might be easier for guideline developers to run (as most come from quantitative disciplines), and so I wanted to know which guidelines had been evaluated quantitatively and how this compared with those from my previous chapter.

Thirdly, I expected this analysis would provide additional context for my qualitative synthesis. In mixed-method surveys, quantitative questions may bias responses to subsequent qualitative questions and, therefore, the findings of my thematic synthesis. For instance, qualitative questions like “Anything else?” or “Please elaborate” may lead respondents to neglect or repeat topics covered by the previous quantitative questions.

My objectives, therefore, were to:

  1. Describe the landscape of quantitative survey studies exploring authors’ experiences of using reporting guidelines.
  2. Identify additional possible influences that were absent from my qualitative evidence synthesis.
  3. Compare themes arising from the quantitative questions with those from my qualitative synthesis.

4.2 Methods

My qualitative synthesis included a systematic search that sought to capture all survey studies investigating reporting guidelines (see chapter 3 for full search details). I found 22 studies that used quantitative survey questions, 14 of which also included one or more qualitative questions (see Table 4.1). Although the Chinese databases did not yield qualitative studies for my previous chapter, they did yield two Chinese quantitative surveys [1,2], which Yuting Duan from the Chinese EQUATOR Centre translated into English. I imported the files into NVivo (including the full surveys where available), labelled all questions with descriptive codes (creating new codes when necessary), and then inductively grouped related codes into broad categories.

Table 4.1: Studies that collected quantitative data to explore researchers’ experiences of reporting guidelines

| Citation | Title | Guidelines studied | Sample geography | Sample size | Quantitative or mixed methods |
|---|---|---|---|---|---|
| Brouwers et al. 2016 [3] | The AGREE Reporting Checklist: a tool to improve reporting of clinical practice guidelines | AGREE Reporting Checklist | Not reported | 15 | Quantitative |
| Burford, Welch, Waters et al. 2013 [4] | Testing the PRISMA-Equity 2012 reporting guideline: the perspectives of systematic review authors | PRISMA-Equity checklist items embedded into survey | Not reported | 151 | Mixed methods |
| Davies, Donnelly, Goodman, Ogrinc 2016 [5] | Findings from a novel approach to publication guideline revision: user road testing of a draft version of SQUIRE 2.0 | SQUIRE guidelines, which are presented as a checklist | Not reported, but invited participants were from the USA, UK, Lebanon and Sweden | 44 | Mixed methods |
| Dewey, Levine, Bossuyt et al. 2019 [6] | Impact and perceived value of journal reporting guidelines among Radiology authors and reviewers | CONSORT, STROBE, PRISMA, STARD checklists | USA, Canada, China, South Korea, Japan, Germany, France, Italy, UK, other European countries, Middle East, Latin America and ‘Other’ | 831 | Mixed methods |
| Eysenbach 2013 [7] | CONSORT-EHEALTH: implementation of a checklist for authors and editors to improve reporting of web-based and mobile randomized controlled trials | CONSORT-EHEALTH checklist | Not reported | 61 | Mixed methods |
| Fang, Xi, Liu et al. 2016 [1] | A survey on awareness of the ARRIVE guidelines and GSPC among researchers in the animal experiments field in Lanzhou City | ARRIVE guidelines and Gold Standard Publication Checklist | China | 287 | Quantitative |
| Fuller, Pearson, Peters, Anderson 2015 [8] | What affects authors’ and editors’ use of reporting guidelines? Findings from an online survey and qualitative interviews | TREND and reporting guidelines in general | Predominantly North America | 56 | Mixed methods |
| Giray et al. 2020 [9] | Assessment of the knowledge and awareness of a sample of young researcher physicians on reporting guidelines and the EQUATOR network: a single center cross-sectional study | CONSORT, PRISMA, CARE, GRASS, STARD, STROBE, ARRIVE, SAMPL guidelines | Turkey | 100 | Quantitative |
| Guo, Qi, Yang et al. 2018 [10] | Recognition status of quality assessment and standards for reporting randomized controlled trials of traditional Chinese medicine researchers | CONSORT Statement, STRICTA guidelines and CONSORT extension for Traditional Chinese Medicine | China | 180 | Quantitative |
| Korevaar, Cohen, Reitsma et al. 2016 [11] | Updating standards for reporting diagnostic accuracy: the development of STARD 2015 | STARD checklist | Not reported for quantitative survey | 12 | Mixed methods |
| Ma et al. 2017 [2] | Survey of basic medical researchers on the awareness of animal experimental designs and reporting standards in China | ARRIVE guidelines and Gold Standard Publication Checklist | China | 266 | Quantitative |
| Macleod, Collings, Graf et al. 2021 [12] | The MDAR (Materials Design Analysis Reporting) Framework for transparent reporting in the life sciences | MDAR checklist | USA, China, Japan, Germany, other EU, ‘Other’ | 211 | Mixed methods |
| McDonough et al. 2011 [13] | Familiarity of non-industry authors with good publication practice and clinical data reporting guidelines | CONSORT guidelines | USA, UK, Canada, South Africa, Israel, China | 23 | Quantitative |
| Öncel et al. 2018 [14] | Knowledge and awareness of optimal use of reporting guidelines in paediatricians: a cross-sectional study | CONSORT, STROBE, PRISMA, CARE, SRQR, STARD, SQUIRE, CHEERS, SPIRIT, ARRIVE, TREND, STREGA, the Conference on Guideline Standardization (COGS), and Outbreak Reports and Intervention Studies Of Nosocomial infection (ORION) guidelines | Turkey | 244 | Quantitative |
| Page, McKenzie, Bossuyt et al. 2021 [15] | Updating guidance for reporting systematic reviews: development of the PRISMA 2020 statement | PRISMA statement | Not reported | 110 | Mixed methods |
| Prady & MacPherson 2007 [16] | Assessing the Utility of the Standards for Reporting Trials of Acupuncture (STRICTA): a survey of authors | STRICTA | Not reported | 28 | Mixed methods |
| Prager, Gannon, Bowdridge et al. 2021 [17] | Barriers to reporting guideline adherence in point-of-care ultrasound research: a cross-sectional survey of authors and journal editors | STARD | Not reported | 18 | Mixed methods |
| Phillips et al. 2015 [18] | Pilot testing of the Guideline for Reporting of Evidence-Based Practice Educational Interventions and Teaching (GREET) | GREET checklist and E&E | Not reported | 31 | Quantitative |
| Rader, Mann, Stansfield et al. 2014 [19] | Methods for documenting systematic review searches: a discussion of common issues | PRISMA statement | Not reported | 263 | Mixed methods |
| Sharp, Glonti, Hren 2020 [20] | Using the STROBE statement: survey findings emphasized the role of journals in enforcing reporting guidelines | STROBE statement | The full survey was answered by participants in Africa, Asia, Europe, North and South America, the Middle East and the Pacific Region; it is unclear who answered the free-text question | 1015 | Mixed methods |
| Struthers, Harwood, de Beyer et al. 2021 [21] | GoodReports: developing a website to help health researchers find and use reporting guidelines | Reporting guidelines in general | Not reported | 274 | Mixed methods |
| Tam, Tang, Woo, Goh 2019 [22] | Perception of the Preferred Reporting Items for Systematic Reviews and Meta-Analyses (PRISMA) statement of authors publishing reviews in nursing journals: a cross-sectional online survey | PRISMA statement | Not reported | 230 | Mixed methods |

4.3 Results

What reporting guidelines were studied?

Across the 22 studies, 25 reporting guidelines were mentioned, most frequently PRISMA (n=6), STARD (n=6), CONSORT (n=6) and ARRIVE (n=5) (see my List of Abbreviations for the full titles of each reporting guideline). Thus, only a small proportion of the reporting guidelines indexed in EQUATOR’s database [23] have been evaluated with quantitative questions. Fourteen studies focussed on a single guideline, while others asked about multiple guidelines (e.g., “which reporting guidelines [participants] had known” [9]) or guidelines in general (e.g., “whether they had used reporting guidelines in their publications” [14]). Most studies included participants from the USA, Europe, and Canada, and only a few were conducted elsewhere (e.g., China and Turkey) (see Table 4.1).

In comparison, my thematic synthesis (chapter 3) identified 18 studies that collected qualitative data. These covered only 12 reporting guidelines and were all conducted in western countries, making them slightly less diverse than the quantitative survey studies.

The focus of quantitative questions

Survey studies asked participants:

  • whether they were aware of or familiar with certain reporting guidelines,
  • how often they used them and what for,
  • whether reporting guidelines had influenced their behaviour,
  • whether guidance was usable and useful,
  • their opinions on guidance content,
  • their reasons for using a reporting guideline,
  • their opinions on reporting quality in the literature,
  • whether reporting guidelines were easy to find and access,
  • whose role it was to check for compliance,
  • whether the aim of the guidance was clear,
  • opinions on things explicitly named as a barrier including the length of the guidance, the language it is written in, and the time needed to use it, and
  • opinions on things explicitly named as a facilitator or motivator including endorsements, evidence, explanatory information, training, the behaviour of peers, and the development process of the guidance.

Comparing the focus of quantitative questions with themes derived from qualitative data

The quantitative questions included some novel influences not contained in the qualitative data (shown in bold in Table 4.2), such as training as a possible facilitator [8], whether authors had heard of the EQUATOR Network [8,9], and whether transparency in guideline development is important [8]. One study asked whether language may be a barrier for some authors when using reporting guidelines [17]. Both the quantitative questions and the qualitative data mentioned journals enforcing reporting guidelines, but only the quantitative questions asked whether funders and employers should also enforce them.

Table 4.2: Codes describing the focus of questions asked and their code categories. Items in bold did not appear in the qualitative data.
| Code | Category |
|---|---|
| Participant’s experience [1,2,4–6,8–11,13,14,17,20] | Demographics |
| Participant’s speciality [1,2,4,5,8,9,11,17,19,20,22] | |
| Participant’s age [1,2,9,10,14,17,20,22] | |
| Participant’s gender [1,9,10,14,17,20,22] | |
| Participant’s geography [6,8,13,20] | |
| Participant’s stage of current research project [4] | |
| Awareness of a particular guideline [1,2,4,8,9,13,14,16,17,20,22] | Awareness |
| Awareness of EQUATOR [8,9] | |
| How did they first hear about guidelines or EQUATOR? [8,14,20] | |
| When did they first learn about a guideline? [8] | |
| How frequently do they use guidelines? [4–6,8,9,14,17,20,22] | Usage |
| When should guidelines be used? [6,8,9,14,17,20] | |
| Would they use a guideline, hypothetically? [3,4,20] | |
| Did the guidance impact subsequent behaviour? [3–7,16,21] | Impact on behaviour |
| Is the guidance usable? [17,18,20] | Usability |
| Is the guidance easy to understand? [5,10,11,17,20,21] | |
| Is the guidance useful? [2,3,6,8,12,20,21] | Usefulness |
| Is the guidance important? [2,5,7,8,10,22] | Importance |
| Are time and length barriers? [7,8,17,20,21] | Barriers |
| Is the language of the guidance a barrier? [17] | |
| Are guidelines lacking for a study type? [8] | |
| Is the layout OK? [3,11,20] | Opinions on content |
| Should the content be modified? [3,11,15] | |
| Is the guidance relevant? [21] | |
| Are guidelines prescriptive? [8] | |
| Will using a guideline benefit the manuscript? [3,4,17,20] | Reasons for using a guideline |
| Productivity benefits of using guidelines [20] | |
| Using guidelines because of journal requirements [8,20] | |
| Using guidelines because of funder requirements [8] | |
| Using guidelines because of employment requirements [8] | |
| Using guidelines because other researchers expect it [20] | |
| Opinions on the reporting quality of the literature [1,2,8,17] | Opinions on reporting quality |
| Are guidelines easy to find and access? [2,8,17] | Accessibility |
| Who should complete the checklist? [8,17] | Roles |
| Are endorsements a facilitator? [8] | Facilitators and motivators |
| Is evidence of an increased chance of publication a facilitator? [8] | |
| Is evidence of improved reporting quality a facilitator? [8] | |
| Is explanatory information a facilitator? [8] | |
| Is training a facilitator? [8] | |
| Is the behaviour of peers a facilitator or motivator? [8] | |
| Is the evidence base underlying a reporting guideline a motivator? [8] | |
| Is transparency in guideline development a motivator? [8] | |
| Is the aim of the guidance clear? [11] | Aim of guidance |
Table 4.3: Codes and descriptive themes identified from a qualitative evidence synthesis. Items in bold did not appear in the quantitative questions. Items in italic offer possible explanations to some quantitative findings.
| Code | Descriptive theme |
|---|---|
| What does this term mean? [5,11,15,21,24] | What does this mean? |
| What does this item mean? [5,11,15,16,21,24] | |
| How are these items different? [7,15,16,24] | |
| Have I understood this as intended? [5,24] | |
| Examples help me understand items [15,25,26] | |
| Why is this item important? [11,15,22,24] | Why is this item important? |
| Who is this item important to? [15,24,27] | |
| Have I understood the guideline’s scope as intended? [15,21] | Does this apply to me? |
| Does this item apply to me? [7,15,16,21,24] | |
| Is this item optional? [16,24] | |
| What are reporting guidelines? [17,27] | I don’t understand what reporting guidelines are |
| How should I use a reporting guideline? [8] | |
| I find guidelines useful in general [6,21] | Guidelines benefit me |
| Guidelines make me feel confident [27] | |
| Guidelines help me develop as a researcher [4,27] | |
| Guidelines may help me improve my manuscript [4,6,7,24,27] | |
| I believe guidelines may help me publish more easily [28] | |
| I may use guidelines because journals and editors tell me to [4,8,27,28] | I use guidelines because of other people |
| I may use guidelines because other researchers expect it [8,28] | |
| Standardized reporting benefits the community [27–29] | Guidelines benefit others |
| Immediate benefits are more important than hypothetical ones [27,28] | Some benefits are more important than others |
| Personal benefits are more important than benefits to others [28] | |
| I use reporting guidelines for planning research [24,27] | Researchers use reporting guidelines for different tasks |
| I use reporting guidelines for designing research [6,16,17,27] | |
| I use reporting guidelines for writing [6,16,24,27] | |
| I use reporting guidelines for checking my own or other people’s writing [17,27] | |
| I use reporting guidelines to appraise the quality of other people’s reporting [11] | |
| I use reporting guidelines for peer reviewing [27] | |
| I want items presented in the order in which I must do them [25,26,29] | I want guidance presented in formats that are better suited to the task I am doing |
| I want design or methods advice [15,24,27] | |
| I want templates for writing [6] | |
| I want checklists that are easy to fill in [12,21] | |
| I want checklists embedded into journal submission workflows [6] | |
| I want items embedded into data collection tools [4] | |
| Guidelines take time to read, understand and apply [4,8,28] | Guidelines take time |
| Some items require extra work which takes time and effort [5,19,24] | |
| I want an indication of which items to prioritize [16,24] | |
| Perceived complexity [6,12,24,28] | |
| Long guidelines are off-putting [4,7,21,27] | |
| Itemization helps me navigate guidance [15] | Itemization may decrease costs |
| Itemization summarizes the guidance [6] | |
| Itemization makes guidance appear longer [15] | Itemization may increase perceived costs |
| Itemization blocks the bigger picture [24] | |
| Following reporting guidance can result in long, bloated articles [4,7,16,24] | I think guidelines make my manuscripts long and bloated |
| Long, bloated articles may exceed journal word limits [7,8,12,16] | |
| I want options for where to report this item [5,7,8,15,24,27] | |
| The benefits of using a reporting guideline may not outweigh the costs [7,8,27] | The benefits of using a reporting guideline may not outweigh the costs |
| Guidelines are more valuable when used early [6,21,24,27] | The balance of benefits vs costs may be more favourable when guidelines are used early |
| I would clarify this item [15,16] | I think the guidance could be improved |
| I would move this item [5,24] | |
| I would split this item into two [15,24,26] | |
| I would add or remove items from this guideline [11,15,16,24] | |
| I would add or remove requirements from this item [15,16,22,25,27] | |
| Guidelines can become out of date [24] | Guidelines need to be kept updated |
| Guidelines need to be updated [15] | |
| I cannot report this because I didn’t do it [16] | I feel unable to report this |
| I cannot report this because of intellectual property issues [7] | |
| I cannot report this because it clashes with journal guidelines [15] | |
| I cannot report this because data was missing from my primary studies [4] | |
| Editors, reviewers or co-authors asked me to remove this item [16,19] | |
| I feel uncertain because I don’t know how to say that I didn’t do it [15] | I feel nervous or uncertain if I am unable to report an item |
| I feel worried that I will be judged for transparently reporting something I didn’t do [15,27] | |
| I may not know that reporting guidelines exist [6,8,11,21,28] | I can only use what I know about and have |
| I may not be able to easily access guidance [21,28] | |
| Reporting guidelines may be less valuable to experienced researchers [6,7,27] | Reporting guidelines are more valuable to inexperienced researchers |
| Experienced researchers feel that they already know how to report [6,24,27] | |
| Experienced researchers find guidance patronizing and feel untrusted [7,8,12,15] | |
| Reporting guidelines can be hard to use at first but get easier with experience [8,24,28] | Reporting guidelines can be hard to use at first but get easier with experience |
| I want design or methodological advice [12,15,27] | I want or need design advice |
| I don’t know how to do this item [15,16,24] | |
| Guidelines are procedural straightjackets [27] | I think this guidance prescribes how research should be designed |
| This guideline is too prescriptive [15,22,27] | |
| The guideline’s applicability criteria are not clear [6,11,21] | A guideline’s scope can be unclear |
| This guideline isn’t a perfect fit for me [21] | A guideline can be too narrow |
| This guideline doesn’t generalise [6,12,15,22,27] | |
| This guideline is too prescriptive [15,22,27] | |
| I don’t want to see optional items that only apply to other types of study [16,21] | A guideline’s scope can be too broad |
| I need to adhere to journal guidelines or other research guidelines [6,8,15,16] | Authors often need to adhere to multiple sets of guidance |
| I might need to use multiple reporting guidelines [27] | |
| I want reporting guidelines to be linked or embedded [11,15] | I want guidelines to harmonize |
| I want reporting guidelines to use similar structure [15] | |
| I want reporting guidelines to use similar terms [15] | |
| I don’t like checklists [6,7,21,27] | I experience reporting guidelines primarily as, or through, checklists |
| I may use the checklist instead of the full guidance [25] | |
| I may use the checklist before I read the full guidance [25] | |

Most ideas captured in the quantitative questions also appeared in the qualitative data. This may indicate that the quantitative questions asked were pertinent, or perhaps that they influenced participants’ responses to subsequent, qualitative questions.

Overall, although the quantitative questions contained some novel themes, the qualitative data contained many more ideas that were not addressed by the quantitative questions (see bold items in Table 4.3). These included what authors understand reporting guidelines to be, the pros and cons of itemization, ideas of how guidance could be improved, negative feelings when an item cannot be reported as desired, the pros and cons of including design advice in reporting guidance, whether optional items were understood as being optional, and frustration when the scope of a reporting guideline is too broad, narrow, or unclear.

The qualitative data sometimes provided context to or explanation for quantitative answers (see italicised items in Table 4.3). For example, many of the quantitative surveys asked participants whether they could understand the guidance. However, a quantitative answer to this question does not reveal what the participant understands, how they understand it, or whether they understand it as intended. The qualitative data contained reports of people failing to understand the wording of an item, how to report that item in practice, whether an item applies to them, whether a reporting guideline applies to them, what the intended scope of a reporting guideline is, or even what a reporting guideline is at all. One study found that although authors reported understanding an item, their writing showed that they had interpreted it differently to how the reporting guideline developers had intended [5].

4.4 Discussion

The aim of this chapter was to reveal which reporting guidelines had been evaluated using quantitative survey questions, what influences those questions explored, and to compare those influences with the results of my previous chapter.

Few reporting guidelines have been evaluated using quantitative or mixed surveys. Given that even fewer have been tested qualitatively, this means that hardly any of the hundreds of reporting guidelines in the EQUATOR Network’s database have undergone meaningful user testing. To encourage and support future guideline developers, advice on user testing could be included in an update of guidance for guideline developers, the current version of which contains little advice on how to evaluate reporting guidelines [30].

The quantitative survey questions offered few new influences and covered few of the themes I identified from qualitative studies. Quantitative surveys often asked about awareness, usage, usability, usefulness, importance, barriers, facilitators, content, and whether reporting guidelines had led to a change in behaviour. However, many of the themes I identified in my qualitative synthesis did not appear in the quantitative surveys. This suggests that the quantitative questions did not bias or limit the qualitative data, and also suggests that quantitative surveys may often miss themes. Often the qualitative data provided context and explanation for the quantitative data. Because quantitative surveys can miss important findings or fail to explain them, developers seeking actionable feedback should collect qualitative data when assessing how researchers understand or feel about reporting guidelines, or what could be done to improve the guidance.

Limitations of included studies and advice for future research

All of the included survey studies were subject to recall bias, as participants were describing past behaviour or opinions. Future studies could consider methods that allow researchers to document experiences in real time, such as observation or think-aloud tasks.

The included studies’ participants lacked diversity. Studies should ensure participants represent expected users in terms of academic writing experience, discipline, profession, experience (or naivety) with reporting guidelines, and language, or even focus on the differential experiences of specific target groups. For example, the EQUATOR Network website receives similar levels of traffic from Asia and Europe, yet very little research into the usability of, or barriers to, reporting guidelines has included authors from Asian countries (see chapter 5). The website also sees many new visitors who abandon the site quickly, without accessing any reporting guidance. These visitors may be authors who are naïve to reporting guidelines and decide not to use one. Understanding why people choose not to use a guideline is just as important as understanding the experiences of those who do. Many of the included studies used snowball sampling via social networks connected to the researchers, and hence may have attracted participants who knew about reporting guidelines and already held opinions on them. Many studies required authors to read the guidance as part of the study itself, thereby forcing participants to engage with it. Consequently, these studies do not capture the perspectives of less-engaged authors, or explain why authors choose not to use a guideline.

Some included surveys contained leading questions. For example, the Likert-rated statements “The STARD 2015 guidelines are easy to follow” [17] and “The time required to adhere to the STARD 2015 guidelines is a barrier to using the guidelines” [17] are both subject to acquiescence bias: the tendency for participants to agree with research statements [31]. Future studies should consider using neutrally worded questions.

Studies used many different terms to describe reporting guidelines, including guidelines, standards, requirements, checklist, explanation and elaboration, or just an acronym (e.g., CONSORT). This became a problem in studies where participants were not supplied with guidance documents as part of the study, as it was not always clear which document a participant was considering. For instance, asking participants whether PRISMA is easy to understand will not reveal whether they are talking about the PRISMA checklist, statement, or explanation and elaboration document. Future studies should be specific when asking questions and reporting results.

Conclusions

Very few reporting guidelines have been evaluated using either quantitative or qualitative methods. Reviewing the content of quantitative surveys revealed few novel influences which were absent from the qualitative data synthesised in chapter 3. However, the qualitative data contained many more themes than the quantitative questions. This suggests that the results of my qualitative evidence synthesis were not limited by the content of quantitative survey questions within the mixed methods surveys. Because the qualitative data revealed more themes and provided explanation and context to findings, reporting guideline developers who want to make sure their resources are easy to use should consider using qualitative methods, which may produce richer, actionable insights.

Two studies asked participants whether they had heard of the EQUATOR Network, noting that it is a “valuable resource for users and potential users of reporting guidelines”; 44% (19/43) of editors [8] and 38% (38/100) of authors [9] were aware of it (published in 2015 and 2020 respectively). Although these studies asked participants whether they were familiar with EQUATOR, authors’ experiences of using EQUATOR’s website have never been explored. In the next chapter I describe EQUATOR’s website and key characteristics of its web traffic, before discussing how well it helps authors find reporting guidelines.

4.5 Reflections on reporting this chapter

This chapter could have been combined with the previous one. They came from the same starting idea, and their findings are intertwined. I separated them because the previous chapter was already getting too long, and I wanted to keep its structure simple, familiar, and in line with ENTREQ. Would I have made this decision if ENTREQ did not exist? Possibly not. Separating the chapters draws a distinction between their objectives and methods, but in drawing this line I am diverging from the true origin story of this chapter. The two were not designed as separate studies, so in presenting them as such, am I twisting the truth so it falls in line with a reporting guideline? This might not matter for these exploratory studies, but it made me question whether other researchers may feel pressure to package a messy research journey into a neat-and-tidy reporting guideline format.