Appendix J — BCW Step 5 – Identifying intervention functions

Instructions [1]

Use the APEASE criteria to identify appropriate intervention functions based on the behavioural diagnosis arrived at in Step 4.

We identified education, training, persuasion, modelling, and structural changes as the more practical, acceptable, and affordable functions. We felt that coercion, restriction, and incentivisation raise too many equity and side-effect issues, and would be unacceptable to many stakeholders. For now, the need is for education, modelling, and similar functions; once the structural issues around access and knowledge are removed, we can then consider restriction and the like.

Table J.1: Workshop participants’ considerations on the affordability, practicability, effectiveness and cost-effectiveness, acceptability, side-effects/safety, and equitability of intervention functions. Intervention functions and their definitions come from [1].
Intervention function | Examples (from Step 4) | APEASE considerations
Education

e.g., Providing information to promote healthy eating

Telling someone what a reporting guideline is.

Telling someone how to use an RG.

Telling someone when to use an RG.

Seniors prompting juniors to use RGs within collaboration.

Showing the difference between good and poor reporting (overlaps with modelling).

Affordability: affordable, though it would need to be spread to all universities.

Practicability: practical, possible, feasible.

(Cost)-Effectiveness: potentially cost-effective, because one well-educated student may produce many good papers, and one website can lead to many good RG-based reports. (The education needs to be good: tested and disseminated. Education that isn’t effective isn’t cost-effective.)

Equity: adapt education pieces to different audiences (by language, by experience with publishing, etc.).

Side-effects/safety: we can’t see any risk in being educated! People leaving their research group because they don’t use RGs? :-)

Persuasion

e.g., Using imagery to motivate increases in physical activity

Using stories about when good reporting helped someone and when poor reporting caused problems.

Using stories about different times when RGs were used and what the results were for publication / collaboration.

Seniors prompting juniors to use RGs in collaboration.

Using persuasive language/branding in any education or marketing.

Using language to make people feel understood and respected, while promoting RGs (emotional response).

Using design/layout to make the website friendlier and convey that it is simple/easy to use.

Affordability: affordable, because we can publish stories in our website for free.

Practicability: practical, although maybe requiring some work hours and different experts. Would look to solicit stories from lots of different people, from different situations.

(Cost)-Effectiveness: can be effective if a structure is used to create the stories.

Acceptability: We expect that these stories can be accepted. However, persuasive language can be seen as patronising (seniors can react poorly to the idea of being told what to do). So we would need to adapt for different audiences.

Side-effects/safety: usually safe, but there is a risk that those put off by patronising language might resist using RGs. Small risk of trolling.

Equity: necessary to tailor the language to different audiences (seniors vs ECRs; people experienced with the use of RGs vs those who are not).

Incentivisation

e.g., Using prize draws to induce attempts to stop smoking

Journals reward good reporting with reduced APCs / fast-track publication.

Funders reward good reporting with prioritised funding.

Universities use reporting quality for selection committees, promotions, awards.

Creating incentives by building products that embed guidance and make writing an easier/quicker task.

Creating products that give immediate gratification when guidance is followed (gamification).

Creating badging systems by which journals can reward well-reported articles (a badge is a shiny incentive).

Creating badging systems by which EQUATOR can reward well-reported articles.

Affordability: OK where already budgeted (no way to calculate the cost), but some options are very expensive: journals reducing APCs take a financial hit, and creating a gamified product will cost money.

Practicability: mixed views. Some measures would not be difficult to implement, but many rely on some measure of ‘good’ reporting: who creates this standard, and do people agree with it? Getting all universities/funders/journals to use reporting quality would be very challenging (much time spent contacting and persuading so many people).

(Cost)-Effectiveness: mixed views. Some measures could be cost-effective, getting people to do what needs to be done at little or no cost, but there is big variability here, depending on costs.

Acceptability: mixed views. There is no ethical or practical problem in adopting these measures, but huge push-back is likely from universities/journals/funders, or from the researchers experiencing them.

Side-effects/safety: risk of gaming, potentially undermining research integrity (reporting things as done when they were not).

Equity: problematic, because some people might not have access to support for good reporting. (Paralympics?)

Coercion

e.g., Raising the financial cost to reduce excessive alcohol consumption

Funders blacklist people who routinely report poorly.

Universities refuse promotion for poor reporters.

REF does not allow articles that don’t follow reporting guidance.

Media refuse to speak with authors who are not badged as good reporters.

Researchers refuse to collaborate with routine poor reporters.

Name and shame poor reporters (opposite of a badge)

“If you don’t use RGs, peer reviewers will give you a hard time”

“If you report poorly, then you’ll be viewed as a sloppy researcher and people won’t want to work with you”

Affordability: largely affordable, as none of this has a high direct cost, although checking by funders etc. would be time-consuming and expensive.

Practicability: not practical. Difficult to do.

(Cost)-Effectiveness: difficult to evaluate.

Acceptability: Low.

Side-effects/safety: bad impact on long-term reputation. How would people be removed from blacklists (clear the cache!), and how would the stigma be removed from people? Also, if you keep telling people that bad things will happen and they never do, the threats are empty and your other statements become less powerful.

Equity: same as above. People need support to learn how to report well so that they are not blacklisted; these measures would negatively affect ECRs and people without access to education.

Training

e.g., Advanced driver training to increase safe driving

Give people opportunities to practice using RGs.

Opportunities to practice finding the right RG for your work.

Practice integrating RGs into workflow/ identifying when RGs can be used.

Affordability: we’ve shown that this can be done in affordable ways, reaching many people online.

Practicability: we’ve shown that this is feasible! In-person teaching brings travel/venue/admin practicalities. Would need to think about models for scaling: for small-group training, train-the-trainer and ambassador models; for large groups, developing MOOCs (time-consuming, but it has been done before, e.g., on Coursera).

(Cost)-Effectiveness: Depends on the model, some very cost effective (when reaching many people with little effort, or when reaching fewer people but large impact). As with education, needs to be effective training.

Acceptability: Historically, ECRs and students are very receptive to training initiatives. Seniors are less interested for themselves, unless planning to teach their own courses.

Side-effects/safety: If dealing with research integrity issues, people can feel judged. When talking about past experiences, people can have difficult feelings come up.

Equity: language (could use existing translators, although the choice of languages has not been systematic), time zones (for online courses), location (for in-person courses), and cost for paid courses (need free versions, freemium models, sponsored models).

Restriction

Prohibiting/blocking the submission of manuscripts that do not adhere to RGs by introducing editorial staff prechecks.

(Journals not publishing articles that don’t follow RGs.)

Seniors/PIs refuse to sign off on manuscripts that don’t follow RGs.

Funders refuse to fund unless researchers commit to using reporting guidance.

Affordability: cheap for us, but expensive for those enforcing it (e.g., journals).

Practicability: EQUATOR doesn’t have that much clout/weight/influence!

(Cost)-Effectiveness:

Acceptability: unlikely to be accepted

Side-effects/safety: guidance adherence can be subjective, so this could open folks up to being blacklisted for no good reason, or to underlying bias.

Equity: could really negatively affect lower-resourced folks; and what about people who cannot afford to read RGs?

Environmental restructuring

Instruction to authors pages

  • Changing order of instructions (placing references to reporting guidelines at the top of instructions pages)
  • Highlighting important info (changing colour, emboldening fonts for info about RGs)
  • Including indications/links to RGs in authors’ instructions pages that don’t have it already

Submission systems

  • Including reminders (in Editorial manager’s pages, e.g., “are you sure you’ve included the relevant RG alongside your manuscript?” prompt messages)

Linking different resources together within RG resources – e.g., within the RG website, or within the RG itself, linking to the E&E (Explanation and Elaboration) document.

Make sure all guidance has usable/editable checklists

Making sure all guidance is accessible (i.e., not restricted by copyright).

Alternative ways of interacting with guidance, such as templates (GoodReports)

Social pressure for good reporting – ‘everyone’s doing it’

Affordability: our time for campaigning, but little cost for whoever owns the websites etc. (particularly if we supply draft text). Possible cost for things like permissions.

Practicability: potentially time heavy, but all doable! Limited by willingness of journals in some things. Making checklists accessible and editable is trickier, as we don’t have a lot of control there, but could influence developers during updates and campaign to have journals make changes.

(Cost)-Effectiveness: potentially very sensible – small changes that can then be accessed by many people

Acceptability: high, little extra work for researchers. RG developers might be more open to us changing the format of guidelines than changing language.

Side-effects/safety: journals might get annoyed with us, affect our relationship with them. But no side effects for researchers.

Equity: no issues we can think of.

Modelling

Publishing model papers (with complete reporting) and citing them; or pointing to exemplary papers published (with a badge)

Modelling ways to use RGs (case studies / stories showing how others use RGs)

Model examples of how to report each item (in E&Es)

Sharing models with medical writers etc., not just for use in education and training.

Affordability: cheap, just our time in collating examples. Badging system might require some investment.

Practicability: need to find perfect papers, which can be tricky. A badging system might be tricky if we want something high-tech that journals can apply, or that is systematic; but EQUATOR can do a very low-tech version, just locating a few good examples and pointing to them.

(Cost)-Effectiveness: potentially very useful

Acceptability: People are always looking for good examples!

Side-effects/safety: need to make sure good examples don’t contain bad habits that might get propagated, or include a disclaimer about what exactly is good.

Equity: need to make sure the model papers/examples are open access; language issues also apply.

Enablement

GoodReports templates (under testing in the GRReaT trial).

Space for people to ask experts questions about guidance, e.g.:

  • Medical writing online support (chat)
  • Ask an RG developer
  • Forums

FAQs on guidance websites and the EQUATOR website.

Making guidance easier to read and follow

  • Replacing difficult language
  • Using clever formatting, spacing, bullets

Affordability: all relatively affordable, except where someone is doing live chat – that could be expensive.

Practicability: things we’re doing or can do, but they would need software and outside involvement (experts). All feasible. Changing existing guidance is trickier, as we’re not in control of the RGs themselves; we could influence updates, or make these changes through, e.g., GoodReports, but we would still need consensus from a wider group.

(Cost)-Effectiveness: FAQs: small changes for a lasting effect. Live chat might not be that cost-effective, as it only helps one person at a time. Updating guidance: big time investment, potentially big reward.

Acceptability: RG developers might not like us changing guidance. Might be easier to get buy-in if we have evidence that lots of people struggle with something. Authors would welcome these changes.

Side-effects/safety: if questions move beyond experts’ field of knowledge, how do they react? Do they give misleading advice?

Equity: time zones for Q&As? The languages that chats/forums are run in and that FAQs are written in. But making guidance easier to understand could increase equity.