How transparent and reproducible are studies that use animal models of opioid addiction?


Bibliographic Details
Main Authors: Blackwell, Justine C. (Author), Beitner-Czoschke, Julia (Author), Holcombe, Alex (Author)
Format: Article (Journal)
Language: English
Published: April 2025
In: Addiction biology
Year: 2025, Volume: 30, Issue: 4, Pages: 1-17
ISSN: 1369-1600
DOI: 10.1111/adb.70027
Online Access: Publisher, free of charge, full text: https://doi.org/10.1111/adb.70027
Publisher, free of charge, full text: https://onlinelibrary.wiley.com/doi/abs/10.1111/adb.70027
Author Notes: Justine C. Blackwell, Julia Beitner, Alex O. Holcombe
Description
Summary: The reproducibility crisis in psychology has caused various fields to consider the reliability of their own findings. Many of the unfortunate aspects of research design that undermine reproducibility also threaten translation potential. In preclinical addiction research, the rates of translation have been disappointing. We tallied indices of transparency and of accurate and thorough reporting in animal models of opioid addiction from 2019 to 2023. By examining the prevalence of these practices, we aimed to understand whether efforts to improve reproducibility are relevant to this field. For 255 articles, we report the prevalence of transparency measures such as preregistration, registered reports, open data and open code, as well as compliance with the Animal Research: Reporting of In Vivo Experiments (ARRIVE) guidelines. We also report rates of bias minimization practices (randomization, masking and data exclusion), sample size calculations and multiple-comparison corrections. Lastly, we estimated the accuracy of test statistic reporting using a version of StatCheck. All the transparency measures and the ARRIVE guideline items had low prevalence, including no cases of study preregistration and no cases where authors shared their analysis code. Similarly, the levels of bias minimization practices and sample size calculations were unsatisfactory. In contrast, adjustments for multiple comparisons were implemented in most articles (76.5%). Lastly, p-value inconsistencies with test statistics were detected in about half of the papers, and 11% contained statistical significance errors. We recommend that researchers, journal editors and others take steps to improve study reporting and to facilitate both replication and translation.
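The StatCheck approach mentioned in the abstract recomputes a p-value from a reported test statistic and flags mismatches with the reported p-value. A minimal sketch of that idea is below; it is our own illustration, not the authors' code. It uses a z statistic (Python's standard library lacks the t distribution), and the function name `check_p_value` and the rounding tolerance are assumptions for demonstration only.

```python
from statistics import NormalDist

def check_p_value(z: float, reported_p: float, tol: float = 0.0005) -> bool:
    """StatCheck-style consistency check (illustrative sketch).

    Recomputes the two-tailed p-value for a reported z statistic and
    returns True if it agrees with the reported p-value within a small
    tolerance that allows for rounding in the published value.
    """
    # Two-tailed p-value: twice the upper-tail probability of |z|
    recomputed = 2 * (1 - NormalDist().cdf(abs(z)))
    return abs(recomputed - reported_p) <= tol

# Consistent report: z = 1.96 with p = .05 passes the check.
# Inconsistent report: z = 1.96 with p = .01 fails it.
```

A full checker such as StatCheck additionally parses statistics (t, F, chi-square, r) out of manuscript text and classifies whether an inconsistency flips statistical significance, which is how the article distinguishes ordinary inconsistencies from "statistical significance errors".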
Item Description: Viewed on 05.03.2026
Physical Description: Online Resource