Article Highlight | 12-Feb-2026

Reporting Quality of Trend Analyses Published in Leading Medicine and Oncology Journals during 2008-2018

Xia & He Publishing Inc.

Reporting quality is a cornerstone of evidence-based medicine and ensures the reproducibility of clinical research. While extensive attention has been devoted to evaluating the reporting quality of clinical trials and observational longitudinal studies, the same scrutiny has rarely been applied to trend analyses. Trend analyses are vital for evaluating temporal changes in epidemiological parameters and forecasting future trends. Recent methodological guidelines advocate for the reporting of slopes, beta coefficients, or annual percent change (APC) in such analyses, while the American Statistical Association emphasizes the importance of reporting effect sizes alongside p-values. However, the extent to which these recommended statistical metrics are reported in published trend analyses remains unknown. This study therefore aimed to systematically assess the reporting quality of trend analyses published in leading medical and oncology journals from 2008 to 2018, using the reporting of key statistical metrics as quality indicators, and to identify factors associated with reporting quality.

Materials and Methods
A systematic search of PubMed was conducted for articles published between January 1, 2008, and December 31, 2018, in ten high-impact journals (Ann Intern Med, Ann Oncol, BMJ, J Clin Oncol, J Natl Cancer Inst, JAMA Oncol, JAMA, Lancet, Lancet Oncol, and N Engl J Med). Articles were included if the title contained "trend" or "trends" and the publication type was an original article, research letter, or meta-analysis/systematic review that performed a trend analysis. Articles published after 2018 were excluded to avoid potential distortion from the surge in publications during the COVID-19 pandemic.
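The search strategy described above could be approximated with a PubMed query along these lines. This is an illustrative reconstruction, not the authors' exact search string; the journal abbreviations and field tags follow standard PubMed syntax.

```python
# Illustrative reconstruction of the PubMed search described above
# (not the authors' exact query string).
journals = [
    "Ann Intern Med", "Ann Oncol", "BMJ", "J Clin Oncol",
    "J Natl Cancer Inst", "JAMA Oncol", "JAMA", "Lancet",
    "Lancet Oncol", "N Engl J Med",
]

title_filter = '("trend"[Title] OR "trends"[Title])'
journal_filter = "(" + " OR ".join(f'"{j}"[Journal]' for j in journals) + ")"
date_filter = '("2008/01/01"[Date - Publication] : "2018/12/31"[Date - Publication])'

query = f"{title_filter} AND {journal_filter} AND {date_filter}"
print(query)
```

Publication-type filtering (original article, research letter, meta-analysis/systematic review) and confirmation that a trend analysis was actually performed would still require manual screening, as described in the study.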

Three authors independently extracted data using a standardized form, recording: publication year, journal specialty (medicine/oncology), statistical model type, and the reporting of four key metrics: p-values, effect sizes (defined as confidence/credible/uncertainty intervals or quartiles), beta/coefficient/slope, and APC. Author-related factors such as senior author location (U.S. vs. non-U.S.) and the presence of any authors affiliated with a School of Public Health, epidemiology department, or statistics department were also recorded.

A reporting quality score (0–2 points) was assigned: one point for reporting either a p-value or an effect size, and one point for reporting beta/coefficient/slope or APC in linear models. Statistical analyses included chi-square tests, Fisher’s exact tests, and multivariable logistic regression to identify factors associated with reporting each metric.
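The 0–2 scoring rule above can be sketched as a simple function. The field names here are hypothetical, not taken from the study's extraction form:

```python
def quality_score(record):
    """Score a trend analysis 0-2 per the rule described above.

    `record` is a dict of boolean flags; the key names are
    hypothetical, not the study's actual extraction fields.
    """
    score = 0
    # One point for reporting either a p-value or an effect size.
    if record.get("p_value") or record.get("effect_size"):
        score += 1
    # One point for reporting beta/coefficient/slope or APC (linear models).
    if record.get("is_linear_model") and (
        record.get("beta_or_slope") or record.get("apc")
    ):
        score += 1
    return score

# A linear-model analysis reporting a p-value and APC scores the full 2 points.
print(quality_score({"p_value": True, "is_linear_model": True, "apc": True}))  # 2
```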

Results and Discussion
Of the 398 reports initially retrieved, 297 articles met the inclusion criteria. The majority used linear or piecewise linear models (76.1%), with smaller proportions using non-parametric (12.8%) or non-linear parametric models (10.8%).

Overall Reporting Quality:
Among all 297 analyses, 66.0% reported p-values and 72.7% reported effect sizes. For the 226 analyses using linear models specifically, reporting rates were higher: 74.8% reported p-values, 81.0% reported effect sizes, 41.6% reported APC, and 15.0% reported beta coefficients/slopes. Notably, only 13 analyses (5.8%) failed to report any of the assessed metrics (p-values/effect sizes or beta/slope/APC). This suggests a moderate level of reporting quality but reveals considerable room for improvement, particularly in the consistent reporting of effect magnitudes and model coefficients.
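For context, APC is conventionally estimated by regressing the logarithm of a rate on calendar year and transforming the slope as APC = (exp(b) − 1) × 100. A minimal sketch using synthetic data and this standard formula (not the study's own code):

```python
import math

def annual_percent_change(years, rates):
    """Estimate APC from a log-linear fit: log(rate) = a + b * year.

    Ordinary least squares on log-transformed rates; APC is the
    conventional (exp(b) - 1) * 100 transformation of the slope.
    """
    n = len(years)
    logs = [math.log(r) for r in rates]
    mean_x = sum(years) / n
    mean_y = sum(logs) / n
    b = sum((x - mean_x) * (y - mean_y) for x, y in zip(years, logs)) / sum(
        (x - mean_x) ** 2 for x in years
    )
    return (math.exp(b) - 1) * 100

# Synthetic rates growing exactly 2% per year should yield an APC of 2.
years = list(range(2008, 2019))
rates = [100 * 1.02 ** (y - 2008) for y in years]
print(round(annual_percent_change(years, rates), 2))  # 2.0
```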

Factors Associated with Reporting Quality:
Multivariable regression analyses revealed several key associations:

  1. Author Affiliation: Involvement of an author from a statistics department was significantly associated with a higher likelihood of reporting effect sizes (OR=3.28). Conversely, involvement of an author from an epidemiology department was associated with a lower likelihood of reporting effect sizes (OR=0.38).

  2. Senior Author Location: Senior authors based in the U.S. were significantly more likely to report p-values compared to non-U.S. senior authors (OR=2.17).

  3. Other Factors: No factors were independently associated with the reporting of APC. Having an author from a School of Public Health was associated with higher overall reporting-quality scores in univariate analysis but was not significant in the multivariable model for specific metrics.

Interpretation and Implications:
The finding that statisticians' involvement improves effect size reporting aligns with their rigorous training in quantitative methodology and underscores the value of including biostatisticians in research teams. The paradoxical finding regarding epidemiologists warrants careful interpretation; it may reflect historical training emphases that preceded recent strong advocacy for effect size reporting (circa 2016–2019).

The geographical disparity in p-value reporting highlights a potential need for enhanced methodological training and awareness in regions outside the U.S. The generally suboptimal reporting of beta coefficients/slopes and APC—metrics essential for quantifying and comparing trends—indicates a widespread gap between published recommendations and practice.

This study underscores the need for:

  • The development and adoption of formal reporting guidelines specific to trend analyses.

  • Greater vigilance from journal editors and peer reviewers in enforcing the reporting of recommended statistical metrics.

  • Encouraging the involvement of statisticians in the design, analysis, and reporting of trend studies, particularly in oncology.

Limitations
This study has several limitations:

  1. The title-based search may have missed relevant articles that used synonyms for "trend".

  2. Focusing on high-impact journals limits generalizability to lower-tier publications.

  3. Analyses published after 2018 were excluded to avoid pandemic-era publication biases.

  4. The chosen quality metrics may not be applicable to all types of trend models.

Conclusions
The reporting quality of trend analyses in leading medical and oncology journals is moderate, with inconsistent adherence to recommendations for reporting p-values, effect sizes, and model coefficients. Key factors influencing reporting include author affiliation (statistics vs. epidemiology departments) and senior author location (U.S. vs. non-U.S.). These findings call for increased awareness, targeted training, the development of specific reporting guidelines, and greater collaboration with statistical experts to enhance the transparency, reproducibility, and scientific rigor of trend analysis research. Further studies are needed to validate these findings across a broader range of journals and in more recent publications.

 

Full text:

https://www.xiahepublishing.com/2472-0712/ERHM-2025-00039

 

The study was recently published in Exploratory Research and Hypothesis in Medicine.

Exploratory Research and Hypothesis in Medicine (ERHM) publishes original exploratory research articles and state-of-the-art reviews that focus on novel findings and the most recent scientific advances that support new hypotheses in medicine. The journal accepts a wide range of topics, including innovative diagnostic and therapeutic modalities as well as insightful theories related to the practice of medicine. The exploratory research published in ERHM does not necessarily need to be comprehensive and conclusive, but the study design must be solid, the methodologies must be reliable, the results must be true, and the hypothesis must be rational and justifiable with evidence.

 

Follow us on X: @xiahepublishing

Follow us on LinkedIn: Xia & He Publishing Inc.

Disclaimer: AAAS and EurekAlert! are not responsible for the accuracy of news releases posted to EurekAlert! by contributing institutions or for the use of any information through the EurekAlert system.