Reporting Cost Effectiveness Analyses: Time for Improvement

Chris Carswell, Co-Editor in Chief, PharmacoEconomics

I am not a frequent blogger, but one recurring issue has forced me to put finger to keyboard: insufficient detail and transparency in many submitted cost-effectiveness analyses. Reporting checklists have been available for many years. According to Web of Science, the 1996 Drummond checklist[1] has been cited over 950 times. CHEERS[2], published in 2013, has been cited over 260 times and is one of the most downloaded Task Force Reports from the International Society for Pharmacoeconomics and Outcomes Research. In addition, authors continue to identify the need for better reporting to aid transparency and facilitate model replication.[3] So why the apparent lack of compliance with well-recognised reporting checklists?

In my experience, the issue spans all sectors and locations: authors from the pharmaceutical industry and academia, and those from developing and developed countries. Although the standard of reporting is markedly worse in papers from developing countries, poor reporting is by no means uncommon in papers from highly reputed universities, consultancy companies and medical communication agencies based in developed countries, organisations that have recourse to highly trained health economists and medical writers.

So what is going wrong? Are some authors choosing to cite reporting checklists without reading them, perhaps under the misapprehension that citing a well-recognised checklist will increase the chances of the paper passing peer review? Perhaps it is time pressure and the pervasive publish-or-perish culture, or a concern that errors will be found if papers are reported in sufficient detail. In some cases, there is the worry of giving away intellectual property or commercial secrets. A failure to keep up with rising standards, such as recent calls to report model validation efforts and to keep a detailed log of search activities, may also be partly to blame.[4, 5] Is the wording of the checklists unclear? Is there a need for additional training? Are reporting skills not taught in university courses? Are journals not specific enough about the need to follow recognised checklists? Perhaps the varying reporting requirements of journals are confusing, especially between clinical and speciality journals?

As with many issues, I don’t believe there is one particular cause. Education is an important element: from my experience of running workshops on CHEERS,[2] it doesn’t take long for most to grasp the importance (and the nuts and bolts) of a fully reported cost-effectiveness analysis. Differing requirements for detail between journals are another. A third is the failure to recognise that most journals offer the option to publish supplementary online material.

Whatever the reasons, it is very frustrating for editors, reviewers and readers. From an editor’s perspective, it means unnecessary delays in the peer review process, caused by having to return initial submissions for additional information, sometimes several times. Should a paper sneak past initial checks without sufficient information, the peer review process becomes inefficient as numerous revise-and-resubmit decisions have to be made, frustrating editors, authors and reviewers alike. Finally, in the unfortunate event that a poorly reported paper is published, readers are unable to properly critically appraise or replicate the analyses, and attempts to get further information from corresponding authors can be met with stony silence.

One initiative that helps with the critical appraisal and replication of analyses is making models available to peer reviewers and, ideally, to interested readers. PharmacoEconomics and its sister journal PharmacoEconomics Open are the first leading health economics and outcomes research journals to develop formal data sharing policies and, to my knowledge, remain the only ones to have done so. Since the beginning of 2017, authors have been required to publish a data availability statement at the end of every paper outlining where the data and models supporting the results can be found. Typically this includes a hyperlink to a public repository or to supplementary material, or a statement that the data or model will be made available on reasonable request (this appears to be the default statement in many cases, which raises the question of what counts as reasonable). Some authors of model-based analyses remain a little confused by this request and provide the statement that “Data sharing not applicable to this article as no datasets were generated or analysed during the current study”. So some education is needed.

For both journals, authors are routinely requested to provide their models for peer review. The availability of the model for peer review has so far been received very enthusiastically by reviewers, and most authors have responded positively to the request. Surprisingly, we have encountered the most resistance from academia, with concerns about giving away intellectual property and, with it, future funding opportunities. In contrast, the pharmaceutical industry has, by and large, been willing to share its models as long as reviewers undertake not to share them further.

Feedback from reviewers has been very positive, as illustrated by the example quotes below. In one paper, a major error was found in the model that significantly changed the interpretation of the data.

“I found it extremely useful to have the model.”

“It was helpful for me to see the model. In this case, the model confirmed the concerns I identified in my review, but didn’t lead me to change any comments. In general, looking at the model really helps me see if the authors are on track and did what their methods said.”

“Accessing the model is really useful particularly to double check the results and to run Sensitivity Analyses.”

“I thought having access to the model for the recent review was very valuable. It enhanced my review capability by testing different assumptions I wouldn’t have been able to test given no access to the model.”

“While the review took me slightly longer than a usual review without model access, it was worth the extra time and I hope that shows in my review.”

Although early results are promising, routine access to models is still a long way from widespread practice, and it does not obviate the need to fully report cost-effectiveness analyses in accordance with recognised checklists. In my view, all stakeholders (editors, authors, reviewers, sponsors, professional societies) have a duty to up their game where reporting is concerned and to avoid treating checklists as merely a tick-box exercise. CHEERS!

  1. Drummond, M.F. and T.O. Jefferson, Guidelines for authors and peer reviewers of economic submissions to the BMJ. The BMJ Economic Evaluation Working Party. BMJ, 1996. 313(7052): p. 275-83.
  2. Husereau, D., M. Drummond, S. Petrou, C. Carswell, et al., Consolidated Health Economic Evaluation Reporting Standards (CHEERS) Statement. PharmacoEconomics, 2013. 31(5): p. 361-7.
  3. Bermejo, I., P. Tappenden, and J.H. Youn, Replicating Health Economic Models: Firm Foundations or a House of Cards? PharmacoEconomics, 2017. https://doi.org/10.1007/s40273-017-0553-x
  4. Vemer, P., et al., AdViSHE: A Validation-Assessment Tool of Health-Economic Models for Decision Makers and Model Users. PharmacoEconomics, 2016. 34(4): p. 349-61.
  5. Edlin, R., et al., Cost-Effectiveness Modelling for Health Technology Assessment. 2015, Adis. https://doi.org/10.1007/978-3-319-15744-3

3 thoughts on “Reporting Cost Effectiveness Analyses: Time for Improvement”

  1. Indeed, and unfortunately unsurprising to me. Someone with relatives in both the public and private sectors said those relatives agreed that literature reviewing is better in the private sector, as it faces a higher bar to be taken seriously. Academic groups are now forced into cut-throat competition with each other and (IMHO) are cutting corners. For instance, though it’s nice to see my original 2007 paper on Best-Worst Scaling cited so much, 95% of those citations do NOT support the point the citing authors are actually making about how they used BWS. They clearly haven’t read the more recent literature, where we show how things have moved on immensely in the decade since that paper.

  2. PS: quite a few of the citations to my JHE paper should actually be to articles and book chapters published by Adis, so they are adversely influencing your impact factors! I think too many researchers these days haven’t been taught the importance of the literature review: citing the original paper when mentioning a methodology, citing the first correct application when mentioning a newer way to use the methodology or analyse results, etc.

  3. Thanks Terry, interesting comments. I remember being taught never to cite something you haven’t read. Some authors who cite CHEERS either haven’t read it, have failed to understand it, or have chosen to ignore some of its recommendations.
