High reprint orders in medical journals and pharmaceutical industry funding: case-control study
“High reprint orders in medical journals and pharmaceutical industry funding: case-control study” by Adam E Handel and colleagues (BMJ 2012;344:e4212).
Objectives—To assess the extent to which funding and study design are associated with high reprint orders.
Setting—Lancet, Lancet Neurology, Lancet Oncology (Lancet Group), BMJ, Gut, Heart, and Journal of Neurology, Neurosurgery and Psychiatry (BMJ Group), 2002-9.
Participants—Top articles by size of reprint orders in the seven journals, matched to contemporaneous articles not in the list of high reprint orders.
Main outcome measures—Funding and design of randomised controlled trials or other study designs.
Results—Median reprint orders for the seven journals ranged from 3000 to 126 350. Papers with high reprint orders were more likely to be funded by the drug industry than were control papers (industry funding versus other or none: odds ratio 8.64, 95% confidence interval 5.09 to 14.68, and mixed funding versus other or none: 3.72, 2.43 to 5.70).
Conclusions—Funding by the drug industry is associated with high numbers of reprint orders.
Why do the study?
The authors of this study set out to discover whether articles published in medical journals that had many reprints purchased were more likely to be funded by the drug industry. (“Reprints” refers to purchased permission to reproduce journal articles in bulk, allowing the article to be distributed to a new audience. The money from reprint purchases is an important source of income for some journals.) A second question in this study was whether those articles with high reprint orders were more likely to have a particular design.
Articles published in medical journals, particularly the best known ones, can have considerable impact. A study showing a drug to be highly effective might, for instance, lead to a surge in prescriptions of the drug, whereas a study describing serious side effects might cause its sales to plummet. Drug companies are thus keen to have positive studies published in major journals.
We have lots of evidence that randomised trials funded by drug companies and published in journals are much more likely than studies funded by other sources to report results favourable to the company. We also know—not least from Bad Pharma, a book by Ben Goldacre, one of the authors of this study—that drug companies often do not publish studies showing their drugs to be ineffective. So the positive trials are published in high profile journals and the negative trials might not be published at all. Clearly, this systematic bias is likely to make drugs seem much more effective and safe than they are.
The question arises whether journals might themselves introduce bias into what they publish. The accusation has been made—not least by me—that editors face a profound conflict of interest because they know that studies showing a drug to be effective, usually in a large, randomised trial, are likely to attract large purchases of reprints by the company. By publishing such a study, a journal might bring in a million pounds. If a publisher or editor had to cut costs and did not publish a paper expected to attract large reprint sales, he or she might have to fire five to 10 editors. What could be a starker conflict—publish one study or fire five editors?
But these accusations have been made without convincing evidence that the articles that attract large reprint sales are, indeed, funded by drug companies. So that’s why this study is important.
What did the authors do?
One relatively quick and easy way to see if articles that have many reprints purchased are more likely to be funded by the drug industry is to do a case-control study in which articles with many reprints purchased are compared with matched controls that have not had many reprints purchased. The authors then looked to see if those articles with many reprints purchased were more likely to be funded by the drug industry and more likely to be of a particular design.
A case-control study is good for testing a hypothesis and finding possible causes of conditions. For example, a famous case-control study in the 1940s looked at possible causes of lung cancer by comparing the histories of people with lung cancer and matched controls. At the time the favoured hypothesis of Richard Doll, one of the authors of the study, was exposure to traffic fumes. In fact, there was no difference between cases and controls in exposure to traffic fumes, but data on other possible causes had been collected—and showed that cases had much higher smoking rates than controls.
The disadvantages of case-control studies are that they show associations, not causation, and that they look backwards. The case-control study of lung cancer was followed by a cohort study to see whether smokers were more likely than non-smokers to develop lung cancer.
Conducting this case-control study required information on which articles had sold the most reprints, and that information is not publicly available. The authors thus contacted the publishers of the “big five” general medical journals—the Annals of Internal Medicine, BMJ, JAMA, Lancet, and New England Journal of Medicine. One of the most interesting “findings” of the study is that all three US publishers (of the Annals, JAMA, and the New England Journal) refused to provide the information. All three publishers are owned not by for-profit companies but by organisations of US doctors. Inevitably readers are left wondering whether the organisations have “something to hide.” What they probably have to hide is that they are making huge sums of money from the sale of reprints, particularly because some 50% of drugs worldwide are prescribed in North America. This is speculation, but it is well informed speculation: we know that the three journals publish many drug company sponsored studies and actively sell reprints.
The publishers of the BMJ and Lancet did provide information. The Lancet is the only one of the “big five” published by a for-profit company, Reed Elsevier, and I find it interesting that a for-profit company released information when the organisations of American doctors would not.
Once they had their “cases,” the authors had to select controls, and they went simply for the previous article in the journal of the same type—that is, research article, editorial, etc. They could have matched by design (randomised study, case-control study, etc), but this wouldn’t have made sense because they were also asking if the articles that had many reprints purchased were more likely to be of a particular type.
Two of the authors then independently categorised the study design and the source of funding of cases and controls. Funding was categorised as industry; mixed (some industry but also other sources); other; or none. Other types of articles—commentaries, for example—do not have funding, but the authors classified these as industry funded if “the body of the article referred to a pharmaceutical product and the author or authors received funding, honorariums, or salaries from the pharmaceutical company related to that product.” We are not told how many of the articles fell into this category, and it might have been cleaner to exclude them.
If the two authors doing the classification did not agree then “discrepancies were resolved by consensus with a third author.” We are also told that inter-rater reliability was “excellent,” which strengthens our confidence in the results.
What did the study find?
The authors did find that articles with high reprint sales were eight times more likely to be funded by drug companies than those that had “other” or no funding. The 95% confidence interval ranged from five to 14, which I always think of as meaning that the “true” result is very likely to lie between five and 14. In other words, studies with high reprint sales are much more likely than other articles to be funded by the drug industry, and it’s very unlikely that the link is explained by chance. It could, however, be explained by some sort of confounding: an unidentified factor associated both with high reprint sales and with funding by a drug company. Perhaps, for example, drug studies (which are more likely than other studies to be funded by pharmaceutical companies) are more likely to attract high reprint sales, whether or not a particular study was industry funded. Such a question might be examined by conducting a case-control study in which the controls were drug studies not funded by industry.
There was no association between reprint sales and study design, but, looking at table 1, we can see that 74 of the 88 Lancet articles with high reprint sales were randomised trials, and 80 of the 88 had industry or mixed funding. In other words, for the Lancet very few of the articles with high reprint sales were anything other than industry funded randomised trials. Median reprint sales for the Lancet were £288 000, and the maximum sale for a single article’s reprints was £1.55m. The information is not given in the article, but I know that the profit margin on reprint sales is extremely high, at around 80%.
Table 1: Design and funding of articles with high reprint sales and of control articles, with median reprint orders, by journal

| Journal | High reprint: No | High reprint: RCT | High reprint: other design | High reprint: industry funding | High reprint: mixed funding | High reprint: other or none | Controls: No | Controls: RCT | Controls: other design | Controls: industry funding | Controls: mixed funding | Controls: other or none | Median reprints (range) |
|---|---|---|---|---|---|---|---|---|---|---|---|---|---|
| Lancet | 88 | 74 | 14 | 63 | 17 | 8 | 88 | 49 | 39 | 23 | 13 | 52 | 126 350 (24 000-835 100) |
| Lancet Neurology | 25 | 8 | 17 | 8 | 9 | 8 | 25 | 3 | 22 | 1 | 5 | 19 | 5200 (200-50 000) |
| Lancet Oncology | 26 | 10 | 16 | 11 | 4 | 11 | 26 | 7 | 19 | 2 | 3 | 21 | 10 500 (500-63 500) |
| BMJ | 72 | 19 | 53 | 10 | 19 | 43 | 72 | 13 | 59 | 2 | 10 | 60 | 13 248 (1000-526 650) |
| Gut | 46 | 22 | 24 | 18 | 12 | 16 | 46 | 5 | 41 | 3 | 6 | 37 | 3000 (1000-322 000) |
| Heart | 35 | 11 | 24 | 12 | 7 | 16 | 35 | 3 | 32 | 3 | 2 | 12 | 3000 (1000-350 000) |
| Journal of Neurology, Neurosurgery & Psychiatry | 46 | 17 | 29 | 15 | 17 | 14 | 46 | 3 | 43 | 0 | 5 | 41 | 4008 (1000-107 110) |
| Combined journals | 339 | 161 | 178 | 45 | 36 | 258 | 339 | 83 | 256 | 6 | 13 | 320 | 13 495 (200-835 100) |
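For readers who want to see where an odds ratio and its confidence interval come from, here is a minimal sketch in Python using the Lancet row of table 1 (63 industry funded and 8 “other or none” funded high reprint articles, against 23 and 52 among the controls). This is purely illustrative: it computes a crude, unmatched odds ratio for a single journal, whereas the paper’s headline figure of 8.64 (95% CI 5.09 to 14.68) pools all seven journals and uses conditional logistic regression to respect the matching, so the two numbers are not expected to agree. The function name `odds_ratio_ci` is my own.

```python
import math

def odds_ratio_ci(a, b, c, d, z=1.96):
    """Crude odds ratio and 95% CI from a 2x2 table.
    a = exposed cases, b = unexposed cases,
    c = exposed controls, d = unexposed controls."""
    or_ = (a * d) / (b * c)
    # Standard error of the log odds ratio (Woolf's method)
    se_log = math.sqrt(1 / a + 1 / b + 1 / c + 1 / d)
    lo = math.exp(math.log(or_) - z * se_log)
    hi = math.exp(math.log(or_) + z * se_log)
    return or_, lo, hi

# Lancet row: industry funding vs "other or none" funding
or_, lo, hi = odds_ratio_ci(63, 8, 23, 52)
print(f"Lancet crude OR {or_:.1f} (95% CI {lo:.1f} to {hi:.1f})")
# -> Lancet crude OR 17.8 (95% CI 7.4 to 43.1)
```

The wide interval reflects the small cell counts (only 8 high reprint Lancet articles had “other or none” funding); the pooled, matched analysis in the paper narrows it considerably.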
Many readers will, I think, be astonished that the Lancet sold one article for £1.55m, and this seems to me a study where the data are in many ways more interesting than the statistical analysis. It’s often worth looking at the tables in studies in some detail.
What are the strengths and weaknesses of this study?
It’s a strength of this study that the authors did manage to get information on reprints from two of the five publishers of the “big five” general medical journals, but the biggest weakness is that they didn’t get any information from any of the US journals or publishers. It is thus difficult to generalise the results of the study.
As I mentioned above, it might be that a confounder was associated with both funding by a drug company and high reprint sales—perhaps, for example, the article being about a common condition. This is a problem with all case-control studies.
The study might be criticised for selecting controls by taking the article before the case from the same section rather than picking randomly from a sample of studies from the same section. I can’t see, however, that it would make much difference—unless journals have a tendency to publish articles in a systematic sequence, which they don’t do.
It seems to me a weakness that the authors lumped together studies that were funded by “others” and studies that had no funding. In the case of the BMJ, 60% of the studies fell into this combined category. It’s not clear whether a study funded by a medical device company would have been included under industry funding or other funding, and there could be an important—but lost—difference between studies funded by a major non-pharmaceutical funder of research (perhaps a big foundation like the Bill and Melinda Gates Foundation), which might have purchased many reprints, and one with no funding, where the purchase of a large number of reprints would be unlikely.
What does the study mean?
The authors have shown convincingly that the articles in these journals that sell the highest numbers are much more likely to be funded by drug companies. What, importantly, the authors have not shown, and they emphasise this, is that conflict of interest is leading editors and publishers to favour articles funded by industry. Conflict of interest is a state, not a behaviour, and so a conflict of interest exists—but we don’t know how it affects decision making.
The possibility that the conflict of interest may influence behaviour is of great interest, but producing evidence on effects on behaviour is hard. Researchers have in the past shown biased behaviour in selecting young doctors for job interviews by sending employers curriculum vitae that were different only in the gender and ethnic background of the applicants. It’s hard to see how such a study could be done with the submission of major studies to medical journals—but perhaps somebody might try.
Competing interest: RS was the editor of the BMJ and chief executive of the BMJ Publishing Group from 1991 to 2004. He has published a book, The Trouble With Medical Journals, in which he describes the problems explored in this article.
Provenance and peer review: Commissioned; not externally peer reviewed.
Richard Smith, director, United Health Group chronic disease initiative, and former editor, BMJ
Correspondence to: email@example.com
- Lexchin J, Bero L, Djulbegovic B, Clark O. Pharmaceutical industry sponsorship and research outcome and quality: a systematic review. BMJ 2003;326:1167-70.
- Goldacre B. Bad Pharma. 2012, Fourth Estate.
- Smith R. Medical journals are an extension of the marketing arm of pharmaceutical companies. PLoS Med 2005;2:e138.
- Doll R, Hill AB. Smoking and carcinoma of the lung. BMJ 1950;2:739-48.
- Doll R, Hill AB. Mortality in relation to smoking: ten years’ observations of British doctors. BMJ 1964;1:1399-1410, 1460-67.
- Esmail A, Everington S. Racial discrimination against doctors from ethnic minorities. BMJ 1993;306:691-2.
Cite this as: Student BMJ 2012;20:e7727