Francois T pointed to a post at the blog Health Care Renewal that summarizes an important insider report at the British Medical Journal on how much so-called medical research is of dubious validity, and performed to give talking points for marketing rather than to improve the lives of patients.
The reports on corruption in big Pharma “research” are so rife that this account hardly qualifies as news. For fun, I dug up the notes from a 2004 study in which I interviewed some experts on drug company marketing. The reason? Even then, the pharma salesforce was seen as the most effective of any industry, and a big financial services client was keen to see what techniques it could adopt. Even then, it was clear “research” was seen as key to effective selling. Per one interviewee, on sales reps:
Creativity is NOT what you want for this job. You do not want someone who is creative in their dealings with doctors. Everything they say is according to very strict guidelines…The words they can share with the doctor are all carefully crafted and screened by the FDA. They can’t deviate from the script. If they deviate, they get in trouble, they get the company in trouble along with them. All the scandals in the industry were the salesforce management and the company, not the sales reps.
One reason Pfizer is effective is new product introductions. There is so much revenue and expectation behind it that they get the sales force keyed up about it. They know from the FDA when a product is going to be coming to market. They have the launches in nice places. They fly all the salesforce in, 500 people to a launch in Florida. They bring in entertainment, motivational speakers. Senior management flies down. There is a rigorous presentation of the program and the product, information on the product, its advantages and disadvantages relative to the competitors.
They also have three times a year meetings to update the sales management on all the currently marketed products. They are called POA, plan of action meetings. It is whatever that will be new from a marketing standpoint over the next six months. Each brand marketing group has to come up with new material to keep the sales force interested, like new data from recent studies. If they don’t have something new, the salesforce loses its edge. It also includes promotional pieces, like notepads and pens. It may seem silly, but the doctors love this stuff. And if a doctor says, bring me 5 more of those pens, my friends like them, it does make a difference. You don’t like to think that stuff like that influences what a doctor prescribes, but it does.
And keep in mind, the costs of manipulated research findings are real. Cathy O’Neil, aka mathbabe, wrote up one of the most deadly cases, Vioxx. The summary of her detailed post:
Madigan has been a paid consultant to work on litigation against Merck. He doesn’t consider Merck to be an evil company by any means, and says it does lots of good by producing medicines for people. According to him, the following Vioxx story is “a line of work where they went astray”.
Yet Madigan’s own data strongly suggests that Merck was well aware of the fatalities resulting from Vioxx, a blockbuster drug that earned them $2.4b in 2003, the year before it “voluntarily” pulled it from the market in September 2004. What you will read below shows that the company set up standard data protection and analysis plans which they later either revoked or didn’t follow through with, they gave the FDA misleading statistics to trick them into thinking the drug was safe, and set up a biased filter on an Alzheimer’s patient study to make the results look better. They hoodwinked the FDA and the New England Journal of Medicine and took advantage of the public trust which ultimately caused the deaths of thousands of people.
To give an idea of the significance of the Vioxx withdrawal: per O’Neil, it led to a meaningful drop in the overall death rate in the US in the following 12 months.
Nevertheless, what is chilling in this insider account at BMJ is the sense of how pervasive and institutionalized the subordination of science, and worse, of concern for the public, to drug promotion has become.
I strongly urge you to read the post in full (I’d love to read the BMJ article, but it is seriously paywalled). This section is key (emphasis original):
Research Studies Designed Primarily as Marketing Vehicles
In general, the anonymous author suggested that at least some studies were done for marketing, not scientific purposes:
some of the studies I worked on were not designed to determine the overall risk:benefit balance of the drug in the general population. They were designed to support and disseminate a marketing message.
Whether it was to highlight a questionable advantage over a ‘me-too’ competitor drug or to increase disease awareness among the medical community (particularly in so called invented diseases) and in turn increase product penetration in the market, the truth is that these studies had more marketing than science behind them.
Furthermore, the studies were supervised not by physicians or scientists, but by marketers in the marketing department,
Although the medical department developed the publication plans, designed the study, performed the statistical analysis, and wrote the final paper (which when published was passed on to marketing and sales to be used as marketing material), the marketing team responsible for that product were directly involved in all stages. They also closely supervised the content of other educational ‘scientific’ materials produced in the medical department and intended for potential prescribers. Instructions from marketing to the medical staff involved were clear: to ensure that the benefits of the drug were emphasised and the disadvantages were minimised where possible.
Manipulation of Research Design, Implementation, or Analysis
The author described how the marketers manipulated research studies so they would produce the results desired from a marketing perspective, regardless of their underlying truth,
Since marketing claims needed to be backed-up scientifically, we occasionally resorted to ‘playing’ with the data that had originally failed to show the expected result. This was done by altering the statistical method until any statistical significance was found. Such a result might not have supported the marketing claim, but it was always worth giving it a go to see what results you could produce. And it was possible because the protocols of post-marketing studies were lax, and it was not a requirement to specify any statistical methodology in detail. On the other hand, the studies were hypothesis testing (such as cohort studies, case-control studies) rather than hypothesis generating (such as case reports or adverse events reports), so playing with the data felt uncomfortable.
Other practices to ensure the marketing message was clear in the final publication included omission of negative results, usually in secondary outcome measures that had not been specified in the protocol, or inflating the importance of secondary outcome measures if they were positive when the primary measure was not.
Given how much of what passes for medical research is performed by drug companies, and how much of medical education revolves around pharmaceuticals, the adulteration of research is profoundly disturbing. We’re rolling back the underpinnings of medicine to the days when quackery reigned, except this time, with the authority of the medical profession and the mysticism of science behind it.
So to summarize, the marketers would control the statistical analyses, promoting multiple analyses to attempt to come up with the “right” result that would support the marketing message (although the more kinds of analyses one tries, the more likely one is to come up with false results by chance alone). Presumably the marketers did not care whether or not the results were really true, which is perhaps why even they felt “uncomfortable” in some circumstances.
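To see why trying many analyses inflates false positives, here is a minimal simulation sketch (all names and parameters are my own, purely for illustration): a “drug” with truly zero effect is compared to placebo, and the analyst is allowed to re-test different patient subgroups until one comes up “significant” at the nominal 5% level. The more subgroup analyses permitted, the more often a worthless drug looks like a winner.

```python
import random

random.seed(0)

def fake_trial(n=50):
    """Drug and placebo outcomes drawn from the SAME distribution:
    the drug truly has no effect."""
    drug = [random.gauss(0, 1) for _ in range(n)]
    placebo = [random.gauss(0, 1) for _ in range(n)]
    return drug, placebo

def significant(drug, placebo, z_crit=1.96):
    """Crude two-sample z-test at the nominal 5% level
    (variance is known to be 1 here, so a z-test is legitimate)."""
    n = len(drug)
    diff = sum(drug) / n - sum(placebo) / n
    se = (2 / n) ** 0.5  # standard error of the difference in means
    return abs(diff) / se > z_crit

def false_positive_rate(n_analyses, n_trials=2000, subgroup=30):
    """Fraction of truly-null trials declared 'significant' when the
    analyst may slice the data n_analyses ways and keep the best result."""
    hits = 0
    for _ in range(n_trials):
        drug, placebo = fake_trial()
        for _ in range(n_analyses):
            # each 'analysis' re-tests a different random patient subgroup
            idx = random.sample(range(len(drug)), subgroup)
            if significant([drug[i] for i in idx],
                           [placebo[i] for i in idx]):
                hits += 1
                break
    return hits / n_trials

for k in (1, 5, 20):
    print(f"{k:2d} analyses allowed -> false positive rate "
          f"{false_positive_rate(k):.3f}")
```

With one pre-specified analysis the error rate sits near the advertised 5%; with twenty bites at the apple it climbs several-fold, even though every single test was run “correctly.” This is exactly why protocols are supposed to fix the statistical method in advance.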
They would also foster the suppression of negative results, and the dredging of data for extra outcome measures when analysis showed no advantage in terms of the real primary outcomes. Suppression of negative results could be viewed as plain lying. Deliberate analysis of multiple end-points again risks identifying random error as true results.