The Crisis in Scientific Publication: Peer Review, Authorship, Responsibility, and the Future of Science

By KLG, who has held research and academic positions in three US medical schools since 1995 and is currently Professor of Biochemistry and Associate Dean. He has performed and directed research on protein structure, function, and evolution; cell adhesion and motility; the mechanism of viral fusion proteins; and assembly of the vertebrate heart. He has served on national review panels of both public and private funding agencies, and his research and that of his students has been funded by the American Heart Association, American Cancer Society, and National Institutes of Health.

I admit it.  The undoubtedly targeted ad that appeared in the right margin of my Firefox homepage the other day said, “It’s weird to be the same age as old people.”  True.  This reminded me that I have been doing science for a long time and cannot imagine having done anything else in what has become my professional life.  From the moment I walked into the libraries (both main and science) in my undergraduate institution, which is the “flagship” state university in my home state, I was enthralled.  Most of my friends could not wait to get out and “get on with their lives,” even though our alma mater was then and has maintained its deserved reputation as Party Central.  And in retrospect they probably had the better idea.  But it was also a serious university, and I never left academic life.  I have seen a lot.

From the beginning, the Current Journals table in the Science Library was a revelation: As many as 50 new issues of journals from all over the world appeared every day.  From Geography to Quantum Mechanics and everything in between (1).  Naturally, I concentrated on the biology journals, from American Naturalist to Evolution to Journal of Biological Chemistry.  I got my first student job as a dishwasher in a teaching laboratory at the beginning of my second year, and that led to a long apprenticeship followed by a series of positions up the so-called chain. I no longer have a laboratory of my own, full of students and leavened with the essential research technicians, and while I miss that, other activities have become just as rewarding and probably more useful.  Besides, part of me believes I may one day return to the lab.  As both Gandhi and Erasmus may have said: “Work as if you will die tomorrow. Study like you will live forever.”  Or at least until you get back in the lab.

What follows is both a description and personal lament at the state of my profession (2).  While it is indeed true that I have always had stars in my eyes when it came to academic life, I was early and often reminded that scientists are simply people, some more straightforward than others, some more interested in their “careers” than the quality of their work.  But it is also true that at the beginning of my life in science, the idea of the disinterested scientist whose goal was to understand the natural world was very real (pre-Bayh-Dole Act of 1980).  Not that this has disappeared, but institutional imperatives from the Dean’s or Director’s office to the Office of the Director of the National Science Foundation have made such research much more difficult.

Which brings me back to my beginnings when the world of scientific literature was truly something special.  Stuart Macdonald has recently published a review in the journal Social Science Information with the title “The gaming of citation and authorship in academic journals: a warning from medicine.” (3). This paper is regrettably behind a rather stout paywall that was surmounted by my institutional library, but I will do my best to describe it here.  The primary argument is that “peer review no longer maintains standards in academic publishing, but rather covers up the gaming of citation and authorship that undermines these standards.”

It is a fair statement that this is a direct result of the development of the Journal Impact Factor (JIF), which was introduced by Eugene Garfield.  Basically, JIF has become a proxy for the importance of a journal in its field and by extension the importance of the work published in the journal (4).  At one level this is, of course, perfectly natural.  Nature has a high impact factor and is where Watson and Crick published their one-page paper in 1953 on the DNA double helix that resulted in a Nobel Prize in 1962.  But the structure of DNA came before JIF.  And it is beyond ridiculous that this paper remains behind a paywall seventy years later.  I digress, but this is another telling issue about scientific research and its publication that is on my agenda for this series.

As a “scientometric” (Ugh!) tool, JIF has its uses.  For example, the citation data compiled as part of Eugene Garfield’s initial work allow one to track easily when a concept or term first appeared in the literature.  But JIF itself is easily gamed, and the exact method of its calculation remains a money-making proprietary secret.  From Wikipedia, but this is an accurate statement of facts based on my reading and long experience:

“Impact factors began to be calculated yearly starting from 1975 (when I began my first full-time research position) for journals listed in the Journal Citation Reports (JCR). ISI was acquired by Thomson Scientific & Healthcare in 1992, and became known as Thomson ISI. In 2018, Thomson-Reuters spun off and sold ISI to Onex Corporation and Baring Private Equity Asia.  They founded a new corporation, Clarivate, which is now the publisher of the JCR.”
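While Clarivate’s underlying citation database is proprietary, the commonly stated two-year formula is public: a journal’s impact factor for year Y is the number of citations received in year Y to items it published in Y-1 and Y-2, divided by the number of “citable items” (roughly, articles and reviews) it published in Y-1 and Y-2.  A minimal sketch of that published formula, with purely illustrative numbers that belong to no real journal:

```python
def two_year_impact_factor(citations_this_year: int, citable_items_prev_two_years: int) -> float:
    """Commonly stated two-year JIF: citations received in year Y to items
    published in Y-1 and Y-2, divided by the count of citable items
    published in Y-1 and Y-2. (Clarivate's actual data and edge cases
    are proprietary; this is only the textbook formula.)"""
    return citations_this_year / citable_items_prev_two_years

# Illustrative only: a journal whose 400 citable items from the previous
# two years drew 2,600 citations this year would have a JIF of 6.5.
print(two_year_impact_factor(2600, 400))  # 6.5
```

Note that the denominator (what counts as “citable”) is itself negotiable, which is one of the levers by which the number gets gamed.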

High impact factors mean more money for publishers, especially online open-access journals, and one of their primary business tools is publishing the most cited articles rather than the best articles, those that advance a particular field, even if not today or tomorrow.

So, what does this mean for the practice of science?  Somewhat unintuitively, what is conventional is what gets cited.  Therefore, authors should stick to the known or the popular.  From the beginning of the JIF era, it became clear that scientists should not go beyond the acceptable if they wanted to thrive: “The latest research and bright ideas are to be avoided because they link to little else and this makes articles difficult to cite.  Demand is for run-of-the-mill, water-is-wet articles, old standards that everyone has been citing for years and which serve as evidence that an article is embedded in the literature.”  This practice has also led to the proliferation of the LPU – least publishable unit – papers that are often strung together to produce a list, and little else.  How do journals and editors game citation?  I was taken aback by what appears in the genuinely interesting literature on JIF and related topics (in the form of other papers cited by Macdonald, not all of which I have read so far):

We … [used] … to make our acceptance criterion those articles that we felt would make a contribution to the international literature.  Now our basis for rejection is often ‘I don’t think this paper is going to be cited.’ (editor of medical journal as quoted in Chew et al., 2007, p.146)

We have noticed that you cite Leukemia [just once in 42 references]. Consequently, we kindly ask you to add references of articles published in Leukemia to your present article. (editor of Leukemia to author as quoted in Smith, 1997)

Given the state of biomedical publishing and its connection to Evidence-Based Medicine, I have no idea why these two passages surprised me.  But they did.  A related practice when writing a grant proposal is to salt the bibliography with likely members of the review panel, for which rosters are sometimes available.  Some reviewers are likely to be influenced by this accepted form of “grantsmanship” (I still hate this word, but I have also done it).

And this brings us to the explicit treatment of biomedical practice and publishing in Stuart Macdonald’s review:

Medicine provides a particularly vivid example of the failure of peer review to cope with the reality of academic publishing (see Jefferson et al., 2007; Cochrane Database of Systematic Reviews 2: MR000016; see here for analysis of a recent Cochrane production on a subject of the day).  In medicine, peer review serves less to guarantee academic standards than to make even the most egregious publishing practices look respectable (see Fanelli, 2009, no paywall; Bosch et al., 2012, no paywall). ‘The journal editor says: what’s wrong with publishing an industry-funded editorial or review article as long as it gets appropriate peer review?’ (Elliott, 2004, p.21, another paywall)

It is also important to remember that editors of some leading journals are aware of this and have been for a long time:

We portray peer review to the public as a quasi-sacred process that helps to make science our most objective truth teller. But we know that the system of peer review is biased, unjust, unaccountable, incomplete, easily fixed, often insulting, usually ignorant, occasionally foolish, and frequently wrong. [Editor of the Lancet: Richard Horton, Genetically modified food: consternation, confusion, and crack-up, 2000, p.248, paywall (see endnote 5)]

Horton again: “(M)edical journals have devolved into information laundering operations for the pharmaceutical industry.”  Yes, we know this now, thanks to The Illusion of Evidence-Based Medicine, reviewed here previously.  This is also covered extensively using different but complementary sources by Stuart Macdonald, who is particularly attuned to the history of sketchy practices in scholarly publishing.

And this brings us to the problem of peer review itself.  Where did it come from and what is wrong with it? 

Peer review of academic publications is said to have begun with the first scientific journal in the West: Philosophical Transactions of the Royal Society (1665).  This is an exaggeration, but for the past 200+ years peer review has been the rock upon which scientific research has been established as an essential foundation for understanding the natural world.

But “(W)hat is the role of peer review when the frequency of citation has become the primary means of measuring the quality…with little regard for any other assessment of quality?”  I have been a peer reviewer for 30 years.  What once was a duty has become a chore, unrewarded and underappreciated.  Still, when asked to review, I do, especially in reviewing research applications for the funding agency that has supported the work of my laboratory and graduate students since I was a postdoctoral fellow (6). 

Since the early 1990s editors have sometimes had to beg for reviewers, and this has now reached a crisis across the scientific literature.  This is described in a very good “Career Feature” by Amber Dance in the 16 February 2023 issue of Nature: Peer Review Needs a Radical Rethink.

The usual critiques are presented.  Peer review is a terrible time sink.  Balazs Aczel and colleagues used a dataset of 87,000 scholarly journals to show that in 2020 alone, peer reviewers spent a cumulative 15,000 years on reviews, mostly working for free:

Background: The amount and value of researchers’ peer review work is critical for academia and journal publishing. However, this labor is under-recognized, its magnitude is unknown, and alternative ways of organizing peer review labor are rarely considered.

Results: We found that the total time reviewers globally worked on peer reviews was over 100 million hours in 2020, equivalent to over 15 thousand years. The estimated monetary value of the time US-based reviewers spent on reviews was over 1.5 billion USD in 2020. For China-based reviewers, the estimate is over 600 million USD, and for UK-based, close to 400 million USD. (emphasis added)

As noted by the author, “Many scientists are increasingly frustrated with journals – Nature among them – that benefit from unpaid work of reviewing while charging high fees to publish in them or read their content…(a Springer Nature spokesperson says)…we’re always looking to find new and better ways of recognizing peer reviewers for their valuable and essential work…(and pointed out that)… in a 2017 survey of 1,200 Nature reviewers…87% said they considered reviewing their academic duty, 77% viewed it as safeguarding the quality of published research, and 71% expected no reward or recognition for reviewing.”  I suppose it is something to add Nature to that line in your CV where you list peer reviewing as “Professional Service.”  My CV has that section and, playing the game, I include citation numbers and JIF information in my publication list, when I get around to updates.

Still, there can be few business models more lucrative than getting 15,000 years of work (in 2020) valued at roughly $2.5 billion in the three largest scientific communities – US, China, UK – for free.  Which is why many, including yours truly, have started limiting most peer reviewing to non-profit journals of professional societies, which still exist, and to public and governmental funding agencies.

Peer review can be very slow, too.  This has led to the proliferation of preprint servers, which publish unreviewed manuscripts.  Preprints have been a thing in physics for years, but they are relatively new to biology and biomedical sciences.  They do get the results out quickly, but a preprint is just that, preliminary.  And preliminary does not count as one of the professional contributions necessary but not nearly sufficient to ensure funding and career advancement for academic scientists.

Dance begins her article with the complaint of an editor who sent 150 invitations for review of an article from April 2022 to November 2022 with no takers.  The journal is Frontiers in Health Services, one of the 196 titles published online by Frontiers Media, a for-profit open-access publisher established in 2007.  Frontiers journals have created a niche in the world of academic publishing.  Their journal dashboards are attractive and easy to use.  And, perhaps most useful for the modern scientific author, they provide real-time links to the number of views and citations, social buzz (mentions in blogs and social media), and demographics (location of readers).  The more important, if not unprecedented, practice associated with Frontiers journals is that editors and reviewers are acknowledged on the title page of each article.  This solves two problems with peer review as currently practiced: (1) Editors and reviewers get credit if not payment for their work, and (2) Reviewers are held publicly responsible for the quality of the research, to the extent the manuscript contains all necessary information to review it fairly and completely.

But there is also this, which requires another disclosure: I reviewed one manuscript for a Frontiers biology journal in 2022.  Which brings us to the question implied in the study of Aczel and colleagues mentioned previously: 87,000 scholarly journals?  Really?  How many scientific journals are there?  SCOPUS currently lists 41,462 indexed titles going back to 1788.  Whatever the total, it is very large.

Can we possibly need this many journals?  Obviously, the answer is “no.”  As noted in many studies, most scientific articles are rarely cited.  This assertion, which is based on research using the Science Citation Index that was developed by Eugene Garfield, has been disputed.  It may be true, but few working scientists of my acquaintance are in the un-cited category, even if all of us have a few papers that received little attention, probably deservedly so.  Sometimes the result of even the most ingenious experiment is somewhat underwhelming.  Such is the nature of research when the answer is unknown, as it should be in every original experiment.

This brings us back to the nature of the scientific literature and what it all means.  I have covered this before regarding COVID-19 and will avoid repeating myself too much, but if the public is to believe what scientists and scholars from all disciplines write and say, academic and scientific publishing must regain its footing.  Essentially all scientific journals are online these days, so that is not the problem. 

But a breaking point has been passed.  Too much is published too fast and those who are not disinterested readers are able to pick and choose a piece of the literature that suits their purpose of the moment.  The business of scientific publication has taken over the practice of scientific research.  Not all of the so-called scientific literature has been effectively peer-reviewed, and that includes many of the 336,686 “COVID” entries that have appeared in PubMed in about 40 months.  After 40 years, “AIDS” returns 300,212 entries.  Not to denigrate the significance of COVID-19, but this says much more about the business of scientific publishing than the practice of biomedical science.

Moreover, some “hyperprolific” authors “publish” more than one scientific paper a week.  This is not possible (OK, not legitimate), either physically or mentally, and can only mean that authorship is disconnected from the research reported.  As John Ioannidis of Stanford states here:

There are two main reasons we have authorship: credit and responsibility. I think both are in danger.

In terms of credit, if you have a system that is very vague, idiosyncratic, and nonstandardized, it’s like a country with 500 different types of coins and no exchange rate. And in terms of responsibility, it also raises some issues about reproducibility and quality. With papers that have extremely large numbers of contributors, is there anyone who can really take responsibility for all that work? Do they really know what has happened?

For the five consecutive years 2018-2019-2020-2021-2022, John Ioannidis has 89-81-82-74-46 publications indexed in PubMed.  That would be a total of 372, or about 74 per year (I have not audited this total manually).  In the first eight weeks of 2023, Dr. Ioannidis has 17 publications indexed in PubMed as of 27 February.  This will be a truly banner year for him if that trend holds.

By way of comparison, Francis Crick published fewer than 55 papers (some on the list are duplicates) over a 40+ year career beginning in 1952, when he was 36 years old.  And I found a reprint of the Nature paper that resulted in the Nobel Prize, in the American Journal of Psychiatry (2003), as part of the 50th anniversary of the paper that started modern molecular biology (including mRNA vaccines)!  For those of you who have not read it, enjoy!  It requires very little specialized knowledge to appreciate the beauty of the work and the elegance of the result (8).

More on the reproducibility crisis in the biomedical sciences later.  But for now?  Be careful of what you read in the scientific literature and what is reported about the same.  It pains me no end to say that, but this too will pass.

Note added in proof:  I do get emails, this time in the form of another “Work” Career Feature from Nature entitled “Hyperauthorship and What It Means for ‘Big Team’ Science” (2 March 2023, pp. 175-177).  Peter Higgs posited the eponymous boson in 1964.  Alone.  Eventual experimental confirmation required 2,932 authors.  A subsequent accurate measurement of the mass of the Higgs boson required 5,154 “coauthors.”  A 9-page paper on the effect of SARS-CoV-2 vaccination on post-surgical outcomes included 15,025 “coauthors” in a consortium.  Maybe so.  But: “The more authors you’re working with, the more complicated things get (and) that requires some pretty new thinking, from both researchers and journals, and the people who evaluate science.” (emphasis added)  Yes, indeed.  A hyperauthored paper reminds me of the philosopher Timothy Morton’s hyperobjects – things of “such vast temporal and spatial dimensions that they defeat traditional ideas about what a thing is in the first place.”  I read about half of that book.  Traditional ideas do not lead to an understanding of anthropogenic global warming, for example.  True enough, and many traditional ideas need to disappear.  But my spidey sense is activated when I see a biomedical research paper with more than 10 authors from two or more institutions.


35 comments

  1. Elizabeth Petroske Ph.D

    Thank you for this well written summary of how science has been corrupted over the last 50 years. I have a Ph.D from one of the top 10 public research universities and my first job in a research lab was in 1972. I had 3 different degrees (AS, BS, PhD) and three different resumes to use depending on the requirements of the job I was applying for. The only thing my Ph.D was good for was the occasional part time teaching positions, which I did mostly for the fun of seeing the lights go on in the eyes of the few good students that were actually paying attention. Most of my paychecks for five decades came from state and federal government regulatory jobs, and they only wanted a BS degree.

  2. John R Moffett

    Hyper-prolific science authors get their names on papers because they run a big lab. They can get their name on many papers that they had little or nothing to do with. Any author that has over 500 publications is probably just weaseling in on other people’s work. I have known scientists who insist that their name is on any paper that comes out of their department. Writing a highly technical paper takes months, sometimes many months. I personally have spent well over a year on very long review papers. At that pace, you can’t be an author on 500 or more papers in a lifetime.

  3. The Rev Kev

    Re Crick and Watson’s 1953 seminal paper. Being only two pages long, it still stands in a class of its own, and I have had a copy of that paper on my ‘puter for years now. But I loved the classic understatement where it said ‘It has not escaped our notice that the specific pairing we have postulated immediately suggests a possible copying mechanism for the genetic material.’ Talk about your typical British understatement. Here is a scanned copy of that Nature article in pdf form-

    https://dosequis.colorado.edu/Courses/MethodsLogic/papers/WatsonCrick1953.pdf

    As for the post here – with thanks to KLG for doing the hard yards putting it all together – perhaps an idea would be a limited strike for reviewers. By that, I mean that if the reviewers do the work but have nothing to show for it, then perhaps reviewers can say that they will only review papers that are not hidden behind a paywall. If the paper is free to access, they will review it. If it is hidden behind a paywall, then forget it. And from KLG’s description, there will be fewer and fewer reviewers going forward.

    1. Vandemonian

      When I did my late career PhD (possible ‘vanity project’?) I started doing peer reviews out of a sense of obligation to the scientific community. My firm preference was to review only for open access journals.

  4. William

    This article is very timely for me. I just finished ushering a couple papers thru the review process and am now getting the conference presentations together. Honestly I did a lot of R&D in my career but most of it was proprietary and I was not allowed to publish. So take anything I say as the opinion of an experienced engineer who is a complete novice at the publishing game. And that this recent work was that of engineering research and not pure science.

    The peer review process was VERY difficult.

    There was a lot of good that came of it, tbh. And I do not regret choosing to publish in a journal and present at a conference that is considered premier in my specialty. But I inadvertently stepped on some toes (thru my own ignorance of citation and publishing etiquette) and the reviewers demolished my first draft because of it. I’ll just add a few pros/cons and leave it at that.

    Pros:
    1) The process really ironed out a lot of novice type mistakes. Typos, paragraphs that went nowhere, etc.
    2) A couple times the expertise of the reviewers added some insight into issues that we two authors had not recognized.
    3) There were times that I thought certain issues were plainly explained only to find out that in reality my familiarity with the data and experiment clouded my eyes. It needed more and clearer explanation.
    4) The finished product was streamlined and very high quality, in large part because of the rigor of the criticism.

    Cons:
    1) At times the criticism bordered on the personal… and/or the preening. “Look how smart I am silly authors!”
    2) The work sat under a protective reg that disallowed publishing for several years. Because of pushback we ended up citing a bunch of work that was done AFTER ours. It was only when it became clear that this had been hidden from public eyes for years due to a protective agreement that the reviewers relaxed. IOW there is a ton of assumptions that reviewers carry onto the job.
    3) Citation hunting became an issue. Enough said.

    My field is not as bad as the medical one by any means. I guess in spite of the “bring me a rock” nature of the process in this case, I still feel it worked out well. But it is unnecessarily messy and I do feel that it could stand some house cleaning.

    1. Questa Nota

      p-hacking takes on new meanings for the following:

      Peer Review
      Professor Reputations

      join the fun, add your own

      When everything in life seems to be monetized and scrutinized for the last shreds of value added, then what is left gets devalued. Too bad today’s orts include honesty, decency, truth, community and so many other quaint ideas. Seems like there is a spiritual crisis at universities and labs everywhere.

    2. Jeremy Grimm

      The Pros: #1 and #3 in your comment suggest, to me, a further enhancement to the peer review process that might prove of value. Besides Peer Review I believe many papers would benefit from informal reviews by laymen. I believe there is great truth contained in this quote:
      “You do not really understand something unless you can explain it to your grandmother.”
      ― Albert Einstein

  5. Rolf

    Excellent post, thank you. I have been a longstanding critic of the monopolistic concentration of scientific publishers — consisting now of only a handful of billion dollar houses, still paywalling the results of publicly-funded research many, many decades after first publication at $40-$100 a pop, all of which should now be public property.

    Of course, the most “successful” in academic science have long figured out how to game the system from start to finish. This despite the release of findings (IIRC, by Science or Nature, of course, decades ago) that the majority of papers in science were never cited* (excluding self-citation). The nexus of universities too-big-to-fail, the hopeless quagmire of proposal evaluation, and career advancement/tenure based on the number (not the quality, a problem JIFs and other citation data were supposed to remedy) of publications does not ensure good or even mediocre science: only a great deal of paper, more than any beginning student can ever hope to work through.

    *Of course, this does not mean that these papers had no influence: only that recognition of that influence was scrupulously avoided.

  6. LAS

    Human ambition produces a lot of “runs toward wealth and greatness” and in the industries and universities producing science there is a lot of opportunism in the internal publishing review process. Getting your name on a paper is not the achievement; the historical or time-tested endurance/concordance of a paper with subsequent science is what tells its worth.
    You’d think that it would be felt as humiliating to have your name on 500 “science” papers and not one of them be acknowledged as particularly insightful; it’s evidence of an addiction to power as opposed to anything else. But -hey- maybe the next one will be great, like general relativity.

  7. AG

    thx very much for this piece

    here a 2010 interview with the very late David F. Noble (he would die the same year) on the problem of “peer review as censorship”.

    Counterpunch
    “February 26, 2010
    Peer Review as Censorship
    by Suzan Mazur”

    https://www.counterpunch.org/2010/02/26/peer-review-as-censorship/

    For Noble the battle against the system started when he had to “sell” his PhD findings as far as I know.

    Maybe some people remember Noble. Some think he was crazy. He was one of the lucky ones who managed to be too radical for M.I.T. and pissed off the entire US academic system.

    Maybe he was a bit too much like Finkelstein.

    Anyway I have learned a lot from Noble.

    Noam Chomsky, his former colleague at MIT, stated once that Noble to his knowledge (this was late 90s) was the only guy who had really studied automation.

    excerpt:
    “(…)
    SUZAN MAZUR: It’s all for sale.

    David Noble: IT’S ALL FOR SALE.

    In 1980, the Birch Bayh-Robert Dole amendment to the Patent Act was passed. This is another watershed. The Bayh-Dole amendment laid to rest the controversy that began in WWII over the patenting of publicly-funded research. Up until 1980, it remained ambiguous.

    What the Bayh-Dole amendment said was that the universities automatically now own all patent rights on publicly-funded research. What that meant was that universities were now in the patent-holding business and they could license private industry and in that way give them the rights over the results of the research funded by the taxpayer. It was the biggest give-away in American history.

    (…)

    SUZAN MAZUR: You also say those roots are intertwined with misogynism.

    David Noble: Ok. That’s another issue. Let’s keep it simple. The point here is that science became like God. But since WWII, in part because of Hiroshima and other events, other products of science, critique of science became a very serious matter. And the Left was very much involved in looking anew at looking at science as political. And scientists as human beings and as people with interests, etc. So they de-mythologized science.

    It went by many different names. Social construction of science, whatever. For decades people were, and still in some quarters are, looking very critically at this whole enterprise. And then along comes this global warming campaign. And you have these people like George Monbiot and others acting as if there had never been any critical examination of science.
    (…)”.

    1. Realist

      Thanks AG, i clicked through and i found that very interesting and informative indeed!

      Quite a rabbit hole too! Lots of interesting offshoots to explore.

  8. Hayek's Heelbiter

    With regard to the opening graf. Gen Z, despite their virtue signaling that they are the most tolerant generation ever, are actually the most age-ist people ever.
    I never realized this until one of my younger housemates pointed out to me that this is actually a well-known meme. Gen Z’s contempt begins with Millennials!
    A few days ago at my local coffee shop, I overheard some Gen Zers at the table next to me. One of them actually remarked, “How could he be out on the dance floor? He was like 25.”
    So much for learning ANYTHING from anybody over the age of 20.
    If Newton could see farther because he stood on the shoulders of giants, how far will Gen Z be able to see standing on the shoulders of social media influencers?
    Secondly, KLG brings up another major issue:

    Which is why many, including yours truly, have started limiting most peer reviewing to non-profit journals of professional societies, which still exist, and public and governmental funding agencies.

    All that positive research that you, my dear taxpayer, have paid for, promptly disappears behind a paywall, as the outcomes are monetized to extort even more dollars for the poor patients. Negative results naturally disappear into the Memory Hole.
    All government research grants should mandate that ALL research results, whether positive or negative, be published in non-paywalled journals.

    1. Questa Nota

      One Gen Z variant seems to stand on those shoulders after knee-capping them. Not a long-term survival strategy, but not unexpected when prices are put on everything, including heads.

  9. John Thurloe

    I used to keep a file of articles on the debasement of peer review. How rotten and corrupt ‘science’ had become. After the lying and racketeering of Covid there’s no point. I see Russia is dumping the Bologna system for stricter standards. China operates that way too. We shall see who performs better.

  10. Dick Swenson

    I have written enough about the faux prize in economics so I’ll try to be brief.

    In the book The Nobel Factor, which accurately explains how the Swedish Central Bank finagled its prize into being described as a “Nobel” prize, the authors showed the “awareness” effect that prize winners received from being designated winners. The number of citations to their publications increased and then generally decreased. Prize winners enjoyed an “aura,” but this decreased after a while. I would like to see how many books were published unnecessarily simply because the covers carried a message that the author was a prize winner.

    Marketing an economist by describing him (and rarely her) as a prize winner is much the same as marketing a computer application as “artificial intelligence.” Branding and advertising are important in a commercial world.

  11. Jams O'Donnell

    And of course the elephant in the room. You live by the Capitalist system, you die by the capitalist system. (Or you can try to get rid of it . . .)

    1. Kris Alman

      If not for the commodification and politicization of science, we wouldn’t have faith-based #BelieveScience tweets, memes, and gifs while critical thinking is censored.

      Profit over progress.

  12. Mikel

    “There are two main reasons we have authorship: credit and responsibility. I think both are in danger.”

    Me: looking sideways at ChatGPT…

  13. marku52

    Chris Martenson has remarked that he no longer reads abstracts first. He skips straight to the conflicts-of-interest disclosures, and depending on what he finds there may decide not to read even the abstract.

    I concur. Covid science publications have basically turned into pharma propaganda. There is no point reading any test of IVM or HCQ if pharma has had any hand in the study. Ditto vaccine safety/efficacy. None of it has any credibility anymore.

    The Surgisphere disaster was eye-opening. This was a peer-reviewed paper that discouraged the use of HCQ for Covid. Peer reviewed. Yet within days med bloggers had destroyed the paper, pointing out that the organization that created it barely existed, and that the data it presumed to have used (health data from hospitals all over the world) would have been impossible to gather in the time allowed. The whole thing turned out to be a fraud.
    https://www.the-scientist.com/features/the-surgisphere-scandal-what-went-wrong–67955

    1. DrVic

      The paper’s corresponding and senior author was the William Harvey Distinguished Professor of Cardiovascular Medicine at Harvard. The first and other authors were no-name nobodies. When caught out, he professed complete ignorance of the data and blamed it all on the first author. Considering it was his stature and gravitas that led to consideration for publication, one can safely assume that the trade-off was: you use my name / I get two free papers in NEJM. What’s not to like? Anyway, the upshot is that no-name is even more of a no-name, and Dr Mandeep Mehra is still William Harvey, unexpectedly.

  14. ChrisPacific

    This reminded me of the following article, linked here a while back, which said many of the same things in a very readable way:

    https://experimentalhistory.substack.com/p/the-rise-and-fall-of-peer-review

    He points out that science has existed for far longer than the peer review process, producing a great many important results in that time, and we shouldn’t assume that peer review is essential. He also notes that alternatives exist, more so than ever in the Internet age of instant communication.

    I would add that there are also other examples of peer review-like processes out there, like the open source software movement, that accomplish a lot of the same goals without the same problems, or at least with fewer of them.

  15. Tom Verso

    What are the implications of such peer review issues for climate publications, I wonder?

    1. marku52

      I would suspect the research would be called into question if it was funded by some big-money interest, like, say, the fossil fuel industry. Otherwise, not so much.

    2. Jeremy Grimm

      Hansen has posted his last few papers before their formal peer review and subsequent publication. As he explained at his website, Hansen posted Hansen et al. 2016, “Ice melt, sea level rise and superstorms …”, as a preprint at arXiv to make the paper’s findings available as soon as possible, given his view of the seriousness of what the paper reported. The peer review process can delay publication of findings that certain people would prefer not receive timely notice.

  16. Jeremy Grimm

    Though I can disagree with nothing in this post, I fear that the sorry state of science and scientific publication — both process and products — is one symptom of a much more serious disease afflicting Science:
    “… the concerted political project to wean the university sector away from the state over the past three decades, and to render both instruction and research more responsive to market incentives … motivated by the political project of neoliberalism, which takes as its first commandment that The Market is the most superior information processor known to mankind.” [“The Future(s) of Open Science”, Philip Mirowski, p.23]
    This post finally motivated me to read this paper. I still have not read “ScienceMart” [Mirowski, 2011] or absorbed much of Mirowski’s writings on “Open Science”.

    1. KLG

      Hopeful, not optimistic. The first is the mature response, I think, the latter the opposite. Digging deeper using Mirowski as one guide as I make time, JG. I have tried to get colleagues to read his work, with no success. My slow steady pressure on the sorry state of healthcare seems to be useful, maybe, and I have found that most medical students are fairly outraged. I generalize about my colleagues, but politics seems to be completely beneath them, except for irritable and futile mental gestures caused by chronic TDS. It’s as if the success rate at NIH of 20%, affecting those of us in flyover country the worst, is either a force of nature or an act of God, to be redundant. That the effects are felt mostly in flyover is due to skewed peer review, if you ask me. Which nobody did. Many thanks to you and other readers for keeping me honest.

    1. JBird4049

      Yes, but this being so, the question is really how do we reduce the gaming and resulting corruption? Even with good intentions, even basic research gets distorted or blocked. The current neoliberal society that we live in reduces everything to money, which makes the problem worse.

  17. m-ga

    One of the overlooked aspects of peer review is that even a bad review improves a piece.

    This follows inevitably from the dynamic of having to revise. So, suppose you have spent many days/weeks/years perfecting a piece of research. And you submit to a learned journal what you think is the final and most perfect version. You are expecting (in your wildest dreams) that this will be one of the few articles published without suggested corrections. Alternatively, you are expecting constructive comments which will help you to improve the article.

    Instead, it comes back with a bunch of garbage comments and misunderstandings. You sigh. But, the reviewers were onto something. Maybe the research really wasn’t explained as well as it could have been. So, you rewrite, making things easier for both novice and idiot readers. And you fix a few things which you didn’t notice on the last edit, and which the reviewers didn’t notice either.

    The article is now better. You resubmit, and it is published.

    It’s not a perfect process. But if there were even minimum-wage-level compensation for reviewers, the current model could be fixed. For example, at any time there are a ton of recently completed PhD students who don’t have good university career prospects and who will soon end up out of academia (they’ll go into teaching, industry, and so on). They know enough about their topics to provide reviews. And, as I’ve described, it doesn’t matter too much if the reviews are a bit crappy. So these recently completed PhD students can often be ideal reviewers. As long as the reviewers are making reasonable enough suggestions that the authors have something to fix, the peer review process can work.

  18. Dave

    This article reminds me of this book on how to do bad science to get tenure! It’s a parody book, but very appropriate to the subject. It’s by Dr. Douche Leroux, and it’s called: Doing Bad Science to Get Tenure, Fame And Riches: A Career Advancement Guide on this EXCITING New “Scientific Method”

    https://www.amazon.com/Doing-Science-Tenure-Fame-Riches/dp/171984058X/ref=sr_1_1?crid=A8DTS91ORKQ4&keywords=douche+leroux+bad+science&qid=1677797950&sprefix=douche+leroux+bad+science%2Caps%2C144&sr=8-1

    1. Paul

      Fun book! The author’s a friend of mine, and I know that he didn’t want to use real names, including his own, because he didn’t want to get beaten up by other scientists.

  19. Dave

    In the book Doing Bad Science to Get Tenure, Fame And Riches: A Career Advancement Guide on this EXCITING New “Scientific Method” the following sleazy “scientific” methods are discussed:
    1) always do science where you can get funding: seek the deep pockets;
    2) never admit an error that others point out, just change your interpretation;
    3) be a doubting Thomas: go against a major theory and you’ll be invited to all the scientific meetings for debates;
    4) be a project manager and attach your name to everything your group creates AS FIRST AUTHOR;
    5) using comedy to increase your citation index: science is boring;
    6) finding and occupying an obscure scientific niche;
    7) re-writing other people’s papers by reading a bunch of people’s papers and making your own conclusion. Don’t do anything new!
    8) how to shingle your papers (record is 8). The author offers 4 chapters on shingling, showing how to do it!
    9) attaching the name of someone famous to your paper (discussed in previous comments above);
    10) how to steal other people’s ideas and claim them as your own;
    11) using “over-complexification” in your research;
    12) elements of theory tweaking;
    13) cascading: how to search for progressively crappier journals to publish in, following rejection from a major journal;
    14) using a single data point to make a universal trend.
    How many of these scientific miscarriages have you seen in your research?

Comments are closed.