Prolegomena to an Understanding of the Replication Crisis in Science

Yves here. I trust readers will enjoy this important piece on the replication crisis, here in science (we have a link today in Links about how the same problem afflicts economics). From KLG’s cover note:

My take follows from last month’s post based on the work of Nancy Cartwright, in which I extend her arguments in a direction she may not have intended:
https://www.nakedcapitalism.com/2024/02/our-loss-of-science-in-the-21st-century-and-how-to-get-it-back.html

Basically, replication is possible for “small world” questions but impossible for “large world” questions. A small world can be a test tube with enzyme and substrate or a mission to Saturn (used in the post). A large world can be a single cancer cell. This is the key difference for replication, which nobody does anyway, whether the “research finding” (an Ioannidis term) is a large world or a small world problem.

By KLG, who has held research and academic positions in three US medical schools since 1995 and is currently Professor of Biochemistry and Associate Dean. He has performed and directed research on protein structure, function, and evolution; cell adhesion and motility; the mechanism of viral fusion proteins; and assembly of the vertebrate heart. He has served on national review panels of both public and private funding agencies, and his research and that of his students has been funded by the American Heart Association, American Cancer Society, and National Institutes of Health.

The Replication Crisis™ in science will be twenty years old next year, when Why Most Published Research Findings Are False by JPA Ioannidis (2005) nears 2400 citations (2219 and counting in late-March 2024) as a bona fide sextuple-gold “citation classic.”  This article has been an evergreen source on what is wrong with modern science since shortly after publication.  The scientific literature, as well as the journalistic, political, and social commentary on the Replication Crisis, is large (and quite often unhinged).  What follows is a short essay, in the strict sense of the word, attempting to understand and explain the Replication Crisis after a shallow dive into this very large pool.  And perhaps put the door back on its hinges.  This is definitely a work in progress, intended to continue the conversation.

This founding article of the Replication Crisis makes several good points even after beginning the Summary with “There is increasing concern that most current published research findings are false.” (emphasis added)  I had long been a working biomedical scientist in 2005, but I did not get the sense that what my colleagues and I were doing led to conclusions that were mostly untrue.  Not that we thought we were on the path of “truth,” but we were reasonably certain that our work led to a better understanding of the natural world, from evolutionary biology to the latest advances in the biology of cancer and heart disease.

Much of the Replication Crisis lies in the use and misuse of statistics, as noted by Ioannidis: “the high rate of nonreplication (lack of confirmation) of research discoveries is a consequence of the convenient, yet ill-founded strategy of claiming conclusive research findings solely on the basis of a single study assessed by formal statistical significance, typically for a p-value of less than 0.05.”  Yes, this has been my experience, too.  I remember well the rejection of a hypothesis on the grounds that the levels of two structural proteins required for the assembly of a larger complex of interacting proteins, measured in diseased heart after maladaptive remodeling subsequent to heart damage, were not “statistically different” from the levels in normal heart, even though they were 50% lower.  This was true, according to the p-value attached to the data.  Unsuccessful was the argument by analogy that a house framed with half as many studs holding up the walls and half as many rafters supporting the roof would not be able to withstand static stresses due to weight and variable stresses due to heat, cold, wind, and rain.  A victory for statistics that made no biological sense, and one of these days I hope to return to this problem from a different perspective.
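In the meantime, a minimal simulation, with made-up numbers rather than the original heart data, shows how a verdict of “not statistically different” can reflect low statistical power rather than biology: with a handful of replicates per group and realistic scatter, a genuine 50% reduction routinely fails to cross the p < 0.05 threshold.

# Illustrative sketch only: hypothetical values, not the original measurements.
import numpy as np
from scipy import stats

rng = np.random.default_rng(42)
n, n_sims = 4, 10_000                 # 4 replicates per group, 10,000 simulated studies
misses = 0
for _ in range(n_sims):
    normal = rng.normal(100, 35, n)   # arbitrary units, assumed variability
    diseased = rng.normal(50, 35, n)  # the true mean really is 50% lower
    _, p = stats.ttest_ind(normal, diseased)
    if p >= 0.05:
        misses += 1                   # the real difference is declared "not significant"

print(f"{misses / n_sims:.0%} of simulated studies miss the genuine 50% reduction")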

The examples used by Ioannidis in Why Most Published Research Findings Are False are well chosen and instructive.  These include genetic associations with complex outcomes and data analysis of apparent differential gene expression using microarrays that purport to measure the ultimate causes of cancer.  Only 59 papers had been published through 2005 that included “genome wide association study” (GWAS) in the body or title of the paper (there are currently more than 51,000 in PubMed).  GWAS has not yet been particularly useful in identifying the underlying causes of the many conditions with a genetic component.  For example, the “ultimate causes” of schizophrenia, autism, and Type-1 diabetes remain to be established.  Kathryn Paige Harden has recently reanimated the Bell Curve argument for a determinant genetic basis of human intelligence.  This game of zombie Whac-a-Mole is getting tiresome.  Professor Harden’s book has naturally exercised those likely to agree with her and those who do not (NYRB paywall).

Measures of gene expression using microarrays in cancer and many other conditions have held up at the margin, but not as well as the initial enthusiasm led us to expect.  The experiments are difficult to do and difficult to reproduce from one lab to another.  This does not make the (statistical) heatmaps produced as the output of microarray experiments false, however (more on this below in the discussion of small versus large systems).  The thoroughly brilliant molecular biologist who developed microarrays is now working on Impossible Foods.  Perhaps plant-based hamburgers (I would like mine with cheese, please) will rescue the planet after all.

Getting back to Ioannidis and the founding of the Replication Crisis, he is exactly right that bias does produce faulty outcomes.  The definition of bias is “the combination of various design, data analysis, and presentation factors that tend to produce research findings when they should not be produced.”  There can be no argument with this.  Nor can one dispute that “bias can entail manipulation in the analysis or reporting of findings.  Selective or distorted reporting is a typical form of such bias.”  Yes, and this has been covered here often in posts on Evidence-Based Medicine and on clinical studies run by drug manufacturers that reach a positive conclusion.

A series of “corollaries about the probability that a research finding is indeed true” are presented by Ioannidis.  These are statistical, and according to the formal apparatus used they are unexceptional, if one accepts the structure of the argument.  A few stand out to the working scientist who is concerned about the Replication Crisis, with provisional answers not based on statistical modeling:

Corollary 4: The greater the flexibility in designs, definitions, outcomes, and analytical modes in a scientific field, the less likely the research findings are to be true.

Answer: This describes any research at any important frontier of scientific knowledge.  One example: in the perceived race to beat Watson and Crick to the structure of DNA, Linus Pauling proposed that DNA is a triple helix with the nucleotide bases on the outside and the sugar-phosphate backbone in the center (where repulsion of the charges would have made the structure unstable).  That Pauling was mistaken, which is not the same as false, was inconsequential.

Corollary 5: The greater the financial and other interests and prejudices in a scientific field, the less likely the research findings are to be true.

Answer: This is so “true” that it is trivial, but it is a truism that has been eclipsed by marketing hype along with politics as usual.

Corollary 6: The hotter the scientific field (with more scientific teams involved), the less likely the research findings are to be true.

Answer:  Perhaps.  In the early 1950s few fields were hotter than the search for the structure of DNA.  Twenty years later, the discovery of reversible protein phosphorylation mediated by kinases (enzymes that add phosphoryl groups to proteins) as the key regulatory mechanism in our cells led to hundreds of blooming flowers.  A few wilted early, but most held up.  As an example, the blockbuster drug imatinib (Gleevec) inhibits a mutant ABL tyrosine kinase as a treatment of multiple cancers.  That cells in the tumor often develop resistance to imatinib does not make anything associated with the activity of the drug “false.”
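For readers who want the “formal apparatus” behind these corollaries made concrete, here is a minimal sketch of the post-study probability (PPV) calculation that drives Ioannidis’s argument, omitting his bias term and using assumed values for prior odds, power, and alpha:

# Post-study probability that a nominally "significant" finding is true (PPV),
# as a function of the prior odds R that a tested relationship is real,
# statistical power (1 - beta), and the significance threshold alpha.
# (Bias term omitted; the numbers below are illustrative assumptions.)
def ppv(R: float, power: float = 0.8, alpha: float = 0.05) -> float:
    true_positives = power * R
    false_positives = alpha        # per unit of null relationships tested
    return true_positives / (true_positives + false_positives)

for R in (1.0, 0.1, 0.01):         # 1:1, 1:10, and 1:100 prior odds of a true effect
    print(f"R = {R:<5}  PPV = {ppv(R):.2f}")
# With long prior odds and modest power, most "significant" findings are false
# before any bias enters at all.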

But “true versus false” is not the proper question regarding “published research findings,” in the terminology of Ioannidis.  As Nancy Cartwright has pointed out in her recent books A Philosopher Looks at Science and The Tangle of Science: Reliability Beyond Method, Rigour, and Objectivity (with four coauthors), recently discussed here, with added comments in italics in brackets:

The common view of science shared by philosophers, scientists, and the people can be described as follows:

  • Science = theory + experiment.
  • It’s all physics really.
  • Science is deterministic: it says that what happens next follows inexorably from what happened before.

This tripartite scheme seems about right in the conventional understanding of science, but Nancy Cartwright has the much better view, one that is more congenial to the practicing scientist who is paying attention.  In her view, “theory and experiment do not a science make.”  Yes, science can and has produced remarkable outputs that can be very reliable (the goal of science), “not primarily by ingenious experiments and brilliant theory…(but)…rather by learning, painstakingly on each occasion how to discover or create and then deploy…different kinds of highly specific scientific products to get the job done.  Every product of science – whether a piece of technology, a theory in physics, a model of the economy, or a method for field research – depends on huge networks of other products to make sense of it and support it.  Each takes imagination, finesse and attention to detail, and each must be done with care, to the very highest scientific standards…because so much else in science depends on it.  There is no hierarchy of significance here.  All of these matter; each labour is indeed worthy of its hire.”

This is refreshing, and I anticipate this perspective will provide a path out of the several dead ends modern science seems to have reached.  Contrary to the conceit of too many scientists [and hyper-productive meta/data-scientists such as Ioannidis], the goal of science is not to produce truth [the antithesis of falsity].  The goal of science is to produce reliable products that can be used to interpret the natural world and react to it as needed, for example, during a worldwide pandemic [emphasis added].  This can be done only by appreciating the granularity of the natural world.

Thus, the objective of scientific research is not to find the truth.  The objective is to develop useful knowledge, and products, that lead to further questions in need of an answer.  When Thorstein Veblen wrote “the purpose of research is to make two questions grow where previously there was only one” (paraphrase), he was correct.

One example of this from my working life, which is in no way unique: Several years ago, I reviewed a paper for a leading cell biology journal.  The research findings in that article superseded those of a previous article.  The other anonymous reviewer was absolutely stuck on the fact that the article under review “contradicted” the previous research, which had been done in my postdoctoral laboratory but not by me (I had nothing to do with that work but was present at its creation).  We went through three rounds of review instead of the usual two, but we all eventually came to an agreement that the new results were different because, ten years later, the microscopes and imaging techniques were better.  Had I not been the second reviewer, the paper would probably have been rejected by that journal.  This did not make the earlier “research finding” false, however.  The initial work provided a foundation for the improved understanding of cell adhesion in health and disease in the second paper.  All research findings are provisional, no statistical apparatus required [1].

Reliability and usefulness are more important in science than the opposite of false.

More importantly, there is also a much larger context in which the Replication Crisis exists.  In the first place, scientists do not generally replicate previous research merely to determine whether it is true (i.e., not false, in the sense used by Ioannidis), other than as an exercise for the novice.  If the foundation for further research is faulty, this will become apparent soon enough.  Whether research findings can be replicated sensu stricto depends on the size of the world in which the science exists.

What is meant by “size of the world”?  Again, this comes from Nancy Cartwright in A Philosopher Looks at Science.  In her formulation as I understand it, the Cassini-Huygens Mission that placed the Cassini spacecraft in orbit around Saturn from 2004 to 2017 was a “small-world” project.  Although the technical requirements for this tour de force were exceedingly demanding, there were very few “unknowns” involved.  The entire voyage to Saturn, including the flybys of Venus and Jupiter, could be planned and calculated in advance, including required course corrections.  Therefore, although the space traversed was unimaginably large, Cassini-Huygens was a small-world project, albeit one with essentially no room for error.

Contrast this with the infamous failure to reproduce preclinical cancer research findings.  The statistical apparatus involved in the linked study is impressive.  But going back to Ioannidis’s Fourth Corollary, “The greater the flexibility in designs, definitions, outcomes, and analytical modes in a scientific field, the less likely the research findings are to be true.”  This describes cancer research perfectly.  Although not explicitly recognized by many scientists and virtually all self-interested critics of science, the cancer cell comprises a very large world.  And this large world extends to the experimental models used at the cellular, tissue, and organismal levels.

None of these models recapitulate the development of cancer in a human being.  Very few can be replicated precisely.  They can be exceedingly useful and productive, however.  Imatinib was developed as an inhibitor of the BCR-ABL tyrosine kinase fusion protein and confirmed in the test tube (very small world) and in cells.  The cell, despite its very small physical size, is a very large world that might be described by several thousand nonlinear equations with an equal number of variables.  Scientists in systems and synthetic biology are attempting this.  Imatinib was subsequently shown to be effective in cancer patients.  Results vary among patients, however.  Experimental results in preclinical cancer research will also depend on how the model cell is cultured, for example, either in two dimensions attached to the bottom of a plastic dish or in three dimensions in the same dish surrounded by proteins that poorly mimic the environment of a similar cell in the organism.  This was not appreciated initially, but it is very important.  These variables affect outcomes as a matter of course.  As an aside, the apparent slowness of the development of stem cell therapy can be attributed in part to the fact that the stem cell environment determines the developmental fate of these cells.  A pluripotent stem cell in a stiff environment will develop along a different path than the same cell in a more fluid environment.
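To give a flavor of what “several thousand nonlinear equations” means, here is a toy sketch with entirely hypothetical rate constants: a single phosphorylation/dephosphorylation cycle of the kind systems biologists wire together by the thousands when they model a cell.

# Toy model, hypothetical parameters: one kinase/phosphatase cycle with
# Michaelis-Menten (nonlinear) kinetics. A whole-cell model repeats this
# kind of equation thousands of times over, with coupling among them.
from scipy.integrate import solve_ivp

def cycle(t, y, kinase=1.0, phosphatase=0.5, km1=0.1, km2=0.1):
    u, p = y                                  # unphosphorylated, phosphorylated protein
    forward = kinase * u / (km1 + u)
    back = phosphatase * p / (km2 + p)
    return [back - forward, forward - back]

sol = solve_ivp(cycle, t_span=(0, 20), y0=[1.0, 0.0])
print(f"phosphorylated fraction at steady state ≈ {sol.y[1, -1]:.2f}")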

Thus, replication depends primarily on the size of the scientific world being studied.  The smaller the world, the more likely any given research finding can be replicated.  But small worlds generally cannot answer large questions by themselves.  For that we need the “tangle of science,” also described by Nancy Cartwright and colleagues with new comments in italics in brackets:

Rigor is a good thing; it makes for greater security.  But what it secures is generally of very little use [while remaining largely confined to small-world questions].  And that “of very little use” extends to what are called evidence-based policy (EBP) and evidence-based medicine (EBM).  The latter has been covered here before through the work of Jon Jureidini and Leamon B. McHenry (Evidence-based medicine, July 2022) and Alexander Zaitchik (Biomedicine, July 2023) and Yaneer Bar-Yam and Nassim Nicholas Taleb (Cochrane Reviews of COVID-19 physical interventions, November 2023), so there is no reason to belabor the point that RCTs have taken modern biomedical science straight into the scientific cul de sac that is biomedicine [replication of clinical studies and trials has been a major focus of the Replication Crisis].  They are practically and philosophically the wrong path to understanding the dappled world in which we live, which is not the linear, determined, mechanical world specified by physics or scientific approaches based on physics envy [and statistics envy].

Which is not to say that the proper use of statistics is inessential.  But it is not sufficient, either.  Neither falsity nor truth can be determined by statistical legerdemain, especially the conventional, frequentist statistics derived from the work of Francis Galton, Karl Pearson, and R.A. Fisher.  We live in a very large Bayesian world in which priors of all kinds are more determinative than genetics, sample size, or statistical power.  Small samples are often sufficient when dealing with large world questions such as ultra-processed foods, while large sample sizes can lead to positive results when the subject is utter nonsense such as homeopathic medicine, as shown in a recent analysis by Ioannidis and coworkers (2023), summarized here:

Objectives: A “null field” is a scientific field where there is nothing to discover and where observed associations are thus expected to simply reflect the magnitude of bias. We aimed to characterize a null field using a known example, homeopathy (a pseudoscientific medical approach based on using highly diluted substances), as a prototype.

Study design and setting: We identified 50 randomized placebo-controlled trials of homeopathy interventions from highly cited meta-analyses. The primary outcome variable was the observed effect size in the studies. Variables related to study quality or impact were also extracted.

Conclusion: A null field like homeopathy can exhibit large effect sizes, high rates of favorable results, and high citation impact in the published scientific literature. Null fields may represent a useful negative control for the scientific process.
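A minimal numerical sketch (made-up numbers, not from the study) of the point about sample size: give a trial large enough arms and even a tiny systematic shift, of the sort residual bias produces in a null field, becomes “highly significant.”

# Illustrative only: no real effect, just an assumed 2-unit shift standing in
# for residual bias (unblinding, selective measurement, and the like).
import numpy as np
from scipy import stats

rng = np.random.default_rng(1)
n = 50_000                                    # very large trial arms
placebo = rng.normal(100.0, 15.0, n)          # arbitrary outcome units
treated = rng.normal(102.0, 15.0, n)          # +2 units from bias alone (assumed)

t, p = stats.ttest_ind(treated, placebo)
print(f"difference ≈ {treated.mean() - placebo.mean():.2f} units, p = {p:.1e}")
# A clinically meaningless difference, yet the p-value is vanishingly small.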

True as the opposite of false is a matter for philosophy, not science.

Finally, the Replication Crisis™ has often been conflated with scientific fraud, especially in accounts of misbehaving scientists.  This is as it should be regarding scientists who lie, cheat, and steal in their research.  But perceived non-replication and fraud are not the same thing, as Ioannidis notes with the inclusion of bias as a confounding factor leading to “false” research findings.  Making “stuff” up is the very definition of High Bias.  In my view, it seems obvious that the title of the founding paper of the Replication Crisis™ was meant to be inflammatory.  It was and remains the ur-text of the apparent crisis.  I will also note that seventeen years after Why Most Published Research Findings Are False was published, an equation in the paper was corrected.

Dishonest science practiced by dishonest scientists is a pressing problem that must be stamped out, but that will require reorganization of how scientific research is conducted and funded.  Still, all scientific papers have a typo or three.  One of ours was published without removal of an archaic term that we used as a temporary, alas now permanent, placeholder.  But the long-delayed correction of one of the earliest and most cited (>2,000 citations) of Ioannidis’s ~1,300 publications since 1994 (71 in 2023 and already 24 in 2024) could well mean that the paper has been used primarily as the cudgel others took it to be, rather than as serious criticism of the practice of science.  If the correction took so long, how many people actually read the paper in detail?

[1] Ernest Rutherford (Nobel Prize in Chemistry, 1908) to Max Planck (Nobel Prize in Physics, 1918), according to lore: “If your experiment needs statistics, you ought to have done a better experiment.”  True enough, but not in the world of the quantum or in most properly designed and executed clinical studies and trials.  We do not sense our existence in a quantum world.  Newtonian physics works well in the physical world of objects at the level of whole atoms/molecules and above (Born-Oppenheimer Approximation; yes, that Oppenheimer).  In the world of biology and medicine, the key is dose-response.  If this does not emerge strongly from the research, as it did in the recognition of the link between smoking and lung cancer (the fifth Bradford Hill criterion, biological gradient) long before any molecular mechanism of cancer was identified, a new hypothesis should be developed forthwith.


Comments

  1. VTDigger

    Very helpful, I would just add that the actual purpose of Science is to grant us Power over nature and others.

  2. Not Bob

    Important topic, great article. Very much aligns with the reason I don’t do research any more…”this process is too noisy to draw firm conclusions from” is a recipe for a talking to by a CI. I don’t mean to defend the system, but I do think with better statistical education we might have less obvious junk passing for valuable research. An end to “kitchen sink” models would be a nice start, or even just a recognition that most remotely-complex processes are poorly approximated by linear models. But given the wave of people trying to pass off fitting neural networks (to hopelessly small/noisy datasets) as legitimate research, perhaps I should moderate my expectations.

    For the benefit of other readers: Andrew Gelman often writes about this topic, and I follow the highlights on his blog. (Full disclosure: Gelman fanboy here.) Turns out there’s a whole range of pitfalls for the unsuspecting researcher, where even those trying to do the right thing can end up causing all kinds of problems. I’ll spare the readership a stats nerd-out, but this is the tenor of the writing and the community over there. Hope some readers find a rich vein of interesting ideas!

  3. DJG, Reality Czar

    Some of the replication crisis comes down to basics: I don’t accept the formula “theory + experiment,” because I don’t accept the word “theory,” when what is going on is:

    hypothesis (opinion, likely faulty) + experiment (a process in which the observer affects the observed)

    So observations are changeable, variable. Replication will be hard. It is much like baking bread. Dough is always different. Humidity affects the flour. The temperature of the potato water varies. The yeast are cranky today. Dealing with the variety of dappled life is going to produce different effects.

    Yet I’m also skeptical that Bayesian statistics have much to say. Bayesian propositions always strike me like a recipe that calls for salt to taste, a good pinch of cinnamon, and “some raisins.” One can risk being more definite.

    The word not mentioned in this article (thanks) is “belief”–KLG writes about truth and falsity. I certainly agree that science isn’t about truth. Science is knowledge (scienza in italiano) and knowledge evolves over time. So science doesn’t engage belief. On the one hand, we shouldn’t have to listen to slogans about believe the science. And on the other hand, rejecting the well-founded theory of evolution because you don’t “believe” in it is missing the point. Science isn’t a matter of faith (and one wonders if these same peeps also don’t “believe” in gravity or in color theory).

    In short, some of the crisis is in not using terms strictly.

    1. Steve H.

      > Yet I’m also skeptical that Bayesian statistics have much to say.

      It’s not so much what they say, taking ‘saying’ as a positive rejection of a null hypothesis. It’s assigning a level of certainty to each equation variable, instead of just taking each as True. That makes it less likely that something is said.

      Instead of ‘Assume a can opener’ you get ‘certainty there is a can opener’. We do this implicitly when we assess the provenance of a fact. Every fact is an opinion, it’s just how informed the opinion is.

  4. Steve Ruis

    Re “Thus, the objective of scientific research is not to find the truth.” It never has been, except in the minds of early scientists. Recent critics of “scientism” rail against natural laws as not being created from nature and instead assume they were created by some supernatural agent. This is a common misunderstanding of “natural laws,” which are merely behaviors in nature that are trustworthy. If we can count upon them, they can lead us forward. And that is all.

    All scientific conclusions are considered provisional because we don’t know what the future will bring, so truth is out the door, as it were.

    A field in which there are rampant problems with reproducibility is the current state of particle physics. Sifting through tons of data, looking for a glimmer of something new, has become an almost mystical pursuit. The use of noise filters is probably hiding what is being looked for (all previous findings are considered noise for additional look-sees).

    Another field ripe for upheaval is cosmology. The “standard theory” produces predictions that, when the data are actually checked, seemingly require only that the theory be modified to account for them; but so many “researchers” (mostly theorists) have invested much of their careers in the Big Bang theory that finding any who will consider alternatives is very, very hard.

    1. cobetia

      Steve Ruis’ comment should be taken seriously. In it he implies that any process with multiple input variables may generate diverse outcomes and therefore the significance of a result is likely to require statistical tools. Those who doubt the reliability of experimental results should be challenged to suggest valid experimental approaches that would resolve the problem at hand. If such individuals are serious about supporting progress, especially in the life sciences, they should first recognize that conclusions are probable and therefore falsifiable if wrong.

    2. witters

      “Thus, the objective of scientific research is not to find the truth. The objective is to develop useful knowledge”

      So science is merely ‘pragmatic’ – if it works, it works, so forget the truth as the Cardinal said to Galileo…

      But that absolutely fails to distinguish science from anything else, most closely, of course, technology.

      What distinguished science (that thing the ‘early scientists’ got going) was its explanatory competition with theology and metaphysics. Science got THE truth, not these empty imposters.

      Nietzsche saw this: “it is still a metaphysical faith upon which our faith in science rests—that even we seekers after knowledge today, we godless anti-metaphysicians still take our fire, too, from the flame lit by a faith that is thousands of years old, that Christian faith which was also the faith of Plato, that God is the truth, that truth is divine.”

      Lose that connection of science and TRUTH and you lose science, its autonomy and value. Perhaps it is this we see as well around us today…

  5. voislav

    In my experience, there are two main factors to the replication crisis. One is malice and the other is incompetence. Yes, there are people who are intentionally publishing bad data/conclusions for a variety of reasons. But they are a minority because most scientific studies have little importance, either scientifically or financially.

    The majority of issues I see are due to incompetence or lack of skill. There is a big difference in accuracy and precision of the experiment performed by someone motivated, skilled and experienced vs. someone who is underpaid, rushed and inexperienced. Even simple tasks like pipetting technique can have major effects on a study. Add to that generally poor documentation practices: most experiments are not documented well enough to be reproduced without the original researcher guiding you through the process.

    People don’t understand how much science has expanded over the last 40 years: the number of scientists as a fraction of the population has greatly increased, and scientific research has largely become a commodity. The price of such expansion is quality.

    1. A Reader

      “In my experience, there are two main factors to the replication crisis. One is malice and the other is incompetence.”

      Money (as in research grant)? I think that trumps all other factors.

  6. Miiiiike

    From the article:

    Thus, the objective of scientific research is not to find the truth. The objective is to develop useful knowledge, and products, that lead to further questions in need of an answer.

    This is delightfully reminiscent of Mao’s “On the Relation Between Knowledge and Practice, Between Knowing and Doing”:

    However, so far as the progression of the process is concerned, the movement of human knowledge is not completed. Every process, whether in the realm of nature or of society, progresses and develops by reason of its internal contradiction and struggle, and the movement of human knowledge should also progress and develop along with it.

    But Marxism emphasizes the importance of theory precisely and only because it can guide action. If we have a correct theory but merely prate about it, pigeonhole it and do not put it into practice, then that theory, however good, is of no significance. Knowledge begins with practice, and theoretical knowledge is acquired through practice and must then return to practice.

    I think the west is often hobbled by the relentless urge to sort things into trivially simplistic dichotomies like “true” / “false”, “good” / “evil”, etc., as opposed to the Marxist Dialectical position that all things contain contradictions and are in a constant state of change, the primary purpose being to identify these contradictions and learn to deal with them in the real world.

  7. michael maratsos

    I can’t help thinking there is something wrong with the idea that ‘science does not produce truth.’ I completely agree that, of course, what is currently held up as the best result or best interpretation in some domain can always be overthrown by some future finding or analysis. Yet it is also true, I think, that in psychological reality, each of us is always forming an idea of what is true, partly so we have some basis for practical living, and partly because we just want to know. I think the basic account of the existence of DNA and its basic structure and how it works in replication (and so on) is so well established that it does form something we could call truth for scientists. Now again, it needs to be absolutely conceded that a basic idea of science is that things can change. But I still think that what could be called “current truth” or “working truth” has such force for most of us, that it deserves some sort of status in discussions of these problems, more than just being treated as a common delusion. (Not incidentally, if all ideas currently accepted could be wrong, does this apply to the idea that science does not give us truth?)

  8. Kouros

    I know a lot of individuals that would protest against this settler-colonialist mindset and would teach you how to decoonize the data…

    1. Retired Carpenter

      Kouros,
      Did you forget to include the /sarc tag?
      BTW, “decoonize the data” is, I assume, a typo. Dangerous mistake. Remember Khayyam’s quatrain 51:
      ” The Moving Finger writes; and, having writ,
      Moves on: not all thy Piety nor Wit
      Shall lure it back to cancel half a Line,
      Nor all thy Tears wash out a Word of it..”

  9. Clark Landwehr

    And the Lord said: “…let us go down, and there confound their language, that they may not understand one another’s speech.” Genesis 11:1-9. Where we are at intellectually. Timeless wisdom.

  10. Clark Landwehr

    You know science is in trouble when serious people claim: “True as the opposite of false is a matter for philosophy, not science.” The false and disingenuous separation of science and philosophy is exactly the problem. Science was once known as natural philosophy. This stance is a result of the Americanization of science post Manhattan Project. Newton was not afraid of a little meta-physics.

  11. Dave

    There are lots of angles on this, e.g.

    – traceability

    – measurement uncertainty

    – software reproducibility

    – ‘bad’ software

    Going back to an old favourite

    The Unreliability Of Excel’s Statistical Procedures

    “It’s not as if Microsoft would have to develop new algorithms to solve these problems. For most of the inaccuracies, good algorithms have already been developed and are well known in the statistical community.
    Microsoft simply used bad algorithms to begin with, and it never bothered to replace them with good algorithms”

    – the practical challenges of real world coding, e.g. IEEE 754 numbers, parallel processing with multicore processors etc.

    What Every Computer Scientist Should Know About Floating-Point Arithmetic

    Doing sufficiently accurate computations of infinite precision real numbers using finite precision computer approximations requires a solid understanding of floating-point formats, computational issues, and standards.
    When implementing numerical algorithms involving real-world data or ill-conditioned mathematical problems, a single sloppy floating-point expression can break an implemented algorithm and make it impossible to compute a solution to the needed precision (or any solution at all).
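    A minimal sketch of the kind of pitfall described above, assuming nothing about Excel’s actual code: the textbook one-pass variance formula cancels catastrophically when the data ride on a large mean, while the two-pass form does not.

    # Illustrative only: naive "sum of squares" variance vs. a stable two-pass form.
    import numpy as np

    data = 1e9 + np.array([4.0, 7.0, 13.0, 16.0])   # big offset, small spread; true variance is 30

    def naive_variance(x):
        n = len(x)
        return (np.sum(x * x) - np.sum(x) ** 2 / n) / (n - 1)   # numerically unstable

    def two_pass_variance(x):
        m = np.mean(x)
        return np.sum((x - m) ** 2) / (len(x) - 1)              # numerically stable

    print(f"naive    : {naive_variance(data)}")    # garbage from catastrophic cancellation
    print(f"two-pass : {two_pass_variance(data)}") # 30.0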

    For the sake of brevity I’ll focus on supporting people in the research environment.

    According to the editorial team at Nature

    Challenges in irreproducible research

    “There is growing alarm about results that cannot be reproduced. Explanations include increased levels of scrutiny, complexity of experiments and statistics, and pressures on researchers. Journals, scientists, institutions and funders all have a part in tackling reproducibility.”

    “Papers in Nature journals should make computer code accessible where possible”

    Code share

    I’ve enclosed some more examples of the potential for problems with data and analysis of data that can arise when people have access to computers and they aren’t supported properly

    Consistency of Floating Point Results or Why doesn’t my application always give the same answer?

    Improving Reproducibility in Research: The Role of Measurement Science

    The way that vested interests tend to introduce ‘artificial intelligence’ into society is adding to the mixup in relation to problems with reproducibility

    Could machine learning fuel a reproducibility crisis in science? (pay wall) 

    Consider medical science

    Dr. Richard Horton, the current editor-in-chief of the Lancet

    “‘A lot of what is published is incorrect.’ I’m not allowed to say who made this remark because we were asked to observe Chatham House rules.”

    Dr Marcia Angell, a physician and longtime Editor-in-Chief of the New England Journal of Medicine

    “It is simply no longer possible to believe much of the clinical research that is published or to rely on the judgement of trusted physicians or authoritative medical guidelines.
    I take no pleasure in this conclusion, which I reached slowly and reluctantly over my two decades as an editor of the New England Journal of Medicine.”

    Dr. Richard Horton

    “The case against science is straightforward: much of the scientific literature, perhaps half, may simply be untrue.
    Afflicted by studies with small sample sizes, tiny effects, invalid exploratory analyses, and flagrant conflicts of interest, together with an obsession for pursuing fashionable trends of dubious importance, science has taken a turn towards darkness.”

    Talented people can do peculiar things with computers, mislabel stuff and describe it as ‘data’ and attach their names to this stuff.

    Referees subsequently hand out gold stars and this stuff becomes a part of the scientific literature.

    A couple of points

    1. Firstly ‘science has to be read on a user basis’.

    2. These problems are a sign of failings in the workplace. Organisations are not doing a great job when it comes to providing training and support for their staff.

    By the way the medics aren’t the only dodgy operators in town.

    What about forensics Sherlock?

    Obama’s science advisors: Much forensic work has no scientific foundation

    We simply don’t know enough about the accuracy of a number of forensic techniques.

    FBI and DOJ vow to continue using junk science rejected by White House

    The U.S. Department of Justice said it will ignore a White House report calling for rigorous scientific testing of forensic techniques.

    Justice is blind? Just sayin!
