5 Ways Capitalist Logic Has Sabotaged the Scientific Community

By Justin Podhur. Originally published at Alternet

At a time when federal employees are prohibited from uttering the phrase “climate change,” the right routinely attempts to undermine universities’ legitimacy, and tuitions have skyrocketed alongside student debt, it seems perverse that academics would further endanger their mission to educate and enlighten. Yet by embracing a malignant form of pseudoscience, they have accomplished just that.

What is the scientific method? Its particulars are a subject of some debate, but scientists understand it to be a systematic process of gathering evidence through observation and experiment. Data are analyzed, and that analysis is shared with a community of peers who study and debate its findings in order to determine their validity. Albert Einstein called this “the refinement of everyday thinking.”

There are many reasons this method has proven so successful in learning about nature: the grounding of findings in research, the openness of debate and discussion, and the cumulative nature of the scientific enterprise, to name just a few. There are social scientists, philosophers, and historians who study how science is conducted, but working scientists learn through apprenticeship in grad school laboratories.

Scientists have theorized, experimented, and debated their way to astounding breakthroughs, from the DNA double helix to quantum theory. But they did not arrive at these discoveries through competition and ranking, both of which are elemental to the business world: a business, after all, strives to be the top performer in its market. Scientists who adopt this mode of thinking betray their own lines of inquiry, and the practice has become upsettingly commonplace.

Here are five ways capitalist logic has sabotaged the scientific community.

1. Impact Factor

Scientists strive to publish in journals with the highest impact factor: the mean number of citations a journal’s articles received over the previous two years. Often these publications will collude to manipulate their numbers. Journal citations follow what is known as an 80/20 rule: in a given journal, 80 percent of citations come from 20 percent of the articles published, which means an author’s work can appear in a high-impact journal without ever being cited. Ranking is so important in this process that impact factors are calculated to three decimal places. “In science,” the Canadian historian Yves Gingras writes in his book Bibliometrics and Research Evaluation, “there are very few natural phenomena that we can pretend to know with such exactitude. For instance, who wants to know that the temperature is 20.233 degrees Celsius?”

One might just as easily ask why we need to know that one journal’s impact factor is 2.222 while another’s is 2.220.
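
The arithmetic behind an impact factor is simple enough to sketch in a few lines. The following Python snippet is illustrative only; the journal’s citation and article counts are invented:

```python
def impact_factor(citations: int, citable_items: int) -> float:
    """Two-year impact factor: citations received this year to articles
    published in the previous two years, divided by the number of
    citable items published in those years, reported (absurdly)
    to three decimal places."""
    return round(citations / citable_items, 3)

# Hypothetical journal: 2,000 articles over two years drew 4,444 citations.
print(impact_factor(4444, 2000))  # 2.222
```

Two journals whose scores differ only in the third decimal place are, for ranking purposes, treated as meaningfully different.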

2. The H-Index

If ranking academic journals weren’t destructive enough, the h-index applies the same pseudoscience to individual researchers. Defined as the largest number h such that a scientist has published h articles cited at least h times each, the h-index of your favorite scientist can be found with a quick search in Google Scholar. The h-index, Gingras notes in Bibliometrics, “is neither a measure of quantity (output) nor quality of impact; rather, it is a composite of them. It combines arbitrarily the number of articles published with the number of citations they received.”

Its value also never decreases. A researcher who has published three papers that have been cited 60 times each has an h-index of three, whereas a researcher who has published nine papers that have been cited nine times each has an h-index of nine. Is the researcher with an h-index of nine three times the researcher their counterpart is, when the former has been cited 81 times and the latter 180 times? Gingras concludes: “It is certainly surprising to see scientists, who are supposed to have some mathematical training, lose all critical sense in the face of such a simplistic figure.”
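
Gingras’s complaint is easy to make concrete: the h-index is a trivial computation. A short Python sketch, using the two hypothetical researchers above:

```python
def h_index(citations: list[int]) -> int:
    """Largest h such that h papers each have at least h citations."""
    ranked = sorted(citations, reverse=True)
    h = 0
    for rank, cites in enumerate(ranked, start=1):
        if cites >= rank:
            h = rank  # this paper still clears the bar at its rank
        else:
            break
    return h

print(h_index([60, 60, 60]))  # 3  (180 total citations)
print(h_index([9] * 9))       # 9  (81 total citations)
```

The researcher with fewer than half the total citations ends up with triple the h-index.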

3. Altmetrics

An alternative to impact factors and h-indexes is “altmetrics,” which seeks to measure an article’s reach by its social media impressions and the number of times it has been downloaded. But ranking based on likes and followers is no more scientific than the magical h-index. And of course, these platforms are designed to generate clicks rather than inform their users. It’s always important to remember that Twitter is not that important.

4. University Rankings

The U.S. network of universities is one of the engines of the world’s wealthiest country, created over generations through trillions of dollars of investment. Its graduates manage the most complex economies, investigate the most difficult problems, and invent the most advanced creations the planet has ever seen. Yet these institutions have allowed their agendas to be manipulated by a little magazine called U.S. News & World Report, which ranks them according to an arcane formula.

In 1983, when it first began ranking colleges and universities, it did so based on opinion surveys of university presidents. Over time, its algorithm grew more complex, adding things like the h-index of researchers, impact factors for university journals, grant money, and donations. Cathy O’Neil of the blog mathbabe.org notes in her book Weapons of Math Destruction that, “if you look at this development from the perspective of a university president, it’s actually quite sad… here they were at the summit of their careers dedicating enormous energy toward boosting performance in fifteen areas defined by a group of journalists at a second-tier newsmagazine.”

Why have these incredibly powerful institutions abandoned critical thought in evaluating themselves?

5. Grades

The original sin from which all of the others flow could well be the casual way that scientists assign numerical grades and rankings to their students. To reiterate, only observation, experiment, analysis, and debate have produced our greatest scientific breakthroughs. Sadly, scientists have arrived at the conclusion that if a student’s value can be quantified, so too can journals and institutions. Education writer Alfie Kohn has compiled the most extensive case against grades. Above all, he notes, grades have “the tendency to promote achievement at the expense of learning.”

Only by recognizing that we are not bound to a market-based model can we begin to reverse these trends.


45 comments

  1. synoia

    Exams at year end at my university. Pass/Fail.

    The “grade” was passed out only upon graduating.

    1st Hons, Upper 2nd Hons, Lower 2nd Hons, 3rd, Pass.

    The exams were graded (marked) anonymously. No favoritism possible.

    If you were believed capable of achieving a 1st or Upper 2nd you were invited for a PhD, on the condition you received a 1st or Upper 2nd.

    The US system embodies cronyism, favoritism and grade rigging. It teaches a level of dishonesty which is appalling.

    1. Larry

      Actually, the US system allows anybody in the world to obtain a PhD, with no need for a flawed single sit down test. What good does a sit down test do towards measuring the criteria to be a scientist?

      1. a different chris

        ???? Only if they have sat thru a (familyblog) load of tests to get to the point where they are even considered for a PhD program…. high school grades + standardized tests to get into college, class after class, test after test, in college. One bad semester and fuggadaboutit.

        Interestingly enough, people say that once you get into Med School you have to like kill a professor and maybe another student to be sure to be dropped. There was a great book by a woman who made it thru med school when, shortly after she began, she started having severe psychotic episodes! (Turned out it was chemical and eventually, I think before she graduated, they were able to fix her with regular dialysis treatment).

        Explains a lot, if true.

      2. Isotope_C14

        Larry, this is patently false.

        The US system only wants one thing. Captive, obedient labor. The best way they can do that is to get a visa-dependent student who can’t leave the laboratory they are in, usually for over 6 years.

        What American wants to work for over 6 years at 12.5-30k/year (usually in the middle) and after graduation get 25-45k/year?

        They wouldn’t even remotely accept me, even though I had recommendations from Fermilab scientists, and scored in the top 2% of the analytical part of the GRE, when they had that section, they got rid of that because the word literacy and the accounting part is all they *really* want. I’m a native English speaker, but I have a habit of typing like I talk, and well, that isn’t preferred. I consider it thoughtful honesty.

        Sadly I’m one of those unfortunate souls that will only do science, everything else is wage-slavery. Germany got me now. #MAGA, have a country with no scientists, Jesus-Freak teachers, and only bullies and drug-dealers running the show.

        Hey American scientists, Germany is pretty nice, though they are pretty racist toward the middle-eastern folks here, I point that out very often in the lab. It rains a bit, but you don’t need a car, and the food isn’t completely poisoned by Monsanto, they actually have real laws, and real labels that mean something. Also, I’m not scared of the Police here, it’s a real trip, they pretty much ignore everyone!

    2. a different chris

      Your school wasn’t in the US was it? Because where I got my undergrad three decades ago – a state school – they had already gone over the edge with 4 decimal place QPAs and of course the then-new plus/minus system.

      The funny thing was that, on the other hand, they had gone to a real effort to keep class sizes small and run by real professors. I can count the number of “auditorium” classes I had on one hand, T/A’s were only seen in labs. But this combination had the unfortunate side effect of driving “professor shopping”… one guy would give 10% A’s, another guy for the same class would say “only one student gets an A.”

      The pre-meds would work and work and work their schedule to get Professor A, and would even undersize their class load (to be made up with an overload next semester) if they couldn’t get all “easy graders”.

      1. lyman alpha blob

        There was an apocryphal story about one of my tenured physics professors that one semester he failed every student in his fairly small class, his argument being that nobody put in enough effort to warrant better. After the administration told him that he couldn’t do that, next semester he gave everyone an A. They didn’t let him teach much after that.

        1. Jim A

          I had a class in College that started out with more than 30 students (the bookstore had to order more texts) and by the end of the drop period, there were 4 of us left. Several of those who dropped it decided that they weren’t going to graduate that year after all. I’m not sure how that affected the career of the instructor…. The title of the text Introduction to Statistics and Probability by Dudewicz was a bit deceptive, considering that it was a 400 level math class.

    1. Yves Smith Post author

      I fixed the link. The worst is that the HTML for all of the links was via Google searches, which is a terrible way to do it, and it may be that Google inserted the “https” into the older URL out of its “https” fixation. Gah. I’ve redone the rest.

  2. jCandlish

    Peer reviewed journals are being held hostage.

    Every curious person should know how to access them.

    Although the DNS servers are regularly under attack their service is reachable via onion protocol routers, and the links in WikiPedia are kept up to date.

    More info here ==> https://en.wikipedia.org/wiki/Sci-Hub

    1. dcrane

      Yes, this deserves an article all its own. Much research is paid for by taxpayers dollars, and the results and all datasets of those studies should be made publicly available in a timely fashion.

      And a related point: Private publishers, often publicly traded and paying dividends, are making sometimes very high profits by publishing peer-reviewed science behind paywalls for subscribers (and, in my field, being paid page charges by the authors to publish as well). The craziest aspect of the system is that the key value of the “product”, the hours and hours of peer review labor, is still traditionally provided for free by academics. These days, I decline such requests from this sort of journal (which is most of them).

    2. ewmayer

      I got e-mail yesterday asking me to review a submission to a maths/crypto journal, with a link to an online page containing the abstract, list of references and paired yes/no buttons for persons asked to act as reviewers to accept/decline said request. Had never heard of the particular journal before (not that I make any attempt keep up with the insane proliferation of such), so first looked it up online, then sent the following e-mail reply to the editor who contacted me:

      Thanks for considering me as a possible reviewer – I am replying via e-mail because the online yes/no choice offered no opportunity to provide one’s reasons for declining. Before looking at the abstract I checked up on the journal, I see it is a recent joint venture between Hindawi and Wiley & Sons, despite the verbiage about being “open access”.

      While I take the role of peer review very serious in the scientific process, I’m very loath to provide free labor to global for-profit academic-publishing behemoths like Wiley and Elsevier, thus am declining on those grounds.

      Best regards,
      -E

      Elsevier has probably – and deservedly – gotten the most flak for their shamelessly rent-extractive business model, but Wiley is also a yuuge for-profit, the Wikipedia page on them is littered with stawk market, revenue and profit references. Ugh.

  3. The Rev Kev

    Forgive me asking but all these rankings and grades and the like in this article – it all sounds like some sort of oversize popularity contest amongst scientists. Could it be this simple?

    1. Craig H.

      It’s a Foucault power play. The current system is perfectly configured so the guys at the top have it as easy as possible. The people who run it think the system works pretty good.

      There’s a Max Weber goof here though. Protestant ethics have no causal relation to capitalist economics activity whatsoever. There are human characteristics which lead independently to Protestant ethics and also to capitalist world beating (and also to professorial prominence). Not sure what to call it. Primate politics maybe? Similar sociology is observable in chimpanzee living arrangements.

  4. Disturbed Voter

    Most public research is done at taxpayer expense. Hence subject to politics. Most private research is done at corporate expense, and you have no business in it. Hence subject to cronyism.

    It is hard being a Renaissance man. Didn’t Rafael compare artists to whores?

    1. nonsense factory

      Actually academics has been overrun by corporate interests under the guise of the ‘public-private’ partnership under a set of rules set up during the Reagan era, Bayh-Dole, which allows universities to exclusively license patents generated with taxpayer dollars to private corporations.
      The best discussion of this:
      https://www.goodreads.com/book/show/1476144.University_Inc
      “University, Inc: The Corporate Corruption of Higher Education by Jennifer Washburn”

  5. Dirk77

    Here is one thing I’ve wondered about through anecdotal evidence, and possibly others could enlighten me. In this environment a journal wants to improve its impact factor, right? So, since it knows the h-index of the scientists who submit papers to it, isn’t it led then to bias paper acceptance toward those with currently higher h-indexes?

    1. a different chris

      I can’t enlighten you, but does seem to match with the whole “science advances one death at a time” thing.

  6. TG

    Indeed, I agree with much of this, well said. And yet, although a system that uses numerical indices to rate scientific quality is highly flawed, it may nevertheless be better than no rankings at all and making it all completely subjective. It is certainly valid in the extremes: a journal with an impact factor of 0.1, or a scientist with an H-index of 2, are unlikely to be very good.

    A big problem is simply that there are too many scientists, and far, far too many journals and articles. We can’t read them all, and we can’t objectively evaluate every scientist. Making this worse, the increasing pressure on scientists to produce means an explosion of crud, and eventually nepotism and corruption will drive out quality science, as it has in overpopulated India (apologies to any Indian nationals, this is not about individuals but the entire system – and you know this to be true). The neoliberals say that we need more and more competition between workers (not between billionaires of course, here competition is somehow seen as destructive). And indeed competition is good, but if it becomes extreme, if smart people can’t make a career in science just by doing good science, they will do what everyone does when playing by the rules does not pay, and game the system. As with journalism and economics, a flooded market for scientific labor makes it much easier for the rich to get the ‘results’ that they want, because scientists will be so desperate for funding and so much less likely to offend wealthy patrons. That’s why overpopulated third-world countries are always so corrupt – it is the natural response of people to impossible situations.

    To a great extent, US science is still quite honorable (with notable exceptions), but we are coasting on past moral capital. As the pressures build, it will slowly but surely rot, as do all human institutions under pressure. Only the rare saint will risk a steady job on principle when steady jobs are in short supply, and not having a job means dire poverty, and most people everywhere are not saints. Nor should we expect them to be.

    1. Jeremy Grimm

      Reading your comment made me wonder about your assertion “there are too many scientists, and far, far too many journals and articles”. “Too many scientists and articles” for what? I know an easy answer to that question — but I’m asking it from the standpoint that there seem to be plenty of unexamined and unexplained mysteries for scientists to investigate. And for that matter is our society really too poor to employ more scientists on more problems … maybe even do a little basic research using grants?

  7. Mel

    Tony Hoare wrote something like “You can only control what you can measure.”
    To apply it to management you have to change it: “You can only manage what you can recognize.”
    When managers can only recognize what they can measure, then you get this.

    Ran into this in George “Adam Smith” Goodman’s The Money Game yesterday:

    Fill in the missing words:
    To get rich, you find a stock whose __________ have been compounding at a very fat __________ and then the stock zooms and there you are.

    if you filled in “earnings” and “rate”, you may have a great future at taking Programmed Instruction, but actually you are in trouble because it is a catch question and what you should have done was to mark the whole question False. Now in twenty-five words or less, write an essay on why the statement was false.
    Write here:
      
      
      
      
    If you wrote “because records show the past, and the market cares about the future”, or something similar, you get to come back in the class.

    1. Jeremy Grimm

      I think you’ve hit on a root cause of much of the malaise in our society. “Modern” management theory seems intent on Taylorizing all things, much in step with Neoliberalism’s efforts to compel Markets for all things. With “good” management any business — like ‘science’ — can be optimized to produce most efficiently according to the rules of the Market. Creating a Market requires measures of costs, profits, and productivity, where the profits may be intangible. “Modern” managers have become adept at creating measures and management techniques for science — and other areas of our society — helping all become optimally productive in producing crapified “products”. Software arrives on-time and on-budget, bloated and buggy and profitable … at least for a while. Hardware mimics software. We can buy more kitchen gadgets and electronic gadgets that don’t work, and throw away more clothes and more trash and plastics, all benefiting “growth”. We have police working to “juke”-the-stats, as the phrase went on the “Wire”. I can go to a parent-teacher conference to find out how my children are faring, and in response I am entertained with lists of grade-points and grade-point averages and medians; perhaps in a school near you soon we may see learning Gantt charts, or even learning-path PERT charts for the more ambitious. When I ask, “But how is my child doing?”, a puzzled expression from the teacher is followed by a reprise of grade-points and grade-point averages and medians.

      I believe C. Wright Mills’ “Managerial Demi-urge” is alive and well. Neoliberalism drives to commodify and handle all things through Markets. The “Managerial Demi-urge” is like the Market’s “Ariel”, striving to control all individual creativity and initiative, but unlike “Ariel”, no Prospero need command its workings. It works for and in itself.

  8. jabawocky

    I would hope that anyone who uses such metrics does so in full knowledge of their limitations. Indeed many of the world’s top institutions have signed up to the DORA code that forbids use of simple impact metrics in promotion and funding decisions. At the end of the day I question whether this is ‘capitalist logic’ however. They are shorthand evaluation tools and a real truth is that scientists need efficient ways to identify potentially important contributions from the dross.

    1. Adam Eran

      This is the essence of irony. The “soft” sciences got Newton envy, and tried their darnedest to imitate that kind of mathematical exactness and predictability. This is especially true of the “scientific” business administration widely evangelized with MBA degrees.

      Matthew Stewart’s The Management Myth really lays this out in detail. Oddly enough, he points out that the promoters of “scientific” management, like Frederick Winslow Taylor would cook the books so their “experiments” would agree with their foregone conclusions. It was on such foundations that American Business schools were built.

      Management is a liberal art, says Stewart. Trying to make it merely a matter of calculation ignores the obvious and promotes some kind of Kafka-esque parody of actual management.

  9. Enquiring Mind

    Gresham’s Law applies everywhere these days, with bad anything driving out the good.

    Riffing on commenter Mel’s notes above, what about all that isn’t measured, or what is excluded? The latter would seem to get found out in a methodical, scientific process, so how does the average person protect oneself against such hidden exclusions or manipulations?

    1. TheCatSaid

      EM: “The latter would seem to get found out in a methodical, scientific process”

      Really? One of the limitations of the scientific method in its current manifestation is that it is self-referencing, persistently requiring and reinforcing inside-the-box thinking and incremental investigations. Find out something radically new and the system spits it out, regardless of evidence.

      Genuinely new understanding often involves a change in paradigm that is difficult or impossible to even evaluate using the terms of the current consensus, much less integrate at the personal, societal or institutional levels.

      One can easily think of examples in numerous fields, from agriculture to astronomy to medicine/healing.

      The scientific method has limitations–something not considered in this post. Not surprising, as questioning the scientific method itself is almost taboo in polite company.

      1. Jamie

        Genuinely new understanding often involves a change in paradigm that is difficult or impossible to even evaluate using the terms of the current consensus, much less integrate at the personal, societal or institutional levels.

        And yet, paradigms do shift and are integrated. Must be magic!

        1. TheCatSaid

          Integrated after a hundred or two hundred years, maybe. (Lord Eric Ashby has written about the slowness of new ideas to be accepted by institutions.) In some cases longer or never.

          In other cases we seem to have gone backwards in our understanding. There are many ancient technologies which modern scientists are not able to duplicate. Magic? Or natural laws we have not yet discovered (or re-discovered)? Sometimes our reverence for science and incremental thinking keeps us stupid.

      2. Enquiring Mind

        Should’ve worded as caught out or noted by absence instead of found out. I have been concerned that there is so much that either defies conventional investigation or otherwise isn’t so easily reducible via some neo-libish method to some overquantified metric.

  10. Altandmain

    Another big issue is that not as much money is being spent on basic research.

    There is a focus on short term profit in the corporate world, which has also affected how academia perceives itself.

    So in other words, there is a publish or perish culture combined with less funding for real research.

    Another challenge is how endowments can have a chilling effect on research. An example, say we have a drug that researchers of that university find causes really bad side effects and is not very effective. If the pharmaceutical company that makes the drug donates to the university endowment fund, there could be strong incentives for the university to keep its findings quiet.

    Meanwhile the teaching is outsourced to adjuncts who are often paid a poverty level wage…

  11. ultrapope

    I work as a research assistant in a neuroscience/psychiatric research lab and have seen how these metrics corrode the quality of scientific research performed both by our lab and other labs in our field. However, to really understand the “capitalist logic” of these metrics, an understanding of the current funding environment is helpful.

    Our university dictates that 70% of lab salaries must come from external grants. No grants = no jobs. At the same time, grant funding from federal agencies is extremely tight these days. When we apply for a grant at the NIMH we are typically competing with about 150 other applications. Since only about 14-15 of these applications will be funded (a rant for another time), a massive amount of triaging is done. While not the only criteria, H-Index plays a large role in this process. In this sense, the employment of both the researcher AND the staff working for them are tied to the submitting researcher’s H-Index.

    Ok, that sucks, but how does it degrade the science?

    First, it has made researchers extremely risk averse. Despite all of the noise being made about the importance of publishing null results, no journal will publish them. Therefore researchers are being extremely conservative with their research designs and are pursuing experiments which are basically guaranteed to get results. Just to be clear, these are not replications per se. Rather, they are previously performed experiments that are tweaked just enough to be considered novel.

    Second, it has given birth to something called the LPU, the least publishable unit. The basic concept is to squeeze as many publications out of one experiment as possible so that you can inflate your H-index. Fragmenting results across several papers puts a lot of redundant information into the literature and makes literature reviews a huge time sink.

    Finally, it encourages researchers to perform data dredging and commit a lot of post hoc fallacies. In other words, if you didn’t find anything in your analysis then you weren’t looking hard enough. Run every statistical test you can until p<.05. Make up a story and publish that. It’s not unethical, but it sure isn’t scientific.

    I get why these metrics are needed and so valued by grant agencies, reviewers, etc. But there are huge flaws in these metrics and they can be gamed. In fact, given the importance of these metrics to a researchers employment and the hyper-competitive environment researchers are in, there are huge incentives for gaming these metrics. In turn, they have had a drastic effect on the quality of research being performed.

    1. Jeremy Grimm

      Something funny happened on the way to posting my comment — apologies if it is duplicated.

      Would you agree that the drive toward “competition” in research has also tended to foster the growth of specialization and jargon — effectively compartmentalizing scientific knowledge much as the CIA compartmentalizes “secret” information? I recall stories about the changes in Bell Laboratories from about the time research became a handmaiden of the Market in the 1970s and 1980s. Around that time the sharing of knowledge and ideas across disciplines and between different research groups was replaced with cypher locks and research reports were held like poker hands. … Made me think of Watson and Crick and wonder whether the structure of DNA would have been discovered without what little data they were able to purloin from the X-ray crystallography group in the research labs where they worked. Even in those more halcyon days there were lanes and pressures to stay within the lines.

      From what I recall of Veblen’s “Theory of Business Enterprise” businesses are most definitely NOT proponents of “innovation” and “invention”. Both tend to undermine extracting the last nickel from capital already invested. From this vantage the Market would seek to control and constrain science — which, in my opinion, does indeed seem to be the case. Worse still, Phillip Mirowski builds what to me is a convincing case that Neoliberalism seeks to own and control “science” as a tool — particularly climate science. The best climate science money can buy will help build the case for climate interventions after straightforward denialism defers to Market solutions like cap-and-trade — which don’t work — finally deferring to climate engineering projects. [A scary development at RealClimate.org is the addition of “a new class of open thread for discussions of climate solutions, mitigation and adaptation”. I’ve been too uncomfortable to try parsing through that open thread — but seeing it at RealClimate.org is not a happy sign of the times now and future.]

    2. Wisdom Seeker

      @ultrapope – Yes, the metrics are having deeply adverse unintended consequences on the system.

      But you wrote something unthinkingly that you might want to ponder:

      70% of lab salaries must come from external grants. No grants = no jobs […] At the same time, grant funding from federal agencies is extremely tight these days.

      Grant funding will ALWAYS be tight in such a system, because the money goes to salaries. The more funding there is, the more workers there will be, and the more competition there will be for the available funding. Even worse, so long as salaries are inflated each year through university raises, the number of workers supportable on a fixed federal budget must steadily decrease. So unless the federal budget increases as fast as wage inflation, headcount must drop.
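      The arithmetic here is easy to sketch. A minimal toy model (the budget, salary, and 3% raise rate below are illustrative assumptions, not data from any agency):

```python
# Toy model: positions supportable on a flat grant budget when
# salaries inflate each year. All numbers are illustrative only.

def headcount(budget, salary, raise_rate, years):
    """Fully funded positions in each year under a flat budget."""
    counts = []
    for _ in range(years):
        counts.append(int(budget // salary))
        salary *= 1 + raise_rate  # annual university raise
    return counts

# $10M budget, $100k starting salary, 3% annual raises, 10 years
print(headcount(10_000_000, 100_000, 0.03, 10))
# headcount falls from 100 (year 1) to 76 (year 10),
# even though the budget never shrank
```

      The squeeze comes entirely from the compounding raises against a fixed numerator; no malice or mismanagement is required.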

      Scientists have become too dependent on the federal milk cow.

      Reply
  12. BigStupid

    I would suggest that the real damage has come as a result of misunderstanding the concept of scientific literacy, or rather equating literacy in a language with it.

    Literacy is a continuous measure, not discrete. “Able to recognize the alphabet” at the ‘not so much’ end moving towards “Able to interpret subtext and recognize subtleties”. Scientific literacy requires a fairly strong level of literacy to begin with (something that far too many of even the educated in our societies lack) and then adds requirements both mathematical and theoretical related to the field in question (and often other related fields). [Legal literacy is similar, but closer to language literacy than scientific literacy]

    If you can read and interpret the original text of something like Hobbes’ Leviathan I would suspect you are strongly English literate. If you can also perform advanced statistical analysis and some relatively high-level mathematics you stand a good chance of being at least moderately scientifically literate, provided you have beyond an introductory level of familiarity with the field. Many people can pick up a research paper and ‘read’ it, but the skills needed to derive meaningful insight, let alone critique it, are much rarer. (Those who can do this across more than a small handful of fields are nearly non-existent)

    When speaking with laypeople my most common frustration is hearing “I know what that means/you don’t have to explain it to me/I’m not stupid” when from the context of the discussion clearly they don’t/I do/it’s irrelevant (I know I can come off as an ass). To suggest to someone generally intelligent that they’re not scientifically literate is an unimaginable insult.

    The end result is that we have far too many people who claim to understand science without being scientifically literate. Some of these people then go on to form their own self-reinforcing protective bubbles funded by industries that need talking points to support otherwise unsupportable positions, because ‘science’. The arbitrary measures discussed above seem to have arisen as a non-scientific method of validating mostly the non-scientific.

    Reply
    1. Jeremy Grimm

      @BigStupid — Wow! Wow! … Wow! Your comment angers me on so many levels.

      If you are so scientifically literate, try reading an article on chemical synthesis in organic chemistry in Science Magazine. Unless your field is chemical synthesis in organic chemistry I think you might have some difficulties. You distinguish the position of a “lay” person. The proliferation of jargon makes “lay” persons of everyone outside a small coterie of the blessed and baptized in a field — in fields increasingly squeezed by proliferating jargons known to a very small number of the chosen. Is this a necessity or does it serve another purpose?

      Stealing a phrase from your comment: “we have far too many people who claim to understand science without being scientifically literate” and previous to that you bemoan the pretense of a layman’s supposed scientific literacy. I am a layman and I believe I understand science and I believe I perceive scientific bullshit and obfuscation when I see it. Are you claiming that what scientists know can only be revealed through the mysteries of their particular scientific language — the jargon privy to a field of specialization? Do you ever branch out of your specialization? How quickly do you find yourself relegated to us unwashed hordes of the scientifically “illiterate” — or do you refrain from going outside your lane? I suspect the latter.

      Sorry but I don’t stay in my lane, don’t color inside the lines, and I don’t believe science is so special or sacred that no vulgate might make its insights available to a literate public. Remember the story [apocryphal?] of Einstein [Gamow?] explaining relativity to his grandmother?

      Your closing statement: “The arbitrary measures discussed above seem to have arisen as a non-scientific method of validating mostly the non-scientific.” — is perhaps most disturbing to me. This post described some of the ways science has been turned into a commodity. The “arbitrary measures discussed”, as you put it, are used to distribute grants and they shape the progress of science as we regard it. [I may be a layman but I do hold Science in high regard as I believe you do.] Please read the post again and get upset for the right reasons. Science as it was … faces a grave threat.

      Reply
  13. H. Alexander Ivey

    Well, I don’t quite see the link or cause between (present day) capitalism and science. ‘Everyone’ can agree that present day science is not done in a scientific manner but what is the link? How does management (capitalists) pocketing obscene amounts of money at the expense of the people who made the goods and services (labour) influence how science is conducted and presented?

    The article looks like a not-so-subtle stab at the US university system (which I heartily agree needs poking!). The give-away line is:

    scientists assign numerical grades and rankings to their students

    Uh, teachers assign grades; scientists discover ‘truths’ or understandings.

    Reply
    1. Jeremy Grimm

      Ugh! Try again to spot the link between present day capitalism and science.

      The problem isn’t that — as you put it — “management (capitalists) pocketing obscene amounts of money at the expense of the people who made the goods and services (labour)” influences how science is conducted and presented. The problem is how science is funded and assessed. In the not so distant past scientists were given grants to study interesting problems based on the novelty and potential in their ideas … but not so long after that everything changed. Scientific research came to be funded by research contracts which required some sort of measurable assessment of known results — not a great way to learn about the unknown. Science became subservient to purely commercial interests — but that serves neither the interests of Science nor the interests of the Common Good.

      Reply
      1. Wisdom Seeker

        Apropos of this: “In the not so distant past scientists were given grants to study interesting problems based on the novelty and potential in their ideas ”

        That was a very short-duration “golden age”. Prior to that, scientists were mainly self-funded, sometimes supported indirectly through businesses selling to the military.

        The idea that most of our present-day science was done by a bunch of government-funded curious geniuses, walking around studying whatever they felt like, is historically misinformed.

        Reply
  14. RBHoughton

    The scientific method works. It may not be entirely right, but it works, much as Newtonian mechanics still works for the preponderance of calculations. We should trust it implicitly until there is some communicable progress toward a better way of understanding the universe than our present one. I am convinced of the case.

    Capitalism is the same – it can work satisfactorily if the fraudulent bits are excised and competition is allowed. Until then it’s just a nasty way of letting the man with the money call the shots.

    Reply
  15. Patrick

    This article seems pretty unscientific, to be honest. The goal of being quantitative is central to science. No metric is perfect (e.g. SLOCs), but they’re better than no metrics.

    As for things that scientists measure to more than 4 sig figs… did you even take physics? The speed of light in a vacuum, the gravitational constant, the elementary charge, pi… the list goes on.

    Reply
    1. Chris

      “The goal of being quantitative is central to science.”

      That may be true in the physical sciences, Patrick (and we could have a debate about whether it’s universally true), but in the social sciences quantitative methods are much less of a ‘given’. It’s not fair to say that an imperfect metric is better than no metric.

      First there’s the challenge of defining the ‘thing’ to be measured. In the case of academic rankings that’s by no means clear.

      Next comes the problem of ‘gaming’. Any metric used to manage performance will induce a change in behaviour to maximise the score, regardless of whether that changed behaviour benefits the overall performance objective.

      Lastly is the issue of crucial but non-measurable components of the enterprise. Not all that matters can be measured; not all that can be measured matters. This is true in business as much as it is in the ‘soft’ sciences.

      Reply
  16. Jim A.

    The problem with Impact Factor isn’t so much what it IS but how it is used. As a way to decide whether the library should spend limited funds subscribing to Journal of (Specialized Topic) or Quarterly Journal of the (Specialized Topic) Society, it is a reasonably good approximate measure of the utility of the journal. But university administrators and faculty try to use it as a measure of the individual articles accepted into those journals. Impact factor is MUCH more affected by a journal’s ability to capture those rare, superstar 1% very highly cited articles than by the lowest-quality article it will accept and publish. So as a measure of the quality of an academic’s work it is terrible. It is used this way not because it is a GOOD measure, but because it is a FAST measure. The people making hiring, firing, and tenure decisions don’t want to wait a couple of years to see how well the article in question gets cited. In that, it is like using stock market prices to measure how well the economy is doing overall.
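    The skew is easy to demonstrate with a toy citation distribution (the counts below are invented for illustration, though real citation distributions have a similarly heavy tail):

```python
# Toy demonstration: a journal's impact factor (a mean) is dominated
# by a handful of superstar papers. Citation counts are invented.

citations = [0] * 60 + [1] * 25 + [3] * 10 + [10] * 4 + [500]  # 100 papers

impact_factor = sum(citations) / len(citations)
median = sorted(citations)[len(citations) // 2]

print(f"impact factor (mean): {impact_factor:.3f}")  # 5.950
print(f"median citations:     {median}")             # 0
# The single 500-citation paper contributes 5.0 of the 5.95 "impact";
# the typical paper in this journal was never cited at all.
```

    A mean reported to three decimal places says almost nothing about the median paper, which is exactly the paper a hiring committee is usually trying to judge.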

    And all of this ignores the problems with using citations to measure the quality of a work. Those have been gamed so heavily these days that they are at best a very approximate measure.

    Reply
  17. Wisdom Seeker

    Apropos of the OP, there’s a HUUGE flaw in this article:

    What is the scientific method? Its particulars are a subject of some debate, but scientists understand it to be a systematic process of gathering evidence through observation and experiment. Data are analyzed, and that analysis is shared with a community of peers who study and debate its findings in order to determine their validity. Albert Einstein called this “the refinement of everyday thinking.”

    Only the first sentence is right; the rest is hopelessly wrong. The validity of scientific results has nothing whatsoever to do with peers, study, or debate. The only source of validity is whether the experiment and results are reproducible. True science “just works”, and can be reproduced by anyone willing and able to try.

    Put another way – we know the earth is round because anyone willing to look into the matter can see that it’s round. The validity of that result arises because anyone can replicate it. It has nothing to do with what peers think, nothing to do with how many paper studies are done debating the issue, nothing to do with debate at academic meetings. Dozens of academic-pet theories have died through detailed replication of competing results. The luminiferous ether. Pre-tectonics geophysics. The sun as a big ball of fire. Acquired genetic traits. The continuum theory of matter.

    This is why the teaching of actual science in schools is so important. Science isn’t what’s written in the textbooks. It’s what you can prove with your own hands and your own eyes, every day in your own life. The stuff in the textbooks can start you off closer to the frontier of knowledge, but to cross the border you have to know how to walk forward on your own.

    Reply
