Why Won’t Academia Let Go of ‘Publish or Perish’?

Yves here. The pressure to publish is corrosive in the sciences, particularly the medical and biological sciences, where it usually takes a lot of funding to do well-constructed studies, and that sort of dough comes most readily from corporate-connected sources. But it’s not healthy in the social sciences or humanities either.

By Paul M. Sutter, a research professor of astrophysics at the Institute for Advanced Computational Science at Stony Brook University and a guest researcher at the Flatiron Institute in New York City. He is also an author, host, and speaker. Originally published at Undark

When I was a graduate student and, later, a postdoctoral researcher, I would ask senior scientists what I could do to best position myself for a tenure-track faculty position. Their response, repeated almost verbatim to a person, was deceptively simple: Just keep writing papers.

“Publish or perish” has become a mantra in academia, a tongue-in-cheek recognition of the sad state of academic affairs, and a not-so-subtle warning of the brutal expectations of the profession’s various gatekeepers. The words embody the irresistible pressure to publish as many papers as possible, as if that were the central — if not sole — measure of a researcher’s merit. The more you publish, the more likely you’ll seemingly advance, get tenure, win grants, and gain accolades.

Scientists and others have been calling out the shortcomings of this narrow-minded approach for more than a decade. They’ve noted how the insatiable need to feed the academic beast with ever more papers pushes scientists to sacrifice quality for quantity, leading to rushed, shoddy, and even fraudulent research. They’ve explained how this pressure has led to the rise of so-called “predatory journals,” which offer little-to-no barriers to publication for a price (although for many scientists outside of mainstream research centers, these journals can be one of the only ways to gain recognition). And critics have also pointed out how publish or perish results in the gaming of the publication system, with scientists seeking to gain as high a “score” as possible — as measured by publication metrics like h-indices — ignoring the principles of scientific integrity in the process.
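For readers unfamiliar with the metric mentioned above: an author’s h-index is the largest number h such that h of their papers have each been cited at least h times. A minimal sketch of the computation (illustrative only, not tied to any particular citation database) shows why it rewards steady volume over a single standout result:

```python
def h_index(citations):
    """Return the h-index: the largest h such that the author has
    at least h papers with h or more citations each."""
    cites = sorted(citations, reverse=True)
    h = 0
    for rank, count in enumerate(cites, start=1):
        if count >= rank:
            h = rank  # this paper still "supports" a bigger h
        else:
            break
    return h

# Two authors with the same total citations, very different h-indices:
print(h_index([10, 8, 5, 4, 3]))   # -> 4 (four papers with >= 4 citations)
print(h_index([30, 0, 0, 0, 0]))   # -> 1 (one blockbuster counts only once)
```

Both hypothetical authors have 30 total citations, but the metric favors the one who published many moderately cited papers — exactly the incentive critics describe.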

Some of these same scientists have proposed concrete solutions and recommendations for preventing fraud and protecting research integrity. Notable examples include the San Francisco Declaration on Research Assessment, in which signatories support avoiding mere journal rankings as a measure of research success, and the Hong Kong Principles, which, among other avenues to increase the integrity of science, encourage member institutions to employ a broader range of metrics for measuring success.

Yet here we are, years later, and nothing much has changed. Predatory journals still run rampant. Fraud is on the rise. Scientists remain suspicious of the integrity of their colleagues’ work. And yup, you still need to publish to get ahead. While department chairs may argue that they have a holistic approach to hiring and promotion, their recipes can be a closely guarded secret — and only one American university has signed onto the San Francisco Declaration. What’s going on?

I believe that progress in reforming the publication and advancement model in science has stalled because the vast majority of scientists are too entrenched in the current system to do anything about it. It’s a trap of inertia that has locked in incentive structures for everyone, from first-year graduate students to full professors. Although most of them mean well, and would probably prefer a more just and sane reward model, the system — an entire hiring, promotion, and award apparatus constructed around the locus of publish or perish — is just too big for any individual to fight.

The senior scientists who advised me years ago to “just keep writing papers,” for instance, were almost certainly giving their honest assessment. They were told that they needed to publish a lot of papers, they did, and they got jobs. They won their positions in the current system, and because the system worked for them, they naturally didn’t feel any tremendous urge to change how things work.

Young scientists, on the other hand, typically want a different system and are willing to try new ideas. But they can’t change the system from within because they’re too busy just trying to survive: They’re told the rules of the game — publish more than your peers — and if they decide not to play by those rules, they will be passed over in faculty searches for peers who do.

And then there’s the academic publishing machine, which pulls in tens of billions of dollars of annual revenue and enjoys profit margins that would make an oil baron blush. They have little, if any, incentive to encourage scientists to publish less, as that would directly impact their bottom line.

And so the calls for reform of the publish-or-perish model are met with resigned shrugs and exhausted repetitions of this is just the way it is.

As a scientific community, we prioritize publication above everything else because it’s an easy, lazy way to filter out the rafts of applicants competing for seemingly every open position, grant, and award. Ultimately, we need to recognize that this is an institutional issue operating at all levels, from grant funders to graduate students, and that the only way to solve this is through systematic reform, which takes both time and determination.

There are some promising signs that science is moving in the right direction, albeit slowly. Some departments at some universities are beginning to embrace a holistic and transparent approach to hiring, counting cross-collaboration, outreach, and leadership elements. And institutions like Syracuse University, along with departments and programs within other universities, such as the City University of New York, deserve some credit for at least signing the San Francisco Declaration. There is a growing awareness — exemplified in articles, essays, conference splinter sessions, and online discussions — that publish or perish is a problem.

The solution to the problem is simple but not easy. Universities should make their hiring decisions public and transparent, allowing applicants to see how they will be ranked and sorted before they apply. More departments should endorse the San Francisco Declaration. Universities should award dramatically fewer Ph.D.s every year, so there’s less competition for the scarce open faculty positions, and so hiring committees needn’t rely so much on simple metrics to filter candidates. Community organizations, from professional societies to graduate student unions, need to pressure universities to broaden their definition of scientific success. But ultimately, publish or perish will only perish when we, the community of scientists, collectively change our minds and decide that we can, should, and will do better.

 


32 comments

  1. Robert Hahl

    It’s not just universities, and this has been an obvious problem for much longer than the last ten years. I did a brief stint as a chemist at Merck on my way to grad school in the early ’80s. Once they were about to hire a senior chemist whom everyone I knew was excited about, but the CEO vetoed it on grounds that his publication-to-age ratio was too low. I think this nonsense started as a way to preserve power within a small circle of prestigious research institutions. Before publish-for-pay journals existed, it was much more difficult for outsiders to publish anything.

    1. lyman alpha blob

      Definitely not a new phenomenon. A favorite prof of mine who was a tremendous teacher got pushed out based on his publication record when I was in college 30 years ago, causing me to switch majors. He was beloved by the students who pushed the higher ups to grant him tenure, but it was to no avail as the department heads wanted him gone. I’m sure we students weren’t given all the details, but it sure seemed like he was canned because the others in the department were jealous of his popularity, and he was popular because he could teach, and made an extremely difficult subject (differential equations) manageable and fun to learn.

      I get that some research is expected, and if you’re going to get a PhD it’s probably something you like doing anyway. But the emphasis on publishing over everything else really is detrimental, and I’ve never understood why teaching isn’t more valued.

      1. Samuel Conner

        > and I’ve never understood why teaching isn’t more valued.

        perhaps “teaching the next generation” isn’t seen as core to the institutional mission.

        I believe that the institutions get a “cut” of grant funding, sort of a fee for ‘administering’ grants, and this can be significant. So the pressure may not be the value of the publications themselves, but the value of the funding that is drawn in to perform the research described in the publications.

        This system evolved in an era when faculty had more authority than they do now, and now it serves someone else and they appear to be trapped.

        1. John Zelnicker

          When I worked as a lab assistant for a research professor after graduating college in 1972, his grant included an “overhead” component that was paid to the university for providing him with space, certain basic equipment, and the use of department support staff for clerical needs.

  2. MarkT

    Thanks for posting this. As the article points out, it’s impossible to change a system when you’re a small cog in the machine (and your career depends on fitting in). But talking about the problem is a step in the right direction.

  3. kriptid

    Love this being posted under ‘Ridiculously obvious scams’!

    Academic science, and especially the soft sciences, have become narrative machines. You learn to navigate the waves of changing sentiments, nudging the tiller of the narrative machine slightly if you can, in the pursuit of more success. Success is defined as more grant money and prestige in your field. The unit of account required in exchange for grant money and prestige is papers published. Papers in higher-level journals (i.e. often cited) are worth more. It all, of course, results in some form of dopamine hit when you succeed. The formula is simple and everyone knows it.

    This formula informs the entire incentive structure, whether talking about publishers, grant funders, professors, post-docs, hiring committees, etc. Because ultimately, your success as a professor/fund raiser for the university depends on your bureaucratic skills (grant writing, securing tenure, establishing relationships with the right people in your department, wooing students/post-docs). Everything exists in orbit around the gravity of this fact.

    Mastering bureaucratic skills is a much more pragmatic way to secure a future in science than merely focusing on ‘doing good science.’ As someone who has published in some high-tier journals in the biological sciences, I can tell you that polishing a turd that fits a narrative bias in the field is often all you need to get a paper published; groundbreaking scientific insights are much more difficult to get people to pay attention to and accept for publication, and they also take 10x as much work. Thus, focusing exclusively on ‘good’ or ‘disruptive’ science is a path towards ruining your career, most likely, should you wish to have one, especially before you’ve become firmly established. Even then, that type of science can only ever occupy a small percentage of your time.

    The effects of this perversion in the incentive structure are manifold: gradually projects get smaller, more incremental, less imaginative, less risky, more sensationalized, more hivemind-driven. Narrative-breaking negative data is held back, and instead of cataloging a chronological unfolding of events with notes regarding setbacks and adjustments, you instead reveal, at the end of your study, some elegant-looking zeitgeisty narrative with the data tortured desperately to look as dramatic and interesting as possible. And if you decide against doing things this way, you can be assured that you are competing at a profound disadvantage.

    And nobody will ever be willing to take the first step in that direction, no matter how hard I or the author of the article wishes. The entire system needs to be dismantled and rebuilt, in my opinion, but don’t hold your breath; there are too many hands in the cookie jar, from the funding agencies on down, to accept any changes that could encroach on their respective shares.

  4. Lex

    Academia adjacent because my wife is a director of research (grants) at a MAC university and before that a small state university. Academia’s weird, man. When she was getting her PhD it was clear that 2/3 of the difficulties in completing a degree at that level resemble hazing more than academics. So I asked the many faculty who populate my social circle. They pretty much all said, “it was done to me so I will do it to others.”

    The focus on publishing and grants seems like it is also related to the weird socialization of academia. It’s weird and clique-based, like high school. Sometimes I wonder if never having left school has a tendency to retard social development. The tenure application is similar to the PhD in terms of hoop-jumping. But there are so few tenure positions left that the pressure is intense. And at least at the local uni, new faculty barely make $50K … with a PhD.

    1. Bun

      When she was getting her PhD it was clear that 2/3 of the difficulties in completing a degree at that level resemble hazing more than academics

      Not a joke. I came to realize that the greatest value in a PhD is identifying those stubborn, pigheaded, driven, yet highly intelligent people who are able to fight through all the BS and still get it done (namely, go from a novice to an expert in a handful of years). That is a rare and highly sought-after combination of traits not only in academia, but in the ‘real world’ in general.

      So contrary to what others say, I do not think the number of PhDs ought to be reduced. I DO think that the generic benefits of PhD training for the ‘real world’ ought to be emphasized a lot more, with more ‘real world’ aka business skills added to standard curricula, to prepare grads better. The fact is most will not end up in research positions. Yours truly among them.

      1. BlakeFelix

        I feel like they might be putting all that energy into something more useful if they weren’t in academia though. Some of that grinding is productive but I don’t know how much, or the opportunity cost.

    2. flora

      Yep. This from the original post:
      Their response, repeated almost verbatim to a person, was deceptively simple: Just keep writing papers.

      How does this advance science or other disciplines when the ‘Publish or Perish’ dictum results in early and mid-career academics merely regurgitating, in other forms, a tiny subset of a larger work, their earlier publications now cannibalized in smaller increments to meet the dictum? (Academics know what I’m talking about.) Important research may take several years of analysis and follow-up. “Publish or Perish” seems much like Wall Street’s myopic focus on quarterly returns, as if that were any meaningful metric of the real economy. My 2 cents.

      1. flora

        adding: “Publish or Perish” is, imo, a neoliberal drive toward “efficiency” where efficiency has nothing to do with the scientific quality and importance of the result.

        1. flora

          Much shorter: Thank goodness Galileo and Newton and Leibniz and Curie and Einstein didn’t have to deal with ‘Publish or Perish.’

  5. GramSci

    The path to academic success is well-known. I call it the Will Smith technique: find the highest-grossing genre and replicate the oeuvre, copiously citing the members of the most prestigious editorial boards. The ironic payoff is that, when the practitioners achieve tenure, they have no ideas of their own for academic freedom to protect.

  6. Terry Flynn

    The clue is in the name: publish or perish. Here is my Google Scholar profile. My h-index is, frankly, phenomenal compared to most people in my former field, including those still publishing and not bad considering I left the field a few years ago. It’s a first mover advantage. Things won’t move on til enough old schoolers die off – if there’s a better theory of human decision-making then that expression also relates to me and those who use my methods.

    Look at my most highly cited work. It SHOULD NOT BE. Why not? It has a BIG FAT ERROR IN IT! Thankfully the “main results,” if you do things according to the methods advocated there, aren’t affected, but the log-likelihood is wrong. I spotted it at a point at which it was too late to correct (and the peer reviewers missed it too), and I got the CORRECT algorithm published in a (DELIBERATELY) open access journal within a year. Phew. Trouble is that journal isn’t part of the publishing “PMC” (Elsevier etc.) so doesn’t have the same “clout”. Result? Everyone quotes “Flynn et al 2007” without family-blogging reading the article when they should really quote Flynn et al 2008.

    This is an illustration of a wider problem in academia, referred to only a week or so ago here regarding vitamin D: the article calculating the minimum recommended dose for people in winter at high latitudes is out by an order of magnitude and nobody spotted the mistake and everyone just quotes the original article! I was taught back in 1998 when starting my PhD to always always go back to the ORIGINAL paper and read it carefully. Chinese Whispers is distressingly common.

    Postscript – I got the chance to put the record straight via a book deal – thankfully the book is my 4th most cited work and increasing rapidly. I just wish academic royalties were higher :-(

  7. Dean

    It is all about the money. Faculty in the sciences are encouraged (forced) to bring in grant support so that the department and university can share the overhead. I have seen departments build new research facilities without providing individual faculty labs. Rather, they provide large open research space. The more grant money you bring in, the more space you get. Anyone losing a grant is in trouble because the space needed to continue research is limited. Departments may require faculty to bring in their own salaries, or match their salaries if they are on hard money. I know of a department whose policy (unspoken but well known) was:
    No R01 = no promotion to full professor.
    You are not getting the money without the publication record.

    1. Grumpy Engineer

      It is all about the money.

      Yes. That’s especially true in engineering and the hard sciences. I was looking at a career in academia when I was wrapping up my Ph.D., but it became pretty obvious that the top priority was actually to bring in as much money as possible through industry funding or grants. The number of papers you published was of secondary priority. And the quality of your teaching? Pfft. That was way down on the list. You merely had to be “not utterly awful” at teaching.

      And given that I’m a rotten salesman, I already knew that I’d do poorly grubbing for dollars. So I headed out for industry.

    2. Bun

      That certainly was a factor in my experience at US schools. It was highlighted in my job interviews each and every time.

      Up here in Canada it is much less so in my experience. Still, publication record is important, up to a point. It’s more a threshold effect: below a certain standard is not good and one might perish, and above it you won’t, though your standing within the department will certainly depend on the quality of your work.

  8. The Rev Kev

    This problem goes way back. I have a book called “The Making of a Surgeon” by William A. Nolen, who wrote about his training as a surgeon in the late ’50s and early ’60s, mostly at New York’s Bellevue Hospital. In one part of the book he described a fellow surgeon who wanted to work for a prestigious research doctor but was told he would first need a good track record of research papers, which led to a few misadventures. So this problem goes back at least sixty years. It was a long time coming.

    Since I referenced ‘misadventures’ I think that I should explain. This guy decided to research maggots’ ability to clean wounds, so he ordered a consignment of maggots from some place in the South. As it was late Friday when they arrived, he decided to leave the blowfly maggots in the lab he shared with others and tackle them come Monday morning. On Monday the maggots were gone but alas, the blowflies were not.

  9. John Mc

    The intertwined toxicity of neoliberalism, careerism, and higher education reminds me of a favorite quip by Lambert (amended here only for the topic):

    1A. Go Publish *** (because Academia is a market)
    1B. Go Publish More (this is your only important function – pay for yourself, pay to play)
    1C. Go Get a Grant (rainmakers are rewarded, pinnacle of academic life R01)

    2. Go Die (Perish in the ignominy of failure and die)

    *** do not forget to pay homage to the academic gatekeepers as they hold your future in their hands and collect your $200 in parking fines for not passing go…

  10. t

    In all areas of life, it’s easier to count things, like publications, than to know enough to recognize good work.

  11. Jesper

    My belief is that the reason why the situation is what it is relates to a fear of people getting paid too much. To minimise the risk of anyone getting paid too much, or even paid at all, some sort of measurement needs to be done.
    Only the worthy shall have anything, so, as decreed by the PMC, measurement be done….

    If there is a perceived need for a measurement then what is measured might have to be seen as objective and numbers are seen as neutral. Once the measurement (target) was established then people tend to game the system and to stop the attempts of gaming the system then adjustments need to be made. Gaming continues and the system is made more complicated….
    Publish or perish seems to be on the road to becoming more complicated in the hope that it will be more ‘fair’ and ‘just’.

    Possibly related: Some believe that the reason why teachers in Finland have better outcomes than in many other countries is that teachers in Finland (supposedly at least) are trusted to be teaching, while in other countries teachers are not trusted to be teaching and therefore have to teach to tests.
    Once control is needed then energy and effort has to be directed towards that control. Sometimes that energy and effort leads to improvements, other times it leads to something else entirely.

    I do not think the situation will improve, my belief is that the targets will be made more complicated until there is no time to do anything productive and all available time is spent on gaming the system – possibly then it might collapse.

  12. LAS

    Is it pressure to publish or rather pressure to get grants/other funding which in its turn requires publishing to try and persuade that intelligent use is made of the money? I can’t determine for sure, but suspect it is really about the money, the university as a captain of industry, home to the rainmakers. True scholarship is not all that desirable for itself.

  13. hk

    A critical problem is the lack of good measure for what “good research” or “good teaching” entails.

    A running joke from the time I was in academia was that 95% of all published articles, even in top journals, are crap. They are published largely because they successfully cross various “bureaucratic” checkpoints depending on established customs and practices of the discipline of the times, not because they raise interesting questions or bring forth unexpected insights. Indeed, the centrality of “established customs and practices of the discipline” for evaluating a “good” paper means that a paper going off the path can’t be evaluated properly at all, since it strays far from such things. It’s not fair to call this “gaming” the system, but it comes close to it: understanding what is “fashionable” and catering to it over and over is the best way to get ahead.

    But what’s the alternative? Really “interesting” and “innovative” research is rare and hard to evaluate (customs and practices do provide a useful measuring stick, after all). Is X actually innovative, or is it just crazy? What about “teaching,” you say? I don’t think it is nearly so easy, especially in social sciences or humanities. In natural sciences or engineering (up to a point), say, “right answers” are at least easily defined, and good teaching is how to get to them most effectively (but this, in turn, feeds the focus on “customs and practices” for evaluating “good research” later). In social sciences (and even more so in humanities), this is harder since the “right answers” are often for “convenience” rather than actual reality: firms don’t actually maximize profits unconditionally, etc. For social sciences and humanities to be actually “useful” for students (and for research, too), they need to define “the interesting questions” a lot better, and very few people are doing it, in part because it is difficult and there’s no professional reward for it. Usually, “good teaching” winds up being not much more than reinforcing whatever preconceived notions students bring to class and supplying them with justifications about why they were right all along, and that, in a sense, is at the root of problems in much of US higher ed. Then again, though, what is the solution? Why do you study history, politics, or literature?

  14. David

    A lot depends on what you mean by “academia.” I have yet to meet a real academic from any country who believes the current system is working well. It’s not just paper citations, it’s the eternal pressure to tweet your latest research or the last conference you attended, have your own Fbk and LinkedIn page, and publish anything you can scrape together. So who benefits?
    Administrators, of course, and not only in Universities but also in Education Ministries and donors. After all, how do we know this money’s being effectively spent? What are these academics actually doing all day? Well, we don’t know anything about nuclear physics or Chinese poetry, but we can count to ten, more or less. So administrators have distorted academia into something that they can understand and measure, and thus acquired power over people who are a lot cleverer than they are. It’s the old story: what matters is whether you can measure something, not whether the measurement means anything.
    Now, Dr Wittgenstein, I see that you haven’t published anything since 1919 ….

    1. flora

      I have yet to meet a real academic from any country who believes the current system is working well.

      Indeed.

  15. Bun

    This is simply another example of a larger conundrum: how does one evaluate the effectiveness or capability of a large pool of candidates? In school they use grades. Do grades map 1-1 onto capability? Of course not. It’s a primary metric because, well, there aren’t many sort-of standard metrics that can be used to assess candidates.

    Similarly when it comes to publications, and their “quality”, in postgrad academia. To be clear, publications are not the only metric used in some hiring scenarios; e.g., for a postdoc or research associate position you can be hired based on connections made in collaborations or at conferences (that got me to MIT). However, for widely-advertised and coveted permanent positions like faculty, where sometimes hundreds of applications may need to be assessed, it’s inevitable that the publication record becomes a standard metric in the evaluation.

    Really, there aren’t many metrics to choose from when evaluating large pools of candidates where it’s not possible to do a deep dive on each one. Another is your institution: a candidate from MIT or Oxford will get a closer look than one from U of Manitoba, even if the latter is more capable (if we could somehow know that!)

    But it’s definitely NOT the only one. When hiring committees get down even to their ‘medium’ lists, other factors certainly come into play. And once at the interview stage, publications cease to have major importance. Then it’s about ‘fit’, resilience, creativity, EDI considerations, teaching, service work, etc.

    Hiring committees in my experience KNOW all this, and consciously try not to over-emphasize publications. Still, the feeling is that publication record is one of the useful proxies at the first-cut stage, after which fuller evaluations can be done. In the end, there are almost always multiple quality candidates who would fit the role just fine, so in the final analysis it’s felt that despite the less-than-savoury process, a satisfactory result will eventually emerge. And that, more than anything, IMO is the reason for the huge inertial resistance to change.

  16. Sub-Boreal

    It’s certainly not a new phenomenon that much of what gets published in academia qualifies as what I call “CV roughage”: lacking in intellectual nourishment, but a source of fibre that assists passage along the academic career path.

  18. nielsvaar

    I wonder how a mass movement by academics to publish scientific articles anonymously would fare as a form of resistance. It could retain some marker of identity (e.g., department/university), but remove the ultraindividualist spirit of competition within the academy.

    It would also begin to sort out two other problems: (1) that of the tenured prof who puts their name on everything the department produces (see Josef Perner and his so-called 180 articles for a salient example), and (2) it would begin to filter out garbage science and short-form/review/opinion articles by professors who are out of touch but draw clicks to a journal.

    It also keeps the science about its subject matter. In this sense, even the act of publishing takes on a more materialist shine.

  19. square coats

    I feel like this article points a bit at the systemic nature of this problem but then mostly falls back on a sort of collective, yet still ultimately individual, responsibility approach to it. I think a lot of the comments people have left so far have gotten more at a systemic approach; in particular (at least standing out in my mind at the moment, and not meaning to be dismissive of anyone else’s comments), I feel like Jester is making some good points in this regard.

    One thing I’m wondering about is what is required to maintain a successful academic journal. I believe that many journals are maintained by these various conglomerates/databases/platforms (Elsevier, Wiley, etc.), but it seems many (most?) journals exist independently, while no doubt relying heavily on these databases for readers and revenue. How much freedom do journals have in deciding what to publish? To what extent is the decision-making process constrained by market concentration driven by journal databases? What are the different sources of revenue available to journals? Are journals competing for funding from some of the same sources as researchers looking to eventually get published? What effects do efforts to publish research on open-source platforms have on the system? I’m not asking these rhetorically; I’m wondering what thoughts anyone has about this stuff, and these are just some particular questions I arrived at while reading the article and comments.

    Thinking more along the lines of personal responsibility, I’m thinking about my dad, who doesn’t work in academia but has published countless academic-oriented research papers in his field of interest while working in research in a more or less related (broadly, tech) industry. Some of the work has been supported by the companies he’s worked for, but a lot of it has been done in his time outside of work (though sometimes his employer at the time would pay for him to go to a conference he was invited to, because the company felt it would reflect well on said company). So within his semi-niche area of interest he’s quite well-known and respected, and he’s spent a lot of his time outside of work being very, very active in several major conferences for decades now. He tries as much as possible to use his stature/clout to help young people in grad school, or newly working in fields related to his, to get their work published. He also tries to steer the culture of the conferences he participates in toward the same end (holding various unpaid administrative positions, sitting on review panels or overseeing review-panel work, actively participating during the conferences themselves). I’m just wondering what people think about trying to change things this way too?

    I have a friend in a different sort of niche-ish tech industry who has become pretty active over the last few years in the academic/industry crossover conferences and a handful of journals (that is to say, the work people in the field are doing to bring the academic and industry spheres together as a professional and research community). This is in a fairly new field of tech (UI/UX and also information architecture), and it seems like a lot of people involved in the field are considering it from a simultaneously academic and industry-oriented perspective, maybe more so than in other, more long-established fields. I wonder what it will look like as it moves forward.

  20. lovevt

    Betrayers of the Truth: Fraud and Deceit in the Halls of Science is a book based on a Congressional hearing about fraud in academic journals and in the article-review process prior to publication. I read the book and learned about female academic writers submitting their articles only to have their research stolen by male reviewers.

Comments are closed.