Part the First: AI and Deskilling in Healthcare. Yes, it does happen, as described in the news article “As AI spreads through health care, is the technology degrading providers’ skills?” (New study suggests that, after having a specialized tool taken away, clinicians were less proficient at colonoscopies):
The AI colonoscopy tool rolled out across four health centers. As endoscopists snaked a camera through patients’ large intestines, the algorithm would draw a square around precancerous polyps known as adenomas. The more adenomas detected and removed, the less likely the patient would go on to develop colon cancer.
Researchers were interested in whether the AI could improve those adenoma detection rates. So they designed a trial: Half the time, endoscopists got to use the algorithm; the other half, they were on their own. But the researchers also took a look at a different question: Like students who try to write an essay independently after using ChatGPT one too many times, how well might doctors detect polyps without AI after they had gotten used to its help?
Not great. In the three months before the endoscopists started using the AI helper, they were finding adenomas in 28% of colonoscopies. After they had been using the AI for three months, the researchers found their unassisted adenoma detection rate fell significantly — to 22%. Researchers called their finding the first documentation of a potential “deskilling” effect from clinical AI.
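As a back-of-the-envelope check on those numbers (this is my own sketch, not the analysis in the paper, and the colonoscopy counts below are hypothetical placeholders rather than the trial’s actual sample sizes), a simple two-proportion comparison shows why a drop from roughly 28% to 22% in unassisted detection is unlikely to be statistical noise once each period includes several hundred procedures:

```python
# Hedged sketch only: a two-proportion z-test on *hypothetical* counts,
# illustrating how one might judge whether a 28% -> 22% drop in unassisted
# adenoma detection rate (ADR) is statistically meaningful. These are not
# the numbers from the Lancet Gastroenterology & Hepatology paper.
from statsmodels.stats.proportion import proportions_ztest

detected = [196, 154]    # colonoscopies with at least one adenoma: before AI, after AI (hypothetical)
performed = [700, 700]   # unassisted colonoscopies in each period (hypothetical)

z_stat, p_value = proportions_ztest(count=detected, nobs=performed)
print(f"ADR before: {detected[0]/performed[0]:.1%}, after: {detected[1]/performed[1]:.1%}")
print(f"z = {z_stat:.2f}, p = {p_value:.4f}")  # with counts of this size, p falls well below 0.05
```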
The paper is from The Lancet Gastroenterology & Hepatology, for those who have library access. Are the data convincing? Yes. The authors’ interpretation of their work is succinct:
Continuous exposure to AI might reduce the ADR of standard non-AI assisted colonoscopy, suggesting a negative effect on endoscopist behavior.
This is not surprising in any way, shape, or form, but the investigators did not expect their result. Static image analysis using AI trained on hundreds of thousands of images is very good at identifying problematic lesions, various skin cancers, for example. But treatment still requires a confirmatory biopsy. During a colonoscopy the images are anything but static. And one wants an experienced gastroenterologist using the scope to identify lesions, snip and retain them for histology, and cauterize the wound. In my world I have noticed similar deskilling as routine laboratory tasks have become increasingly automated. When a scientist is removed from the data by an extra layer, no matter how routine, results are missed. In the clinic:
Medicine’s artificial intelligence boom is predicated on the idea that doctors can be made better, faster, and more accurate with algorithmic support. But “we’re taking a big gamble right now,” said Adam Rodman, a clinical reasoning researcher and internist at Beth Israel Deaconess Medical Center in Boston. “We’re going full speed ahead without fully understanding the cognitive effects on humans.”
Perhaps this will be a nothing burger in the end. But my Spidey sense, which is based on more than forty years in the laboratory, tingles otherwise. And more importantly:
If exposure to AI does prove to degrade physicians’ skills, trainee endoscopists could be the most at risk. Consider a gastroenterology fellow who trained for three years in a program that uses AI polyp detection, and then joins a practice that doesn’t have the technology. “If this is the level of deskilling that happens when somebody who has been trained in the old way uses it for three months, what happens when somebody trains with this from the very beginning?” asked Rodman. “Do they ever develop those skills?” (No)
If clinical AI “will definitely lead to deskilling,” the first pressing question for clinicians and health systems deploying AI tools is to choose which skills they’re comfortable losing, and which are essential to keep for patient safety.
Which skills are clinicians comfortable losing? That question sits at the head of the table in the Problem-Based Learning tutorial room, where the tutor meets with eight medical students. It is also the top of a very long and very slippery slope, straight into the abyss of ignorance.
Part the Second. mRNA Vaccines on the Block. Yes, I know this will be a shock to everyone, but the evidence is not generally in RFK Jr.’s favor, as outlined in Jake Scott’s article “Kennedy’s case against mRNA vaccines collapses under his own evidence.” Dr. Scott is an infectious disease physician with an adjunct faculty appointment at Stanford University School of Medicine. He does not argue from authority, as certain others from that august institution are wont to do:
When Health and Human Services Secretary Robert F. Kennedy Jr. terminated $500 million in federal funding for mRNA vaccine research last week, claiming he had “reviewed the science,” his press release linked to a 181-page document as justification.
I reviewed Kennedy’s “evidence.” It doesn’t support ending mRNA vaccine development. It makes the case for expanding it.
The document isn’t a government analysis or systematic review. It’s a bibliography assembled by outside authors that, according to its own title page, “originated with contributions to TOXIC SHOT: Facing the Dangers of the COVID ‘Vaccines’” with a foreword by Sen. Ron Johnson (R-Wisc.). The lead compiler is a dentist, not an immunologist, virologist, or vaccine expert.
NIH Director Jay Bhattacharya has suggested the funding was terminated due to lack of public trust in mRNA vaccines. But misrepresenting evidence to justify policy decisions is precisely what erodes public trust. If we want to restore confidence in public health, we need to start by accurately representing what the science actually says.
Most of the papers listed are laboratory studies using cultured cells that express the S-protein of SARS-CoV-2. Viral S-protein binds to the surface of target cells and allows the virus to enter and begin replication and spread. It is not surprising that S-protein makes cultured cells sick. This kind of work is essential to understand the function of the S-protein, but it has little relevance for the mechanics of viral infection in the host animal, i.e., you and me.
Most damning is what’s absent. The compilation ignores the Danish nationwide study of approximately 1 million JN.1 booster recipients that found no increased risk for 29 specified conditions. It omits the Global Vaccine Data Network analysis of 99 million vaccinated across multiple countries finding no new or hidden safety signals. It excludes CDC data showing the unvaccinated had a 53-fold higher risk of death during Delta, demonstrating the critical importance of mRNA vaccination. The Commonwealth Fund estimates Covid vaccines prevented approximately 3.2 million U.S. deaths through 2022.
Based on my regular but certainly not exhaustive reading of the COVID-19 literature since the beginning of the pandemic, this is all true. One thing to keep in mind is that since late 2019 nearly 479,000 papers have been “published” with “Covid” somewhere in them. No one has read even a significant fraction of this literature. As a comparison, since 1982 about 188,000 papers are retrieved when “HIV AIDS” is used as the query. Something queer is going on here, in the science of COVID-19 and in the corrupt and corrupting business of scientific publication in the open-access, pay-to-publish-virtually-anything world.
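For anyone who wants to reproduce that kind of count, here is a rough sketch of one way to do it against PubMed via NCBI’s public E-utilities esearch endpoint. This is my assumption about how such numbers can be pulled, not necessarily how they were obtained for this post; the query terms and date ranges are illustrative, and the totals will vary with the database and query syntax used:

```python
# Sketch: count PubMed records matching a query within a publication-date
# window, using the NCBI E-utilities esearch endpoint. Query terms and date
# ranges below are illustrative, not the exact searches behind the figures
# quoted above.
import requests

def pubmed_count(term: str, mindate: str, maxdate: str) -> int:
    url = "https://eutils.ncbi.nlm.nih.gov/entrez/eutils/esearch.fcgi"
    params = {
        "db": "pubmed",
        "term": term,
        "retmode": "json",
        "retmax": 0,          # we only need the total count, not the record IDs
        "datetype": "pdat",   # filter on publication date
        "mindate": mindate,
        "maxdate": maxdate,
    }
    resp = requests.get(url, params=params, timeout=30)
    resp.raise_for_status()
    return int(resp.json()["esearchresult"]["count"])

print("Covid papers since late 2019:", pubmed_count("covid", "2019/11/01", "2025/12/31"))
print("HIV AIDS papers since 1982:", pubmed_count("HIV AIDS", "1982/01/01", "2025/12/31"))
```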
The problem runs deeper, though. There can be no doubt the COVID-19 vaccines have prevented severe disease in many people and have saved many lives, millions of them. As a colleague backstage has asked, “How many people died because we did not treat COVID-19 as a lethal respiratory virus that should have been fought with non-pharmaceutical interventions such as air filtration, better ventilation, and effective masks?”
But in my view (your mileage certainly may vary), two things happened at the beginning of the pandemic that put us on the wrong path. Pardon me for repeating myself. The first is that scientists who should have known better went all-in on vaccines against a coronavirus, even though it has been known since shortly after the identification of avian Infectious Bronchitis Virus (IBV, probably the first coronavirus identified, in the 1930s) that vaccines do not work well against coronaviruses. The corollary is that experimental mRNA vaccines were used for a problem they were not likely to solve: the production of durable immunity to a novel and lethal human coronavirus. Another thing to keep in mind is that nothing in the technical production of mRNA vaccines is experimental. The techniques have been developed over the past fifty years and are very robust. But so far, no other mRNA vaccine (Zika was the first attempt, to my limited knowledge) has worked as people have come to expect of vaccines, which is the prevention of serious disease and its transmission.
How the biomedical scientific community can back out of this cul-de-sac remains a daunting puzzle, while RFK Jr. and his minions use politics as well as anyone ever has to their advantage. Given that mRNA-based cancer “vaccines” have shown great promise, throwing out mRNA therapeutics in general is stupid beyond measure. But subtle and supple reasoning is not our strong suit these days.
Part the Third. Three Hominins Lived in the Same Place – Did They Live There at the Same Time? The first World Book Encyclopedia Yearbook we received in my house when I was about ten years old had a long article about the work of Louis Leakey on the evolutionary lineage that led to us. It was fascinating then and remains so now. “The Riddle of Coexistence,” published in Science a few weeks ago, indicates that three members of our evolutionary bush may have lived in the same valley in South Africa at the same time, about two million years ago:
One morning in April 2014, José Braga squatted at the bottom of an open pit, cleaning a wall of red sediments with a trowel. Long ago, these rocks had formed the floor of a cave, and in 1938 they had yielded a spectacular skull of an early member of the human family, or hominin. But Braga had been scouring the sediments without luck for 12 years. He was considering throwing in his trowel and going off to search for fossils in Mongolia instead.
Then, a small, bright object fell from the wall above, bounced off his thigh, and landed in the dirt beside him. “I couldn’t believe what I was seeing: a well-preserved hominin tooth!” recalls Braga, a paleoanthropologist at the University of Toulouse.
A few months later, Braga’s team excavated a piece of a baby’s upper jaw from the wall of the pit. The fallen molar fit perfectly into the jaw. Together, the tooth and jaw solidified the specimen’s identity as an early member of our own genus, Homo.
The very next year, Braga’s team found another baby’s jawbone. The two infants’ remains had lain less than 30 centimeters apart for about 2 million years, but the new one was from a very different species: a baby Paranthropus, a short, robust hominin with massive molars and jaws. And an as-yet-unpublished skull found in 2019, just a few meters away, in sediments likely to be a bit older, is different again: It may belong to a third hominin genus, Australopithecus, a group of upright-walking apes with brains slightly larger than those of chimps.
The fossils’ close proximity, in the same cave or within a short walk, suggests these creatures might have met, or at least been aware of one another. “They were both on this landscape for such an extensive period of time, there’s no way they didn’t interact with each other,” says paleoanthropologist Stephanie Edwards Baker of the University of Johannesburg (UJ). She has found Paranthropus and early Homo in the same layers at nearby Drimolen cave with geochronologist Andy Herries of La Trobe University. In 2020, they proposed in Science that the region was a meeting ground for both genera as well as Australopithecus.
Did these creatures really live together at Kromdraai? Possibly. And this is very good science that should be supported for as long as paleontologists are willing to shave red dirt very carefully with a trowel. And if the National Science Foundation is not funding some of this work by an international consortium of scientists, we should be ashamed of ourselves.
Part the Fourth. Can the Four-Day Workweek Work? Yes, according to “Biggest trial of four-day work week finds workers are happier and feel just as productive.” From July but still relevant, the conclusion is that “Compressing five days of work into four can create stress, but the benefits outweigh the downsides.”
Moving to a four-day work week without losing pay leaves employees happier, healthier and higher-performing, according to the largest study of such an intervention so far, encompassing six countries. The research showed that a six-month trial of working four days a week reduced burnout, increased job satisfaction and improved mental and physical health.
To see whether shorter weeks might be the antidote for poor morale, researchers launched a study of 2,896 individuals at 141 companies in Australia, New Zealand, the United States, Canada, Ireland and the United Kingdom.
Before making the shift to reduced hours, each company that opted into the overhaul was given roughly eight weeks to restructure its workflow to maintain productivity at 80% of previous workforce hours, purging time-wasting activities such as unnecessary meetings. Two weeks before the trial started, each employee answered a series of questions to evaluate their well-being, including, “Does your work frustrate you?” and “How would you rate your mental health?” After six months on the new schedule, they revisited the same questions.
Overall, workers felt more satisfied with their job performance and reported better mental health after six months of a shortened work week than before it.
Would this ever be applicable to all jobs? No. To all careers? No, but the number is likely to be higher than expected. Will management ever “believe” this? Don’t make us laugh. But still, this has been floating around since the Personnel Department became the Department of Human Resources. Some of us are old enough to remember the former. It was a better time. But regarding management:
A common criticism of the four-day work week is that employees can’t produce the same output in four days as in five. The study didn’t analyse company-wide productivity, but it offers an explanation for how workers can be more efficient over fewer hours. “When people are more well rested, they make fewer mistakes and work more intensely,” says Pedro Gomes, an economist at Birkbeck, University of London. But Gomes would like to see more analysis of the impacts on productivity.
Fan notes that more than 90% of companies decided to keep the four-day work week after the trial, indicating that they weren’t worried about a drop in profits.
The authors also looked at whether the positive impacts of shorter work weeks would wane once the system lost its novelty. They collected data 12 months after the start of the trial and found that well-being stayed high.
Toward the end of a long working life, it is clear to me that most of the support functions at each of my employers, public and private, academic and other, could be handled in a 4-day workweek without much trouble. And those of us who spend our time in the laboratories or offices doing and thinking about the next experiments, would get two Saturdays per week! Win, win.
Part the Fifth. On The True Meaning of Education. From young Kinley Bowers of Grove City College in her essay “A World Written: A Response to Wendell Berry’s ‘In Defense of Literacy.’” In my estimation, worth your time:
Since graduating high school, I have told people that I specialize in impracticality. I love to read, write, sketch, sculpt, play piano, act, and birdwatch—all occupations thirsty for time and tending to flatten rather than fill my wallet. I suspect that some might view me as a spritely ignoramus, dancing through cumulous visions, and fated to someday be cracked upside the head with the 9-iron of reality. But Wendell Berry’s essay “In Defense of Literacy” offers a fresh angle on the common use of the term “practical,” defining it as “whatever will most predictably and most quickly make a profit.” He then proceeds to assess two staples of practicality: predictability and speed. These dual malefactors threaten the integrity of our language which impairs our literature and ultimately debilitates enriched lives.
And a bit later:
In a recent address at Grove City College, Andrew Peterson said that he used to take walks in the woods, but now he walks beneath poplars and oaks, sycamores and redbuds. Learning the vocabulary of a thing draws it into a realm of awareness and conversation. This endeavor also demonstrates care for the thing itself. Like Peterson, I used to watch birds on the feeder. Now I watch nuthatches and woodpeckers, orioles and chickadees. I hear the songs of American robins, Eastern pewees, and Carolina wrens instead of noise from a great generalized lump called “birds.”
My question is this: Why do such good attitudes and essays seem to come from small colleges, mostly of the conservative variety?
Weren’t there studies about the possible deskilling effect of airplane autopilots? I seem to remember that some professional pilots were expressing worries about losing their dexterity as even mundane piloting tasks were taken over by software. I do not know about the actual effects, though.
Yes. That was mentioned in the article.
Thanks. The article is behind a paywall, so no luck about reading their take on that specific case.
This is the thing that immediately leapt to mind for me. In the last decade or perhaps two, commercial airline pilots have spent very little time actually flying the airplane, unless they specifically make a point to do so. (Individuals refusing available automation in order to maintain their skills is another thing that would be worth studying. My guess is that such people would come to be considered eccentric and strange by their peers, rightly or not.)
Pilots in the industry frequently express concern that they are losing skills as a result of the increasing automation of modern aircraft. Many situations that a pilot encounters occur very infrequently, so you need to do a lot of hands-on flying just to ensure that you are at the controls when one of these rare things happens, and consequently can deal with it when it happens again. Note that commercial airline pilots work in pairs, normally with a more experienced pilot and a less experienced one in the same cockpit. This seems like a setup that encourages knowledge transfer and education to the junior people in a way that doesn’t happen in other professions. However, if both junior and senior pilot spend their days just sitting and looking out the window, much of this knowledge transfer seems like it will not happen.
I’m not trying to pooh-pooh anything here. Deploying automation in a useful and safe way is something that is recognized as a very difficult problem in human technical development. One of the things that makes it harder is that profit margins are frequently wrapped up in these decisions. However, even in the absence of profit considerations, finding an appropriate way forward using automation is extremely difficult.
Because big colleges are not attractive to these more introspective types?
“Why do such good attitudes and essays seem to come from small colleges, mostly of the conservative variety?” My stab at it: the students are there because of something OTHER than ambition.
The general academic community and its members only have themselves to blame for the situation they find themselves in.
Defining scholarly ambition so narrowly, in terms of quickly gaining the necessary credentials to give oneself an aura of expertise, is, in my opinion, an extremely shallow goal that is often followed by greater and greater degrees of arrogance and often significant corruption.
Kinley Bowers, bless her soul, sounds as if she was quite excited about the capacity of ideas to actually enrich our lives. Silly girl.
Her thinking must quickly be condemned as totally unprofessional and naive because it highlights the general hollowness of our academic institutions and our careers within them.
Tossing the trowel in for the malefactors of neoliberal ambition and financialized rent seeking.
The most well-endowed universities act as plaintiffs against public regulation and taxation of their economic rents. How quickly am I able to join the hereditary aristocracy Order?
“Biggest trial of four-day work week finds workers are happier and feel just as productive”
The article is behind a paywall, so I could not find answers to two crucial questions: the work week was reduced to four days, but (1) what about the total number of hours worked and (2) what kind of jobs were considered?
I strongly suspect that if the type of activity requires a high level of physical exertion, or a high degree of attentiveness, then working four 10-hour days instead of five 8-hour days risks being significantly more stressful and exhausting (in a non-linear way) and reducing productivity. Can air traffic controllers really work longer hours on a reduced number of days? Or lorry drivers? Or removers? Or surgeons? Or crane operators?
This article suggests that they cut a full day’s worth of work, so not four tens. https://www.theregister.com/2025/07/22/4_day_week_study/
“My question is this: Why do such good attitudes and essays seem to come from small colleges, mostly of the conservative variety?”
Because most small colleges are of the conservative variety? The sentiments expressed seem quite typical of my time at Oberlin, or my father’s time at Antioch. “Conservative” isn’t an adjective often applied to either school; I suspect it’s the “small” that’s the relevant parameter.
Seems like the slop generator should be used only after an endoscopist does his/her analysis, to “check the work”, to catch any polyps missed by the human.
I’ve spoken to medical AI tool developers who have pursued this. This method often results in improved outcomes but also requires additional labor from medical staff, which is unacceptable to both funders and hospitals.
Not sure what impact it has on performance to work fewer days per week and longer hours per day, but this does seem to be the way nurses are scheduled fwiw.
Yes, but I suspect that’s so hospitals can avoid giving nurses overtime or full-time benefits. The less time a nurse spends with any specific patient, the worse I suspect the care will be.
But the greater the profit to the ‘health provider’.
You get the results you pay for – so jabs safe and effective!
I did not get jabbed.
Years ago I had read of mRNA as a prospective cancer therapy and brought it to the attention of a friend, who had cancer at the time. He was a good student of his disease and consequently bought years of life. He said the mRNA technology’s side effects were so severe that terminal cancer patients were better off without it.
If the mRNA jabs are so safe and effective, why the liability shield?
Herman and Chomsky is still worth reading.
Wow. What is the point of this comment? So you didn’t get jabbed. Who cares? You have provided no rationale to support your decision. All you did was point out some bogus and non-referenced anecdote about a friend and then hint at some sort of conspiracy. It’s clear you did not read the article (or, if you did, you have made no substantive comments about any of the numerous links provided in the post).
That, plus what does the “good student” trait have to do with the course of his cancer? Doesn’t compute for me.
You know what I have never seen? Any papers comparing our mRNA vaccines with China’s antigen-based ones. They apparently worked pretty well, and had no side effects according to China residents who have mentioned it. I had a bad side effect from the Pfizer one. So did one of my sons (his face went numb on one side for about 8 hours immediately after the second injection.) I know several others as well. Why weren’t standard vaccines made available here too? Why was there no option to switch to another type if you *did* have a reaction? It’s not like they didn’t exist.
I often think the same thing. Perhaps there are comparisons and they’re not great. The Cuban and Russian vaccines should have also been evaluated.
The Cuban and Russian vaccines should have also been evaluated.
Don’t be so frivolous! By definition, a Russian vaccine cannot work and certainly no communist Cuban vaccine is ever going to be effective. The US State Department has said so.
There are plenty of comparisons between mRNA and Novavax, but it would be interesting to see the data on other creations.
Huh? I saw plenty of reports like this one out of Singapore. Sinovac was less effective:
https://www.ncid.sg/News-Events/News/Pages/Sinovac-jabs-not-as-effective-in-preventing-severe-disease-S%E2%80%99pore-study.aspx
“Three Hominins Lived in the Same Place.” Yeah, this one has to be played carefully. So imagine our civilization collapses, but hard, and it is tens of thousands of years before the climate stabilizes enough for a new human civilization to arise. In the year 52,000 AD some archaeologists are excavating a cave in Virginia. They find material identified as coming from a North American Indian, another sample from a later individual with lots of Scottish DNA, and one with fully Chinese DNA which, unknown to that team, was the remains of a lost Chinese tourist. Tests reveal that all three finds fall within the period 1800 to 2050, a very narrow window archaeologically speaking, but no historical records survive from then. So with those three samples, how would they fare in reconstructing the movements of people in the region once known as the United States of (text lost)?
Ah, not so fast there.
I will posit that these “future archaeologists” have sufficient knowledge to be able to tell that your Anglo and Chino skeletons are both from the same ‘Homo’ line.
Now imagine that your future diggers find both the Anglo and Chino skeletons plus, this being Virginia, and close to the seat of Imperial power, a Zeta Reticulan Reptilian Overlord skeleton. That would put the cat among the pigeons.
Stay safe.
Yeah. “They Live” was actually a documentary.
Until studies are able to investigate the lowered base skill level on tasks such as performing a colonoscopy or analyzing an MRI scan, and its connection with the ability to conduct the higher-level and more specialized tasks that senior doctors do, I’m afraid we will plough on ahead assuming AI will just fill the gaps. Everyone I speak to just shrugs: “So what if AI makes us worse? The AI compensates for that.” I suppose it depends on what it makes us worse at and whether AI tools can perform those tasks.
Possible word missing? KLG, in the following sentence did you mean to say “…who spend our FREE time” ?
“And those of us who spend our time in the laboratories or offices doing and thinking about the next experiments, would get two Saturdays per week! Win, win.”
But what if AI makes everything worse but makes profits higher and reduces the independence of the medical profession? My experience says all the cogent arguments and passionate words and studies will have no effect against the golden rule – those with the gold make the rules (especially in the neoliberal West). Look at disparities in US health outcomes versus cost – they make zero societal sense, but make perfect sense for an oligarchy.
I don’t disagree with your first sentence but unfortunately think the process was well underway before AI came along. Anecdote alert: I read earlier that one of the two major weight-loss wonder drugs is hiking the price it charges the UK NHS. The NHS will cave because doctors would rather send people away with a pill than get into the thorny issues surrounding the socio-demographics of obesity.
Yet I am finding it increasingly difficult to get my antidepressant. It is literally the “mother of all antidepressants” or the “antidepressant of last resort” (quotes I’ve heard in person from older psychiatrists and/or seen in the literature). I got to it after failing 20-30 antidepressants. If this drug disappears, it’s bridge territory. Yet a drug that went off patent 50 years ago and has NO exotic ingredients to push up costs still costs the UK NHS almost £1000 PER MONTH to treat me. That is the definition of price gouging. They do it in Australia too (I saw the costs there in my 6 years living there). Ironically, the country it is cheapest in is the USA! I’ve found it at 10% of the UK price!
Part of the problem is the lack of independence of the medical profession. Doctors are instructed in their mental health rotation early in their careers that MAOIs are “like leech treatment.” This means fewer medical docs will argue for the drug, which means the producers (there are three generic ones to my knowledge) all charge a fortune… because they can! Meanwhile there are loads of US residents who have been put on this drug and have helped the “demand side.” I do not often defend the US, but here I will. Europe and Asia want MAOIs gone. Institutions like NICE in the UK are fast-tracking this process, which is why, IMNSHO, they should be abolished ASAP. Regulatory capture.
And still no one even tries to discover the reason for the elevated rates of death and disability that continue years after the Covid bug ended its death-rate spiral.
Also remember two points that are well proven: 50% of all so-called scientific papers are just plain wrong, AND during Covid the majority of journal-reported studies were financed by Big Pharma.
I guess you didn’t get the memo that many people are still getting Covid and dying of it, and that Covid is now the #1 source of childhood disability.
Pardon my late comment, but I found the use of AI in routine procedures interesting but also… a little bit disturbing? What are the use cases for sticking an AI camera all the way up the butthole? Well, it’s a reminder that I’m “due” for the procedure, and secondarily… it adds another use fee for a service rendered by the hospital or department rendering that service to the customer. Clearly one needs to have some practice, I’d think, without the assist from the AI product.
If The X-Files were starting anew on TV or streaming in 2023 instead of circa 1993, they would only need a few tweaks to their overall conspiracy-themed plot. Wealthy oligarchs and the lurking evil of our newest technology advances…