Coffee Break: The Future and Follies of Science and AI as Automation, for Better or Worse


Part the First: Who Will Supplant the United States in Scientific Research?  Before going any further in answer to The Rev Kev’s suggestion from last week, it is important to note that while the US currently remains the acknowledged leader in scientific research, this is a matter of quantity as much as quality.  Other countries do better in some respects because support there does not depend on success in a grant lottery, but their research footprint is smaller.  No place is perfect, but there are few American scientists who do not look at Canada and Europe and Australia with some wistfulness.  This will eventually include China.

Thus, my unequivocal answer to the Rev is China, which would not have been my choice not so long ago.  But then I would never have anticipated the outright attack on American science by the current Administration.  When I began working in the laboratory in the 1970s, Chinese scientists were not uncommon, but they were mostly from Taiwan, with a few from Hong Kong.  The first scientist I ever met from the People’s Republic of China (PRC) was a botanist/plant biochemist named Mr. Hu.  He was “rehabilitated” in the late 1970s after being sent to the provinces during the early days of the Cultural Revolution.  Our department hosted him for two years, after which he returned to his former academic position at a higher rank.  Mao was still dead and the Gang of Four were on the way out.

A short twenty years later, when I was a research associate, scientists from the PRC were everywhere, not without some friction once in a while.  But they are very good scientists, and most of those I knew seem to have remained in the US.  However, as support for research has improved in China, many researchers with a connection to the country are returning.  And the PRC is recruiting.  By most measures China is close to the US in the quantity of its scientific research, and I expect it will pass the West in quality soon, despite the blinkered view of American politicians and a few scientists.  This would not have crossed any of my colleagues’ minds ten years ago, but there you are.  Good science is good science, no matter where it is done.

A short commentary in Nature (June 9) outlines a likely trajectory.  Chinese scientists will stay home and build the institutions required, and leading international scientists other than the convicted liar Charles Lieber, formerly of Harvard, will accept research positions in China.  Joseph Needham, who has been one of my most important teachers in how to be a scientist, would approve.  It is true that China will struggle due to “concerns about academic autonomy, institutional transparency and quality of life…(and)…the strength of China’s home-grown research and innovation system will hinge on cultivating a truly open and supportive environment where top talent can remain and thrive.”  I expect this to happen sooner rather than later, as the current global hegemon presides over a terminal Decline of the West undreamt of by Oswald Spengler.

Absent latter-day Armageddon, China’s home-grown research establishment will surpass all others in the lifetimes of my children.  They are a very patient people.  We are a very fickle and by definition unserious people.

Part the Second: Follies of American Science, Continued.  As China rises, America does something else.  Still, as the headline puts it, Senators push back on Trump’s proposed $18 billion NIH budget cut, as Jay Bhattacharya offers to “work with Congress.”

National Institutes of Health Director Jay Bhattacharya faced sharp questions on Tuesday from Republican and Democratic members of a Senate Appropriations subcommittee about the agency’s 2026 budget, with lawmakers struggling to reconcile his stated commitment to biomedical research with recent grant terminations, funding delays, and the Trump administration’s sweeping proposed spending cuts.

Well, some things are irreconcilable and always will be.  Senator Dick Durbin of Illinois noted that Northwestern University (Chicago and Evanston) has had “1,300 awards…terminated or frozen, including $9 million for clinical trials in colon, brain, and childhood cancers” as a result of an $18 billion cut to the NIH budget.  Other Senators chimed in with similar comments and noted the Administration had terminated a 20-year effort to develop an HIV vaccine.  I suppose someone at DOGE wonders why this has taken so long, but an effective vaccine against HIV (i.e., a vaccine that prevents the disease and its transmission) will probably come before a similar vaccine against pandemic coronaviruses, emphasis on the plural, especially as humans continue to push into areas that harbor these and other zoonotic pathogens.

The response of Jay Bhattacharya, MD-PhD, about increasing funding for Institutional Development Awards (IDeA) was nothing but nonsensical distraction:

“In my mind, it’s (IDeA) probably less funded than it ought to be.  And I actively would love to work with Congress to think of ways that we can make NIH investment in scientific research more geographically dispersed than it currently is,” said Bhattacharya, adding that he believes the concentration of NIH funding among a small number of top universities has led to scientific group think.

“I actively would love to work,” instead of inactively?  Never mind.  The IDeA Program is old, and it works as well as it can.  My previous institution was eligible for such awards, which “spread the wealth” by setting aside a pot of money for research at institutions in states in the bottom half of NIH research support.  This is noble, and it works.  But it is also a drop in the bucket.  The big states with the big universities and independent research organizations (e.g., Scripps, Salk, Fred Hutchinson) will remain what they are.  Maybe.  We can hope.

For a one-stop shop to see where NIH funding to medical schools goes, the Blue Ridge Institute for Medical Research (BRIMR) is essential.  A good place to begin is the Schools of Medicine link on this page.  It opens an Excel spreadsheet that includes 148 medical schools.  That the Top-20 medical schools receive about 50% of the support is not an accident, and it does not lead to “scientific group think.”  The scientists at the University of Kentucky College of Medicine (55) and the University of Georgia (unranked, as the highest-ranked university without a medical school; its new medical school opens in 2026) think exactly like those at UCSF (1), Johns Hopkins (6), and Jay Bhattacharya’s triple alma mater Stanford (7).
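For readers who want to check that Top-20 figure themselves, here is a minimal sketch of the calculation, assuming the BRIMR workbook has been downloaded locally.  The file name and column name below are hypothetical placeholders; adjust them to match the actual spreadsheet:

```python
# Minimal sketch: share of NIH medical-school funding received by the
# 20 best-funded schools, computed from the BRIMR rankings spreadsheet.
# "brimr_medical_schools.xlsx" and "NIH_AWARDS_USD" are assumptions;
# rename them to match the workbook actually downloaded from BRIMR.
import pandas as pd

df = pd.read_excel("brimr_medical_schools.xlsx")        # hypothetical file name
df = df.sort_values("NIH_AWARDS_USD", ascending=False)  # hypothetical column name

total = df["NIH_AWARDS_USD"].sum()
top20 = df["NIH_AWARDS_USD"].head(20).sum()
print(f"{len(df)} schools; Top-20 share of funding: {top20 / total:.1%}")
```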

One other thing to note here: As shown in the spreadsheet, the overall indirect cost rate (overhead) for this extramural NIH research is 28%.  This is a bargain by any reasonable definition.  And no, medical schools are not getting rich on indirect costs.  These institutions, public and private, provide the built environment and resources necessary for biomedical research, while spreading the wealth beyond Washington DC and environs.  This was the vision of Vannevar Bush (of MIT) eighty years ago, and he was correct.  NIH extramural research funding is an incalculable force multiplier of the work done at NIH in Bethesda, Research Triangle Park, and a few other locations.  And yes, I am aware that the entire process can be improved.  But this is not what the current Secretary of Health and Human Services has in mind.
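To make the overhead arithmetic concrete, here is a worked example with round numbers of my own choosing (the 28% rate is from the spreadsheet; the dollar amounts are invented for illustration):

```latex
% Worked example with illustrative round numbers: a grant with
% $1,000,000 in direct costs at the 28% overall indirect rate.
\[
\text{indirect costs} = 0.28 \times \$1{,}000{,}000 = \$280{,}000,
\qquad
\text{total award} = \$1{,}280{,}000
\]
% The overall rate reported in the spreadsheet is simply
% total indirect dollars divided by total direct dollars:
\[
\text{overall indirect rate} = \frac{\text{indirect costs}}{\text{direct costs}}
= \frac{280{,}000}{1{,}000{,}000} = 28\%
\]
```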

Part the Third: Can’t Anyone Here Play this Game?  Continuing with this thread, The Ol’ Perfesser Casey (“Can’t anyone here play this game?“) Stengel comes to mind with this: HHS reverses hundreds of CDC firings.

A spokesperson for the Department of Health and Human Services, which oversees the CDC, confirmed that the Atlanta-based agency will bring back more than 450 personnel who were initially fired as part of a department-wide reorganization.

That reorganization, directed in part by the U.S. DOGE Service, has seen the department downsized from approximately 80,000 employees to 60,000, with some of the deepest cuts to the CDC, the National Institutes of Health, and the Food and Drug Administration.

Among the divisions reinstated are the National Center for HIV, Viral Hepatitis, STD, and Tuberculosis Prevention; the National Center for Environmental Health; the Immediate Office of the Director; and the Global Health Center. Those centers include programs that work to keep cruise lines safe from disease, prevent childhood lead poisoning, and track and prevent HIV.

HHS Secretary Robert F. Kennedy Jr. previously said that at least 20% of the department’s cuts were “mistakes” and that it was “always the plan” to reinstate some employees.

Let me get this straight.  Needlessly upending the lives of people doing essential scientific work on a whim was just a mistake?  Good to know.  Come to think of it, they are playing this game exactly as intended, but their intentions are ill-considered in the extreme.

Part the Fourth: And in Other News. RFK Jr. names new members of CDC’s vaccine advisory panel.  The new members are:

  • Joseph R. Hibbeln, a psychiatrist and nutritional scientist who previously worked on nutritional neuroscience at the NIH
  • Martin Kulldorff, an epidemiologist formerly at Harvard Medical School who has served on an FDA safety committee as well as the vaccine subgroup of ACIP
  • Retsef Levi, a professor of operations management at the MIT Sloan School of Management
  • Robert Malone, a physician who conducted early research on mRNA vaccine technology
  • Cody Meissner, a professor of pediatrics at Dartmouth’s Geisel School of Medicine who has previously held advisory roles at the CDC and FDA, including as an ACIP member
  • James Pagano, an emergency medicine physician
  • Vicky Pebsworth, a nurse with a Ph.D. in public health who has previously served on FDA vaccine advisory committees
  • Michael Ross, an obstetrician and gynecologist who has served on a CDC advisory committee for the prevention of breast and cervical cancer.

Four of these new members of the panel (down from seventeen, the better to manage outcomes?) were listed in the dedication of RFK Jr.’s book The Real Anthony Fauci: Malone, Kulldorff, Pebsworth, and Meissner.  To call this book tendentious is an insult to the word, but its references were not hallucinated by ChatGPT or equivalent.  They were chosen and misinterpreted the old-fashioned way, intentionally.  Politicians go with people they know, but a few comments about two of the new members may be in order.

Martin Kulldorff is one of three authors of the Great Barrington Declaration (Bhattacharya was another), which was one of the primary sources of the “Let ‘er rip” approach to COVID-19, under which herd immunity would be reached in a matter of months while the vulnerable were protected.  Not exactly.  Herd immunity would most likely require durable immunity to the pathogen, which is not elicited by coronaviruses or by vaccines against coronaviruses.  Herd immunity (short animation) works for measles, as long as more than 90% of the population is vaccinated (see the arithmetic below).  Immunity to measles through previous infection or vaccination is very durable.  As for protecting the vulnerable, that was left to our imaginations, back when we had no idea of the natural course of SARS-CoV-2 infections.  One should never generalize (too much) from his or her own necessarily limited experience, but I have had two very good friends, both very healthy, die of COVID-19 sequelae.  Yes, I am still angry about that and will remain so until I join them in the Great Beyond.
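The arithmetic behind that 90% figure is the standard herd-immunity threshold: for a pathogen with basic reproduction number R₀, the fraction of the population that must be immune is 1 − 1/R₀.  Using the commonly cited measles estimates of R₀ between 12 and 18:

```latex
% Herd-immunity threshold (HIT) for basic reproduction number R_0,
% evaluated at the commonly cited measles range R_0 = 12 to 18.
\[
\mathrm{HIT} = 1 - \frac{1}{R_0}
\]
\[
R_0 = 12:\quad 1 - \tfrac{1}{12} \approx 92\%,
\qquad
R_0 = 18:\quad 1 - \tfrac{1}{18} \approx 94\%
\]
```

The calculation only means something when immunity is durable, which is why it works for measles and not for SARS-CoV-2.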

Robert Malone has claimed to be the inventor of mRNA vaccines.  As mentioned here before, he was the first author of the first published study showing that a foreign mRNA could direct expression of its encoded protein in cultured mouse cells.  He later worked on optimizing the procedures for transfecting cultured mammalian cells with foreign mRNAs.  Immediately prior to the COVID-19 pandemic, he was coauthor of a few papers on rapid responses to emergent infectious diseases.  I searched those papers in vain for any mention of mRNA.  mRNA vaccines were invented by no one in particular, and certainly not by Dr. Robert Malone.

For those who can surmount the paywall, more information on the new panel is here.  STAT’s gloss on Dr. Robert Malone:

Malone has both claimed he was one of the inventors of mRNA and denounced the technology, rising to prominence within the anti-vaccine universe and among critics of the Covid response through frequent appearances on podcasts during the pandemic. While Malone did some early research on the technology, he didn’t play a major role.

Malone is a trained physician and researcher. He gained wide attention for questioning the safety of Covid shots and spreading conspiracy theories on Joe Rogan’s podcast in late 2021. He also spoke at rallies and other events in opposition to Covid shots, including alongside Kennedy.

That about covers it concerning Dr. Robert Malone.  Those intrepid souls who so desire can find Dr. Robert Malone and his alter ego Dr. Bret Weinstein all over YouTube.

Part the Fifth. Gene Therapy that Works.  We discussed gene therapy for hemophilia here in March 2023.  In a follow-up, this paper from NEJM shows the therapeutic effect endures for at least thirteen years after initial treatment using an adeno-associated virus (AAV) vector to deliver the gene for the missing Factor IX to patients with Hemophilia B.  The paper is behind a paywall, so an abstract of the Abstract is included here:

Adeno-associated virus (AAV)–mediated gene therapy has emerged as a promising treatment for hemophilia B. Data on safety and durability from 13 years of follow-up in a cohort of patients who had been successfully treated with scAAV2/8-LP1-hFIXco gene therapy are now available.

Ten men with severe hemophilia B received a single intravenous infusion of the scAAV2/8-LP1-hFIXco vector in one of three dose groups (low-dose: 2×10¹¹ vector genomes [vg] per kilogram of body weight [in two participants]; intermediate-dose: 6×10¹¹ vg per kilogram [in two]; or high-dose: 2×10¹² vg per kilogram [in six]). Efficacy outcomes included factor IX activity, the annualized bleeding rate, and factor IX concentrate use. Safety assessments included clinical events, liver function, and imaging.

Participants were followed for a median of 13.0 years.  Factor IX activity remained stable across the dose cohorts, with mean factor IX levels of 1.7 IU per deciliter in the low-dose group, 2.3 IU per deciliter in the intermediate-dose group, and 4.8 IU per deciliter in the high-dose group. Seven of the 10 participants did not receive prophylaxis. The median annualized bleeding rate decreased from 14.0 episodes to 1.5 episodes, which represented a reduction by a factor of 9.7. Use of factor IX concentrate decreased by a factor of 12.4. A total of 15 vector-related adverse events occurred, primarily transient elevations in aminotransferase levels (indicative of transient mild liver dysfunction). Factor IX inhibitor, thrombosis, or chronic liver injury did not develop in any participant. Two cancers were identified but were deemed by the investigators, together with an expert multidisciplinary team, as being unrelated to the vector. A liver biopsy that was conducted in 1 participant 10 years after the infusion revealed transcriptionally active transgene expression in hepatocytes without fibrosis or dysplasia.  Levels of neutralizing antibodies to AAV8 remained high throughout follow-up, thus indicating potential barriers to readministration of the vector.
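For quick reference, the dose cohorts and the factor IX levels quoted above, laid out as a table:

Dose group      Vector dose (vg/kg)   Participants   Mean factor IX (IU/dL)
Low             2×10¹¹                2              1.7
Intermediate    6×10¹¹                2              2.3
High            2×10¹²                6              4.8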

Coagulation factors are synthesized in the liver and secreted into the blood.  Thus, transduction of the patients’ livers with the AAV vector resulted in stable expression and secretion of Factor IX.  This was a one-time treatment, and these patients were no longer dependent on subsequent infusions of purified Factor IX.  However, the AAV8 vector induced a persistent neutralizing antibody response, which means that readministration of the treatment would likely be blocked, or could provoke a systemic immune response.

Years ago, scientists got ahead of themselves, and an immune response killed a healthy young volunteer in one of the earliest tests of the feasibility of this kind of gene therapy.  The current paper shows how biomedical and clinical sciences work together – incrementally, based on deep knowledge and sound practice.  Which leads me to believe the Autism Moon Shot advertised by Secretary Kennedy will have trouble finding the cause(s) of autism spectrum disorder by September.  I would also note that the international team doing this research on gene therapy for hemophilia included Americans supported by NIH.  Disease knows no political boundaries, but it seems now that American money can be spent only on American Science.

Part the Sixth.  AI and Education. As part of my day job, I have been reading, with trepidation, the literature on AI in medical education.  This literature is growing at a surprising rate, but not as fast as the adoption of various forms of algorithmic intelligence among medical students.  Nicholas Carr has written an excellent essay on The Myth of Automated Learning, arguing that automation is the real threat of AI.

Carr’s take seems exactly right to me.  “The real threat AI poses to education isn’t that it encourages cheating. It’s that it discourages learning.”  And it does this because AI is fundamentally an automation technology.  Computers, even those that talk back, cannot do what human reason does, but they can calculate much faster and thereby produce a reasonable facsimile.

Automation itself can have good or bad effects on a learner’s skills.  A worker could (1) improve his or her skills, (2) see those skills atrophy, or, in the worst scenario, (3) never develop the skills at all.  When I had students in the laboratory, they could use shortcuts in the form of automation or reagent kits only after they had learned to do the work manually, the old-fashioned way.  If I say so myself, students in other labs, who were encouraged in their day-to-day operations to use shortcuts and automation in the name of a faux efficiency that boosted apparent productivity (i.e., more publications), lagged in developing both their skills and their scientific intuition.  Automation in the lab can be a great thing, but only after everyone involved knows precisely what goes on inside the machine or algorithm.  And more importantly, what does not:

Which scenario plays out hinges on the level of mastery a person brings to the job. If a worker has already mastered the activity being automated, the machine can become an aid to further skill development.  It takes over a routine but time-consuming task, allowing the person to tackle and master harder challenges.  In the hands of an experienced mathematician, for instance, a slide rule or a calculator becomes an intelligence amplifier (same with automation of routine lab chores).

If, however, the maintenance of the skill in question requires frequent practice — as is the case with most manual skills and many skills requiring a combination of manual and mental dexterity — then automation can threaten the talent of even a master practitioner.  We see this in aviation (and I would here add medicine and scientific research).  When skilled pilots become so dependent on autopilot systems that they rarely practice manual flying, they suffer what researchers term “skill fade.” They lose situational awareness, and their reactions slow. They get rusty.

Automation is most pernicious in the third scenario: when a machine takes command of a job before the person using the machine has gained any direct experience doing the work. Without experience, without practice, talent is stillborn.  That was the story of the “deskilling” phenomenon of the early Industrial Revolution.  Skilled craftsmen were replaced by unskilled machine operators.  The work sped up, but the only skill the machine operators developed was the skill of operating the machine, which in most cases was hardly any skill at all.  Take away the machine, and the work stops.

To bring this back to medicine and medical education, could the judicious use of AI improve a physician’s craft?  I think it could, but only if the Aristotelian final cause of the AI app is something other than making money for its vendor.  Will the use of AI interfere with the practice of being a physician in his or her medical practice?  Perhaps.  Will AI interfere with a medical student learning the art, craft, and science of medicine?  Undoubtedly.  Will this lead to catastrophe?  Yes, when healing hands never have the chance to develop properly.

Carr notes that “AI too often produces…the illusion of learning.”  I have watched medical students use an extract from the standard textbook of pharmacology (now in its fourteenth edition) as a prompt to convert information on chemotherapy drugs into a PDF of pristine columns of names, mechanisms, and specific uses.  Done and dusted in seconds.  All well and good.  Efficient, yes, but effective?  Not in my experience.  What I fear is that AI really is the “magic fairy dust” that (too many) medical students view as a substitute for the grueling work and total immersion in what makes their calling possible.  As Han Solo put it in a galaxy a long time ago and far, far away, “I have a bad feeling about this.”

More to come after wrestling with this serpent.  But I have found hope in several of our most accomplished students who just finished their first year of medical school.  They have told me, to a person, that they can be no help to me in my investigation because they study the old-fashioned way.  That is, they use the syllabus and reading guide as their prompts and actually read and study the 12-to-15 standard medical textbooks they must know deep in their bones to develop the foundation to become the good physician.  Wonders never cease, but they seem to be increasingly rare.

See you next week.  Suggestions still welcome!


7 comments

  1. Carolinian

    Sorry that you lost friends to Covid. Any thoughts on whether Covid itself may have been created in a lab? Not the certainty of same, but just the possibility?

    I know someone who works for the USDA and still has the sword hanging over her due to the back to the office demand. Many of the offices were disposed of and she says others were told to come back even if they have to sit in the hallway. It’ll be like some of our hospital Emergency Departments.

    So yes it’s chaotic and that tracks to Donald J. Trump, the boob. He’s currently wielding his delicate touch in the Middle East.

    1. LY

      It could have been created in a lab, but Occam’s razor… As mentioned elsewhere, as humans push the boundaries on wilderness, get our animal products from CAFOs, and warm the climate, as a species we’re just buying more lottery tickets in the zoonotic disease jackpot.

      The people pushing hardest for the lab origin explanation are at best using it as a diversion, but more likely playing to their anti-regulatory and anti-public health ideology. It couldn’t have been a result of a profit-seeking market flouting regulations about food safety, environmental protection, health, etc.

  2. Sub-Boreal

    Thanks, as always, for this weekly round-up. I admit that I almost gagged at “there are few American scientists who do not look at Canada … with some wistfulness”. Although there seems to be quite a lot of recent chatter about scooping talented refugees from Down Below, the reality is that we’re all pretty much used to improvising on a shoestring, and it’s not as though there are vast pools of surplus funding that could be used as bait for recruitment. I suspect that any initiatives along those lines will be fragmentary and launched only for selected high-visibility fields, and led by a few of the bigger schools.

    The email traffic about AI from my former academic employer has already greatly bulked up my “Got Out Just in Time” folder! Your anecdotes about medical education are certainly reinforcing my will to stay as healthy as possible in my dotage.

    This week’s issue of Science has a piece which connects with several of the themes that you’ve touched on, but adds another angle: Science’s reform movement should have seen Trump’s call for ‘gold standard science’ coming, critics say.

    Although I had a passing familiarity with some of the quasi-scandals around reproducibility, I hadn’t realized that “science reform movement” was a thing, and that it was influential enough that its proponents needed to worry about its work being misused for anti-science mischief. I guess that gets filed under “One More Damn Thing to Fret About”.

  3. Jeremy Grimm

    “Automation in the lab can be a great thing, but only after everyone involved knows precisely what goes on inside the machine or algorithm.” I think there are some problems with realizing that knowledge of how AI algorithms operate. If that lab automation is an AI trained using a technique like neural nets, I am not sure that anyone really knows what goes on inside the algorithm. I believe that is one of the major problems with AI: no one knows quite how it works. Some of the older lab automation that was called AI back in the day was built using frameworks very like the insect key I used to identify an insect’s family and sometimes its genus. A key organizes a classification taxonomy by directing the observations and tests done to identify a subject and locate its place in the taxonomy. As far as I know, the nodes, their connections, and the weightings trained to construct a neural network based AI, and the how and why of that AI’s operation, remain a mystery opaque to human understanding. Though I confess I am not familiar with the most recent approaches to building AI, I have seen no literature describing techniques for understanding how an AI operates to generate an answer to a given problem.

    I agree that the human “deskilling” threatened by various computer tools, including medical AI, results from this age’s mania for financial gain (making money), and to that I would add the mania for efficiency. We have become fixated on obtaining the answer. An answer might solve a particular problem. Deeper knowledge in a field of practice will not be found without further analysis, comparison with the answers to other problems, and exploration of the broader problem space. I confess my notion of this deeper knowledge is vague. I imagine it as the sensed but occulted knowledge that provokes a kind of feeling, an intuition, that I believe mathematicians rely upon when seeking deep connections between unrelated mathematical topics.

    Fixation on the answer makes it too easy to ignore the difficult work of learning what AI tools might have to offer. I like to dream that the recent advances in solving the protein folding problem could hold a framework for making protein design an engineering discipline. Perhaps there are analogs to the beams, columns, and trusses of building design, or tensegrity structures acting as components.

  4. hazelbee

    On AI in Education and The Myth of Automated Learning –

    Do you think the impact is different based on the subject matter?

    Anecdata, but my son uses ChatGPT very specifically with Further Maths problems (UK A Levels, ages 17-18). We wrote a prompt that he uses; the prompt means the machine doesn’t tell him the answer but offers other questions or nudges for him to reason his way to the answer. He gets good results in exams, so he is learning and is able to deploy and use that learning.

    but…
    that is a very specific case: very specific usage, and for a very theoretical subject, highly symbolic and dry. Versus a “wet” science like lab work in chemistry or biology, say.

    The linked article does describe differences, drawing out where maintenance of skills based on both manual and mental dexterity is needed. AI use can’t help with that manual dexterity.

    Interesting article; it gives food for thought about his usage and mine in other areas.

    1. Terry Flynn

      Thanks, KLG and hazelbee. I would echo the latter’s question and offer some anecdotes of my own that might be complementary. Firstly, I went (not thanks to money but thanks to a scholarship; we had none of the former) to a secondary school (ages 11-18) widely considered the 9th best in the UK in the late 1980s. I did maths and further maths (along with economics) as my A levels. The school at that time had a cohort of (nearing retirement) old school teachers who didn’t just teach for the test, but taught you how to think and encouraged you in the kind of ways hazelbee’s son is using ChatGPT. In a typical cohort of 120 boys in an academic year, about 20 of us got into Oxbridge. However, it should be stressed that given the “Federal” nature of both universities, the school had highly specific relationships with certain colleges: if you wanted to read mathematics you applied to Gonville & Caius[1]. When I applied to Caius to read economics, the headmaster sought me out on my way from lunch and I thought “uh-oh, what’ve I done?”. He told me, “You do realise that’s not our economics college, and we have no guidance as to whether the college Fellows there might hate you because of the school?” I was full of myself and brushed off his concerns. The “gamble” paid off, but that’s another story. Anyway, having rambled, my point is that I can see how ChatGPT could help in the way hazelbee says, since we “potential Oxbridge entrants” got additional training from teachers to develop the kind of skills in synthesis of data, thinking laterally, and coming at a problem from multiple directions that was a crude “analogue” of modern computer aids.

      Fast forward to my post-doc in the 2000s, when “officially” I worked in health services research (HSR). Even then HSR was debated as a term for the field, since it had become so broad as to include practically anything that was health related but wasn’t clinical or bench science. Thus most people like me had a more “specific home field” – mine being health economics – but even then some of us, like me, found ourselves straying increasingly from it. Choice modelling [2] became my field, although 80% of my applied work there was in healthcare and medicine.

      I spent 20 years working in this field and was learning new things with every applied study right to the very end. Part of why I think AI will find it at least difficult, and maybe impossible, to provide anything but summaries of the principles of how to run a choice experiment is the fundamental properties of the data we collect, how we collect it, and, crucially, how we (if following best practice) need two datasets to make any robust predictions. Over those two decades I had to get up to speed on statistical design theory, elements of mathematical psychology (which are great because they are typically based on identity relationships, so they are true by definition and can be “stronger” than even, say, a six-sigma inference), qualitative research (to be aware of how the language/format you use in the survey might influence people), the entirely different (and IMNSHO far superior) way of conceptualising “utility” compared with economics, and, last but not least, a very thorough understanding of the area of health/medicine in which you intend to run the study. This ties back to statistical design theory, but a lay expression that helps people understand why automated procedures often fail is “it is greater/less than the sum of its parts”. This is an example of an interaction. My former mentor estimated that of the studies he designed that were large enough to estimate the interactions, over 95% found them to be materially and statistically significant. That should scare more people attempting to run these studies, and it certainly casts serious doubt on most choice experiment results published in health, since “main effects” models (which assume all two-way and higher order interactions are zero) infest the field and are what AIs are scraping from. Real experiments done by real people on real people are needed, since choice modelling effectively estimates a “higher dimensional demand/supply function” BEYOND what is available from existing data. So I wonder what KLG feels about the use of AI in medicine-adjacent fields, particularly those that seek to estimate the trade-offs patients (and indeed clinicians) are willing to make?

      [1] Yes this advantage was very unfair, I agree entirely, partly why I didn’t want to make full use of it.
      [2] NB my mathematical psychologist friend/colleague (RIP) and I wrote the original version of this wiki after getting so annoyed at the industry-written version that had been there previously. The wiki might do a better job of explaining some of the things my bad-brain-fog week caused me to not explain so well in the NC-linked blogpost of a few weeks ago.

  5. The Rev Kev

    There is a link to an interesting article called “Top Chinese scientists flee Boston area as Harvard, MIT fall in rankings; Silicon Valley also hit” in today’s Links:

    https://kdwalmsley.substack.com/p/top-chinese-scientists-flee-boston

    They cite reasons like being made to feel unwelcome, not being safe, worries about collaborations with China, etc. But I think that the data in this article are from before the Trump regime launched a trade war not only against China but against Chinese students as well. Add in the gutting of American research, and I bet a lot of those Chinese scientists are looking critically at some US institutions and wondering if they are worth the candle. Is it worth it to say that they studied at Harvard or MIT? Maybe the answer is no. I am reminded how, before the Global Chinese Crash, Asians were undertaking plastic surgery so that their eyes looked less slanted, as they were envious of the west. After the crash that feeling passed, and I think that something similar is happening with US centers of education. Why bust your gut studying in a foreign environment when Trump could order them all out over a weekend, derailing both their education and their careers? Better to study in another country that does not do that, even if they have to stay home.

