A Front Line Report of AI Damage to the Job Market

This is Naked Capitalism fundraising week. 1188 donors have already invested in our efforts to combat corruption and predatory conduct, particularly in the financial realm. Please join us and participate via our donation page, which shows how to give via check, credit card, debit card, PayPal, Clover, or Wise. Read about why we’re doing this fundraiser, what we’ve accomplished in the last year, and our current goal, Karōshi prevention.

Yves here. On the one hand, the account below, from Richard Murphy’s son Tom, on how AI has upended the job market and career paths in the UK, is anecdata. On the other hand, Tom provides a lot of informative color. I hope readers who either directly or through family and colleagues have further information and insight will pipe up in comments.

It is also maddening, in light of this post and other issues that Murphy has raised about AI, to see him as an enthusiastic user and even an AI tout. At the end of his piece (which I have not reproduced, given our policy of firm opposition to AI use as societally damaging: a theft of intellectual property, massively destructive environmentally, and a reducer of skill levels in users), he encourages readers to write to their MPs…using a ChatGPT tool…which he warns can generate errors and must be checked.

Does Murphy have NO self-awareness? No wonder the Left sucks and Labour is in the state it is in. He’s openly supporting the sabotage of workers. He knows, or ought to know, that MIT found that 95% of companies that tried implementing AI did not find it to be net productive, that LLMs are not accurate, and that these problems are inherent (the idea that they would operate vastly better at sufficient scale is false). And this post means he can’t pretend to be unaware of the impact of his treating AI use as not just defensible but even desirable.

By Richard Murphy, Emeritus Professor of Accounting Practice at Sheffield University Management School and a director of Tax Research LLP. Originally published at Funding the Future

AI is reshaping work faster than universities, employers, or governments can adapt.

In this intergenerational conversation, I talk with my son Tom about how artificial intelligence has destroyed the old promise: work hard, get good grades, and you’ll get a good job.

From “ghost jobs” to algorithmic hiring and a two-tier workforce, this video explores what happens when AI changes everything — and how young people can still shape their future.

This is the audio version:

This is the transcript:


I closed this week’s Funding the Future podcast by saying:

We are living in a period of change. You are going to live through a lot more change than I’ve seen in my career.

The conversation that I was concluding was with my son, Tom, who, for the past eighteen months, has been behind the camera for every YouTube video we’ve made. For the first time, he came out from behind the lens to talk about something that directly affects his generation: how artificial intelligence (AI) is reshaping employment, and what that means for young people trying to enter the world of work.

The conversation ranged from personal experience to hard data, and from graduate disillusionment to the new inequalities of AI. What emerged was a sobering, and at times disturbing, portrait of a labour market being transformed faster than most people — including employers — can comprehend.

The AI Revolution Without a Plan

I began by noting that many businesses are rushing into AI adoption out of FOMO — the ‘fear of missing out’ — rather than as a consequence of having any coherent strategy. Few know what they might use it for, or with what consequences. Yet, despite this confusion, the impact on real people, and in particular young people, is already enormous.

Tom’s observation was blunt: young people, students and graduates are entering a “confusing situation” in which they don’t know what skills to acquire or what jobs will even exist.

When he started university in 2020, AI was a distant rumour. Four years later, it dominates everything in the world of work he faces. He likened it to the spread of smartphones: initially novel, then suddenly everywhere.

The result is a generation being told to invest in education without any clarity about where it will lead. That uncertainty is spreading fast.

Graduates Are Being Squeezed Out

The graduate employment system, as Tom described, has become dehumanised and alienating. Job applications are filtered through AI-driven forms that strip out personality and force applicants to re-enter every detail manually — “the most boring thing in the world,” he said. After hours of aptitude tests and algorithmic vetting, most applicants receive automated rejections, often with no feedback.

The scale of the mismatch is startling. A survey by Hult International Business School found that 98% of employers said they struggled to fill vacancies, but 89% admitted they did not want to hire graduates. In short, employers complain of a skills shortage while rejecting the very people they demand that the economy produce.

The absurdity deepens. In 2024, there were just 17,000 graduate jobs advertised in the UK, attracting 1.2 million applications, or about 70 applicants per job. Of course, many people applied for many jobs, but it is still the case that universities continue to expand their intake, producing another 465,000 graduates each year. Whatever the data underpinning the ratios, the arithmetic simply doesn’t work.

No wonder Tom concluded: “The old idea — get good grades, go to a good university, get a good job — is dead.”

The Rise of “Ghost Jobs”

If that weren’t demoralising enough, many of the jobs graduates do apply for turn out to be ghosts. These are vacancies that companies post without any intention of filling them, whether to collect CV data, to signal “growth” to investors, or simply to test the market.

Tom cited data suggesting 30% of advertised positions are ghost jobs, rising to nearly 60% in some sectors. His friend applied to seven such roles, all of which remained online long after rejection letters arrived. Another firm Tom had applied to kept the same vacancy open for a year, re-advertising it every few months, but never seeming to want to actually fill it.

The result is an economy where hope is systematically wasted. Jobseekers spend weeks applying to roles that don’t exist. Companies exploit the illusion of opportunity to mine personal data or inflate their image. It’s a form of corporate dishonesty — employment wash, if you like — and it leaves young people exhausted and disillusioned.

AI and the Death of Entry-Level Work

Tom’s own field of study was accounting and finance. Yet, as he discovered, the firms that once hired thousands of trainees are now scaling back. The Big Four accountancy firms, he noted, have cut graduate recruitment by between 6% and 29% in a single year.

Why? Because AI can already do much of the routine data work that once required human accountants. Employers are filling entry-level gaps with algorithms rather than apprentices.

The same pattern is visible in marketing, coding, and customer service — professions now being “AI-washed.” As one local business owner told me, his marketing agency has effectively become a tech firm: it writes AI prompts for clients instead of campaigns. The problem, he admitted, is that “if there are no juniors now, who replaces me when I retire?”

This is the new paradox of automation: short-term efficiency at the cost of long-term sustainability. If companies eliminate the bottom rung, there will be no ladder left to climb.

Learning to Master the Machines

Yet not all the news is bleak. Tom and I both use AI every day in our work — for research, structuring videos, and accelerating creative processes. Used intelligently, it saves time and sparks ideas.

But, as we agreed, AI must be mastered, not served. The real risk is not that machines take our jobs, but that people forget how to think. Writing a good AI prompt is not like typing into Google. It’s a craft requiring clarity, precision, and critical awareness — the very skills universities should be teaching.

Yet universities, fearful of cheating and plagiarism, have mostly retreated from AI training. They are preparing students for a world that no longer exists. The Hult survey again offers insight: 94% of graduates who learned AI skills said it improved their career prospects, but few are being offered the chance to learn those skills formally.

So, we have an education system afraid of the tools that define the modern workplace. That cannot last.

Job Killer or Job Creator?

Tom’s view was that AI is both: it destroys routine work but creates new opportunities for those with initiative. It can do in seconds what once took hours — from writing citations to data analysis — freeing people to focus on creativity, design, and strategy.

But he also warned of an emerging two-tier workforce:

  • AI users — lower-paid, task-driven, and easily replaced;
  • AI designers and strategists — fewer in number, but commanding far greater influence.

That divide, he suggested, will define his generation’s inequality. Not just between rich and poor, but between those who learn to work with AI and those who are worked by it.

Lifelong Learning or Lifelong Precarity?

Our conversation ended with something more hopeful. Over the past eighteen months, Tom has reinvented himself as a videographer, editor, and digital learner — mostly self-taught through online courses and peer learning. He has acquired a range of skills, from lighting and sound to AI-assisted editing, none of which existed in his original degree.

That, perhaps, is the lesson. In a world of accelerating change, learning can no longer stop at graduation. It must be continuous, self-directed, and creative. Those who adapt will find opportunities. Those who wait for the old job market to return will wait forever.

A New Social Contract for Education and Work

The deeper problem is, I think, systemic. We are asking young people to invest time, money, and hope into an education system that no longer guarantees them a livelihood.

Employers complain of shortages while excluding the newly qualified.

Universities sell courses while refusing to teach the skills employers now demand.

Governments celebrate “innovation” while ignoring the growing despair of those left out.

If we want AI to serve society, not enslave it, we need a new social contract between learning, work, and technology — one that recognises human potential as our most valuable form of intelligence.

As I said at the close of the podcast:

AI is changing the world, whether we like it or not. The only question that remains is whether we shape it — or let it shape us.

AI isn’t the end of work. But it is the end of pretending that the old rules of education and employment still apply.


25 comments

  1. ocypode

I find AI discourse tiring because people often don’t think about the most basic questions at hand. Murphy is no different. These surface-level debates ignore the fact that AI companies might well go bankrupt soon (Zitron has made that case clearly); if companies replace people with AI and AI goes bust, they won’t have a business anymore. This wouldn’t be about the “end of work” so much as “the collapse of a lot of businesses” (and the stock market). Furthermore, when the real costs are properly accounted for, capital expenses and electricity consumption mean that AI usage, properly costed (and not subsidized by VC or bad loans), is probably substantially more expensive than human beings.

    Another note:

    Tom’s own field of study was accounting and finance. Yet, as he discovered, the firms that once hired thousands of trainees are now scaling back. The Big Four accountancy firms, he noted, have cut graduate recruitment by between 6% and 29% in a single year.

    Why? Because AI can already do much of the routine data work that once required human accountants. Employers are filling entry-level gaps with algorithms rather than apprentices.

    Oh. Accountancy, a field that notoriously requires detail-oriented people because minuscule mistakes can be disastrous, is being replaced by delusional parrots. I wonder if this will have any consequences.

    1. Jesper

Accountancy jobs are disappearing due to automation, not AI. Some software packages are amazingly easy to work with and have automated away a lot of drudgery – the drudgery a junior accountant would be hired and paid to do. Not all tasks can be automated, but for those that can, there is no contest: automation is faster, cheaper, and more accurate than any human.
I’d say most SW packages and ERPs now provide easy-to-use automated data extraction, data migration between systems, data analysis, data visualisation, and dashboard building – basically, most of the drudgery an accountant does can be highly automated.
Using AI, as in neural networks, in areas where rules and processes are clearly defined and optimised seems strange to me. As far as I can tell, a lot of the successes claimed for AI are not some neural network; they are automation.

    2. lyman alpha blob

      As someone who works in that field, I can tell you that yes, there will be consequences. I have seen personally this rush to adopt “AI” due to some fear of missing out without any plan on how or why it will be used. At my employer, we seem to do much more software testing on an ongoing basis now than any of our actual core work.

      About 15 years ago, I was asked to use what was then called “automation” to post certain transactions. What I found was that the “automation” did not have the same attention to detail that a human being has, and constantly got things wrong. Rather than going over everything twice, I stopped using the “automation” after day 1, and just got things correct the first time by doing the work myself. Fast forward to the present and we are now using “AI” to post certain transactions, and it works about as well as the “automation” did 15 years ago – it still can’t even get the date on a document correct on a consistent basis, much less the more important information. But now I’m required to use it, even though it wastes time, and our labor costs have greatly increased after implementing the latest enshittified software platform brought to the world by a squillionaire techbro that we all have grown to loathe. And nobody seems to care much at all – keeping up with the latest hype seems more important than getting the core work of the business done efficiently.

    3. Acacia

      Because AI can already do much of the routine data work that once required human accountants.

      That’s not what actual researchers are finding when they study this, e.g.:

      GSM-Symbolic: Understanding the Limitations of Mathematical Reasoning in Large Language Models
      https://arxiv.org/pdf/2410.05229

      Our findings reveal that LLMs exhibit noticeable variance when responding to different instantiations of the same question. Specifically, the performance of models declines when only the numerical values in the question are altered in the GSM-Symbolic benchmark. Furthermore, we investigate the fragility of mathematical reasoning in these models and demonstrate that their performance significantly deteriorates as the number of clauses in a question increases.

      In this study, LLMs are evaluated using grade school math (GSM) problems. Merely changing names and/or numbers (Sally to George, or 7 apples to 4 apples) can degrade performance, suggesting that the LLM must have seen some of the standard questions in its training data, and hasn’t understood much.

      But sure… give accounting work to an app that can’t handle grade school math — what could go wrong?
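
The perturbation the paper applies can be illustrated with a toy templating sketch (purely illustrative; this is not the GSM-Symbolic code, and the names and numbers are invented):

```python
import random

# Toy illustration of GSM-Symbolic-style perturbation: the underlying
# arithmetic is unchanged, only surface details (names, numbers) vary.
TEMPLATE = "{name} has {a} apples and buys {b} more. How many apples does {name} have?"

def instantiate(seed):
    """Produce one instantiation of the template plus its ground-truth answer."""
    rng = random.Random(seed)
    name = rng.choice(["Sally", "George", "Priya", "Tom"])
    a, b = rng.randint(2, 9), rng.randint(2, 9)
    question = TEMPLATE.format(name=name, a=a, b=b)
    answer = a + b  # ground truth stays trivially computable
    return question, answer

# Two instantiations pose the same problem; a system that genuinely
# "does the math" should score identically on both.
q1, ans1 = instantiate(0)
q2, ans2 = instantiate(1)
```

The paper’s finding is that LLM accuracy drops across such instantiations, which is the opposite of what you would expect from genuine reasoning.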

      1. Polar Socialist

Back in the day, when a computer beat a grand master at chess for the first time, the joke was “how about a game of checkers, then?”

        LLMs are decent at guessing the next token (read: word) in the sentence they are constructing, they have not been trained at all to do any kind of math.

        And the LLMs are just a small part of the field called AI, albeit very popular at the moment as anyone can fool around with them until the novelty of the toy wears off.

        1. ChrisPacific

          Exactly. I don’t know why we needed a study to show this (well, I guess I do, but it shows the degree to which the mania has taken hold). We also don’t need a study to show that if you use a spoon to cut up a steak, it’ll do a poor job of it.

          Back in the distant pre-GPT past, we used to call the field ‘machine learning’ and had many different learning models specialized to particular tasks. LLMs were one such model, designed for natural language processing. Nobody would have dreamed of applying them to mathematical problems, any more than you would think of cutting a steak with a spoon.

          The fact that we are even asking the question proves exactly how misaligned expectations have become – tacitly aided and abetted by the likes of OpenAI to prop up the bubble, though always with suitable CYA disclaimers in the fine print.

  2. ilsm

I suspect, having no more data than Murphy, that recent grads’ job issues are like those in any aging expansion – in this case, one fueled by trillions of printed dollars during Covid.

    1. Mikel

      “If that weren’t demoralising enough, many of the jobs graduates do apply for turn out to be ghosts. These are vacancies that companies post without any intention of filling them, whether to collect CV data, to signal “growth” to investors, or simply to test the market.”

I remember that being discussed as a major issue before Covid. It was called a programmed-algorithm problem before the “AI” hype.

      And this is all about “trillions printed” … but the era of ZIRP and near ZIRP precedes Covid.

      1. hazelbee

        adding to what you both said – yes ghost jobs have always been a thing. they were a thing 20-30 years ago, they are still a thing – and that has nothing at all to do with AI.

        “algorithmic hiring” ? yes also a “thing” for decades, just the implementation of the algorithm has changed over time.
        It’s a necessity. If you have 100 applicants for 2 roles what do you do? Interview all of them? no. not possible. you apply an algorithm to cut down to a sane number of people to actually interview. That has nothing to do with AI.

        There are many factors here concerning business will to invest in new hires, in growth. Blaming AI for all of them is too simplistic. Blame the “AI hype” perhaps but don’t blame AI.

        Particularly with this contradiction:

        from the intro: “MIT has found that 95% of the companies that tried implementing AI did not find it was net productive ”

        if 95% of companies are getting no net benefit from AI then why is AI the cause of all these woes? That does not logically follow.

        last – the section on “learning to master the machines” and “a two tier workforce” are good.

But the irony here – the skills that are needed to “master the machines” are the same skills that are needed to work well in a team with other people – i.e. clarity of thought, critical thinking, clarity in communication and intent, ability to reason through and see different biases at play, ability to influence and constrain to the relevant topic/theme, understanding of the team goal and team constraints.

        These are ALL needed to work a large language model properly, and they are ALL needed to work well in a complex team environment.

Anecdata: My eldest daughter finished uni this summer. Her friends are split roughly a third, a third, a third between proper “career” jobs, going on to a Masters degree, or just working any job/travelling.
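
The “apply an algorithm to cut down to a sane number” step described above can be sketched in a few lines – a hypothetical, deliberately crude keyword filter of the pre-AI sort, not any real applicant-tracking system:

```python
# Hypothetical pre-AI "algorithmic hiring" filter: score applications by
# keyword match and keep only the top few for a human to interview.
# Keywords and applicant names are invented for illustration.

ROLE_KEYWORDS = {"python", "sql", "accounting"}

def score(cv_text):
    """Count how many role keywords appear in a CV (crude bag-of-words match)."""
    words = set(cv_text.lower().split())
    return len(ROLE_KEYWORDS & words)

def shortlist(applicants, k=2):
    """applicants: dict of name -> CV text; return the top-k names by score."""
    ranked = sorted(applicants, key=lambda name: score(applicants[name]), reverse=True)
    return ranked[:k]
```

The point of the sketch is that nothing here is “intelligent”: decades of hiring funnels worked this way long before LLMs, and swapping an LLM into the `score` step changes the implementation, not the economics.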

  3. griffen

    This is the world my youngest nephews or nieces are going to experience. Going to college in the US used to, long ago I admit, not feature soul crushing levels of student debt. I graduated from a humble four year college in 1995, computer science degree in hand but within the next year I conceded at last, that a future of Cobol coding was not for me. Being able to pivot into finance or accounting type roles was crucial.

Don’t think my personal anecdotes would apply so readily today. Employers used to like their youngest hires to be bright, eager, and willing to accept entry-level pay, perhaps beneath their desired level, to earn the experience. More recently, I submit that college intern programs are key to launching full-time employment after graduation; a highly motivated individual must pursue these avenues. Companies I once worked at over the past 9 years in South Carolina continue to post roles that I presume are open, more than likely because of turnover rather than corporate expansion. It’s tough out there, and AI will only worsen how HR’s systemic procedures screen out a high percentage of applicants. Also, in my own experience, applying for professional finance roles from late 2009 to October 2010 was a brutal eye-opener.

    It’s the opposite of a classic song…” Future’s so bright, I gotta wear shades..”

  4. Carolinian

This is an important topic, but surely the ‘deskilling’ didn’t start with AI or even computers. The PMC are merely the latest victims of an industrial age that pits efficiency against biology. Our biological side wants to make more of us while our efficiency side takes away the need for more of us. My farm family mother was one of eleven children at a time when poor farmers needed the labor. It’s doubtful that one sees many US families that size now.

    A less populous future seems inevitable and for the planet that would be a good thing if we can make it that far. This used to be a constant topic among the intelligentsia but was inconvenient to the tycoons and got shoved aside. Very likely they are our biggest problem.

    1. JBird4049

      The business class — owners, CEOs, investors, Techlords, billionaires… all want increased efficiency in wealth extraction, but do not care that such increased efficiency decreases effectiveness to where businesses cannot survive or can only survive by becoming financial grifts. Since the whole purpose of modern economics is to get as much money by any means possible without considering any of the costs, they are not bothered by the disappearance of the real economy.

  5. Acacia

    I’ve commented a fair bit on this previously, but if you want to see the impact on the job market, look no further than the feeds on Linkedin.

It’s pretty painful, really, as there are so many jobs being offered for “AI training” or something related to “AI” — i.e., helping company X to develop a service to eliminate entry-level jobs in companies Y, Z, P, Q, R, etc. — while at the same time there are many software engineers also posting about how it’s simply not going to work, vibe-coding is cr*p, etc.

    It’s safe to predict that the damage to the job market is just beginning, too.

  6. t

Keep hearing a lot about the need for “US AI dominance,” like it’s a space race, or there is a finite amount of AI and we cannot afford to miss out, or I don’t even know.

    1. Mikel

      “Need?”
      There is desperation to create demand for something not needed to do things we are already doing because some fools have tied themselves up in some circle-jerk financing.

  7. ejf

Here’s an idea. Have Tom put together a panel of 2 or 3 of his classmates, each with expertise in computer programming. Have them ask several AI apps to put together a solution to the Barista machine problem (they’ll know what I mean): there’s a constantly changing queue of customers; each orders a coffee of a particular type (a latte, extra shot of espresso, with cream…); all the while tracking the coffee inventory as well as the extras.

    Then critique the AI “solutions” and illustrate to the audience how AI screws up.

  8. ambrit

    Our middle daughter who just got her teaching degree for the State of Louisiana told me the other day via telephone that she and her fellow pedagogues are encouraged by the Administration to use AI in producing the regular student reports the academic middle managers demand.
“There is a ton of paperwork associated with the students (she teaches fifth grade) on top of the regular classwork. Other teachers complain about the crushing volume of paperwork now demanded from them. The Administration’s response is this ‘suggestion’ that we use AI to plug the gap,” she says.
When I mentioned the issues of “hallucinations” and plain old data falsification, she replied that the teachers all recognize that but contend that letting AI “plug the gaps” and then checking the computer-generated results is ‘easier’ than straight human-generated paperwork. (She objects when I use my “Terran human” category; “Stop it with the conspiracy theories, Dad.”)
    What I later realized in this particular issue is the underlying problem of Administrations “justifying” their bloated existence by demanding reams of basically unnecessary paperwork. As was noted earlier by IMDoc, professional practitioners are being double tasked. First in their fields of specialty, and second as glorified ‘file clerks.’
    AI is being touted as the sine qua non of “efficiency.” As with almost all previous examples of “efficiency” I can think of, this ‘new’ strategy merely eliminates a new swath of workers and dumps the resulting ‘orphan’ tasks onto the backs of some other already overburdened workers. If this is “efficiency” then I have a byte sized bridge over the Information Superhighway to lease you.
    Stay safe.

  9. stefan

    While I’m skeptical of AGI, I think AI can be useful in highly focussed applications.

    For instance, my son, who is a medical doctor, is implementing “deep-learning” AI to interpret H&E (Hematoxylin and Eosin) stain–the most common histological stain–in routine colorectal cancer slides to predict patient outcomes and support more accurate, personalized treatment decisions.

    Of course, this doesn’t do away with the problem in Plato’s “Phaedrus”, the myth of Thoth:

    “At the Egyptian city of Naucratis, there was a famous old god,
    whose name was Theuth; the bird which is called the Ibis is sacred to
    him, and he was the inventor of many arts, such as arithmetic and
    calculation and geometry and astronomy and draughts and dice, but his
    great discovery was the use of letters. Now in those days the god
    Thamus was the king of the whole country of Egypt; and he dwelt in
    that great city of Upper Egypt which the Hellenes call Egyptian Thebes,
    and the god himself is called by them Ammon. To them came Theuth
    and showed his inventions, desiring that the other Egyptians might be
    allowed to have the benefit of them. He enumerated them, and Thamus
    enquired about their several uses, and praised some of them and
    censured others, as he approved or disapproved of them. It would take
    a long time to repeat all that Thamus said to Theuth in praise or blame
    of the various arts. But when they came to letters, This, said Theuth, will
    make the Egyptians wiser and give them better memories; it is a specific
    both for the memory and for the wit. Thamus replied: O most ingenious
    Theuth, the parent or inventor of an art is not always the best judge of
    the utility or inutility of his own inventions to the users of them. And in
    this instance, you who are the father of letters, from a paternal love of
    your own children have been led to attribute to them a quality which
    they cannot have; for this discovery of yours will create forgetfulness in
    the learners’ souls, because they will not use their memories; they will
    trust to the external written characters and not remember of
    themselves. The specific which you have discovered is an aid not to
    memory, but to reminiscence, and you give your disciples not truth, but
    only the semblance of truth; they will be hearers of many things and will
    have learned nothing; they will appear to be omniscient and will
    generally know nothing; they will be tiresome company, having the
    show of wisdom without the reality.”

    1. ChrisPacific

      Given that that’s a fictional scenario, this is actually Plato writing an argument for illiteracy, which is pretty funny. (I often find Plato funny). If ‘The Republic’ is any guide, Plato is usually both making a serious point of some kind and also encouraging challenge and criticism by taking it a bit too far.

      In more recent works, ‘Children of Memory’ (Adrian Tchaikovsky, 2022) is a sci fi novel that introduces a race of beings that can hold conversations and work together to solve problems far in excess of human capabilities, but that might not actually be smarter than humans or even sentient. Given that he wrote it before ChatGPT set off the whole AI mania, it’s quite prescient.

      (Purists should note that it’s the third in a series, so you should read ‘Children of Time’ and ‘Children of Ruin’ first if that’s important to you. Both are well worth your time).

  10. Felix_47

    This AI seems to be a symptom of terminal capitalism. It is very helpful in document analysis, review of records, recording court deadlines and witness lists etc. It is helpful in trial law and trial prep. But these are not activities that improve society even though I do them for a living because I have kids that must eat.

  11. Mikel

    “Writing a good AI prompt is not like typing into Google. It’s a craft requiring clarity, precision, and critical awareness — the very skills universities should be teaching.”

A string of unkind words about this alleged thought process comes to mind. Don’t want to get banned.

There’s a certain obliviousness about the clarity, precision, and critical thinking (I do note that he uses the word awareness, because I assume people are not supposed to be doing “critical thinking,” just prompting for BS to swallow whole) that comes from reading and filtering through source material for oneself. And those were the skills universities were supposed to be teaching.

    And where do I start with this?:
    “As one local business owner told me, his marketing agency has effectively become a tech firm: it writes AI prompts for clients instead of campaigns. The problem, he admitted, is that “if there are no juniors now, who replaces me when I retire?”

    Did he ask what makes himself irreplaceable and why people choose to use an agency? If he can’t define it, has he asked himself if he actually owns “a business” anymore? If he wanted to sell his business, what would he be selling?
Newsflash: he doesn’t have a “tech company.” He’s a marketing guy who put a platform middleman between himself and his clients.
    Flashback: Remember when WeWork and a host of companies flitted around, branding themselves as tech companies? It’s all so 2000 to 2010. Plenty has been written and assumed learned about the marketing hype (hey…he is in marketing) of branding all kinds of operations as “tech companies”.

