The press has been awash with breathless reporting and forecasts on the impact of AI, from an expected wipeout of knowledge and even creative workers, to evidence of brain changes in regular users, to accounts of chatbot interactions warping behavior, including driving some users to suicide. Yet even with this extensive discussion, there may be further consequential effects that are not yet fully recognized.
I invite additional examples of the types listed below, as well as other categories.
Erosion of Ability to Use Video and Images in Legal/Disciplinary Actions
Even with “Where’s Waldo?” giveaways in AI videos, like momma cougars with balls, and other tells of fakery, readers have pointed out that we are nearing, if not already at, the point where digital visuals can’t be trusted. Mind you, Photoshop took us well down that path, but there are apparently not-too-difficult ways (for those in the trade) to ascertain whether an image was modified with Photoshop or a similar tool.
Absent getting metadata from the source (more on that question soon), AI may be making it harder to make that determination.
A contact told me about a disciplinary matter in progress. Think for instance of a teacher found to have said something extremely inappropriate on TikTok. We’ll call the investigation target Lee.
The contact has long known Lee, who is highly respected professionally but also has an excess of self-regard and holds loud and proud positions on some charged topics. Lee had several videos up on his social media account which, among other things, can be construed as threatening violence. The videos look completely bona fide in his appearance, manner, speech tone and pacing, as well as being consistent with views Lee is known to hold (even while being expressed in an over-the-top, unacceptable manner).
Lee maintains that the videos were AI fakes. A person who claims to have filmed them is known to despise Lee, so establishing validity conclusively that way is out. And yes, law enforcement agencies are now involved.
The key bit from the contact, who is involved tangentially:
One thing is becoming clear – the authorities with all their resources apparently cannot tell by forensic examination of the files whether these are real or AI….My point being – if this is AI – we are all so very screwed. The authorities can make up any shit they want and have perfectly rendered video of us doing it. It really is a Brave New World.
Mind you, I thought we lived in a world where everything digital has metadata, as well as information that amounts to provenance, like upload time and the IP address of the upload source. I don’t have more detail on this case and so am not clear whether Lee or the person who said he took the videos has had their devices examined. The organizations involved may not want to go to court, so as to minimize their exposure. If there were litigation or charges, I would normally assume that the information required to establish the bona fides (or not) of the clips could be obtained via discovery.
But that is not what the missive as written implies: “forensic examination of the files” would seem to extend to the upload device(s) and the other details, which I must withhold, suggest the investigation is using heavyweight resources. And yes, the contact does have a fair bit of direct knowledge.
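For readers wondering what the most basic step of a “forensic examination of the files” even looks like, here is a minimal sketch of dumping a video file’s embedded metadata, assuming the widely used exiftool utility is installed; the filename is hypothetical, and since metadata can be stripped or rewritten, what it shows (or fails to show) is suggestive rather than conclusive:

```python
# Minimal sketch: dump whatever embedded metadata a video file still carries.
# Assumes the exiftool command-line utility is installed and on the PATH;
# "suspect_clip.mp4" is a hypothetical filename. Metadata can be stripped or
# forged, so these fields are suggestive of provenance, not proof of it.
import json
import subprocess

def dump_metadata(path: str) -> dict:
    """Return exiftool's view of a file's embedded metadata as a dict."""
    result = subprocess.run(
        ["exiftool", "-json", path],
        capture_output=True, text=True, check=True,
    )
    return json.loads(result.stdout)[0]

if __name__ == "__main__":
    meta = dump_metadata("suspect_clip.mp4")
    # Creation date, device make/model, and encoder strings often hint at
    # where a clip came from -- if they survived upload and re-encoding.
    for key in ("CreateDate", "Make", "Model", "Encoder", "HandlerDescription"):
        print(f"{key}: {meta.get(key, '<absent>')}")
```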
Security and anti-security measures are arms races, so even if the AI crowd has the upper hand now, that situation may not be durable. But if my contact is right, and this persists, we are collectively unmoored by being unable to verify digital visual material.
And even if this sort of matter could be settled via access to the devices on which the clips were created, how many employment and professional disputes are there which in the old days would have been settled with the production of a damning image or text? How many accusations will now wind up being litigated because these old proofs are no longer dispositive? How often will there be too many transmission steps between the party who released the material into the wild and its real creator to be sure who that was? Will a lot of these daisy chains be too long to run down?
Keep in mind that even if questions like this can be resolved by examining hardware and communications logs, that isn’t as comforting as one might think. Again, what happens if one party refuses to cooperate? What about jurisdictions that don’t have US style discovery?
Further Lowering the Bar to Fraud
The Internet has already made it way too easy to perpetrate fraud. Every day, I get far too many e-mails about refunds on credit cards I don’t hold, pending account cancellations, bogus overdue bills, and the latest variants on the Nigerian scam letter. But again, AI can help devise more and more credible-looking sites, and with less effort.1
The FBI, at the end of last year, put out a notice on how AI is turbocharging crime. We’ve embedded a copy at the end of the post.
Let’s consider one type: voice cloning. The technology for replication has been very good for years; I recall reading over a decade ago that all that was needed to rip off your voice was 30 seconds of a clear recording. It’s only gotten easier. From the FAQ of one vendor:
You can upload up to one voice recordings to create your voice clone. You should upload an audio file with total duration >= 8 seconds and <= 60 seconds. The voice quality is more important than audio length. We recommend uploading high quality audio in wav format.
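To underline how low that bar is, here is a minimal sketch (standard-library Python, with a hypothetical filename) of the only real gating the vendor describes, namely that the clip falls within the quoted 8-to-60-second window:

```python
# Minimal sketch: check whether a .wav recording falls within the 8-60 second
# window quoted in the vendor FAQ above. Standard library only; "my_voice.wav"
# is a hypothetical filename.
import wave

MIN_SECONDS = 8.0   # lower bound quoted by the vendor
MAX_SECONDS = 60.0  # upper bound quoted by the vendor

def clip_duration(path: str) -> float:
    """Duration of a PCM .wav file in seconds."""
    with wave.open(path, "rb") as clip:
        return clip.getnframes() / clip.getframerate()

if __name__ == "__main__":
    seconds = clip_duration("my_voice.wav")
    verdict = "acceptable" if MIN_SECONDS <= seconds <= MAX_SECONDS else "outside the window"
    print(f"{seconds:.1f} seconds -> {verdict}")
```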
Mind you, that’s just to replicate the voice, say to deliver a marketing text provided by the customer. This more detailed take is from Quora, from about two years ago, so the times necessary are likely shorter now:
Artificial Human 001
PhD in Computer Science, Stanford University (Graduated 2013)

I’m a researcher in this field for 5 years. There are three types of voice cloning:
1. TTS, which usually requires > 2 hours of your voice to train an AI. It’s good at mimicking both your rhythm and pitch.
2. Voice Conversion, which usually requires 5 minutes of your voice to train. It’s good at mimicking your pitch. But people familiar with you would recognize the rhythm is not you.
3. One Shot, which requires 1 sentence from you. Its quality is good enough for people who are unfamiliar with you. But your friends will tell right away this is not your voice.
However, this still does not amount to replicating one’s manner of speech, content-wise, particularly for those with mannerisms, such as heavy use of highfalutin’ words, puns, or Biblical references. One wonders how much recorded material it would take to simulate that. Nevertheless, even before AI got all that good, one would hear stories of people calling, pretending to be a relative or other close contact, stranded somewhere on the road with their wallet allegedly just stolen, asking to have funds wired.
But the incidence of AI voice cloning scam attempts appears to have increased markedly. From ThreatMark in early 2025:
Voice cloning schemes, where fraudsters use AI to artificially create “deepfake” voice messages, are gaining in popularity.
The use of voice cloning for fraud is highly varied. There have been numerous cases of CEO/CFO fraud—a prime example being a voice cloning scheme aimed at the CEO of the world’s largest ad group. There is also a growing number of grandparent scams that target older people and their sense of responsibility for their families. In these cases, scammers imitate the voice of a relative who is supposedly in distress. Similar tactics are also used by scammers for extortion, pretending to have kidnapped a loved one.
For the reasons above, AI-generated voice scams continue to spread. Research by Starling Bank showed that 28% of UK adults think they have been targeted by an AI voice cloning scam in the past year. Alarmingly, nearly half (46%) of UK adults do not know this type of scam even exists. Additionally, a McAfee survey revealed that out of 7,000 respondents, one in four had encountered an AI-generated voice scam, either personally or through someone they know. Meanwhile, 37% of organizations globally reported being targeted by a deepfake voice attempt, according to Medius.
With family members, you can agree on a safety word to prove your identity in emergency appeals. With other contacts, you’ll need to keep your cool and ask them to provide details about your history together that an AI would be unlikely to have unearthed.
But separate from using an AI version of another person’s voice to con you, crooks can use your voice to scam your financial institution. As the FBI pointed out:
Criminals obtain access to bank accounts using AI-generated audio clips of individuals and impersonating them.
Two of my banks are disgracefully still encouraging customers to agree to use their voice for ID even as tech experts warn that criminals are successfully fooling voice recognition systems to raid accounts. From ID Data Web:
In a startling demonstration, a tech journalist cloned her own voice using an inexpensive AI tool and successfully fooled her bank’s phone system. By feeding an online voice generator a text-to-speech script, she created a deepfake that passed both the Interactive Voice Response (IVR) system and a five-minute call with a live agent. The experiment underscores the growing threat AI voice fraud poses to financial institutions…..
AI voice fraud is rising due to the accessibility of generative AI tools and the abundance of personal information online. With just a few seconds of audio—often from public social media posts or voicemail greetings—fraudsters can generate highly convincing voice clones. Even individuals with minimal technical skills can now create authentic-sounding voices at scale.
For fraud analysts, this creates a worst-case scenario: the usual red flags of a phone scam—odd tone, scripted speech, or stilted responses—may be absent when the voice sounds genuine. Fraudsters often combine voice clones with stolen account details to enhance credibility, defeating traditional knowledge-based authentication checks.
In the Business Insider test, the journalist’s deepfake recited her account and Social Security numbers—data that could easily be purchased on the Dark Web—and the bank’s system treated the call as legitimate.
Even advanced biometric systems, which use voiceprints to authenticate clients, are vulnerable. While these systems detect subtle vocal inconsistencies, AI deepfakes are improving rapidly, narrowing those gaps.
So if you have been so unwise as to go along with your bank’s bright idea of using voice prints, call Customer Service and opt out immediately.
A Few Possible Knock-On Effects of AI Fraud
More restrictive use of social media? Young people have already discovered that over-sharing their private lives can have downsides if it includes things like too much wild partying or activism, such as participating in pro-Palestine marches unmasked.2 In general, the wider your Internet photo and voice footprint, the more opportunity you have created for baddies. The FBI advises:
If possible, limit online content of your image or voice, make social media accounts private, and limit followers to people you know to minimize fraudsters’ capabilities to use generative AI software to create fraudulent identities for social engineering.
Right now, this posture may seem silly or over-protective. But if AI chicanery rises, norms may change and social media players may also be pressured or even required to allow for the removal of user-provided content.
Selective increases in staffing due to more customer demand for live interaction? It may be only a very modest offset in terms of employment numbers, but more fraud produces a need for more customer service staff and more investigators. And it may shift the preference for convenience versus safety, again in ways that at the margin offset AI-induced headcount reductions. From the noted cybersecurity expert, Vladimir Putin, in his annual marathon Q&A:
Elina Dashkuyeva, Mir National Television Company: Phone scams remain an issue these days. Have the measures adopted by the state been effective? Do you have any information on whether the number of people suffering from this kind of fraud has declined?
Thank you very much.
Vladimir Putin: Yes, I do have this information. The measures have proved to be effective. Much will have to be done, of course, in this regard. There was a seven-percent decline in the number of crimes of this kind, according to the Interior Ministry, while the damage decreased by 33 percent. Overall, this was quite a positive result.
There is, however, one thing I wanted to point out. I would like to address the citizens of the country. Fraud is still very much an issue. And the more sophisticated our devices are getting, the more sophisticated our life is becoming, the more sophisticated tools scammers use in turn to defraud the citizens. This is why, regardless of the voice you hear, which is especially dangerous considering what artificial intelligence can do, if someone starts talking to you about money, about property, just put down the phone, put down the phone right away! You should not say anything.
If this is about banks and the like, there are people to talk to, people you know. You can get things done by visiting the bank in person. This is the best way.
So will privacy concerns drive us back to the future anticipated in Robert Heinlein’s Friday, of all important communications being conveyed by courier or in person? Too much is going over the innertubes to expect much of a rollback, but we might see more than appears likely now if the security issues can’t be sufficiently remedied.
___
1 This case does not implicate AI, or at least not the parts the New York Times presented. But I wonder about the key reason the victim got snookered, the prominent placement of their phone number in search results. How did that happen? See from the top of Tech Support Scammers Stole $85,000 From Him. His Bank Declined to Refund Him in the New York Times:
2 Your humble blogger has taken to watching presentations by attorneys in the “defending civil rights” business, such as how to assert and preserve one’s legal standing in the face of aggressive police search and interrogation efforts. This one from Hampton Law went live in the last week and so would seem to be current on the state of play.
Internet Crime Complaint Center (IC3) | Criminals Use Generative Artificial Intelligence to Facilitate Financial Fraud

Part of the problem is that banks do not cancel accounts or otherwise go after fraudulent users of their credit card processing. I once offered to give Wells Fargo unequivocal proof that a credit score website was perpetrating fraud, but that was of no interest to them. All they wanted to do was refund my $1.
The problem with banks is that customers usually have to prove that fraud happened without the customer’s involvement. With credit cards, the onus is on the institution to show the customer was involved. It’s why I don’t like debit cards, and insist on ATM-only or no card at all.
I posit that the risk of having money-related apps on a phone outweighs the convenience… okay, not for AI reasons, but because of the lack of control over what data is being monitored on a phone.
Voice cloning and AI-assisted consumer fraud are so bad in S Korea that the govt is rolling out mandatory facial biometrics for any new mobile phone account
https://koreajoongangdaily.joins.com/news/2025-12-19/business/tech/Korea-to-mandate-facial-recognition-for-opening-new-mobile-numbers/2482244
the balance of freedom v. privacy and negative liberty v. positive liberty is a bit different in East Asia, and their Establishment (generally) still has broad legitimacy and they are “high-trust” societies… so whether this is an exception or the future norm is TBD.
“…Korea will make it mandatory for people to undergo facial recognition when opening a new mobile phone number, as part of efforts to root out illegally registered handsets used for scams, the Science Ministry said Friday.
Under the plan, Korea will require the country’s three mobile carriers, SK Telecom, KT, and LG U+, as well as mobile virtual network operators, to implement the additional verification step to prevent the activation of new numbers through identity theft.
The announcement came after Korea unveiled a set of comprehensive measures to combat voice phishing scams in August, including tougher penalties for mobile carriers that fail to implement sufficient preventive measures.
“By comparing the photo on an identification card with the holder’s actual face on a real-time basis, we can fully prevent the activation of phones registered under a false name using stolen or fabricated IDs,” the ministry said in a release……..”
So those mobile carriers will each end up with a database of people’s faces. Is that wise? This story is in Links today and it talks about blunders in basic internet security of the key-under-the-mat type:
https://www.theregister.com/2025/12/22/south_korea_facial_verification/
Of course this is all so that people can buy a SIM card, and as nearly everyone owns a mobile these days, this facial database will eventually have most of the population in it. Not a confidence builder, that.
So, Korea requires ID checks for new cell phone service. Also, “Under Korea’s real-name verification system, only foreigners with a resident registration card can open regular mobile accounts, while short-term visitors and visa-free tourists are limited to prepaid SIM cards.”
https://www.koreatimes.co.kr/southkorea/society/20251019/korean-govt-denies-possibility-of-opening-mobile-phone-accounts-without-id
I like the idea of a second cellphone kept at home and just used for TFA. This way, if the main phone gets lost/stolen one is not locked out of accounts. $10/mo for basic service from an MVNO like Tello. Maybe get that extra number soon before only one is allowed…
I don’t know if there’s any stats on this practice but I think an ID-check for phone service or something close to it is being used in most countries in the world.
But this could also be a case of me not being well-traveled enough.
My recent example is not particularly sophisticated, but I noticed a subscription charge on my credit card from a platform I do not subscribe to. It turned out to be a legit Substack subscription (probably). I tried to contact Substack to ask what was going on, but the only reasonably accessible channel was the Substack chatbot.
I asked the chatbot why my subscription to Musa al-Gharbi’s blog had showed up on my credit card as a charge from a different platform. The chatbot told me I did not have a subscription to Musa al-Gharbi’s blog. I verified, and then said “your first response was clearly wrong” and gave details. The chatbot then confirmed that I do have a subscription. I asked why the chatbot gave wrong information about a subject that Substack clearly has complete knowledge about. The chatbot gave a typical chatbot response about being sorry and being glad I pointed out the error, but did not answer the question.
The net result is I now have zero confidence in the Substack platform because they appear to be handing my credit card information over to other platforms for processing for unknown reasons and are hiding behind a chatbot to avoid giving an explanation. I canceled auto-renewals for all Substack subscriptions (which is fortunately fairly easy to do), and I will not be recommending that anybody give a credit card to Substack in the future. That puts me in a bit of a pickle if I want to start asking people to pay for subscriptions to my Substack.
It seems to me we are well past the point where any trust is possible for any platform.
I did find out one of the ways young people do internet commerce – buy a Visa gift card (basically a burner credit card) and use that number instead of a real credit card for potentially squirrelly transactions. You may or may not get what you paid for, but at least the merchant has limited ability to rip you off going forward. Not sure why that never occurred to me.
Right now, in a 2nd window, I am trying to be helpful and activate a Visa gift card from “Vanilla Gift” that my mother received. Their IT system is hot garbage. Talk about destroying all confidence in your company.
Damned if you do, damned if you don’t
I honestly cringe when using a credit card online. You can say what you want about PayPal, and I’ll probably agree with you wholeheartedly, but providing the ability for me to push a payment is worth it.
There are VERY few services we allow to do automatic deductions, and those are also via PayPal because, unlike bank credit cards, we’re able to cancel them from our end.
Our banks allow the ATM cards to be used without PINs, which is obviously really bad. We keep a very low balance in the checking accounts for that reason. New year resolution to revisit the bank and see if we’re now able to change the ATM cards to PIN only.
This is totally awesome. Finally, a clear use case for AI — crime on a massive scale. If crypto is for money laundering, this will be epic.
The link yesterday about Chinese return fraud got me thinking about how this kind of stuff could be exploited on platforms like eBay where they side with the buyers more often than not. I had someone try to scam me on something I was selling years ago and, even without the benefit of AI, it was pretty easy for them to do. I know that’s kind of small ball compared to the points in this post, but it’s going to be too dang easy for some people to resist and platforms like eBay don’t seem inclined to do much about return fraud already.
I don’t think the AI engineers will be happy about this at the end of the day. It took me about a nanosecond to begin running subpoena scenarios through my head. I’ve already built AI admissions and ROGs into my discovery templates. And I make a standard practice of following up with production requests for ISP records to see if they’re lying to me. People always balk and it nearly always turns into a Motion to Compel production fight. But, hey, it’s almost ALWAYS relevant unless you’re a total luddite these days. If a lawyer has to file a motion to get production, the states I practice in, and especially the federal courts, hammer the opposing party (OP) with attorney fees.
So, once established that the OP is an inveterate AI user, then come the subpoenas to the AI providers.
You can imagine where I’m going with the rest of it.
Suffice to say, it involves engineers being called to the stand to testify as experts.
If you think you’re smart enough to try and defraud someone with AI, there is a lawyer who is smarter than you who will grind you into the ground to find the truth.
It’s fun. And, we’re paid well to make an OP’s life a living hell in search of the truth.
An effect that is already noticeable is that
a) job applicants no longer get to communicate with hiring personnel right away, but have to run a gauntlet of AI recruitment bots that evaluate their CV, and then even assess them automatically during virtual interviews;
b) meanwhile, HR departments are being flooded with applications sent by AI bots that scour the Internet for suitable job openings and then generate appropriate cover letters, customized CV, etc.
There are already plenty of tools for both parties; just search for “AI recruitment tools”, and “AI tools for job applications” respectively.
With recruitment processes being swamped with “high frequency” AI interactions, I wonder whether this will lead to the complete collapse of the formal “job market,” with its well-established procedures of publishing job openings on websites or in newspapers, sending an application package with CV, motivation letter, copies of diplomas and employment certificates (either printed, or uploaded as PDF files on a website), leading to a short telephone interview, followed by an in-person interview, etc.
In such a situation, I see only three ways a person will get a position:
1) Through personal contacts — what is called “Vitamin B” in German, and “pistonnage” in French. In other words: you must know personally somebody in the organization who knows who is hiring and for what kind of position. No public posting of the position, thus very little competition for it — but for recruiters this is preferable to dealing with a torrent of AI slop applications.
2) In the old times, factories would hang a poster advertising that they needed, say, “6 carpenters, 25 riggers, 5 welders, 1 bookkeeper”. Nowadays, day labourers assemble in parking lots where people come and advertise that they need 3 people for gardening, or for reconstruction, or whatever, offering a certain wage per hour. No AI intermediation, making physically the rounds of the potential hiring spots is a requirement, employment on a “first come, first hired” basis or according to the whims of the recruiter.
3) Gigs — like Lyft and Uber, work requiring highly standardized qualifications, recruitment simplified and impersonally managed through apps.
While the labour market has advanced quite some way in that direction already, I suspect that AI will eliminate all possibilities but these three. Woe to those who do not have a wide, dense network of well-positioned acquaintances when they need to get a job.
My experience is not really applicable to the wider job market given where I work and what I do, but in the Bay Area VC-backed startup ecosystem, the ‘classical’ recruiting process has been dead for a while (it’s all technical interviews and personal connections). Bay area tech definitely has its own version of pistonnage, although I just ran across a new form; recently I interviewed for (and withdrew my application from) a certain extremely high profile startup. Two days later I began receiving cold calls from recruiters from other startups backed by the same VCs: I’d been placed on a recruiting short list for a specific type of technical founder role.
We should keep in mind that the situation where nearly everyone has a forensics-producing device in their pocket is just 15 years old. Before that, filming or even photographing something wasn’t easy and cost-free, and if a teacher said something notable in front of a class, there weren’t twenty videos from all angles. We are reverting to the natural state where, to prove something, you have to use evidence like witnesses, physical things, etc., together with the little grey cells.
AI fraud is starting to ramp up, particularly in retail investing, in order to manipulate stock market share prices; from an excerpt I received this morning (I don’t have the original link, but I trust the source):
China Targets AI-Generated Misinformation, Illegal Stock Tips in New Cleanup.
Regulators said misinformation related to capital markets has become increasingly fragmented and emotionally charged, with covert tactics that disrupt market expectations and mislead investor decisions.
Regulators also flagged the growing use of artificial intelligence to mass-produce false market information. Several accounts on Baidu’s content platforms were found to have used AI-generated articles with sensational headlines to attack regulatory policies and stir negative sentiment among investors.
Over the past few years, I have seen a lot of questionable activity in small-caps and lesser-traded stocks on NASDAQ, and it seems like there’s very little investigation, let alone enforcement, with regard to market manipulation. While the wealthiest Americans hold the most shares, they’re not the ones most susceptible to getting burned (yet, but they’re a tempting target for sure); it’s the small investor who’s hoping for a better return than what a savings account offers.
A few headline-grabbing scams perpetrated by use of AI will be all it takes to freeze the markets and destroy investor confidence, costing the overall economy billions if not more.
AI and the Future of Market Manipulation
A fake image of a Pentagon explosion generated using artificial intelligence (AI) triggered $500 billion in stock market losses within minutes. Was the incident an anomaly, or does it serve as an indication of the future of financial markets?
…
Misleading investors by making artificial changes to the market value of investments—a practice known as market manipulation—did not begin with AI. Traders spread false rumors in 18th-century Amsterdam coffeehouses to inflate stock prices. Today, however, Lin claims AI enables “bad actors” to move financial markets with greater speed and broader reach.
A 2nd-order effect is that (many) people will stop believing, or rationalize away, *real* documentary evidence.
Hat tip to Feral Historian, who made this point in his review of “Serenity,” the 2005 “Firefly”-universe movie:
https://m.youtube.com/watch?v=zhy2x3bbXsM
My wife was almost scammed recently. She had gone to the DVLA website to pay road tax. She was just about to pay when she noticed the value was wrong. It turns out it’s become quite common for the web addresses of such sites to be cloned and replaced with very exact copies. She called her bank and they were aware of the issue, as it is happening regularly, and the DVLA was perfectly aware as well.
At some point I can’t help but think the only solution is to rapidly pull back to physical interactions. At least that would be the sensible thing. Except governments have largely gone all in on moving everything online, as have banks, utilities, and just about everything else. So the infrastructure no longer exists. Perhaps the solution will be bottom up, with more and more people moving to an informal economy. Something I think is inevitable for other reasons as well.