Some Perhaps Not Fully Anticipated Effects of Increased AI Use: Evidence Doubts, Fraud and Fraud Containment

The press has been awash with breathless reporting and forecasts on the impact of AI, from an expected wipeout of knowledge and even creative workers, to evidence of brain changes in regular users, to accounts of chatbot interactions warping behavior, including contributing to suicides. Even with this extensive discussion, there may be additional consequential effects that are not yet fully recognized.

I invite additional examples in the types listed as well as other categories.

Erosion of Ability to Use Video and Images in Legal/Disciplinary Actions

Even with “Where’s Waldo?”-style tells of AI fakery, like videos of momma cougars sporting balls, readers have pointed out that we are nearing, if not already at, the point where digital visuals can’t be trusted. Mind you, Photoshop took us well down that path, but there are apparently not-too-difficult ways (for those in the trade) to ascertain whether an image was modified with Photoshop or a similar tool.

Absent getting metadata from the source (more on that question soon), AI may be making it harder to make that determination.
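To make the metadata point concrete: a first-pass check often starts with reading a file’s EXIF block, since editing tools like Photoshop typically stamp a Software tag, while many AI renders and scrubbed files carry no metadata at all. Below is a minimal sketch using Python’s Pillow library; the file name is hypothetical, and of course metadata can be stripped or forged, which is rather the point.

```python
# Minimal EXIF inspection sketch (Pillow). The file name is hypothetical.
from PIL import Image
from PIL.ExifTags import TAGS

img = Image.open("suspect_frame.jpg")
exif = img.getexif()

if not exif:
    # Absence of metadata proves nothing by itself: it may have been
    # stripped in transit (many platforms do this on upload).
    print("No EXIF metadata found.")
else:
    for tag_id, value in exif.items():
        print(f"{TAGS.get(tag_id, hex(tag_id))}: {value}")
    # Tag 0x0131 ("Software") often reveals the editing or rendering tool.
    software = exif.get(0x0131)
    if software:
        print(f"Produced/edited with: {software}")
```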

A contact told me about a disciplinary matter in progress. Think for instance of a teacher found to have said something extremely inappropriate on TikTok. We’ll call the investigation target Lee.

The contact has long known Lee, who is highly respected professionally but also has an excess of self-regard and holds loud and proud positions on some charged topics. Lee had several videos up on his social media account which, among other things, can be construed as threatening violence. The videos look completely bona fide in Lee’s appearance, manner, speech tone, and pacing, as well as being consistent with views Lee is known to hold (even while expressed in an over-the-top, unacceptable manner).

Lee maintains that the videos were AI fakes. A person who claims to have filmed them is known to despise Lee, so establishing validity conclusively that way is out. And yes, law enforcement agencies are now involved.

The key bit from the contact, who is involved tangentially:

One thing is becoming clear – the authorities with all their resources apparently cannot tell by forensic examination of the files whether these are real or AI….My point being – if this is AI – we are all so very screwed. The authorities can make up any shit they want and have perfectly rendered video of us doing it. It really is a Brave New World.

Mind you, I thought we lived in a world where everything digital has metadata, as well as information that amounts to provenance, like upload time and the IP address of the upload source. I don’t have more detail on this case and so am not clear whether Lee or the person who said he took the videos has had their devices examined. The organizations involved may want to stay out of court to minimize exposure. If there were litigation or charges, I would normally assume that the information required to establish the bona fides (or not) of the clips could be obtained via discovery.

But that is not what the missive as written implies: “forensic examination of the files” would seem to extend to the upload device(s) and the other details, which I must withhold, suggest the investigation is using heavyweight resources. And yes, the contact does have a fair bit of direct knowledge.

Security and anti-security measures are arms races, so even if the AI crowd has the upper hand now, that situation may not be durable. But if my contact is right, and this persists, we are collectively unmoored by being unable to verify digital visual material.

And even if this sort of matter could be settled via access to the devices on which the videos were created, how many employment and professional disputes are there that in the old days would have been settled by the production of a damning image or text? How many accusations will now wind up being litigated because these old proofs are no longer dispositive? How often will there be too many transmission steps from the party who released the material into the wild to be sure who the real creator was? Will a lot of these daisy chains be too long to run down?

Keep in mind that even if questions like this can be resolved by examining hardware and communications logs, that isn’t as comforting as one might think. Again, what happens if one party refuses to cooperate? What about jurisdictions that don’t have US-style discovery?

Further Lowering the Bar to Fraud

The Internet has already made it way too easy to perpetrate fraud. Every day, I get far too many e-mails about refunds on credit cards I don’t hold, pending account cancellations, bogus overdue bills, and the latest variants on the Nigerian scam letter. But again, AI can help devise more and more credible-looking sites, and with less effort.1

The FBI, at the end of last year, put out a notice on how AI is turbocharging crime. We’ve embedded a copy at the end of the post.

Let’s consider one type: voice cloning. The technology for replication has been very good for years; I recall reading over a decade ago that all that was needed to rip off your voice was 30 seconds of a clear recording. It’s only gotten easier. From the FAQ of one vendor:

You can upload up to one voice recordings to create your voice clone. You should upload an audio file with total duration >= 8 seconds and <= 60 seconds. The voice quality is more important than audio length. We recommend uploading high quality audio in wav format.
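To put that bar in perspective, here is a back-of-the-envelope check of whether a recording would clear the vendor’s stated limits (at least 8 and at most 60 seconds of WAV audio). The function and file name are my own illustration, not the vendor’s API:

```python
# Check a WAV clip against the quoted 8-60 second window (illustrative only).
import wave

def clip_is_acceptable(path: str, lo: float = 8.0, hi: float = 60.0) -> bool:
    with wave.open(path, "rb") as wav:
        duration = wav.getnframes() / wav.getframerate()  # length in seconds
    print(f"{path}: {duration:.1f}s")
    return lo <= duration <= hi

# A hypothetical recording; even a short voicemail greeting would qualify.
print(clip_is_acceptable("sample_voice.wav"))
```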

Mind you, that’s just to replicate the voice, say to deliver a marketing text provided by the customer. This more detailed take is from Quora, about two years ago, so the times necessary are likely shorter now:

Artificial Human 001
PhD in Computer Science, Stanford University (Graduated 2013)

I’ve been a researcher in this field for 5 years. There are three types of voice cloning:

1. TTS, which usually requires > 2 hours of your voice to train an AI. It’s good at mimicking both your rhythm and pitch.

2. Voice Conversion, which usually requires 5 minutes of your voice to train. It’s good at mimicking your pitch. But people familiar with you would recognize that the rhythm is not you.

3. One Shot, which requires 1 sentence from you. Its quality is good enough for people who are unfamiliar with you. But your friends will tell right away this is not your voice.

However, this still does not amount to replicating one’s manner of speech, content-wise, particularly for those with mannerisms such as heavy use of highfalutin’ words, puns, or Biblical references. One wonders how much text in recorded material it would take to simulate that. Nevertheless, even before AI got all that good, one would hear stories of scammers calling, pretending to be a relative or other close contact stranded somewhere on the road with their wallet allegedly just stolen, asking to have funds wired.

But the incidence of AI voice cloning scam attempts appears to have increased markedly. From ThreatMark in early 2025:

Voice cloning schemes, where fraudsters use AI to artificially create “deepfake” voice messages, are gaining in popularity.

The use of voice cloning for fraud is highly varied. There have been numerous cases of CEO/CFO fraud—a prime example being a voice cloning scheme aimed at the CEO of the world’s largest ad group. There is also a growing number of grandparent scams that target older people and their sense of responsibility for their families. In these cases, scammers imitate the voice of a relative who is supposedly in distress. Similar tactics are also used by scammers for extortion, pretending to have kidnapped a loved one.

For the reasons above, AI-generated voice scams continue to spread. Research by Starling Bank showed that 28% of UK adults think they have been targeted by an AI voice cloning scam in the past year. Alarmingly, nearly half (46%) of UK adults do not know this type of scam even exists. Additionally, a McAfee survey revealed that out of 7,000 respondents, one in four had encountered an AI-generated voice scam, either personally or through someone they know. Meanwhile, 37% of organizations globally reported being targeted by a deepfake voice attempt, according to Medius.

With family members, you can agree on a safety word to prove your identity in emergency appeals. With other contacts, you’ll need to keep your cool and ask them for details about your shared history that an AI would be unlikely to have unearthed.

But separate from using an AI version of another person’s voice to con you, crooks can use your voice to scam your financial institution. As the FBI pointed out:

Criminals obtain access to bank accounts using AI-generated audio clips of individuals and impersonating them.

Two of my banks are disgracefully still encouraging customers to agree to use their voice for ID, even as tech experts warn that criminals are successfully fooling voice recognition systems to raid accounts. From ID Data Web:

In a startling demonstration, a tech journalist cloned her own voice using an inexpensive AI tool and successfully fooled her bank’s phone system. By feeding an online voice generator a text-to-speech script, she created a deepfake that passed both the Interactive Voice Response (IVR) system and a five-minute call with a live agent. The experiment underscores the growing threat AI voice fraud poses to financial institutions…

AI voice fraud is rising due to the accessibility of generative AI tools and the abundance of personal information online. With just a few seconds of audio—often from public social media posts or voicemail greetings—fraudsters can generate highly convincing voice clones. Even individuals with minimal technical skills can now create authentic-sounding voices at scale.

For fraud analysts, this creates a worst-case scenario: the usual red flags of a phone scam—odd tone, scripted speech, or stilted responses—may be absent when the voice sounds genuine. Fraudsters often combine voice clones with stolen account details to enhance credibility, defeating traditional knowledge-based authentication checks.

In the Business Insider test, the journalist’s deepfake recited her account and Social Security numbers—data that could easily be purchased on the Dark Web—and the bank’s system treated the call as legitimate.

Even advanced biometric systems, which use voiceprints to authenticate clients, are vulnerable. While these systems detect subtle vocal inconsistencies, AI deepfakes are improving rapidly, narrowing those gaps.

So if you have been so unwise as to go along with your bank’s bright idea of using voice prints, call Customer Service and opt out immediately.

A Few Possible Knock-On Effects of AI Fraud

More restrictive use of social media? Young people have already discovered that over-sharing their private lives can have downsides if it includes things like too much wild partying, or activism such as participating in pro-Palestine marches unmasked.2 In general, the bigger your Internet photo and voice distribution, the more opportunity you have created for baddies. The FBI advises:

If possible, limit online content of your image or voice, make social media accounts private, and limit followers to people you know to minimize fraudsters’ capabilities to use generative AI software to create fraudulent identities for social engineering.

Right now, this posture may seem silly or over-protective. But if AI chicanery rises, norms may change and social media players may also be pressured or even required to allow for the removal of user-provided content.

Selective increases in staffing due to more customer demand for live interaction? It may be only a very modest offset in terms of employment numbers, but more fraud produces a need for more customer service staff and more investigators. And it may produce changes in the preference for convenience versus safety, again in ways that at the margin offset AI-induced headcount reductions. From the noted cybersecurity expert, Vladimir Putin, in his annual marathon Q&A:

Elina Dashkuyeva, Mir National Television Company: Phone scams remain an issue these days. Have the measures adopted by the state been effective? Do you have any information on whether the number of people suffering from this kind of fraud has declined?

Thank you very much.

Vladimir Putin: Yes, I do have this information. The measures have proved to be effective. Much will have to be done, of course, in this regard. There was a seven-percent decline in the number of crimes of this kind, according to the Interior Ministry, while the damage decreased by 33 percent. Overall, this was quite a positive result.

There is, however, one thing I wanted to point out. I would like to address the citizens of the country. Fraud is still very much an issue. And the more sophisticated our devices are getting, the more sophisticated our life is becoming, the more sophisticated tools scammers use in turn to defraud the citizens. This is why, regardless of the voice you hear, which is especially dangerous considering what artificial intelligence can do, if someone starts talking to you about money, about property, just put down the phone, put down the phone right away! You should not say anything.

If this is about banks and the like, there are people to talk to, people you know. You can get things done by visiting the bank in person. This is the best way.

So will privacy concerns drive us back to the future anticipated in Robert Heinlein’s Friday, of all important communications being conveyed by courier or in person? Too much is going over the innertubes to expect much of a rollback, but we might see more than appears likely now if the security issues can’t be sufficiently remedied.

___

1 This case does not implicate AI, or at least not in the parts the New York Times presented. But I wonder about the key reason the victim got snookered: the prominent placement of the scammers’ phone number in search results. How did that happen? See the top of “Tech Support Scammers Stole $85,000 From Him. His Bank Declined to Refund Him” in the New York Times:

David Welles, a retired lawyer, had been struggling with his new iPad for hours when he tried to call tech support. But instead of dialing Microsoft to help him connect his email, the phone number he found on Google put him in touch with cybercriminals.

2 Your humble blogger has taken to watching presentations by attorneys in the “defending civil rights” business, such as how to assert and preserve one’s legal standing in the face of aggressive police search and interrogation efforts. This one from Hampton Law went live in the last week and so would seem to be current on the state of play.

Internet Crime Complaint Center (IC3) | Criminals Use Generative Artificial Intelligence to Facilitate Financial Fraud