Deceit Pays Dividends: How CEO Lies Can Boost Stock Ratings and Fool Even Respected Financial Analysts

Yves here. The study described below may sound clever and satisfying, but it is actually pretty terrible. It used an AI that it claims is 84% accurate at detecting lies and had it score earnings call transcripts from 2008 to 2016. It then looked at the CEOs the AI found to be liars and compared them to analyst ratings. It found that the dishonest CEOs got higher stock ratings than the more honest ones… and that the highest-rated analysts were more likely to upscore the fibbing CEOs.

The write-up of the study attempts to pin the analysts’ unduly favorable ratings on the CEOs’ verbal chicanery. Huh?

Consider these questions that the study ignores:

1. Were these ratings wrong? Did the stocks of con-artist CEOs perform worse than similarly rated stocks of honest CEOs?

2. Were the supposed lies material? Material misrepresentations in an earnings call are securities fraud. That is a simple reason analysts would not characterize what was said on a conference call as dishonest, as opposed to merely optimistic. Maybe in many cases the CEO lies amounted to overly optimistic or problem-evasive remarks on matters that didn’t make a hill of beans of difference to company earnings.

3. Could the analysts have been rationally indifferent to the matter of truth? Aside from the “I can’t assume the company is engaged in securities fraud unless their story really looks iffy” reasoning, a second reason is the Keynes beauty contest theory of investment. Keynes advocated not trying to make the best pick in a pure performance sense, but rather the pick that other buyers would regard as most promising. This idea makes sense: for far too long the most successful investing approach has been momentum trading, not fundamental investing.

4. Could the slippery-tongued CEO in fact be a proxy for one who was particularly aggressive about stock price manipulation, as in buybacks?

5. Could the AI simply be very bad at scoring earnings calls? It wasn’t trained on them. And the securities law issues in #2 mean that CEOs on conference calls have to tap dance in a particular manner when asked about sensitive topics; the AI may simply be misreading the hedging or evasiveness required to stay out of trouble as lying.

By Steven J. Hyde, Assistant Professor of Management, Boise State University. Originally published at The Conversation

The multibillion-dollar collapse of FTX – the high-profile cryptocurrency exchange whose founder now awaits trial on fraud charges – serves as a stark reminder of the perils of deception in the financial world.

The lies from FTX founder Sam Bankman-Fried date back to the company’s very beginning, prosecutors say. He lied to customers and investors alike, it is claimed, as part of what U.S. Attorney Damian Williams has called “one of the biggest financial frauds in American history.”

How were so many people apparently fooled?

A new study in the Strategic Management Journal sheds some light on the issue. In it, my colleagues and I found that even professional financial analysts fall for CEO lies – and that the best-respected analysts might be the most gullible.

Financial analysts give expert advice to help companies and investors make money. They predict how much a company will earn and suggest whether to buy or sell its stock. By guiding money into good investments, they help not just individual businesses but the entire economy grow.

But while financial analysts are paid for their advice, they aren’t oracles. As a management professor, I wondered how often they get duped by lying executives – so my colleagues and I used machine learning to find out. We developed an algorithm, trained on S&P 1500 earnings call transcripts from 2008 to 2016, that can reliably detect deception 84% of the time. Specifically, the algorithm identifies distinct linguistic patterns that occur when an individual is lying.
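To make that description concrete, here is a minimal, hypothetical sketch in Python of the kind of text classifier the passage describes. It is not the study’s actual model: the features (TF-IDF word and phrase frequencies), the classifier (logistic regression), and the labeling scheme are illustrative assumptions only.

```python
# Hypothetical sketch of a deception classifier trained on labeled transcript text.
# Not the authors' model; features, classifier, and labels are assumptions.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import cross_val_score
from sklearn.pipeline import make_pipeline


def train_deception_classifier(texts, labels):
    """Fit a classifier on transcript excerpts labeled deceptive (1) or truthful (0).

    `texts` is a list of strings (e.g., CEO answers from earnings calls);
    `labels` is a parallel list of 0/1 labels, which in a real study would
    come from some external ground truth such as later restatements.
    """
    # Word and two-word-phrase frequencies stand in for the "distinct
    # linguistic patterns" the article refers to.
    model = make_pipeline(
        TfidfVectorizer(ngram_range=(1, 2), min_df=2),
        LogisticRegression(max_iter=1000),
    )
    # Cross-validated accuracy is one rough analogue of the 84% figure.
    scores = cross_val_score(model, texts, labels, cv=5, scoring="accuracy")
    model.fit(texts, labels)
    return model, scores.mean()
```

The design choice here is deliberately simple: a bag-of-words pipeline makes the idea of “linguistic patterns” legible, whereas the published model could well use richer psycholinguistic or contextual features.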

Our results were striking. We found that analysts were far more likely to give “buy” or “strong buy” recommendations after listening to deceptive CEOs – by nearly 28 percentage points, on average – than after listening to their more honest counterparts.

We also found that highly esteemed analysts fell for CEO lies more often than their lesser-known counterparts did. In fact, those named “all-star” analysts by trade publisher Institutional Investor were 5.3 percentage points more likely to upgrade habitually dishonest CEOs than their less-celebrated counterparts.

Although we applied this technology to gain insight into this corner of finance for an academic study, its broader use raises a number of challenging ethical questions around using AI to measure psychological constructs.

Biased Toward Believing

It seems counterintuitive: Why would professional givers of financial advice consistently fall for lying executives? And why would the most reputable advisers seem to have the worst results?

These findings reflect the natural human tendency to assume that others are being honest – what’s known as the “truth bias.” Thanks to this habit of mind, analysts are just as susceptible to lies as anyone else.

What’s more, we found that elevated status fosters a stronger truth bias. First, “all-star” analysts often gain a sense of overconfidence and entitlement as they rise in prestige. They start to believe they’re less likely to be deceived, leading them to take CEOs at face value. Second, these analysts tend to have closer relationships with CEOs, which studies show can increase the truth bias. This makes them even more prone to deception.

Given this vulnerability, businesses may want to reevaluate the credibility of “all-star” designations. Our research also underscores the importance of accountability in governance and the need for strong institutional systems to counter individual biases.

An AI ‘Lie Detector’?

The tool we developed for this study could have applications well beyond the world of business. We validated the algorithm using fraudulent transcripts, retracted articles in medical journals and deceptive YouTube videos. It could easily be deployed in different contexts.

It’s important to note that the tool doesn’t directly measure deception; it identifies language patterns associated with lying. This means that even though it’s highly accurate, it’s susceptible to both false positives and negatives – and false allegations of dishonesty in particular could have devastating consequences.

What’s more, tools like this struggle to distinguish socially beneficial “white lies” – which foster a sense of community and emotional well-being – from more serious lies. Flagging all deceptions indiscriminately could disrupt complex social dynamics, leading to unintended consequences.

These issues would need to be addressed before this type of technology is adopted widely. But that future is closer than many might realize: Companies in fields such as investing, security and insurance are already starting to use it.

Big Questions Remain

The widespread use of AI to catch lies would have profound social implications – most notably, by making it harder for the powerful to lie without consequence.

That might sound like an unambiguously good thing. But while the technology offers undeniable advantages, such as early detection of threats or fraud, it could also usher in a perilous transparency culture. In such a world, thoughts and emotions could become subject to measurement and judgment, eroding the sanctuary of mental privacy.

This study also raises ethical questions about using AI to measure psychological characteristics, particularly where privacy and consent are concerned. Unlike traditional deception research, which relies on human subjects who consent to be studied, this AI model operates covertly, detecting nuanced linguistic patterns without a speaker’s knowledge.

The implications are staggering. For instance, in this study, we developed a second machine learning model to gauge the level of suspicion in a speaker’s tone. Imagine a world where social scientists can create tools to assess any facet of your psychology, applying them without your consent. Not too appealing, is it?

As we enter a new era of AI, advanced psychometric tools offer both promise and peril. These technologies could revolutionize business by providing unprecedented insights into human psychology. They could also violate people’s rights and destabilize society in surprising and disturbing ways. The decisions we make today – about ethics, oversight and responsible use – will set the course for years to come.


4 comments

  1. Steve M

    So AI scores about the same as a polygraph. Not surprising. Seems like four out of every five is any machine’s maximum efficiency for interactions with people.
    What that means is that the cream of liars – the ones who do the most damage – will be vindicated and allowed to continue. And the most wretched of truth tellers will be excoriated and exiled.
    “But AI says.” Waiting for that to be the standard yardstick of idiots.
    However, I do know how to improve AI’s bullshit detection to 100 percent.
    Just apply it to politicians exclusively!

  2. lyman alpha blob

    I call BS on this. The author seems incredibly naive, perhaps deliberately so, both in regards to the efficacy of AI and the honesty of the financial players.

    Rather than there being a “truth bias” where the “all-star” analysts are deceived more frequently, I find it exceedingly more likely that analysts are in cahoots with CEOs to boost stock prices for mutual benefit. It sure wouldn’t be the first time. This guy never heard of pump and dump? Making money for the big shots is how one gets to be an “all-star” analyst in the first place, isn’t it?

    I do recall the ratings agencies giving AAA ratings to piles of garbage, because if they didn’t, they knew the companies asking for ratings would simply take their business elsewhere to an agency who would give them the desired result. The much cited quote from Upton Sinclair comes to mind here.

  3. Mikerw0

    This study is academic nonsense, in my opinion, having worked on both sides of the street for over 30 years. The underlying assumption is that the ratings both matter and are a function of what is said on earnings conference calls. Correlation and causation are not the same thing — at all.

  4. HotFlash

    “In it, my colleagues and I found that even professional financial analysts fall for CEO lies – and that the best-respected analysts might be the most gullible.”

    ‘Even’? Well, surprise, surprise! Another case where academics are really ignorant of the real world. In one of my jobs, a car dealership, an axiom was, “The easiest person to sell is a salesman (or salesperson).” Ask any sales manager.
