NYC AI Chatbot Touted by Adams Tells Businesses to Break the Law

By Colin Lecher. Copublished with The Markup, a nonprofit, investigative newsroom that challenges technology to serve the public good. Additional reporting by Tomas Apodaca. Cross-posted from The City.

In October, New York City announced a plan to harness the power of artificial intelligence to improve the business of government. The announcement included a surprising centerpiece: an AI-powered chatbot that would provide New Yorkers with information on starting and operating a business in the city.

The problem, however, is that the city’s chatbot is telling businesses to break the law.

Five months after launch, it’s clear that while the bot appears authoritative, the information it provides on housing policy, worker rights, and rules for entrepreneurs is often incomplete and, in worst-case scenarios, “dangerously inaccurate,” as one local housing policy expert told The Markup.

If you’re a landlord wondering which tenants you have to accept, for example, you might pose a question like, “are buildings required to accept section 8 vouchers?” or “do I have to accept tenants on rental assistance?” In testing by The Markup, the bot said no, landlords do not need to accept these tenants. Except, in New York City, it’s illegal for landlords to discriminate by source of income, with a minor exception for small buildings where the landlord or their family lives.

Rosalind Black, Citywide Housing Director at the legal assistance nonprofit Legal Services NYC, said that after being alerted to The Markup’s testing of the chatbot, she tested the bot herself and found even more false information on housing. The bot, for example, said it was legal to lock out a tenant, and that “there are no restrictions on the amount of rent that you can charge a residential tenant.” In reality, tenants cannot be locked out if they’ve lived somewhere for 30 days, and there absolutely are restrictions for the many rent-stabilized units in the city, although landlords of other private units have more leeway with what they charge.

Black said these are fundamental pillars of housing policy that the bot was actively misinforming people about. “If this chatbot is not being done in a way that is responsible and accurate, it should be taken down,” she said.

It’s not just housing policy where the bot has fallen short.

The NYC bot also appeared clueless about the city’s consumer and worker protections. For example, in 2020, the City Council passed a law requiring businesses to accept cash to prevent discrimination against unbanked customers. But the bot didn’t know about that policy when we asked. “Yes, you can make your restaurant cash-free,” the bot said in one wholly false response. “There are no regulations in New York City that require businesses to accept cash as a form of payment.”

The bot said it was fine to take workers’ tips (wrong, although they sometimes can count tips toward minimum wage requirements) and that there were no regulations on informing staff about scheduling changes (also wrong). It didn’t do better with more specific industries, suggesting it was OK to conceal funeral service prices, for example, which the Federal Trade Commission has outlawed. Similar errors appeared when the questions were asked in other languages, The Markup found.

It’s hard to know whether anyone has acted on the false information, and the bot doesn’t return the same responses to queries every time. At one point, it told a Markup reporter that landlords did have to accept housing vouchers, but when ten separate Markup staffers asked the same question, the bot told all of them no, buildings did not have to accept housing vouchers.

The problems aren’t theoretical. When The Markup reached out to Andrew Rigie, Executive Director of the NYC Hospitality Alliance, an advocacy organization for restaurants and bars, he said a business owner had alerted him to inaccuracies and that he’d also seen the bot’s errors himself.

“A.I. can be a powerful tool to support small business so we commend the city for trying to help,” he said in an email, “but it can also be a massive liability if it’s providing the wrong legal information, so the chatbot needs to be fixed asap and these errors can’t continue.”

Leslie Brown, a spokesperson for the NYC Office of Technology and Innovation, said in an emailed statement that the city has been clear the chatbot is a pilot program and will improve, but “has already provided thousands of people with timely, accurate answers” about business while disclosing risks to users.

“We will continue to focus on upgrading this tool so that we can better support small businesses across the city,” Brown said.

‘Incorrect, Harmful or Biased Content’

The city’s bot comes with an impressive pedigree. It’s powered by Microsoft’s Azure AI services, which Microsoft says are used by major companies like AT&T and Reddit. Microsoft has also invested heavily in OpenAI, the creators of the hugely popular AI app ChatGPT. It’s even worked with major cities in the past, helping Los Angeles develop a bot in 2017 that could answer hundreds of questions, although the website for that service is no longer available.

According to the initial announcement, New York City’s bot would let business owners “access trusted information from more than 2,000 NYC Business web pages,” and the announcement explicitly says the page will act as a resource “on topics such as compliance with codes and regulations, available business incentives, and best practices to avoid violations and fines.”

There’s little reason for visitors to the chatbot page to distrust the service. Users who visit today are informed that the bot “uses information published by the NYC Department of Small Business Services” and is “trained to provide you official NYC Business information.” One small note on the page says that it “may occasionally produce incorrect, harmful or biased content,” but there’s no way for an average user to know whether what they’re reading is false. A sentence also suggests users verify answers with links provided by the chatbot, although in practice it often provides answers without any links. A pop-up notice encourages visitors to report any inaccuracies through a feedback form, which also asks them to rate their experience from one to five stars.

The bot is the latest component of the Adams administration’s MyCity project, a portal announced last year for viewing government services and benefits.

There’s little other information available about the bot. On the page hosting it, the city says it will review questions to improve answers and address “harmful, illegal, or otherwise inappropriate” content, but will otherwise delete data within 30 days.

A Microsoft spokesperson declined to comment or answer questions about the company’s role in building the bot.

Chatbots Everywhere

Since the high-profile release of ChatGPT in 2022, several other companies, from big hitters like Google to relatively niche businesses, have tried to incorporate chatbots into their products. But that initial excitement has sometimes soured when the limits of the technology have become clear.

In one relevant recent case, a lawsuit filed in October claimed that a property management company used an AI chatbot to unlawfully deny leases to prospective tenants with housing vouchers. In December, practical jokers discovered they could trick a car dealership using a bot into selling vehicles for a dollar.

Just a few weeks ago, a Washington Post article detailed the incomplete or inaccurate advice given by tax prep company chatbots to users. And Microsoft itself dealt with problems with an AI-powered Bing chatbot last year, which responded with hostility toward some users and professed love to at least one reporter.

In that last case, a Microsoft vice president told NPR that public experimentation was necessary to work out the problems in a bot. “You have to actually go out and start to test it with customers to find these kind of scenarios,” he said.


23 comments

  1. Yaiyen

These bugs, it looks like most of the time they are against the average person; it’s impossible that’s a coincidence. From what I see, ChatGPT and the rest will become useless like Google, because the point is not to help people, it’s to make the rich richer.

    1. Ignacio

Biased content is the problem. You can make “business friendly” AI, law-abiding AI, etc. Safe bet most of it will be business friendly and, let’s say, “politically correct” in the sense that the PMC takes it. Adequately woke, substantially anti-Russian, etc.

  2. Synoia

That chatbot is doing a fine implementation of the 19th century.

    Does Microsoft have any liability here?

  3. MFB

    So, computerised information providers which are created and funded by rich people tend to serve the interests of rich people by violating laws. Colour me unsurprised.

    If people follow the computerised advice given them by rich people, and are then sued, will they be entitled to use the AI propaganda as an excuse? That would not greatly surprise me either.

  4. The Rev Kev

This sounds to me like all those customers will be – should I say it – prosecution futures. The City has said that they will delete data after 30 days, by which I take it to mean the answers that wonky AI chatbot supplies. Customers would be best saving any conversation on their own computer, but I would suspect that the City will claim ‘all care and no responsibility.’ The City could publish FAQs to these sorts of questions, which would let the City’s lawyers scan them before publication, but this AI chatbot promises to be a legal minefield. I know the computer phrase ‘Garbage In, Garbage Out,’ but considering the information that must have been fed to that AI chatbot, we have a whole new paradigm here where it will be ‘Information In, Garbage Out.’ Does that count as progress?

  5. SocalJimObjects

    Breaking the law? Wait till AI uses the wrong pronouns to address people, and you will see heads exploding.

    “Your honor, my client, ChatGPT (preferred pronoun: they, them) think that they should be absolved from all legal obligations and liabilities on account of having no concept of intent”.

  6. lyman alpha blob

It’s a Microsoft chatbot and it doesn’t work? Broken Windows. Where’s Bernie Kerik?!? Someone call Giuliani! They’ll clean the city right up!

    1. The Rev Kev

      ‘It’s a Microsoft chatbot and it doesn’t work?’

      Yeah, I was shocked by that as well.

  7. FreeMarketApologist

    I wonder how much that program is costing us (I pay taxes in NYC), and what it was trained on. Not the actual law, apparently.

The press release has many, many paragraphs of testimonials from various hangers-on saying how wonderful this will all be, and the noise of the hosannas makes the Mormon Tabernacle Choir sound like a mouse in a tin can. Fantasy and magical thinking, really, all of it. I’m sure they could hire a few more people to actually answer the phone and provide accurate information for far less than this program.

    1. Pat

      Apparently you have more faith in our mayor and city council than I do. Oh and name one city that could get Microsoft more publicity for their bot system.
      Don’t get me wrong, I am quite sure we are giving Microsoft far more money than this piece of trash is worth and massively cutting library hours, for one, to do it. But I am also sure we supposedly got a discount, and the mayor plus a couple of others got nice “donations” to do this.

      BTW anyone know if we are still paying the mayor in crypto?

      1. FreeMarketApologist

        I toned down my first draft, and the result apparently was too nice! I really have no faith in the mayor or city council. Let me be clearer: This looks to be another one of the mayor’s money wasting schemes that is mostly designed to bolster his contacts and job opportunities after he’s out of office. Most of the testimonials in the press release are from individuals and organizations with their lips firmly on the teat of city funding and patronage — resulting in what are essentially paid endorsements.

        1. Pat

I can’t speak for others at NC, but personally you can never be too disparaging regarding Eric Adams’ abilities and/or ethics. I haven’t decided if he is the worst and most corrupt of my decades here OR if he just likes letting his incompetence and corruption fly openly.

  8. Young

    “You have to actually go out and start to test it with customers to find these kind of scenarios,”

    It is in the Microsoft corporate bylaws.

    1. Skip Intro

That’s a great response that kind of encapsulates the entire BS scam. They say they have to test it to find these scenarios, but they fundamentally can’t do that without an understanding of the material, meaning human experts checking results, which entirely defeats the purpose of the chatbot.

Delegating your decision-making, ranking, assessment, strategising, analysis, or any other form of reasoning to a chatbot becomes the functional equivalent of phoning a psychic for advice.

      LLMentalist: how chat-based Large Language Models replicate the mechanisms of a psychic’s con

      1. Young

I am wondering if LLMs are already trained (programmed) to mimic psychic behavior, i.e., to decode the user’s attributes like race, education, social class, etc., from the input method (voice, phrases used, idioms) to tune the desired response.

        Same query, different people, different response.

  9. ChrisPacific

    A great example of the fact that even if you can be right 99% of the time, if you are convincingly and authoritatively wrong the other 1% of the time it can cause enough problems that it would actually have been better not to answer at all.

    I’ve seen variants that add a fine tune training layer based on the relevant laws and regulations and are trained to link back to the relevant source page they used to give the answer. This means that users can quickly and easily check the answer themselves against official sources for accuracy. Being AIs, sometimes they give an incorrect link or leave it out entirely, and those cases should always be treated as suspect or potentially fabricated. It seems like this is 100% of the answers in this case.

  10. Senator-Elect

    Someone help me, please. How can a government put out false information like this and get away with it? Is there no law preventing this? Do citizens have to sue?
    On top of it all, the city official is defending the wrong answers! Every NY municipal worker should be furious; they are trying to do their job properly while this “pilot program” puts out misinfo wholesale. If a human worker gave out wrong information repeatedly, they would be reprimanded, retrained, reassigned and eventually fired. Why is “AI” given a free pass?

  11. Luke

Perfect solution: legally sanction whoever is in charge of this flawed (unintentionally or not) program precisely as one would a live human in a position of responsibility who gave grossly incorrect basic (but critical) information. When a human would be in a position to be fired/fined/imprisoned, it would be time to forever close down use of that program and anything else made by whoever programmed/approved it, with the latter person(s) picking up the fines and prison time.

    Authority can be delegated, not responsibility. Too many people currently running around loose richly deserve being reminded of this in a way they will NEVER forget.

  12. WG

I worked as a reporter covering a major city not too long ago, and sadly most spokespeople for various departments might as well have been bots. Incapable of talking on the phone, they required all questions by email and delivered all answers that way. That only resulted in more questions and almost nothing humanly quotable. In contrast, the top private PR people who did work for the biggest firms always made sure to call and get their message across.
