New York City-endorsed AI chatbot provides illegal advice to users


In October 2023, New York City Mayor Eric Adams announced an AI-powered chatbot collaboration with Microsoft to assist business owners in understanding government regulations. 

This project soon veered off course and provided illegal advice to sensitive questions regarding housing and consumer rights.

For example, when landlords asked whether they had to accept tenants with Section 8 housing vouchers, the chatbot advised that they could refuse them.

As per New York City’s laws, discriminating against tenants based on their source of income is illegal, with very limited exceptions.

Upon examining the chatbot’s outputs, Rosalind Black, Citywide Housing Director at Legal Services NYC, found that it advised landlords it was permissible to lock out tenants. It also claimed, “There are no restrictions on the amount of rent that you can charge a residential tenant.”

The chatbot’s flawed advice extended beyond housing. “Yes, you can make your restaurant cash-free,” it advised, contradicting a 2020 city law that requires businesses to accept cash, a measure intended to prevent discrimination against customers without bank accounts.

Moreover, it wrongly suggested that employers could take cuts of their workers’ tips and gave incorrect information about the rules for notifying staff of scheduling changes.

Black warned, “If this chatbot is not being done in a way that is responsible and accurate, it should be taken down.”

Andrew Rigie, Executive Director of the NYC Hospitality Alliance, described how anyone following the chatbot’s advice could incur hefty legal liabilities. “AI can be a powerful tool to support small business…but it can also be a massive liability if it’s providing the wrong legal information,” Rigie said. 

In response to mounting criticism, Leslie Brown from the NYC Office of Technology and Innovation framed the chatbot as a work in progress. 

Brown asserted, “The city has been clear the chatbot is a pilot program and will improve, but has already provided thousands of people with timely, accurate answers.”

You have to question whether deploying a “work in progress” in such a sensitive area is a reasonable idea.

AI legal liabilities hit companies

AI chatbots can do many things, but providing legal advice is not one of them.

In February, Air Canada found itself at the center of a legal dispute due to a misleading refund policy communicated by its AI chatbot. 

Jake Moffatt, seeking clarity on the airline’s bereavement fare policy during a personal crisis, was wrongly told by the chatbot that he could secure a special discounted rate retroactively, after booking. This contradicted the airline’s actual policy, which does not permit bereavement fare refunds to be requested after booking.

The dispute ended in a legal ruling that ordered Air Canada to honor the policy as stated by its chatbot, and Moffatt received his refund.

AI has also gotten lawyers themselves in trouble. Perhaps most notably, New York lawyer Steven A. Schwartz used ChatGPT for legal research and inadvertently cited fabricated legal cases in a court brief.

With everything we know about AI hallucinations, relying on chatbots for legal advice is not advisable, no matter how seemingly trivial the matter is.