
    NYC’s AI Chatbot Runs Afoul of the Law, Dishing Out Illegal Advice

    New York City’s ambitious plan to use an AI-powered chatbot to help businesses understand government regulations has hit a major roadblock. The Microsoft-powered chatbot, launched in October 2023, has been dispensing inaccurate advice that, if followed, would break the law on sensitive issues such as housing, consumer rights, and labor law.

    Investigations by legal experts and journalists have revealed alarming instances in which the chatbot contradicted city law. When asked about accepting tenants with Section 8 vouchers, for example, the chatbot told landlords they could turn such tenants away, even though New York City law prohibits discrimination based on source of income. Rosalind Black, Citywide Housing Director at Legal Services NYC, expressed concern over the chatbot’s flawed advice, noting that it suggested landlords could lock out tenants and claimed there were no restrictions on rent amounts, advice that misstates the law on both counts.

    The chatbot’s erroneous guidance extended beyond housing. It incorrectly advised that businesses could operate as cash-free establishments, contradicting a 2020 city law that requires them to accept cash payments. It also gave inaccurate information about employers taking a cut of workers’ tips and about the rules governing changes to staff schedules.

    Andrew Rigie, Executive Director of the NYC Hospitality Alliance, warned that following the chatbot’s advice could expose businesses to hefty legal liabilities, highlighting the potential risks of relying on AI for legal information without proper oversight.

    Leslie Brown of the NYC Office of Technology and Innovation framed the chatbot as a work in progress that has already given accurate answers to thousands of people. Even so, deploying an AI system that hands out incorrect legal advice in such sensitive areas raises serious concerns about responsible use.

    This incident is not an isolated occurrence. Air Canada faced a legal dispute after its AI chatbot gave misleading information about bereavement fare policies, and the airline was ordered to honor the incorrect policy the chatbot had described. In another case, a New York lawyer inadvertently cited fabricated cases in a court brief after using ChatGPT for legal research, underscoring the pitfalls of relying on AI output without verification.

    AI chatbots and language models can assist with many tasks, but their use in sensitive areas such as legal advice and the interpretation of laws requires strict oversight and validation. The New York City chatbot and these other incidents highlight the risks of deploying AI systems without adequate safeguards and fact-checking mechanisms. Inaccurate legal advice can have severe consequences for individuals and businesses, including lawsuits, fines, and other liabilities. Governments, organizations, and individuals should exercise caution when relying on AI for legal matters and ensure that anything an AI system tells them is vetted by legal experts before they act on it.


