More

    A Call for the “Right to Repair” in Artificial Intelligence

    Rumman Chowdhury is a trailblazer in the field of ethical and responsible AI development. As the CEO and co-founder of Humane Intelligence, a nonprofit focused on the safety and ethics of generative AI, she has been at the forefront of efforts to embed human values and oversight into advanced AI systems. Her work has earned her recognition as one of TIME’s “100 Most Influential People in AI” in 2023.

    Transcript Of the Speech…

    I want to tell you a story 

    about artificial intelligence and farmers. 

    Now, what a strange combination, right? 

    Two topics could not sound more different from each other. 

    But did you know that modern farming actually involves a lot of technology? 

    So computer vision is used to predict crop yields. 

    And artificial intelligence is used to find, 

    identify and get rid of insects. 

    Predictive analytics helps figure out extreme weather conditions 

    like drought or hurricanes.

    But this technology is also alienating to farmers. 

    And this all came to a head in 2017 

    with the tractor company John Deere when they introduced smart tractors. 

    So before then, if a farmer’s tractor broke, 

    they could just repair it themselves or take it to a mechanic. 

    Well, the company actually made it illegal 

    for farmers to fix their own equipment. 

    You had to use a licensed technician 

    and farmers would have to wait for weeks 

    while their crops rot and pests took over.

    So they took matters into their own hands. 

    Some of them learned to program, 

    and they worked with hackers to create patches to repair their own systems. 

    In 2022, 

    at one of the largest hacker conferences in the world, DEFCON, 

    a hacker named Sick Codes and his team 

    showed everybody how to break into a John Deere tractor, 

    showing that, first of all, the technology was vulnerable, 

    but also that you can and should own your own equipment. 

    To be clear, this is illegal, 

    but there are people trying to change that.

    Now that movement is called the “right to repair.” 

    The right to repair goes something like this. 

    If you own a piece of technology, 

    it could be a tractor, a smart toothbrush, 

    a washing machine, 

    you should have the right to repair it if it breaks.

    So why am I telling you this story? 

    The right to repair needs to extend to artificial intelligence. 

    Now it seems like every week 

    there is a new and mind-blowing innovation in AI

    But did you know that public confidence is actually declining? 

    A recent Pew poll showed that more Americans are concerned 

    than they are excited about the technology. 

    This is echoed throughout the world. 

    The World Risk Poll shows 

    that respondents from Central and South America and Africa 

    all said that they felt AI would lead to more harm than good for their people.

    As a social scientist and an AI developer, 

    this frustrates me. 

    I’m a tech optimist 

    because I truly believe this technology can lead to good. 

    So what’s the disconnect? 

    Well, I’ve talked to hundreds of people over the last few years. 

    Architects and scientists, journalists and photographers, 

    ride-share drivers and doctors, 

    and they all say the same thing. 

    People feel like an afterthought.

    They all know that their data is harvested often without their permission 

    to create these sophisticated systems. 

    They know that these systems are determining their life opportunities. 

    They also know that nobody ever bothered to ask them 

    how the system should be built, 

    and they certainly have no idea where to go if something goes wrong.

    We may not own AI systems, 

    but they are slowly dominating our lives. 

    We need a better feedback loop 

    between the people who are making these systems, 

    and the people who are best determined to tell us 

    how these AI systems should interact in their world.

    One step towards this is a process called red teaming. 

    Now, red teaming is a practice that was started in the military, 

    and it’s used in cybersecurity. 

    In a traditional red-teaming exercise, 

    external experts are brought in to break into a system, 

    sort of like what Sick Codes did with tractors, but legal. 

    So red teaming acts as a way of testing your defenses 

    and when you can figure out where something will go wrong, 

    you can figure out how to fix it.

    But when AI systems go rogue, 

    it’s more than just a hacker breaking in. 

    The model could malfunction or misrepresent reality. 

    So, for example, not too long ago, 

    we saw an AI system attempting diversity 

    by showing historically inaccurate photos. 

    Anybody with a basic understanding of Western history 

    could have told you that neither the Founding Fathers 

    nor Nazi-era soldiers would have been Black.

    In that case, who qualifies as an expert? 

    You. 

    I’m working with thousands of people all around the world 

    on large and small red-teaming exercises, 

    and through them we found and fixed mistakes in AI models. 

    We also work with some of the biggest tech companies in the world: 

    OpenAI, Meta, Anthropic, Google. 

    And through this, we’ve made models work better for more people.

    Here’s a bit of what we’ve learned. 

    We partnered with the Royal Society in London to do a scientific, 

    mis- and disinformation event with disease scientists. 

    What these scientists found 

    is that AI models actually had a lot of protections 

    against COVID misinformation. 

    But for other diseases like measles, mumps and the flu, 

    the same protections didn’t apply. 

    We reported these changes, 

    they’re fixed and now we are all better protected 

    against scientific mis- and disinformation.

    We did a really similar exercise with architects at Autodesk University, 

    and we asked them a simple question: 

    Will AI put them out of a job? 

    Or more specifically, 

    could they imagine a modern AI system 

    that would be able to design the specs of a modern art museum? 

    The answer, resoundingly, was no.

    Here’s why, architects do more than just draw buildings. 

    They have to understand physics and material science. 

    They have to know building codes, 

    and they have to do that 

    while making something that evokes emotion. 

    What the architects wanted was an AI system 

    that interacted with them, that would give them feedback, 

    maybe proactively offer design recommendations. 

    And today’s AI systems, not quite there yet. 

    But those are technical problems. 

    People building AI are incredibly smart, 

    and maybe they could solve all that in a few years.

    But that wasn’t their biggest concern. 

    Their biggest concern was trust. 

    Now architects are liable if something goes wrong with their buildings. 

    They could lose their license, 

    they could be fined, they could even go to prison. 

    And failures can happen in a million different ways. 

    For example, exit doors that open the wrong way, 

    leading to people being crushed in an evacuation crisis, 

    or broken glass raining down onto pedestrians in the street 

    because the wind blows too hard and shatters windows. 

    So why would an architect trust an AI system with their job, 

    with their literal freedom, 

    if they couldn’t go in and fix a mistake if they found it?

    So we need to figure out these problems today, and I’ll tell you why. 

    The next wave of artificial intelligence systems, called agentic AI, 

    is a true tipping point 

    between whether or not we retain human agency, 

    or whether or not AI systems make our decisions for us.

    Imagine an AI agent as kind of like a personal assistant. 

    So, for example, a medical agent might determine 

    whether or not your family needs doctor’s appointments, 

    it might refill prescription medications, or in case of an emergency, 

    send medical records to the hospital. 

    But AI agents can’t and won’t exist 

    unless we have a true right to repair. 

    What parent would trust their child’s health to an AI system 

    unless you could run some basic diagnostics? 

    What professional would trust an AI system with job decisions, 

    unless you could retrain it the way you might a junior employee?

    Now, a right to repair might look something like this. 

    You could have a diagnostics board 

    where you run basic tests that you design, 

    and if something’s wrong, you could report it to the company 

    and hear back when it’s fixed. 

    Or you could work with third parties like ethical hackers 

    who make patches for systems like we do today. 

    You can download them and use them to improve your system 

    the way you want it to be improved. 

    Or you could be like these intrepid farmers and learn to program 

    and fine-tune your own systems.

    We won’t achieve the promised benefits of artificial intelligence 

    unless we figure out how to bring people into the development process. 

    I’ve dedicated my career to responsible AI, 

    and in that field we ask the question, 

    what can companies build to ensure that people trust AI?

    Now, through these red-teaming exercises, and by talking to you, 

    I’ve come to realize that we’ve been asking the wrong question all along. 

    What we should have been asking is what tools can we build 

    so people can make AI beneficial for them?

    Technologists can’t do it alone. 

    We can only do it with you.

    Chowdhury’s powerful speech highlights the crucial need to bridge the gap between AI developers and the people whose lives are impacted by these systems. By advocating for a “right to repair” that gives individuals the ability to understand, modify, and hold AI accountable, she offers a pathway towards building trust and ensuring these technologies truly benefit humanity. As AI continues its rapid advance, heeding voices like Chowdhury’s will be essential to realizing the immense potential of AI while mitigating its risks and unintended consequences.

    About Rumman Chowdhury: In addition to her leadership at Humane Intelligence, Rumman Chowdhury serves as the United States Science Envoy for Artificial Intelligence and a Responsible AI Fellow at the Berkman Klein Center for Internet & Society at Harvard. Her insights on ethical AI have been featured in publications such as The Atlantic, Forbes, and The Information.


    Copyright©dhaka.ai

    tags: Artificial Intelligence, Ai, Dhaka Ai, Ai In Bangladesh, Ai In Dhaka, Google, Claude, Future of AIRumman Chowdhury

    Latest articles

    spot_imgspot_img

    Related articles

    spot_imgspot_img