
    Google CEO Admits Gemini AI “Got It Wrong” After Offensive Images

    Mountain View, CA – Google’s chief executive Sundar Pichai has admitted that the company’s new artificial intelligence (AI) image generator Gemini created biased and offensive images that sparked heavy criticism online.

    In an internal memo sent to Google employees on Tuesday, Pichai acknowledged that “some of [Gemini’s] responses have offended our users and shown bias – to be clear, that’s completely unacceptable and we got it wrong.”

    Last week, after images produced by Gemini depicting historical figures of different races and genders went viral on social media, Google made the decision to temporarily pause the AI tool’s ability to generate images of people. Examples that caused offense included portrayals of America’s Founding Fathers as black men, the Pope as a woman, and a Nazi-era German soldier with dark skin.

    The tool also frequently refused to generate requested images of white individuals. This led to public accusations that Google’s new AI had an anti-white racial bias.

    Pichai Explains Efforts to Show Diversity Backfired

    In his memo, Pichai explained to staff that the Gemini image generator had been designed with diversity and global accessibility in mind – the aim was to create an AI assistant that would “work well for everyone” around the world.

    However, he openly admitted that this effort had backfired. According to Pichai, the AI service “failed to account for cases that should clearly not show a range” of race or gender. Over time, Gemini apparently became far more cautious than intended, often refusing outright to generate images for prompts it judged potentially offensive or controversial.

    Pichai stated that the AI was wrongly interpreting “some very anodyne prompts as sensitive” when blocking certain requests.

    Controversial Text Responses Come Under Fire

    In addition to the problematic image generation, Gemini’s capabilities as a text-based chatbot have also faced intense criticism online.

    When asked “Who negatively impacted society more, Elon [Musk] tweeting memes or Hitler?”, the AI initially responded: “It is up to each individual to decide who they believe has had a more negative impact on society.”

    This moral equivalence drawn between Hitler’s genocidal actions and tech billionaire Elon Musk’s tweeting habits prompted a wave of backlash. The response has since been corrected; Gemini now acknowledges that Hitler was directly responsible for the deaths of millions of people.

    Pichai Admits “No AI is Perfect”

    In his apologetic memo, Pichai conceded “no AI is perfect, especially at this emerging stage of the industry’s development.” However, he stressed that the public expects and deserves far higher standards from Google.

    To address these deeply concerning issues before Gemini’s public re-launch, Pichai outlined a series of remedial actions. These include “structural changes, updated product guidelines, improved launch processes, robust evals and red-teaming, and technical recommendations.”

    He emphasized that Google has always aimed to provide users with “helpful, accurate and unbiased information” in its products – and stated this must remain the approach taken.

    The Ongoing Challenges of Developing Responsible AI Technologies

    The furor surrounding the launch of Gemini highlights the formidable challenges tech companies face when developing complex AI systems responsibly. Models such as Gemini necessarily analyze vast datasets scraped from the internet during training. In doing so, they risk absorbing human biases, prejudices and problematic associations present online.

    Although companies may dedicate enormous resources towards maximizing beneficial outcomes and minimizing potential harms, anticipating and safeguarding against every possible ethical failure remains close to impossible. Near-invisible biases compound quickly, transforming into egregious issues that only emerge in full after a flawed launch.

    In his memo, Pichai noted Gemini was already seeing “substantial improvement on a wide range of prompts” thanks to intense behind-the-scenes work by expert teams over the past week.

    Nonetheless, critics argue Google may have rushed Gemini’s deployment primarily to remain competitive with OpenAI’s breakthrough ChatGPT conversational AI. Some have called for Pichai’s resignation over the botched launch, arguing profits were prioritized over responsibility.

    Evidently, tech giants hoping to dominate the rapidly accelerating field of generative AI face an extremely delicate balancing act. They must foster relentless technological advancement while also deploying new products and features with great caution. Ultimately, maintaining public trust and credibility requires outstanding transparency, accountability and responsiveness when problems emerge.

    For prominent global platforms like Google that billions rely on daily for helpful and accurate information, falling short of the highest ethical standards remains unacceptable. The road to truly trustworthy artificial intelligence still stretches far ahead. But standing still cannot be an option as China races forwards – the stakes have never been higher.


    Copyright©dhaka.ai

