As artificial intelligence (AI) continues to reshape industries, economies, and societies, the global race to regulate this transformative technology has intensified. In June 2025, significant legislative developments in the United States, Europe, China, and beyond have underscored the delicate balance between fostering AI innovation and mitigating its risks. From state-level initiatives to federal moratorium proposals and international frameworks, the past month has seen a flurry of activity that could define the future of AI governance. This article explores the key AI regulation and legislation developments as of June 27, 2025, highlighting the debates, policies, and implications for stakeholders worldwide.

U.S. Legislative Landscape: A Federal vs. State Showdown
In the United States, AI regulation has become a contentious issue, with a proposed federal moratorium on state-level AI laws dominating headlines. On June 27, 2025, reports surfaced about a Republican-backed provision in the U.S. Senate’s “One Big Beautiful Bill” that would impose a 10-year ban on states enforcing or passing new AI regulations. The measure, championed by Senator Ted Cruz (R-TX), aims to prevent a “labyrinth of regulation” by centralizing AI oversight at the federal level; its backers argue that disparate state laws could stifle innovation. The provision, which passed a key procedural hurdle on June 22, ties compliance to federal broadband funding, potentially withholding billions from states that enact AI laws. Utah, for example, could lose an estimated $5 million annually, and broader access to the $42 billion Broadband Equity, Access, and Deployment (BEAD) program is also at risk.
This proposal has sparked fierce opposition from a diverse coalition, including state lawmakers, attorneys general, and even some conservative figures such as Representative Marjorie Taylor Greene (R-GA) and the House Freedom Caucus. Critics argue that the moratorium undermines states’ rights and consumer protections, particularly in areas like deepfake regulation, child safety, and workplace discrimination. Tennessee’s Attorney General, for instance, warned that the provision could nullify “common-sense protections” against AI-generated child sexual abuse material, while Utah lawmakers pointed to their recent legislation on AI use in police reports. The NAACP and the ACLU have also voiced concerns, with posts on X highlighting the risk of unchecked discriminatory AI practices.
Meanwhile, states are forging ahead with their own AI laws. On June 2, the Texas legislature passed the Texas Responsible Artificial Intelligence Governance Act (TRAIGA), which Governor Greg Abbott signed into law on June 27. TRAIGA establishes a state AI council to oversee ethical AI deployment and sets compliance requirements for companies, making Texas the fourth state, after Colorado, Utah, and California, to enact AI-specific legislation. California, a leader in AI policy, advanced several bills in June, including SB 11, which extends right-of-publicity laws to AI-generated content, and SB 243, which would require AI platforms to remind minors that chatbots are not human. These bills, sponsored by Senators Angelique Ashby and Steve Padilla, respectively, are under review in the Assembly, with hearings scheduled for early July.

However, the federal moratorium threatens to preempt these state efforts. A June 9 letter from the U.S. Chamber of Commerce supported the moratorium, citing the need for a unified national approach, while Senators Maria Cantwell (D-WA) and Marsha Blackburn (R-TN) opposed it, arguing it leaves consumers vulnerable to AI-related fraud and harm. The debate reflects a broader tension: tech giants like Google and OpenAI back federal preemption to streamline compliance, while state officials and advocacy groups like the AI Now Institute warn of a “regulatory vacuum” that could exacerbate AI’s societal risks.

International Developments: Europe and China Set the Pace
Globally, AI regulation is advancing at a rapid clip. In Europe, the EU AI Act remains a cornerstone of global AI governance, with its next wave of obligations, covering general-purpose AI models, taking effect in August 2025. The Act categorizes AI systems by risk level, imposing strict requirements on high-risk applications such as those in healthcare and criminal justice. On June 4, the European Commission held a workshop with 150 experts to finalize guidelines under the Digital Services Act (DSA) for protecting minors online, including on AI-driven platforms. The guidelines, open for public consultation until June 15, aim to ensure privacy and safety and are expected to be adopted later in the summer of 2025. Additionally, the EU is developing a privacy-preserving age verification system, with a beta version available on GitHub, to complement the EU Digital Identity Wallet expected by 2026.
In China, AI regulation is gaining momentum under a state-driven approach. On June 17, Legal Daily reported that National People’s Congress (NPC) representatives had proposed drafting a comprehensive AI law to create a “scientific legal framework.” The proposal builds on the Plan for Building the Rule of Law in China (2020–2025), issued in 2021, and a July 2024 CPC resolution to strengthen generative AI governance. Shanghai’s Cyberspace Administration also announced penalties for non-compliant generative AI services under the “Bright Sword Huangpu · 2025” campaign, signaling robust enforcement. Unlike in the U.S., where federal inaction has spurred state-level laws, China’s centralized approach prioritizes both innovation and control, with an eye on global AI dominance.

Ethical and Environmental Concerns
Beyond legislative mechanics, June 2025 has spotlighted ethical and environmental challenges in AI governance. A New York Times report on June 25 warned that unrestricted AI development could add 1 billion tons of greenhouse gas emissions in the U.S. over the next decade, roughly equivalent to Japan’s annual emissions. Critics argue the proposed federal moratorium could exacerbate this by blocking state-level regulation of energy-intensive AI data centers. Researchers such as Gianluca Guidi of Harvard’s T.H. Chan School of Public Health emphasize that the environmental impact depends on clean energy adoption, which state policies could incentivize.
Ethically, the debate over AI’s societal impact is intensifying. A UN report highlighted AI’s potential misuse in terrorism, including cyberattacks and deepfake propaganda, urging swift regulatory action. In the U.S., Illinois passed a bill on June 24 prohibiting AI chatbots from acting as mental health therapists, reflecting concerns about AI’s role in sensitive human interactions. Similarly, California’s SB 243 addresses the psychological risks of AI chatbots for minors. These measures underscore a growing recognition that AI’s cognitive capabilities—pattern recognition, language generation, and decision-making—require guardrails to prevent harm.

Industry and Public Sentiment
The tech industry is divided on regulation. Companies like OpenAI and Google support federal preemption to avoid a patchwork of state laws, as noted in a Reuters report on June 25. Conversely, advocacy groups like Americans for Responsible Innovation warn that the moratorium’s broad language could dismantle protections against deepfakes and algorithmic bias. On X, sentiment is polarized: posts from users like @Tech_Oversight and @yassaminansari decry the moratorium as a “gift to tech companies,” while others argue it’s necessary to compete with China’s rapid AI advancements.
Public opinion, according to an April 2025 Pew Research Center study, leans toward caution, with many Americans more concerned about AI’s risks than its benefits. That wariness contrasts with industry leaders such as OpenAI’s Sam Altman, who, speaking in Berlin on February 7, predicted a surge in AI’s utility over the next two years. The tension between innovation and oversight is palpable, with state lawmakers like Utah’s Doug Fiefia advocating collaborative federal-state policies to “get AI policy right.”

Looking Ahead
As June 2025 closes, the future of AI regulation hangs in the balance. The U.S. Senate’s vote on the “One Big Beautiful Bill” by July 4 will determine whether the 10-year moratorium becomes law, potentially reshaping state-level efforts. In Europe, the EU AI Act’s implementation looms, while China’s legislative push signals a long-term commitment to AI governance. The global stakes are high: unchecked AI could amplify risks like misinformation, discrimination, and environmental harm, but overly restrictive policies might cede technological leadership to competitors.
For stakeholders—policymakers, companies, and citizens—the challenge is clear: craft regulations that harness AI’s potential while safeguarding society. As the Texas AI Council and California’s legislative efforts demonstrate, states are stepping up where federal action lags. Yet, the proposed moratorium underscores a fundamental question: should AI governance be centralized or decentralized? The answer, still unfolding, will shape the next decade of technological progress.
Copyright: Dhaka ai