
AI Safety Bill Veto Sparks Debate

This week, much of the AI and cybersecurity community praised California Governor Gavin Newsom’s decision to veto the hotly debated Senate Bill 1047. The bill, which aimed to introduce strict safety regulations for AI developers, passed the California Legislature in late August but was blocked by the governor’s veto.

Senate Bill 1047 proposed comprehensive safety testing requirements for AI models trained at a cost exceeding $100 million and using more than 10^26 floating-point operations of computing power. The bill went further by proposing civil liability for the developers of such models if they contributed to mass casualty events or caused damages exceeding $500 million. One of its more contentious provisions required public safety measures, including a mandatory “kill switch” to disable rogue AI models deemed hazardous.

Though Senate Bill 1047 aligned with growing calls for more regulation in the AI industry, it received backlash for being seen as premature and overly burdensome. Critics argued that the bill placed a disproportionate focus on future, theoretical threats rather than addressing the immediate challenges AI developers currently face. The bill’s broad scope raised concerns within the AI sector, particularly among startups and open-source model developers who feared that such stringent requirements would stifle innovation and create insurmountable hurdles for smaller players in the industry.

In his veto message, Governor Newsom acknowledged the need for AI regulation but pointed out the fundamental flaws in the bill’s approach. He expressed concern that by targeting only the largest and most expensive AI models, the bill could give a misleading impression of AI safety, while potentially overlooking the risks posed by smaller, more specialized models. Newsom emphasized that the regulatory framework must be adaptable to keep pace with the rapidly evolving AI landscape, rather than being rigidly tied to outdated concepts based solely on size and cost.

Many cybersecurity and AI experts echoed the governor’s reservations about SB 1047. While recognizing the need for safety protocols in AI development, they agreed that the bill’s focus was misdirected. Jim Liddle, Chief Innovation Officer at Nasuni, noted that although the bill had good intentions, it failed to address the more immediate risks that AI currently poses. “Instead of tackling future threats,” Liddle said, “we should be focusing on existing issues like bias in training data, ensuring privacy, and increasing transparency in the development of AI models.” These, he argued, are challenges that need urgent solutions today, not years from now.

David Brauchler, Technical Director at NCC Group, agreed with Liddle, pointing out that the bill’s emphasis on the size and cost of AI models as risk indicators is flawed. According to Brauchler, smaller AI models can be just as dangerous as their larger counterparts, if not more so, depending on how they are integrated into critical systems. “The real danger,” Brauchler explained, “often comes from poor implementation, such as integrating an underperforming model into a self-driving car or other high-stakes environment, rather than from the AI model itself.”

Another key concern among experts is the importance of academic research and governmental collaboration in crafting effective AI safety regulations. Manasi Vartak, Chief AI Architect at Cloudera, emphasized that deeper research into AI safety requirements is essential for establishing safeguards that balance innovation and security. “Collaborative partnerships between government and academic institutions will be critical in generating the knowledge needed to create well-informed, adaptable safety protocols,” Vartak said, highlighting the complexity of addressing AI’s rapidly advancing capabilities.

While Senate Bill 1047 may have been vetoed, the debate over AI safety regulations is far from over. Industry experts and lawmakers alike agree that future legislation must strike a balance between safeguarding public safety and promoting innovation. The key lies in creating regulations that are flexible and adaptable, capable of evolving alongside AI technology to mitigate real-world risks without stifling the progress that drives the AI industry forward.

Though Senate Bill 1047 did not become law, its veto underscores the need for smarter, more targeted legislation that directly addresses the real-world risks posed by artificial intelligence without stifling innovation in this rapidly evolving field. Crafting such frameworks remains a pressing task for policymakers, developers, and industry leaders as the technology continues to advance.

A Call for Smarter, Evidence-Based AI Regulation

While Senate Bill 1047 aimed to impose much-needed safety protocols, experts agree that future AI regulation must be more evidence-based and adaptable to the diverse applications of AI technology. James White, President of AI security company CalypsoAI, emphasized that Governor Newsom’s veto was an important step but not the end of the story. “Governor Newsom’s veto was wise, but it must lead to targeted action,” White said. “Now, it’s crucial to develop smarter, more flexible regulations that account for the size and scope of AI models while focusing on real-world risks.”

White’s comments underscore a growing consensus in the AI and cybersecurity industries: regulation should not be one-size-fits-all. The next generation of AI laws will need to distinguish between different stages of the AI lifecycle, particularly training and inference, and address the distinct safety challenges each poses. For instance, AI models used in healthcare systems carry vastly different risks from those used in autonomous vehicles or financial fraud detection. A tailored regulatory approach will better mitigate the risks specific to each application without imposing unnecessary burdens on developers.

The Technical Pitfalls and Potential for Abuse of Senate Bill 1047: An In-Depth Look

Senate Bill 1047 has sparked significant debate within the AI and cybersecurity sectors, primarily due to the technical shortcomings of its approach. While the bill was designed with the best of intentions, regulating large-scale AI systems to promote public safety, it opens the door to a range of unintended consequences. By focusing on large-scale models, SB 1047 could stifle innovation, create opportunities for regulatory manipulation, and ultimately fail to address some of the most pressing AI safety concerns.

Overemphasis on Computational Power as a Risk Metric

A key flaw in Senate Bill 1047 lies in its heavy reliance on computational power as a primary metric for AI risk. The bill specifically targets models that utilize over 10^26 floating-point operations and cost more than $100 million to develop, assuming that larger models pose greater risks to society. However, this assumption does not align with the realities of AI development.
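Before looking at why this metric misses the mark, it helps to see what the threshold actually measures. Below is a minimal sketch, assuming the widely used rule of thumb of roughly 6 FLOPs per parameter per training token; the parameter counts, token counts, and costs are hypothetical and are not taken from the bill or from any real training run.

```python
# Hypothetical illustration: estimating whether a training run would cross the
# thresholds described above, using the rough ~6 * parameters * tokens
# approximation for transformer training compute. Not an official compliance test.

FLOP_THRESHOLD = 1e26             # compute threshold named in the bill
COST_THRESHOLD_USD = 100_000_000  # training-cost threshold named in the bill


def estimated_training_flops(parameters: float, training_tokens: float) -> float:
    """Rough training-compute estimate: ~6 FLOPs per parameter per training token."""
    return 6.0 * parameters * training_tokens


def would_be_covered(parameters: float, training_tokens: float, cost_usd: float) -> bool:
    """Simplified reading: covered only if both the compute and cost thresholds are exceeded."""
    return (estimated_training_flops(parameters, training_tokens) > FLOP_THRESHOLD
            and cost_usd > COST_THRESHOLD_USD)


# Hypothetical examples (not real models or real costs):
print(would_be_covered(70e9, 15e12, 80_000_000))   # ~6.3e24 FLOPs -> False
print(would_be_covered(2e12, 60e12, 500_000_000))  # ~7.2e26 FLOPs -> True
```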

In practice, the size of an AI model does not necessarily correlate with its potential for harm. Smaller, specialized models can be just as dangerous, if not more so, than their larger counterparts. For example, a small-scale AI designed to manipulate network traffic or generate realistic deepfakes can have far-reaching consequences, despite not requiring the massive computational resources outlined in the bill. By focusing primarily on large-scale models, SB 1047 may fail to account for the real and present dangers posed by more compact but equally potent AI systems.

Additionally, this overemphasis on computational power could encourage developers to underreport the true resources used during model training. In an effort to avoid regulatory scrutiny, companies could distribute their computing power across multiple regions or clusters, effectively sidestepping the bill’s requirements. This loophole resembles tactics used by corporations to avoid tax liabilities and could undermine the bill’s effectiveness, allowing potentially dangerous AI models to evade regulation.
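A small, purely hypothetical sketch of that loophole: if compute were tallied per cluster rather than per training run, every slice could sit below the 10^26 line even though the run as a whole clears it, which is why any workable rule would have to aggregate across the entire run.

```python
# Hypothetical per-cluster FLOP counts for a single training run split across
# several regions (invented numbers, not reporting data from any company).

FLOP_THRESHOLD = 1e26
per_cluster_flops = [4e25, 3.5e25, 3e25, 2.5e25]

# Checking each cluster in isolation misses the threshold entirely...
per_cluster_view = any(f > FLOP_THRESHOLD for f in per_cluster_flops)   # False
# ...while summing over the whole run shows it is comfortably above it.
whole_run_view = sum(per_cluster_flops) > FLOP_THRESHOLD                # True (1.3e26)

print(per_cluster_view, whole_run_view)
```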

Misuse of the “Kill Switch” Mechanism

The inclusion of a mandatory “kill switch” in Senate Bill 1047 raises several technical and ethical concerns. The purpose of the kill switch is to allow for the immediate shutdown of AI systems deemed hazardous, theoretically preventing rogue AI from causing catastrophic harm. While this might seem like a reasonable safeguard, implementing such a mechanism in practice is fraught with challenges.

Firstly, AI systems, particularly large models like GPT-4 or LLaMA, often operate across distributed environments with numerous servers handling different components of the model’s functionality. Shutting down such systems in real-time without causing collateral issues, such as data corruption or disruption of critical processes, requires a level of coordination that is technically difficult to achieve. Moreover, developers could design their systems in ways that ensure partial functionality even after the kill switch is activated, rendering the safeguard ineffective.

Secondly, the kill switch itself could become a point of vulnerability. Hackers or malicious insiders could exploit this mechanism to disable essential AI systems, potentially causing significant damage to autonomous systems or critical infrastructure. Instead of preventing harm, the kill switch could introduce new security risks, allowing bad actors to leverage it for malicious purposes.

Additionally, the kill switch requirement disproportionately impacts open-source AI developers. Open-source models are frequently modified and deployed by various users and institutions. Enforcing a kill switch across this decentralized network is nearly impossible, and attempting to do so could discourage developers from contributing to the open-source community, stifling innovation in AI development.
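As a rough illustration of the coordination and enforcement problems described above, the sketch below shows the simplest possible version of a shutdown signal: a shared flag that each serving node polls between requests. This is a hypothetical pattern, not anything specified in the bill; even in this toy single-process setup, the switch only works if every node observes and honors the flag, and real deployments spread those nodes across operators, forks, and jurisdictions.

```python
# Toy sketch of a "kill switch" as a shared shutdown flag polled by serving nodes.
# Illustrative only: real systems span many machines and operators, so there is no
# single in-process flag that can be flipped for all of them at once.

import threading
import time

shutdown_event = threading.Event()  # stands in for a remote shutdown signal


def serving_node(node_id: int) -> None:
    """One inference worker: keeps serving until it observes the shutdown signal."""
    while not shutdown_event.is_set():
        # ... handle an inference request here ...
        time.sleep(0.1)
    print(f"node {node_id}: drained and stopped")


workers = [threading.Thread(target=serving_node, args=(i,)) for i in range(4)]
for w in workers:
    w.start()

time.sleep(0.5)
shutdown_event.set()   # the "switch": effective only if every node checks and honors it
for w in workers:
    w.join()
```

A node that never polls the flag, or a fork of an open-source model run by a third party, simply keeps serving, which is exactly the enforcement gap described above.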

Chilling Effects on Innovation and Competition

Another significant concern with SB 1047 is its potential to create a regulatory environment that favors established tech giants over smaller companies and startups. Compliance with the bill’s safety protocols, including third-party audits and certification processes, would likely impose significant financial burdens on smaller developers. Large corporations with extensive resources can absorb these costs, but for startups and open-source developers, the expense could be prohibitive.

This imbalance could lead to a chilling effect on innovation. Startups, which often drive AI breakthroughs, might be dissuaded from pursuing new research and development due to the high compliance costs associated with SB 1047. As a result, the bill could consolidate power within a handful of large companies that can afford to meet the stringent requirements, reducing competition and limiting the diversity of innovation within the AI industry.

The bill’s vague definitions of what constitutes a “covered model” and its ambiguous language regarding “unreasonable risk” further compound this issue. Developers may face uncertainty about whether their models fall under the bill’s purview, which could open the door to frivolous lawsuits. Fearing legal repercussions, developers may become overly cautious, slowing the deployment of AI technologies and potentially stalling innovation in the field.

Inadequate Scope for Global AI Development

AI development is a global endeavor, often involving cross-border collaborations between teams from various countries. However, Senate Bill 1047 applies only to AI systems developed or deployed within California, limiting its effectiveness on a global scale. Companies could easily circumvent the bill’s regulations by relocating their high-compute AI development to regions with less stringent regulations.

This geographic limitation could undermine SB 1047’s effectiveness. Large tech companies based in California might choose to develop their most advanced AI systems outside the state to avoid the bill’s regulatory burdens. Furthermore, international AI developers may struggle to comply with California’s specific standards, potentially creating barriers to entry for companies looking to bring their AI products into the California market.

Potential for Regulatory Capture

One of the more insidious risks associated with SB 1047 is the potential for regulatory capture. The bill mandates the creation of a Board of Frontier Models and a Division of Frontier Models to oversee AI development and ensure compliance with safety regulations. However, there is always the risk that these regulatory bodies could be influenced by the very industry they are meant to regulate.

Large tech companies have the resources to influence regulatory agencies through lobbying efforts, legal challenges, and even by placing their representatives within these organizations. Over time, this could lead to a scenario where the Board of Frontier Models becomes lenient toward the largest players in the AI industry, granting them exemptions or reducing their regulatory oversight. Meanwhile, smaller companies and startups could be left to bear the brunt of the bill’s requirements, further entrenching the dominance of established tech giants and stifling competition.

Federal Efforts and the Role of the FTC in AI Regulation

As states continue to debate AI safety laws, federal agencies like the Federal Trade Commission (FTC) are stepping into the arena, especially in areas concerning data privacy and algorithmic accountability. One of the FTC’s recent regulatory tools, “algorithmic disgorgement,” mandates that companies delete AI models built on unlawfully obtained data. This represents a significant shift in how regulators are addressing AI’s impact, particularly in safeguarding consumer privacy and ensuring transparency in AI model training.

However, the absence of comprehensive federal AI legislation leaves a significant gap in regulating the industry effectively. While the FTC can enforce data privacy measures, it lacks the full authority to tackle the broader safety risks of AI systems, particularly when it comes to preventing misuse in critical sectors such as healthcare, infrastructure, and national security. The need for a nationwide framework that addresses both data protection and AI safety is becoming increasingly apparent, especially as more AI models are deployed in high-stakes environments.

International and National Frameworks: The Role of NIST and Global Standards

Alongside these federal efforts, national standards bodies such as the National Institute of Standards and Technology (NIST) have been developing frameworks designed to help organizations manage the risks associated with AI. The NIST AI Risk Management Framework is a comprehensive guideline for organizations looking to implement trustworthy and secure AI systems. It emphasizes the importance of risk management throughout the AI lifecycle, including during development, deployment, and ongoing monitoring.
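As a rough illustration of how an organization might operationalize that lifecycle guidance, the sketch below keys a toy risk register to the framework’s four core functions (Govern, Map, Measure, Manage); the example entries and mitigations are generic placeholders invented for illustration, not text taken from NIST.

```python
# Illustrative only: a toy risk register organized around the NIST AI RMF's four
# core functions. The entries and mitigations below are invented examples.

from dataclasses import dataclass, field


@dataclass
class RiskEntry:
    description: str
    lifecycle_stage: str               # e.g. "development", "deployment", "monitoring"
    mitigations: list = field(default_factory=list)


risk_register = {
    "Govern":  [RiskEntry("No owner assigned for model incidents", "deployment",
                          ["name an accountable owner", "define escalation paths"])],
    "Map":     [RiskEntry("Training data provenance undocumented", "development",
                          ["record data sources and licensing status"])],
    "Measure": [RiskEntry("Bias metrics not tracked across releases", "monitoring",
                          ["add demographic performance tests to each release"])],
    "Manage":  [RiskEntry("No rollback plan for a misbehaving model", "deployment",
                          ["keep the previous model version deployable"])],
}

for function, entries in risk_register.items():
    for entry in entries:
        print(f"{function}: {entry.description} ({entry.lifecycle_stage})")
```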

Despite the value of these frameworks, many experts argue that without enforceable legislation, their impact will remain limited. Frameworks like NIST’s provide crucial guidance, but they do not carry the legal weight necessary to ensure compliance across industries. As a result, the creation of enforceable laws remains essential to ensure that AI systems meet standardized safety and security requirements, particularly in critical sectors such as healthcare, transportation, and national security.

Melissa Ruzzi, AI Director at AppOmni, echoes this sentiment: “Most big companies are already putting a lot of effort into safety, but to make sure everyone follows the same rules, we need legislation. This will help remove uncertainty and build public trust in AI technologies.” Ruzzi’s comments reflect a broader industry concern that, while many leading tech companies are self-regulating, smaller companies or those operating outside the U.S. may not adhere to the same safety standards without formal legislation.

Building Public Trust Through Regulation

One of the core challenges facing AI regulation is the need to build public trust. As AI systems become more deeply embedded in daily life, from social media algorithms to healthcare diagnostics, there is growing public concern about how these systems are developed, who controls them, and what safeguards are in place to prevent misuse. Effective AI regulation can help alleviate these concerns by ensuring transparency, accountability, and fairness in AI deployment.

Laws that mandate transparency in AI decision-making processes, require third-party audits, and hold developers accountable for unsafe practices will be key to building this trust. Clear rules about data use, bias mitigation, and the proper application of AI technologies can reassure the public that these systems are designed with their safety and privacy in mind.

A Path Forward for AI Regulation

While Senate Bill 1047 was built on the noble goal of promoting AI safety, its technical flaws and potential for abuse underscore the need for a more nuanced approach to AI regulation. Policymakers must work closely with technical experts, cybersecurity professionals, and academic researchers to develop regulatory frameworks that are adaptable, scalable, and responsive to real-world use cases.

The key to effective AI regulation lies in creating a flexible framework that can evolve alongside the technology it governs. Rather than focusing solely on the size and power of AI models, lawmakers should prioritize the development of adaptable safety protocols that account for the diverse applications of AI across various industries. By striking a balance between innovation and safety, regulators can ensure that AI continues to advance while mitigating the risks posed by this powerful technology.

As the AI industry continues to evolve, the challenge for policymakers will be to craft regulations that promote public safety without stifling innovation. This will require ongoing collaboration between government, industry, and academia to ensure that AI technologies are developed and deployed in a responsible manner that benefits society as a whole.