California Bill Calls for AI Regulation and Proposed Research Cloud


With all the focus on Washington D.C., a lawmaker in one of the world’s top 10 economies has introduced an AI regulation bill. In California, State Senator Scott Wiener introduced Senate Bill 294, known as the Safety Framework in Artificial Intelligence Act. The roughly three-page bill aims to establish standards for the safe development of AI systems, the secure deployment of those systems, and the responsible scaling of AI models throughout California. While not as dense as the European Union’s Harmonised Rules on Artificial Intelligence (EU AI Act), it’s important to note that California is a multi-trillion-dollar economy and home to Silicon Valley, where big tech companies like Apple, Cisco, and Oracle are headquartered. To put it simply, this bill could have a huge impact.


The Safety Framework in Artificial Intelligence Act gets straight to the point, but is light on details. The bill states that it would create a framework of disclosure requirements for companies developing advanced AI models. That would include plans for risk analyses, safeguards, capability testing, and responsible implementation, as well as requirements to improve all of those over time. The bill adds that the aim is to ensure strong safety standards against societal harms from AI models through security rules and liability, in the hope of preventing misuse and unintended consequences. It also calls for security measures that prevent AI systems from falling into the hands of foreign adversaries. Furthermore, the bill intends to mitigate AI’s impact on potential workforce displacement and to distribute the economic benefits reaped from AI.


There’s another interesting piece to this bill involving AI infrastructure. The bill calls for the state of California to create what it calls “CalCompute,” a state research cloud. Once again, the bill is light on details, but the gist is that the cloud would provide the computing infrastructure necessary for groups outside of the big tech industry. That means academia and start-ups could utilize this cloud for advanced AI work. There is a reason the bill is so light on details: according to a press release from Wiener’s office, the Safety in Artificial Intelligence Act is an intent bill, generally meant to start the conversation for lawmakers moving forward. That’s because California’s legislative session ended on September 14, 2023, and won’t reconvene until January 3, 2024. This all comes on the heels of an executive order on AI, issued and signed by California Governor Gavin Newsom.


The executive order mandates that state agencies and departments analyze the development, uses, and risks of AI in the state. Agencies are also mandated to analyze threats to the state posed by generative AI (GenAI). On top of that, agencies will issue general guidelines for public use, procurement, and training on GenAI. State departments must report on the uses, harms, and risks of AI for state workers, the government, and communities throughout the state. State workers will also be trained on approved AI systems. An interesting caveat to the order is an encouraged partnership with the University of California, Berkeley and Stanford to advance California as a global leader in AI. That partnership could be the seed of what SB 294 calls “CalCompute.” In California, lawmakers and the governor have welcomed talks of responsible AI as discussion of AI in general has picked up steam in the United States. Things are beginning to move at lightning speed in the states.


If you have questions on how this could affect your company, reach out to BABL AI. They can answer all your questions and more.

Congress Looks at a Roadmap for AI


While the European Union puts the finishing touches on the Harmonised Rules on Artificial Intelligence, or the EU AI Act, the United States Senate has decided it’s time to look at the development of artificial intelligence. While there have been rumblings about AI for years, 2023 could be viewed as the last leg of a marathon: the U.S. is throwing on some new shoes and bolting toward the finish line. As they say, it’s better late than never, and honestly, it’s always good to see some bipartisanship in Washington D.C.

 

On September 12, 2023, at a hearing of the Senate Judiciary Subcommittee on Privacy, Technology and the Law, parties from both sides of the aisle touted their plans for AI regulation. Democratic Senator Amy Klobuchar of Minnesota took the wheel, introducing the Protect Elections from Deceptive AI Act, which would ban the use of deceptive AI and deepfakes in political ads. While the bill would be added as an amendment to the Federal Election Campaign Act of 1971, it joins a bipartisan roadmap introduced back in June 2023, when Republican Senator Josh Hawley of Missouri and Democratic Senator Richard Blumenthal of Connecticut introduced the No Section 230 Immunity for AI Act. Outside of politicians, the hearing featured testimony from law professor Woodrow Hartzog, Microsoft President Brad Smith, and NVIDIA Chief Scientist William Dally.

 

When all was said and done, lawmakers released a joint press release on their thoughts moving forward. First off, lawmakers want an independent oversight body established to license and audit companies developing high-risk AI systems, such as facial recognition or large language models (think ChatGPT). On top of that, they want safety brakes for those high-risk AI uses, along with a right to human oversight, such as auditors. Customers would also be given control over their personal data. Lawmakers also want to ensure that there’s legal accountability for harms caused by AI, both through oversight and through the ability of private citizens to file lawsuits. They also want to restrict the export of advanced AI to America’s adversaries in an effort to protect national security interests. The press release goes on to say that they want to mandate transparency around access, data, limitations, accuracy, and more. Lawmakers add that they want plain-language disclosures for users interacting with an AI system. Finally, the press release says that lawmakers want to create a public database of AI systems where people can read reports about potential harms and harmful incidents.

 

The following day, Democratic Senator Chuck Schumer met on Capitol Hill with several high-profile CEOs, including the CEOs of IBM, Meta, and X. It wasn’t just the big players; several others were brought in, like Tristan Harris, co-founder of the Center for Humane Technology, and several tech researchers. The forum, which was open to all 100 senators, unfortunately consists of nine closed-door sessions, so we won’t know quite what’s happening until a press release or bill is revealed. Of course, the media may have some insight through private contacts and sources, so information could trickle out over time. But this week in September is another stride in the run toward regulating AI in the U.S.

 

In May of this year, President Joe Biden’s White House released AI policies to promote responsible AI innovation. Other items of note in the press release include a White House update to the National AI R&D Strategic Plan, a new request for public input along with a plan to host listening sessions, and a report by the Department of Education on risks and opportunities in AI. Over the summer, the Washington Post reported that the Federal Trade Commission (FTC) had launched a major investigation into OpenAI to see if ChatGPT was harming consumers through data collection and misinformation. Even before that, FTC Chair Lina Khan published an op-ed in The New York Times about the agency’s approach to AI. The list goes on and on, but the important thing to realize is that the Wild West days of completely unregulated AI are coming to an end in the U.S.

 

If you have questions on how this could affect your company, reach out to BABL AI. They can answer all your questions and more.