Oklahoma Executive Order on Artificial Intelligence


While United States lawmakers continue to mull over potential federal legislation on AI, governors are leading the way at the state level. Oklahoma Governor J. Kevin Stitt issued Executive Order 2023-24 on Sept. 25, joining the governors of Virginia and Pennsylvania in issuing executive orders directed at AI and Generative AI. Like other government officials in the United States, Governor Stitt said that AI needs to be embraced, analyzed, properly harnessed, and deployed. However, as with other orders and laws we’ve seen this year, the state of Oklahoma will approach things slightly differently.

In the Order, Governor Stitt establishes the Governor’s Task Force on Emerging Technologies. The Task Force shall study, evaluate and develop policies and recommendations related to the responsible deployment of AI and generative AI. That means the Task Force will need to develop a set of principles and values governing AI’s use in state government. On top of that, the Task Force will develop and implement a governance framework focusing on data management, model development, model monitoring and human oversight. The Task Force will also determine how to educate and train workers in the field of AI while improving government services and efficiency.

Under the Order, Joe McIntosh, the Chair of the Task Force, may create committees to facilitate the Task Force’s work and appoint experts to serve on them. The Order lays out two mandatory committees: one focused on AI’s impact within education and another focused on AI’s impact within commerce and workforce development. The Order also calls for the directors of all state agencies to designate one person on their team to become an expert in the field of AI and Generative AI. The Governor adds in his Order that executive branch operations and IT leaders within the state’s Office of Management and Enterprise Services are already responsibly utilizing AI to help the Governor’s office achieve business solutions, but doesn’t state how.

The Task Force will be composed of 11 members, including McIntosh, the state’s Lieutenant Governor, the state’s Secretary of Operations and Government Efficiency, the leaders of the state Senate and House, the state’s Interim Executive Director of Commerce, the state’s Executive Director of the Center for the Advancement of Science & Technology, the Chancellor of the State System of Higher Education, and three at-large members appointed by Governor Stitt. The Task Force must present a full written recommendation to Governor Stitt and the leaders of the state Senate and House by Sunday, Dec. 31, 2023. This could shape how the Oklahoma Legislature operates next year, as it is set to convene on Monday, Feb. 5, 2024.

If you have questions about how all these different U.S. state laws and orders could affect your company, reach out to BABL AI and one of their Audit Experts can help.

Pennsylvania Executive Order Signed for Generative AI


Another U.S. state is taking the leap on potential AI regulations. Governor Josh Shapiro of Pennsylvania has signed Executive Order 2023-19 – Expanding and Governing the Use of Generative Artificial Intelligence Technologies Within the Commonwealth of Pennsylvania. Just as we saw earlier this month in Virginia, Governor Shapiro recognizes the need for oversight and regulation when it comes to AI.


The Executive Order, issued on Sept. 20, 2023, emphasizes the need for responsible and ethical use of Generative AI while recognizing its potential to improve communication and services for Pennsylvanians. A key component of the order is the creation of the Generative AI Governing Board. The Board will be responsible for providing guidance and direction on the design, development, acquisition and deployment of Generative AI in state agencies, including state policies on its use. The Board is also responsible for ensuring transparency and accountability, testing for bias and addressing any privacy concerns. The order defines Generative AI as AI that uses algorithms to generate new content, data or other outputs.


On top of that, the Board is responsible for engaging with Generative AI experts from the public and private sectors. Those experts will advise the Board on short- and long-term research goals, industry trends and best practices. In other words, the Board can seek advice and input on the latest developments in AI, emerging trends and ideas for using AI in a responsible and ethical manner. The Board is expected to digest all of this information to develop policies and guidelines for state agencies, ensuring it has access to the latest information and insights so it can make informed decisions on how AI can benefit Pennsylvanians.


While industry experts are key, the Board is also responsible for providing a mechanism for internal and external feedback on policies. That means the Board is required to take input from stakeholders, including state labor and workforce organizations, and from external stakeholders such as citizens. The Board can use that feedback to help craft policies and guidelines for the use of Generative AI in state agencies; its purpose is to build trust and confidence in the use of AI and to ensure it’s being used in an ethical manner. The Order does emphasize that the disclosure of policies should not jeopardize the security of any individual or entity, state worker, infrastructure, systems or data. It also stresses that AI should not and cannot replace human creativity, moral judgment or lived experiences, and recognizes that while AI is a valuable tool with the potential to improve life, it should be used in a way that prioritizes employees and aligns with industry standards.


The Board will consist of 12 members, including the Governor’s Chief of Staff, Director of Digital Strategy, Chief Transformation and Opportunity Officer, Secretary of Policy and Planning, General Counsel, Secretary of Administration, Deputy Secretary for Information Technology, Chief Information Security Officer, and others. The Order took effect immediately and remains in effect until it is amended or rescinded.


If you have questions on how these Executive Orders could affect your company, reach out to BABL AI and one of their Audit Experts can help.

Virginia Executive Directive Number Five


As lawmakers in Washington, D.C. go back and forth on potential AI regulations, another governor has issued an executive directive on AI. Virginia Governor Glenn Youngkin announced Executive Directive Number Five on September 20. In the announcement, Gov. Youngkin acknowledges the critical role that state governments must play in the regulation and oversight of AI.

Gov. Youngkin says the growing expansion of AI and its analytical power over the coming years is the reason behind his directive. He says Virginia needs it because the state is home to a rapidly evolving entrepreneurial ecosystem as well as several colleges and universities that are leading the nation in technological research and development. That’s why, in the directive, Gov. Youngkin calls for the Office of Regulatory Management (ORM) to coordinate with the Chief Information Officer (CIO) and other secretariats to address the legal and regulatory environment, examine AI’s impact on education and workforce development, modernize the state government’s use of AI and develop a plan for AI’s impact on economic development and job creation.

When it comes to laws and regulations, the directive calls for the ORM and CIO to tackle the issue in three ways. First, both must comb through existing laws and regulations to see how they may already apply to AI and whether they need updating. Second, both must ensure that the state government’s use of AI is transparent, secure and impartial. Finally, both must make recommendations for uniform standards for the responsible, ethical and transparent use of AI across all state agencies and offices.

For education and workforce development, the ORM and CIO will work with the Department of Education and higher education institutions to develop a plan. That plan must promote guidelines for the use of AI tools which impact learning and prohibit cheating, as well as examine the potential uses of AI tools for personalized tutoring, and include AI-related topics in technology, computer science and data analytics courses. For workforce development, the ORM and CIO must ensure public school students are prepared for future careers that involve AI technologies, and support opportunities for state colleges and universities to contribute to AI research through collaboration with public and private entities.

For modernization, the ORM and CIO will identify opportunities for the secure and transparent use of AI systems to improve state government operations. The ORM and CIO will also evaluate the potential effects of AI systems on functions of the government while making sure they’re protecting the data and privacy of the public. Finally, the ORM and CIO will develop ethical guidelines and best practices for the use of AI across the state government with a focus on accountability and transparency.

As for economic development and job creation, the directive calls for the ORM and CIO to work with the Virginia Economic Development Partnership to develop a plan for five goals. The first goal is to identify potential industry clusters that may benefit from AI in the state. The second goal is to explore ways to encourage AI innovation and entrepreneurship in the state. The third goal is to assess the risk and opportunities of AI on the labor market. The fourth goal is to develop strategies to support workers who could be impacted by AI. The fifth goal is to coordinate with schools and workforce programs on the next steps to become AI-ready.

While finer details are missing from the executive directive, it became official upon Gov. Youngkin’s signature. We may get those details by the end of this year, as the ORM and CIO are tasked with completing the above actions and delivering recommendations by December 1, 2023.

If you have questions about government actions involving AI and AI audits, and how the rapidly changing legal landscape could affect your company, reach out to BABL AI. They can answer all your questions and more.

CPPA Discusses Draft Cybersecurity Audit and Risk Assessment Regulations


The first dedicated privacy regulator in the United States has discussed draft regulations on cybersecurity audits and risk assessments. While lawmakers continue to go back and forth in Washington, D.C., the California Privacy Protection Agency (CPPA) discussed a series of draft regulations at its September 8 meeting. The CPPA was created after California voters approved Proposition 24 in November 2020. The agency, governed by a five-member board, discusses, implements and enforces privacy protection laws. The draft regulations discussed at the latest meeting were formed this past summer, following a public hearing earlier in the year. At the meeting, which ran more than six hours, the CPPA Board went back and forth on audit regulations.


In the first part of the discussion, the board took up cybersecurity audit regulations; specifically, which businesses would fall under them, who could audit those businesses and the required components of an audit. Under the draft regulations, businesses processing significant amounts of personal information would have to conduct annual cybersecurity audits. Generally speaking, the threshold discussed was a business with annual gross revenues exceeding $25 million that has processed the personal information of 100,000 or more consumers, and the Board is also considering other thresholds, such as ones based on employee and customer counts. As for the auditors, they would have to be independent, though the draft states that “the auditor may be internal or external to the business but shall exercise objective and impartial judgment on all issues within the scope of the cybersecurity audit…” The auditor must document the business’s cybersecurity program, including authentication, encryption, access controls, monitoring, training, vendor oversight and incident response. Furthermore, the auditor would have to assess risks to security and privacy, including unauthorized access to or destruction of information.


The second portion of the board’s discussion dealt with regulations for risk assessments related to cybersecurity audits and automated decision-making technology (ADMT). Under the draft regulations, businesses would have to provide a summary of how they will process personal information, including how they collect, use, disclose and retain that information. The personal information would have to be categorized, and businesses must identify whether it includes sensitive personal information. However, the regulations do not provide a specific definition of sensitive personal information. Businesses would also have to provide the context of the processing, including the relationship between the business and the consumers whose personal information is being processed. The purpose of processing personal information must be described with specificity, and businesses must also identify the benefits of the processing to the business, the consumer, the public and other stakeholders. Negative impacts and risks must be identified and described as well.


Overall, the regulations laid out at the CPPA’s September meeting are meant to ensure businesses have adequate safeguards and practices in place to protect consumers’ personal information. Despite the lengthy meeting, the draft regulations weren’t finalized. In fact, public comment on these regulations is still open, as the CPPA remains in the beginning stages of potential rulemaking. Ultimately, the draft proposes mandatory cybersecurity audits and risk assessments for qualifying businesses in the state of California. We could learn more about the regulations at next month’s CPPA meeting, which, as of right now, hasn’t been scheduled.


If you have questions about AI and how the rapidly changing legal landscape could affect your company, reach out to BABL AI. They can answer all your questions and more.

Geneva Association Releases Report on AI Regulation


While the European Union (EU) goes over the final details of the EU AI Act and the United States begins laying the groundwork for an AI roadmap, the only global association of insurance companies is weighing in on the regulation of AI in insurance. The Geneva Association (GA) released a report that analyzes regulatory developments for AI applications and their impact on insurance companies around the world. The report opens by stating that while AI is transforming the industry by offering expanded risk pooling, reduced costs, risk prevention and mitigation, and improved customer service, it also poses risks such as bias, discrimination, exclusion, lack of transparency and data privacy issues.


For the report, the GA looked at several insurance markets where AI regulation is under consideration or already taking shape. Those markets include Australia, China, the EU, Japan, Singapore, the United Kingdom and the U.S. The report notes that the EU has the most ambitious legislation when it comes to AI regulation and bias audits. It says that under the EU AI Act, some AI applications used in insurance are deemed high risk; those applications are used by insurance companies for underwriting. In the insurance field, underwriting involves assessing and classifying risks, and pricing those risks. For instance, AI could be used to assess your risks and overall cost for life and health insurance. Beyond that, the report says that most of what is stated in the EU AI Act likely encompasses the analytical methods already used by insurers. As for the U.S., despite several guidelines issued by federal entities and several state laws, the report believes that regulation of AI in the insurance industry is already mainly shaped by existing anti-discrimination laws at the state and federal level.


When it comes to regulation, the report’s authors spoke with several insurance industry experts, who ultimately believe in insurance-specific regulation but worry that cross-sector AI regulation may end up hindering innovation because it doesn’t consider some of the industry’s unique characteristics. That’s why the report concludes with a list of ideas for policymakers and regulators moving forward: carefully define AI for regulation, apply and/or update existing regulations, develop principles-based regulation, consider the unique uses of AI systems in insurance that would require unique rules, focus on customer outcomes through data governance, and collaborate internationally on AI guidelines and regulations.


Many more industries are expected to weigh in on AI regulations, AI assurance and other issues as more countries examine how to move forward with AI. If you have questions about AI and how the rapidly changing legal landscape could affect your company, reach out to BABL AI. They can answer all your questions and more.

ISO and IEC Release AI Definitions to the Public


As the United States works on its AI roadmap and the European Union hammers out the final details of the EU AI Act, two global organizations have released AI definitions to the public. The International Organization for Standardization (ISO) and the International Electrotechnical Commission (IEC) released a PDF on AI concepts and terminology for a broad set of stakeholders around the world.


The definitions, which make up about 16 pages of the 70-page document, cover terms related to AI systems, data, machine learning, neural networks, trustworthiness, natural language processing and computer vision. After listing the definitions, the document discusses different types of AI, such as general vs. narrow AI, symbolic vs. sub-symbolic AI, and weak vs. strong AI. It then goes even further into key AI concepts like agents, knowledge representation, autonomy, automation, machine learning, deep learning, data mining and planning.


The document provides several visuals, including a functional overview of an AI system highlighting how data is processed into predictions, recommendations, decisions and actions. Its visual breakdown of the AI ecosystem model shows components such as AI systems, machine learning, engineering approaches, data sources, cloud computing and hardware resources. The document draws to a close with an overview of AI fields, which include computer vision, natural language processing, data mining and planning, before ending with several example applications of AI such as automated vehicles, predictive maintenance and fraud detection. While fields and applications may sound similar, AI fields are concerned with advancing the technical capabilities that enable AI systems, while applications focus on the practical uses and impacts of deploying those systems.


It’s an eye-opening document for those who may know nothing about AI systems, and a great guide for those with deep knowledge who need to explain AI systems to others. Overall, the document provides one of the most comprehensive overviews of AI concepts, terminology, applications and life cycle management.


If you have questions about AI and how the rapidly changing legal landscape could affect your company, reach out to BABL AI. They can answer all your questions and more.

Brussels Privacy Hub and Over 100 Academics Sign Appeal for the EU AI Act


Before the European Parliament gathers for several planned meetings next week, over 130 academics and the Brussels Privacy Hub are hoping to get its attention. They have signed an appeal calling for the EU AI Act, formally the Harmonised Rules on Artificial Intelligence, to require a fundamental rights impact assessment (FRIA). A FRIA is a process for systematically assessing the potential impacts that a policy, AI system or other technology or initiative may have on human rights. It typically evaluates impacts on rights such as privacy, non-discrimination and freedom of expression, considers the impacts on potentially affected groups, and analyzes whether the policy or technology aligns with human rights and laws. A FRIA would also identify and mitigate risks early in the AI system design and deployment process. The overall goal of a FRIA is to embed respect for rights and laws into governance and systems.

While protections for fundamental rights are already in the EU AI Act, the press release on the appeal says there is a risk those protections could be weakened when it comes time to negotiate the legislation. The appeal also asks that a FRIA cover both private and public sector AI, with independent oversight and transparency. The signers believe the FRIA should evaluate the impacts that high-risk AI systems may have on fundamental rights, and that it should have clear parameters, public summaries, the involvement of independent public authorities in assessments, and the involvement of affected users. The appeal adds that the FRIA would complement impact assessments already in place, such as those under the General Data Protection Regulation (GDPR).

The appeal, signed by various academics and experts on technology, law and policy at dozens of institutions, concludes with a statement that a FRIA is pivotal to the EU AI Act and would uphold the European Union’s commitment to human rights and its values. The appeal ends by noting that the signers will circulate a more detailed report in the coming days to explain their view of the best practices for regulating FRIAs.

If you have questions on how this could affect your company, reach out to BABL AI. They can answer all your questions and more.

California Bill Calls for AI Regulation and Proposed Research Cloud


With all the focus on Washington, D.C., a lawmaker in one of the world’s top 10 economies has introduced an AI regulation bill. In California, State Senator Scott Wiener introduced Senate Bill 294, known as the Safety Framework in Artificial Intelligence Act. The roughly three-page bill aims to establish standards for the safe development of AI systems, ensure secure deployment of those systems, and promote the responsible scaling of AI models throughout California. While not as dense as the European Union’s Harmonised Rules on Artificial Intelligence (EU AI Act), it’s important to note that California is a multi-trillion-dollar economy and home to Silicon Valley, where big tech companies like Apple, Cisco and Oracle are based. To put it simply, this bill could have a huge impact.


The Safety Framework in Artificial Intelligence Act gets straight to the point but is light on details. The bill would create a framework of disclosure requirements for companies developing advanced AI models, including plans for risk analyses, safeguards, capability testing and responsible implementation, as well as requirements to improve all of those over time. The bill adds that the aim is to ensure strong safety protections against societal harms from AI models through security rules and liability, in the hope of preventing misuse and/or unintended consequences. It also suggests security measures to prevent AI systems from falling into the hands of foreign adversaries. Furthermore, the bill intends to mitigate AI’s potential impact on workforce displacement and to distribute the economic benefits reaped from AI.


There’s another interesting piece to this bill involving AI research infrastructure. The bill calls for the state of California to create what it calls “CalCompute,” a state research cloud. Once again, the bill is light on details, but the gist is that the cloud would provide the computing infrastructure necessary for groups outside of the big tech industry, meaning academia and start-ups could use it for advanced AI work. There is a reason the bill is light on details: according to a press release from Wiener’s office, it is an intent bill, generally meant to start the conversation for lawmakers moving forward. California’s legislative session ended on September 14, 2023, and the Legislature won’t reconvene until January 3, 2024. This all comes on the heels of an executive order on AI, issued and signed by California Governor Gavin Newsom.


The executive order mandates that state agencies and departments analyze the development, uses and risks of AI in the state. Agencies are also mandated to analyze threats to the state posed by generative AI (GenAI). On top of that, agencies will issue general guidelines for public use, procurement and training on GenAI. State departments must report on the uses, harms and risks of AI for state workers, the government and communities throughout the state. State workers will also be trained on approved AI systems. An interesting caveat to the order is an encouraged partnership with the University of California, Berkeley and Stanford to advance California as a global leader in AI; this partnership could be related to the proposed “CalCompute.” In California, lawmakers and the governor have welcomed talks of responsible AI as discussion of AI in general has picked up steam in the United States. Things are beginning to move at lightning speed in the states.


If you have questions on how this could affect your company, reach out to BABL AI. They can answer all your questions and more.

Congress Looks at a Roadmap for AI


While the European Union puts the finishing touches on the Harmonised Rules on Artificial Intelligence, or the EU AI Act, the United States Senate has decided it’s time to look at the development of artificial intelligence. While there have been rumblings for years about AI, 2023 could be viewed as the last leg of a marathon: the U.S. is lacing up new shoes and bolting toward the finish line. As they say, it’s better late than never, and honestly, it’s always good to see some bipartisanship in Washington, D.C.

 

On September 12, 2023, at a hearing of the Senate Judiciary Subcommittee on Privacy, Technology and the Law, senators from both sides of the aisle touted their plans for AI regulation. Democratic Senator Amy Klobuchar of Minnesota took the wheel, introducing the Protect Elections from Deceptive AI Act, which would ban the use of deceptive AI and deepfakes in political ads. While the bill would amend the Federal Election Campaign Act of 1971, it would be lumped in with a bipartisan roadmap introduced back in June 2023, when Republican Senator Josh Hawley of Missouri and Democratic Senator Richard Blumenthal of Connecticut introduced the No Section 230 Immunity for AI Act. Outside of politicians, the hearing featured testimony from law professor Woodrow Hartzog, Microsoft President Brad Smith and NVIDIA chief scientist William Dally.

 

When all was said and done, lawmakers released a joint press release on their thoughts moving forward. First, lawmakers want an independent oversight body established to license and audit companies developing high-risk AI systems, such as those using facial recognition or large language models (think ChatGPT). On top of that, they want safety brakes for those high-risk AI uses, along with a right to human oversight, such as auditors. Customers would also be given control over their personal data. Lawmakers also want to ensure legal accountability for harms caused by AI, via oversight and the ability of private citizens to file lawsuits. They also want to restrict the export of advanced AI to America’s adversaries in an effort to protect national security interests. The press release goes on to say that lawmakers want to mandate transparency around access, data, limitations, accuracy and more, and that users interacting with an AI system should receive plain-language disclosures. Finally, the press release says lawmakers want to create a public database of AI systems where people can read reports about potential harms and harmful incidents.

 

The following day, Democratic Senator Chuck Schumer met on Capitol Hill with several high-profile CEOs, including the CEOs of IBM, Meta and X. It wasn’t just the big players; several others were brought in, like Tristan Harris, the co-founder of the Center for Humane Technology, and several tech researchers. The forum, which was open to all 100 senators, unfortunately consists of nine closed-door sessions, so we won’t be quite sure what’s happening until a press release or bill is revealed. Of course, the media may have some insight through private contacts and sources, so information could trickle out over time. But this week in September is another stride in the run toward regulating AI in the U.S.

 

In May of this year, President Joe Biden’s White House released AI policies to promote responsible AI innovation. Other items of note in that press release include a White House update to the National AI R&D Strategic Plan, a new request for public input, a plan to host listening sessions, and a report by the Department of Education on risks and opportunities in AI. Over the summer, the Washington Post reported that the Federal Trade Commission (FTC) had launched a major investigation into OpenAI to see if ChatGPT was harming consumers through data collection and misinformation. Even before that, the FTC’s chairperson, Lina Khan, published an op-ed in the New York Times about the agency’s approach to AI. The list goes on and on, but the important thing to realize is that the Wild West days of completely unregulated AI are coming to an end in the U.S.

 

If you have questions on how this could affect your company, reach out to BABL AI. They can answer all your questions and more.