Pennsylvania Executive Order Signed for Generative AI

Another U.S. state is taking the leap on potential AI regulations. Governor Josh Shapiro of Pennsylvania has signed Executive Order 2023-19 – Expanding and Governing the Use of Generative Artificial Intelligence Technologies Within the Commonwealth of Pennsylvania. Like Virginia earlier this month, Governor Shapiro recognizes the need for oversight and regulation of AI.

The Executive Order, issued on Sept. 20, 2023, emphasizes the need for responsible and ethical use of Generative AI while recognizing its potential to improve communication and services for Pennsylvanians. A key component of the order is the creation of the Generative AI Governing Board, which will provide guidance and direction on the design, development, acquisition and deployment of Generative AI in state agencies, including state policies on its use. The Board is also responsible for ensuring transparency and accountability, testing for bias and addressing privacy concerns. The order describes Generative AI as AI that uses algorithms to generate new content, data or other outputs.

On top of that, the Board is responsible for engaging with Generative AI experts from the public and private sectors, who will advise it on short- and long-term research goals, industry trends and best practices. Experts can provide guidance on the latest developments in AI, help identify emerging trends and suggest best practices for using AI responsibly and ethically. The Board is expected to digest all of this input to develop policies and guidelines for state agencies, ensuring it has access to the latest information and insights so it can make informed decisions on how AI can benefit Pennsylvanians.

While industry experts are key, the Board must also provide a mechanism for internal and external feedback on policies. That means it is required to receive input from stakeholders, including state labor and workforce organizations, as well as external stakeholders such as citizens. The Board can use that feedback to help craft policies and guidelines for the use of Generative AI in state agencies; the goal is to build trust and confidence in the use of AI and to ensure it is being used ethically. The Order does caution that the disclosure of policies should not jeopardize the security of any individual or entity, state worker, infrastructure, systems or data. It also emphasizes that AI should not and cannot replace human creativity, moral judgment or lived experiences, and recognizes that while AI is a valuable tool with the potential to improve life, it should be used in a way that prioritizes employees and aligns with industry standards.

The Board will consist of 12 members, including the Governor’s Chief of Staff, Director of Digital Strategy, Chief Transformation and Opportunity Officer, Secretary of Policy and Planning, General Counsel, Secretary of Administration, Deputy Secretary for Information Technology, Chief Information Security Officer and others. The Order took effect immediately and remains in effect until it is amended or rescinded.

If you have questions on how these Executive Orders could affect your company, reach out to BABL AI and one of their Audit Experts can help.

Founder and CEO of BABL AI to kick-off inaugural D.C. conference on AI

The Founder and CEO of BABL AI will be speaking at the inaugural National Conference on AI Law, Ethics and Compliance this fall. Founder Shea Brown will lead a pre-conference workshop from 9 a.m. to 12:30 p.m. on Monday, Oct. 30 at the DC Bar Association, 901 4th St. NW, Washington, D.C. The workshop is called “Level Setting” – The Fundamentals of AI, Algorithmic Decision-Making, Testing and How They All Work: The Essentials of ChatGPT, Bard and More Tools for Non-IT Professionals.

Several topics will be covered during the in-person-only workshop, the first event of the inaugural conference. Brown will talk with attendees about the capabilities and limits of AI tools, as well as their associated risks and rewards. He will demystify several key concepts, including bias, responsible use, safety and deep learning. Attendees will also learn what is and isn’t AI, and how algorithmic decision-making differs from machine learning and data analytics.

The workshop’s goal is to give attendees a valuable blueprint for their work after the conference and an understanding of the responsible use of AI tools. Attendees will come away knowing the key regulations and compliance frameworks that apply to AI. With the knowledge Brown provides, they will be able to address privacy issues involved with AI, including the protection of sensitive data and client information. More information about Brown’s workshop, as well as tickets, can be found here.

The conference runs through Wednesday, Nov. 1 in D.C. Representatives from several facets of the U.S. government, as well as industries and companies, will give speeches and run workshops throughout the four-day conference, with several breaks each day for attendees to network. For more information about the inaugural conference, click here.

If you’re unable to attend, but have questions about how AI laws and AI Audits could affect your company, reach out to BABL AI and one of their Audit Experts can help.

What is the EU AI Act?

The European Union is once again leading the way in digital regulation with its latest piece of legislation, the Harmonised Rules on Artificial Intelligence, or the EU AI Act. The EU has long been on the cutting edge of digital rights and regulation, whether through the General Data Protection Regulation (GDPR), passed in April 2016 to protect information privacy, or the Digital Services Act, passed in July 2022 to moderate online information and social media content. Now the EU is working on standards for managing AI systems, looking to minimize potential risks and harms while ensuring safety and protecting fundamental rights.

There have been several incidents of illegal, unethical and biased uses of AI. Companies, journalism outlets, academia, nonprofits and governmental bodies have found bias in AI over the years across the globe. Examples abound: in 2018, Microsoft acknowledged that use of AI in its offerings may result in reputational harm or liability. In 2019, Denmark found that its tax fraud detection AI was incorrectly flagging low-income and immigrant groups more often than native Danes. Even AI-powered tools used during the COVID-19 pandemic to help save lives raised red flags about privacy and accuracy. AI’s use has only accelerated since these incidents, and the problems have grown with it.

The EU AI Act was proposed in April 2021, with the Council adopting its position in December 2022. Over the past year there have been several amendments and revisions, with the latest version approved in June 2023. A final version of the EU AI Act is expected to be approved before the end of 2023, just in time for the European Parliament elections in 2024. Even with approval, there will likely be a two-year implementation period, so don’t expect all the regulations to take effect until 2026 at the earliest.

That’s why now is the time to understand who this massive piece of legislation applies to. First and foremost, AI systems established within the EU must comply. But the EU AI Act applies not only to AI systems developed and used within the EU; it also reaches providers outside the EU whose AI systems are introduced or used within the EU market. Just because your AI system is in America doesn’t mean you’re free of this law if it’s in the EU marketplace. Beyond that, AI providers and users located outside the EU fall under the Act’s jurisdiction if their AI systems’ outcomes or results are used or have an impact within the EU. In short, most companies are going to have to adhere to the EU AI Act in some way.
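The applicability rules above can be summarized as a simple check. This is an illustrative sketch only, with three simplified criteria drawn from the discussion above; the Act's final scoping language controls, and this is not legal advice.

```python
def eu_ai_act_applies(established_in_eu: bool,
                      placed_on_eu_market: bool,
                      output_used_in_eu: bool) -> bool:
    """Rough illustration of the EU AI Act's extraterritorial scope:
    the Act applies if the AI system is established in the EU, is
    introduced into the EU market, or its outputs are used or have
    an impact within the EU."""
    return established_in_eu or placed_on_eu_market or output_used_in_eu


# A US-based provider whose system's outputs are used in the EU is still in scope.
print(eu_ai_act_applies(established_in_eu=False,
                        placed_on_eu_market=False,
                        output_used_in_eu=True))  # True
```

The disjunction is the point: meeting any one of the three prongs is enough to bring a system into scope.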

However, some AI systems are exempt under the EU AI Act. AI systems still being researched and tested before being sold are exempt, as long as they respect fundamental rights and applicable laws and are not tested in real-life situations. For example, suppose you’re a pharmaceutical company developing an AI system to assist in the discovery of new drugs by analyzing vast datasets of chemicals and their interactions. As long as that AI is used in a controlled research environment, you’re exempt from the EU AI Act until it is considered safe and effective. Also exempt are public authorities of other countries and international organizations working under international agreements, as well as AI systems made solely for military uses. In addition, AI components released for free under open-source licenses don’t need to follow the regulation, with an exception for large general-purpose AI models like ChatGPT or DALL-E.

If you have questions on how this could affect your company or would like help preparing for an EU AI Act Conformity Assessment, reach out to BABL AI and one of their Audit Experts can help. 

What is the Digital Services Act?

As the European Union puts the final touches on its AI regulation legislation, the Harmonised Rules on Artificial Intelligence, or the EU AI Act, we look at one regulation that has service providers scrambling to comply before next year. The Digital Services Act (DSA) was submitted to the European Parliament in December 2020. After a year and a half of discussion, the European Council approved the DSA on October 4, 2022; it will be directly applicable across the EU on February 17, 2024.

Simply put, the DSA regulates digital services, marketplaces and online platforms operating within the EU. It aims to create a safer, more open digital landscape, protect the fundamental rights of users and establish clear responsibilities and accountability for online platforms. Any outlet offering a service in the EU, regardless of its place of establishment, is covered by the DSA. That means companies providing digital services like cloud services, data centers, content delivery, search engines, social media and app stores will be affected, including platforms like Google, Meta, Amazon, Apple and TikTok. So while this is a European Union law, it will resound globally.

The DSA’s core obligations require platforms to assess and mitigate the risks their systems create. Platforms must also remove illegal content, protect children, suspend users offering illegal services, ensure the traceability of online traders and empower consumers through various transparency measures. They must publicly report how they use automated content moderation tools and disclose all instances of illegal content flagged by content moderators or by automated moderation.

The DSA imposes additional requirements on large platforms, referred to as very large online platforms (VLOPs). A VLOP faces extra obligations around risk management, external and independent auditing, transparency reporting, access to data and algorithms, advertising transparency and user choice over recommendation algorithms. The threshold for a VLOP is 45 million or more monthly active EU users, which will most likely capture the platforms mentioned above as well as several large EU firms. To catch other potential VLOPs, the DSA also aims these obligations at fast-growing start-ups approaching the scale and risk profiles of existing VLOPs.

VLOPs designated under the DSA must comply with obligations like risk assessment, transparency reporting and data access within four months of designation, ahead of next February’s general deadline. That creates staggered timelines based on platform size before the final date when the European Commission and national Digital Services Coordinators will oversee enforcement; the DSA establishes oversight and enforcement cooperation between the Commission and EU countries. As for penalties, non-compliance carries fines of up to 6% of global turnover, meaning some VLOPs could face hundreds of millions in fines.
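To put the 6% ceiling in perspective, a quick calculation (the turnover figure below is hypothetical, for illustration only):

```python
def max_dsa_fine(global_turnover_eur: float, rate: float = 0.06) -> float:
    """Maximum DSA fine: up to 6% of a platform's global annual turnover."""
    return global_turnover_eur * rate


# A platform with EUR 10 billion in global turnover faces a ceiling of EUR 600 million.
print(f"{max_dsa_fine(10_000_000_000):,.0f}")  # 600,000,000
```

For platforms with tens of billions in turnover, the 6% cap is what puts "hundreds of millions" of euros on the table.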

If you have questions about how to stay compliant with the Digital Services Act, reach out to BABL AI.

Virginia Executive Directive Number Five

As lawmakers in Washington, D.C. go back and forth on potential AI regulations, one governor has issued an executive directive on AI. Virginia Governor Glenn Youngkin announced Executive Directive Number Five on September 20. In the announcement, Gov. Youngkin acknowledges the critical role state governments must play in the regulation and oversight of AI.

Gov. Youngkin points to the growing expansion of AI and its analytical power over the coming years as the reason for his directive. Virginia, he says, is home to a rapidly evolving entrepreneurial ecosystem and to colleges and universities leading the nation in technological research and development. In the directive, Gov. Youngkin calls for the Office of Regulatory Management (ORM) to coordinate with the Chief Information Officer (CIO) and other secretariats to address the legal and regulatory environment, examine AI’s impact on education and workforce development, modernize the state government’s use of AI and develop a plan for AI’s impact on economic development and job creation.

When it comes to laws and regulations, the directive calls for the ORM and CIO to tackle the issue in three ways. First, they are to comb through existing laws and regulations to see how they may already apply to AI and whether they need updating. Second, they are to ensure the state government’s use of AI is transparent, secure and impartial. Finally, they are to recommend uniform standards for the responsible, ethical and transparent use of AI across all state agencies and offices.

For education and workforce development, the ORM and CIO will work with the Department of Education and higher education institutions to develop a plan. That plan must promote guidelines for the use of AI tools which impact learning and prohibit cheating, as well as examine the potential uses of AI tools for personalized tutoring, and include AI-related topics in technology, computer science and data analytics courses. For workforce development, the ORM and CIO must ensure public school students are prepared for future careers that involve AI technologies, and support opportunities for state colleges and universities to contribute to AI research through collaboration with public and private entities.

For modernization, the ORM and CIO will identify opportunities for the secure and transparent use of AI systems to improve state government operations. The ORM and CIO will also evaluate the potential effects of AI systems on functions of the government while making sure they’re protecting the data and privacy of the public. Finally, the ORM and CIO will develop ethical guidelines and best practices for the use of AI across the state government with a focus on accountability and transparency.

As for economic development and job creation, the directive calls for the ORM and CIO to work with the Virginia Economic Development Partnership on a plan with five goals: identify industry clusters in the state that may benefit from AI; explore ways to encourage AI innovation and entrepreneurship; assess the risks and opportunities AI poses for the labor market; develop strategies to support workers who could be impacted by AI; and coordinate with schools and workforce programs on becoming AI-ready.

While finer details are missing from this executive directive, it became official upon Gov. Youngkin’s signature. We may get those details by the end of this year: the ORM and CIO are tasked with completing the actions above and delivering recommendations by December 1, 2023.

If you have questions about government actions involving AI and AI audits, and how the rapidly changing legal landscape could affect your company, reach out to BABL AI. They can answer all your questions and more.

CPPA Discusses Draft Cybersecurity Audit and Risk Assessment Regulations

The first dedicated privacy regulator in the United States has discussed draft regulations for cybersecurity audits and risk assessments. While lawmakers continue to go back and forth in Washington, D.C., the California Privacy Protection Agency (CPPA) discussed a litany of draft regulations at its September 8th meeting. The CPPA was created after California voters approved Proposition 24 in November 2020; the agency, governed by a five-member board, implements and enforces privacy protection laws. The draft regulations discussed at the latest meeting were formed this past summer after a public hearing earlier in the year, and at the 6+ hour meeting the CPPA Board went back and forth on audit regulations.

In the first part of the discussion, the board took up cybersecurity audit regulations: specifically, which businesses would fall under them, who could audit those businesses and the required components of the audit. Under the draft regulations, businesses processing significant amounts of personal information would have to conduct annual cybersecurity audits. Generally speaking, the threshold discussed was a business with annual gross revenues exceeding $25 million that has processed the personal information of 100,000 or more consumers, and the Board is also considering other thresholds based on employee and customer counts. As for the auditors, they would have to be independent, though the draft states that “the auditor may be internal or external to the business but shall exercise objective and impartial judgment on all issues within the scope of the cybersecurity audit…” The auditor must document the business’s cybersecurity program, including authentication, encryption, access controls, monitoring, training, vendor oversight and incident response. Furthermore, the auditor would have to assess risks to security and privacy, including unauthorized access to or destruction of information.
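The thresholds discussed can be sketched as a simple check. Note that this is an illustration of the figures under discussion, not the rule text: the draft is not final, and the exact combination of criteria (and any additional employee or customer thresholds) is still being debated by the Board.

```python
def audit_required(annual_gross_revenue: float,
                   consumers_processed: int) -> bool:
    """Illustrative sketch of the CPPA draft thresholds discussed:
    annual cybersecurity audits for businesses with gross revenues
    exceeding $25 million that have processed the personal
    information of 100,000 or more consumers."""
    return (annual_gross_revenue > 25_000_000
            and consumers_processed >= 100_000)


print(audit_required(30_000_000, 150_000))  # True
print(audit_required(30_000_000, 50_000))   # False
```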

The second portion of the board’s discussion dealt with regulations for risk assessments related to cybersecurity audits and automated decision-making technology (ADMT). Under the draft regulations, businesses would have to summarize how they process personal information, including how they collect, use, disclose and retain it. The personal information would have to be categorized, and businesses must identify whether it includes sensitive personal information, though the regulations do not provide a specific definition of that term. Businesses would also have to provide the context of processing, including the relationship between the business and the consumers whose personal information is being processed. The purpose of processing must be described with specificity, and businesses must identify the benefits of the processing to the business, the consumer, the public and other stakeholders. Negative impacts and risks must be identified and described as well.

Overall, the regulations laid out at the CPPA’s September meeting aim to ensure businesses have adequate safeguards and practices in place to protect consumers’ personal information. Despite the lengthy meeting, the draft regulations weren’t finalized; public comment is still open, and the CPPA remains in the early stages of potential rulemaking. Ultimately, the draft proposes mandatory cybersecurity audits and risk assessments for qualifying businesses in California. We could learn more at next month’s CPPA meeting, which has not yet been scheduled.

If you have questions about AI and how the rapidly changing legal landscape could affect your company, reach out to BABL AI. They can answer all your questions and more.

Geneva Association Releases Report on AI Regulation

While the European Union (EU) goes over the final details of the EU AI Act and the United States begins laying the groundwork for an AI roadmap, the only global association of insurance companies is weighing in on the regulation of AI in insurance. The Geneva Association (GA) released a report analyzing regulatory developments for AI applications and their impact on insurance companies around the world. The report opens by stating that while AI is transforming the industry by offering expanded risk pooling, reduced costs, risk prevention and mitigation, and improved customer service, it also poses many risks, including bias, discrimination, exclusion, lack of transparency and data privacy issues.

For the report, the GA looked at several insurance markets where AI regulation is being considered or is already underway: Australia, China, the EU, Japan, Singapore, the United Kingdom and the U.S. The report notes that the EU has the most ambitious legislation on AI regulation and bias audits. Under the EU AI Act, some AI applications used by insurers are deemed high risk, notably those used for underwriting. In insurance, underwriting involves assessing and classifying risks, and pricing those risks; for instance, AI could be used to assess your risks and overall cost for life and health insurance. Beyond that, the report says most of what the EU AI Act covers likely encompasses the analytical methods already used by insurers. As for the U.S., despite several guidelines issued by federal entities and several state laws, the report believes regulation of AI in the insurance industry is already mainly shaped by existing anti-discrimination laws at the state and federal levels.

On regulation, the report’s authors spoke with several insurance industry experts who ultimately favor insurance-specific regulation, finding that cross-sector AI regulation may end up hindering innovation because it doesn’t account for the industry’s unique characteristics. The report therefore concludes with several recommendations for policymakers and regulators: carefully define AI for regulatory purposes, apply and update existing regulations, develop principles-based regulation, consider the unique uses of AI systems in insurance that would require unique rules, focus on customer outcomes through data governance and collaborate internationally on AI guidelines and regulations.

Many more industries are expected to weigh in on AI regulations, AI assurance and other issues as more countries examine how to move forward with AI. If you have questions about AI and how the rapidly changing legal landscape could affect your company, reach out to BABL AI. They can answer all your questions and more.

ISO and IEC release AI definitions to public

As the United States works on its AI roadmap and the European Union hammers out the final details of the EU AI Act, two global organizations have released AI definitions to the public. The International Organization for Standardization (ISO) and the International Electrotechnical Commission (IEC) released a PDF on AI concepts and terminology for a broad set of stakeholders around the world.

The definitions, which make up about 16 pages of the 70-page document, cover terms related to AI systems, data, machine learning, neural networks, trustworthiness, natural language processing and computer vision. After the definitions, the document discusses different types of AI, like general vs. narrow AI, symbolic vs. sub-symbolic AI, and weak vs. strong AI. It then goes further into key AI concepts such as agents, knowledge representation, autonomy, automation, machine learning, deep learning, data mining and planning.

The document provides several visuals in its functional overview of an AI system, highlighting how data is processed into predictions, recommendations, decisions and actions. Its visual breakdown of the AI ecosystem model shows components like AI systems, machine learning, engineering approaches, data sources, cloud computing and hardware resources. The document then covers AI fields, including computer vision, natural language processing, data mining and planning, before closing with examples of AI applications such as automated vehicles, predictive maintenance and fraud detection. While fields and applications may sound similar, AI fields concern advancing the technical capabilities that enable AI systems, while applications focus on the practical uses and impacts of deploying those systems.

It’s an eye-opening document for those who may know nothing about AI systems, and a useful aid for those with deep knowledge of the processes who need to explain AI systems to others. Overall, the document provides one of the most comprehensive overviews of AI concepts, terminology, applications and life cycle management.

If you have questions about AI and how the rapidly changing legal landscape could affect your company, reach out to BABL AI. They can answer all your questions and more.

Brussels Privacy Hub and Over 100 Academics Sign Appeal for the EU AI Act

Before the European Parliament gathers for several planned meetings next week, over 130 academics and the Brussels Privacy Hub are hoping to get its attention. They have signed an appeal calling for the Harmonised Rules on Artificial Intelligence, or the EU AI Act, to require a fundamental rights impact assessment (FRIA). A FRIA is a process for systematically assessing the potential impacts that a policy, AI system or other technology or initiative may have on human rights. It typically evaluates impacts on rights like privacy, non-discrimination and freedom of expression, considers the effects on potentially affected groups and analyzes whether the policy or technology aligns with human rights law. A FRIA would also identify and mitigate risks early in the AI system design and deployment process. The overall goal is to embed respect for rights and laws into governance and systems.

While protections for fundamental rights are already in the EU AI Act, the press release accompanying the appeal warns that those protections could be weakened during negotiations on the legislation. The appeal asks that a FRIA cover both private- and public-sector AI, with independent oversight and transparency. The signatories believe the FRIA should evaluate the impacts that high-risk AI systems may have on fundamental rights, with clear parameters, public summaries, independent public authorities involved in assessments and involvement of affected users. The appeal adds that the FRIA would complement impact assessments already required elsewhere, such as under the General Data Protection Regulation (GDPR).

The appeal, signed by academics and experts on technology, law and policy at dozens of institutions, concludes that a FRIA is pivotal to the EU AI Act and that including one would uphold the European Union’s commitment to human rights and its values. It closes by noting that the signatories will circulate a more detailed report in the coming days explaining their view of best practices for regulating FRIAs.

If you have questions on how this could affect your company, reach out to BABL AI. They can answer all your questions and more.

California Bill Calls for AI Regulation and Proposed Research Cloud

With all the focus on Washington, D.C., a lawmaker in one of the world’s top 10 economies has introduced an AI regulation bill. In California, State Senator Scott Wiener introduced Senate Bill 294, known as the Safety in Artificial Intelligence Act. The roughly three-page bill aims to establish standards for the safe development of AI systems, ensure their secure deployment and provide for the responsible scaling of AI models throughout California. While not as dense as the European Union’s Harmonised Rules on Artificial Intelligence (EU AI Act), it’s important to note that California is a multi-trillion-dollar economy and home to Silicon Valley, with big tech companies like Apple, Cisco and Oracle. Put simply, this bill could have a huge impact.

The Safety in Artificial Intelligence Act gets straight to the point but is light on details. The bill would create a framework of disclosure requirements for companies developing advanced AI models, including plans for risk analyses, safeguards, capability testing and responsible implementation, along with required improvements to all of the above over time. The bill’s aim is to ensure rigorous safety standards against societal harms from AI models through security rules and liability, in the hope of preventing misuse and unintended consequences. It also suggests security measures to prevent AI systems from falling into the hands of foreign adversaries. Furthermore, the bill intends to mitigate AI’s potential impact on workforce displacement and to distribute the economic benefits AI produces.

There’s another interesting piece to this bill involving AI assurance. The bill calls for the state of California to create what it calls “CalCompute,” a state research cloud. Once again, the bill is light on details, but the gist is that the cloud would provide the computing infrastructure necessary for groups outside of big tech, meaning academia and start-ups could use it for advanced AI work. There’s a reason for the sparse detail: according to a press release from Wiener’s office, the Safety in Artificial Intelligence Act is an intent bill, generally meant to start the conversation for lawmakers moving forward. California’s legislative session ended on September 14, 2023 and won’t reconvene until January 3, 2024. This all comes on the heels of an executive order on AI issued and signed by California Governor Gavin Newsom.

The executive order mandates state agencies and departments to analyze the development, uses and risks of AI in the state, including threats posed to the state by generative AI (GenAI). On top of that, agencies will issue general guidelines for public use, procurement and training on GenAI. State departments must report on the uses, harms and risks of AI for state workers, the government and communities throughout the state, and state workers will be trained on approved AI systems. An interesting caveat to the order is an encouraged partnership with the University of California, Berkeley and Stanford to advance California as a global leader in AI; this could be where “CalCompute” fits in. In California, lawmakers and the governor have welcomed talk of responsible AI as discussion of AI in general has picked up steam in the United States. Things are beginning to move at lightning speed in the states.

If you have questions on how this could affect your company, reach out to BABL AI. They can answer all your questions and more.