AI Governance Report & Auditor Training

020. AI Governance Report & Auditor Training – Lunchtime BABLing

This week we discuss our recent report “The Current State of AI Governance”, which is the culmination of a year-long research project looking into the effectiveness of AI governance controls.

Full report here: https://babl.ai/wp-content/uploads/2023/03/AI-Governance-Report.pdf

We also discuss our new training program, the “AI & Algorithm Auditor Certificate Program”, which starts in May 2023.

This program has courses and certifications in 5 key areas necessary for AI auditing and Responsible AI in general:

1: Algorithms, AI, & Machine Learning

2: Algorithmic Risk & Impact Assessments

3: AI Governance & Risk Management

4: Bias, Accuracy, & the Statistics of AI Testing

5: Algorithm Auditing & Assurance

Early pricing can be found here:   

Available on YouTube, Simplecast, and all major podcast streaming platforms.

Interrogating Large Language Models with Jiahao Chen

019. Interrogating Large Language Models with Jiahao Chen

On this week’s Lunchtime BABLing (#19) we talk with Jiahao Chen, a data scientist, researcher, and founder of Responsible Artificial Intelligence LLC.

We discuss the evolving debate around large language models (LLMs) and their derivatives (ChatGPT, Bard, Bing AI Chatbot, etc.), including:

1: Do systems like ChatGPT reason?

2: How do businesses know whether LLMs are useful (and safe) for them to use in a product or business process?

3: What kinds of guardrails are needed for the ethical use of LLMs (including prompt engineering)?

4: Black-box vs. white-box testing of LLMs for algorithm auditing.

5: Classical assessments of intelligence and their applicability to LLMs.

6: Re-thinking education and assessment in the age of AI.

Jiahao Chen: Twitter | Responsible AI LLC

Available on YouTube, Simplecast, and all major podcast streaming platforms.

The 5 Skills you NEED for AI Auditing

018. The 5 Skills you NEED for AI Auditing – Lunchtime BABLing

You need way more than “five skills” to be an AI auditor, but there are five areas of study in which auditors need basic competency if they want to do the kinds of audits that BABL AI performs.

This is part of our weekly webinar/podcast that ran long, so we’ve cut out much of the Q&A. It covered questions we’ll address in future videos, such as:

What kind of training do I need to become an AI or algorithm auditor?

Do I need technical knowledge of machine learning to do AI ethics?

Available on YouTube, Simplecast, and all major podcast streaming platforms.

The Current State of AI Governance

Our interdisciplinary team at the Algorithmic Bias Lab has produced one of the very first comprehensive reports on the current state of organizational AI governance. The report, partly funded by the Notre Dame-IBM Technology Ethics Lab, is the result of a year-long study that used surveys, interviews, and a literature review to examine the internal governance landscape. We asked which governance tools are being used across sectors, whether they are working, and if so, why.

Our analysis found that significantly fewer than half of all organizations that use or develop AI have any formal or substantial AI governance structures. Among those that do, a variety of governance tools are in use, adopted for a variety of reasons. These organizations have, in almost all cases, moved past the stage of building AI governance frameworks but have not yet developed metrics to assess their effectiveness; on average, they are at the beginning of the implementation stage.

Among organizations that do have governance structures, key trends are emerging around implementation strategies and challenges. These include the need for repositories and inventories, the importance of risk assessments, difficulty finding employees with the right skills, a lack of external stakeholder engagement, the importance of organizational culture for the uptake of AI governance initiatives, and a lack of clear metrics, among others.

This report is the first in an ongoing project to track and measure the effectiveness of AI governance tools across industries. The hope is that the results of our analysis can help guide decision-makers, many of whom are standing up nascent AI governance structures.

Citation: Davidovic, Jovana, Shea Brown, Ali Hasan, Khoa Lam, Ben Lange, and Mitt Regan. The Current State of AI Governance. Iowa City, IA: BABL AI, 2023. https://babl.ai/wp-content/uploads/2023/03/AI-Governance-Report.pdf

Compliance Strategies for AI/ML Technologies & Automated Tools: What In-House Professionals Need to Know

BABL AI CEO Dr. Shea Brown was invited to join Mariah Jaworski from Clark Hill and Managing Director Carol Piovesan from INQ Consulting to discuss how to translate various laws, legislative proposals, and technical guidance into actionable compliance strategies for organizations that use AI/ML technologies and automated decision-making tools.

In this panel discussion they talk about: 

  1. How to work with AI/ML vendors (due diligence, contractual arrangements);
  2. How to evaluate the use of AI/ML technologies and automated tools to ensure quality outcomes and guard against bias and discrimination (bias audits and assessments);
  3. How to communicate the use of these technologies to impacted individuals;
  4. What rights to offer individuals who are impacted by a business's use of AI/ML or automated tool technologies.

View webinar recording here

Criteria-Based Bias Audit

017. Criteria-based Bias Audit – Lunchtime BABLing

On this week’s Lunchtime BABLing, Shea goes over the difference between a direct engagement audit and an attestation engagement audit, and gives examples from our criteria-based attestation audit for NYC Local Law No. 144.

Available on YouTube, Simplecast, and all major podcast streaming platforms.