Careers in AI governance and auditing are often described in vague, aspirational terms. Job listings promise “cutting-edge” work, certifications claim industry relevance, and headlines suggest explosive demand—but few conversations explain what the work actually looks like or who realistically succeeds in these roles. In this episode of Lunchtime BABLing, BABL AI CEO Dr. Shea Brown is joined by Mert Çuhadaroğlu, Training Programs Manager at BABL AI, to cut through that ambiguity with a grounded, experience-driven discussion.
The conversation draws on Mert’s unconventional professional journey and his day-to-day role evaluating students, reviewing capstone projects, and guiding professionals through BABL AI’s certification programs. What emerges is a practical look at AI auditing as a discipline—one rooted in risk assessment, governance, and applied judgment rather than hype.
A Nontraditional Path Into AI Governance
Mert’s background immediately challenges the assumption that AI auditing is reserved for engineers. He began his career in banking and insurance, transitioned into career coaching and publishing, and later moved into AI ethics and governance through European training programs. That path, he explains, mirrors the reality of the field itself. AI governance did not grow out of a single discipline. It sits at the intersection of risk management, ethics, compliance, quality assurance, and technology.
Now based in Istanbul, Mert works closely with BABL AI’s global student base, many of whom arrive from similarly diverse backgrounds. Lawyers, auditors, privacy professionals, risk managers, former educators, and business leaders all appear regularly in the program. The unifying factor is not technical pedigree but the ability to assess systems critically, identify risks, and reason through real-world impacts.
Do You Need a Technical Background?
One of the most common questions prospective students ask is whether they need deep technical skills to succeed. Mert’s answer is candid. The training is rigorous, particularly in areas like bias testing, statistics, and performance evaluation. For those without a technical background, the learning curve can be steep. But it is not prohibitive.
What matters more is motivation, discipline, and the willingness to engage deeply with the material. Mert describes spending significantly more time on certain topics than technically trained peers, relying heavily on course resources and live Q&A sessions. That extra effort, he argues, is not a disadvantage—it’s part of what builds real understanding. The goal is not memorization, but confidence in applying frameworks under real-world conditions.
What the Job Market Actually Looks Like
The episode also takes a sober look at hiring trends. While the demand for “AI auditors” as a standalone job title is still emerging, the demand for AI-specific governance, risk, and assurance skills is already here. Large consulting firms, certification bodies, financial institutions, and internal audit teams are increasingly upskilling existing professionals to handle AI-related risks.
Rather than waiting for a flood of job postings labeled “AI Auditor,” Shea explains that many organizations are embedding AI responsibilities into existing roles: internal audit, model risk management, compliance, privacy, and quality management. This is where certified professionals are finding opportunities—by bringing AI-specific expertise into roles organizations already understand.
Inside the Capstone Projects
As the primary evaluator of BABL AI’s capstone projects, Mert offers a rare glimpse into what applied AI auditing actually looks like. In recent months, most projects have focused on large language models used in hiring, education, healthcare, and pharmaceutical contexts. Students are not simply theorizing about risks; they are required to map systems using the Context–Input–Decision–Action (CIDA) framework, conduct risk assessments, design governance controls, and execute bias and accuracy testing plans.
The rise of accessible generative AI tools has made this work more tangible. Students can now interact directly with systems like ChatGPT or Gemini, define system prompts, and test performance under different conditions. This hands-on access, Shea notes, mirrors what is happening in the real world, where organizations increasingly rely on foundation models embedded into everyday tools.
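To make the bias-testing side of this concrete, here is a minimal, hypothetical sketch of the kind of harness a capstone student might build for an LLM-based hiring screener. The `mock_screener` function is a stand-in stub (not a real API); in practice a student would replace it with calls to a system like ChatGPT or Gemini and compare selection rates across demographic groups:

```python
# Hypothetical sketch of a bias-testing harness for an AI hiring screener.
# mock_screener is a stand-in for a real LLM call; the group labels and
# resume fields are illustrative, not from any real dataset.
from collections import defaultdict

def mock_screener(resume: dict) -> bool:
    """Stand-in for an LLM call that returns True if the candidate passes."""
    return resume["years_experience"] >= 3

def selection_rates(resumes, group_key):
    """Compute the pass rate for each demographic group."""
    passed, total = defaultdict(int), defaultdict(int)
    for r in resumes:
        g = r[group_key]
        total[g] += 1
        passed[g] += mock_screener(r)
    return {g: passed[g] / total[g] for g in total}

def disparate_impact(rates):
    """Four-fifths-rule style check: ratio of lowest to highest rate."""
    lo, hi = min(rates.values()), max(rates.values())
    return lo / hi if hi else 0.0

resumes = [
    {"group": "A", "years_experience": 5},
    {"group": "A", "years_experience": 2},
    {"group": "B", "years_experience": 4},
    {"group": "B", "years_experience": 1},
]
rates = selection_rates(resumes, "group")
print(rates)                    # {'A': 0.5, 'B': 0.5}
print(disparate_impact(rates))  # 1.0 (no disparity in this toy sample)
```

A real capstone test plan would run many prompts per group, vary system prompts and input conditions, and pair the statistics with the governance controls the audit recommends.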
Certification, Confidence, and Practice
A recurring theme throughout the discussion is confidence. Both Shea and Mert emphasize that certification alone does not make someone an independent auditor overnight. For professionals new to auditing, supervision and mentorship remain critical. However, the combination of applied coursework, capstone evaluation, and structured examination gives graduates something many other programs do not: a clear sense of what to do when faced with a real system.
This practical orientation is what distinguishes BABL AI’s training in the eyes of many graduates, including those who have completed certifications from larger, more established organizations. While those credentials carry brand recognition, students often report that BABL AI’s programs are where they learn how to actually perform the work.
Beyond a Single Career Path
The episode closes by broadening the lens. AI governance is not a single job title or career track. It is a growing set of responsibilities that touch nearly every function inside modern organizations. Whether someone goes on to work as an auditor, a risk manager, a compliance lead, or an internal advisor, the skills remain the same: understanding systems, identifying risks, and translating complex technical behavior into actionable governance.
For professionals considering a career shift—or those already tasked with managing AI risks inside their organization—this conversation offers clarity without overselling certainty. AI auditing is not a shortcut career. It is a discipline that rewards experience, judgment, and continuous learning.
Why This Episode Matters
As AI systems become more deeply embedded in business operations, organizations are quietly realizing they need people who can bridge the gap between technology, law, and risk. This episode doesn’t promise instant roles or easy answers. Instead, it provides something more valuable: a realistic picture of the field and the people who succeed in it.
For anyone curious about AI governance as a profession—or responsible for making it work in practice—this conversation is an essential starting point.
Where to Find Episodes
Lunchtime BABLing can be found on YouTube, Simplecast, and all major podcast streaming platforms.
Need Help?
Interested in building practical skills in AI governance and auditing? Visit BABL AI’s website for courses, certifications, and resources on AI risk management, algorithmic audits, and compliance.