AI, Human Rights, and Governance: Key Takeaways from RightsCon 2025

Written by Jeremy Werner

Jeremy is an experienced journalist, skilled communicator, and constant learner with a passion for storytelling and a track record of crafting compelling narratives. He has a diverse background in broadcast journalism, AI, public relations, data science, and social media management.
Posted on 03/03/2025
In Podcast

In the latest episode of Lunchtime BABLing, BABL AI CEO Dr. Shea Brown reports live from RightsCon 2025 in Taipei, diving into the growing intersection of AI, human rights, and global policy. Joined by BABL AI COO Jeffery Recker and CSO Bryan Ilg, Shea unpacks the key themes shaping the future of AI governance, including algorithmic auditing, investor concerns, and the urgent need for AI literacy.
Topics Discussed:

AI Dominates the Human Rights Agenda
RightsCon, an event historically focused on digital rights, has increasingly become a forum for AI governance discussions. This year was no exception. As Shea explains, AI is no longer just a technological advancement—it’s now a critical human rights issue. With powerful AI models concentrated in the hands of a few companies, discussions at the conference highlighted concerns around transparency, accountability, and the global AI divide.


This monopolization of AI development raises critical questions about ethical AI use, regulatory oversight, and the role of auditing in ensuring responsible deployment.


Algorithmic Auditing: The Push for AI Accountability

As AI becomes increasingly embedded in global institutions and businesses, the demand for transparency is growing. One of the major topics at RightsCon was algorithmic auditing—a structured process for assessing AI systems for compliance, fairness, and risk management.


Shea highlights the work of the International Association of Algorithmic Auditors (IAAA), a nonprofit coalition dedicated to defining and professionalizing the field of AI auditing.


With increasing regulatory scrutiny—particularly in the EU and U.S.—companies are facing greater pressure to provide evidence of compliance. Yet, many organizations still hesitate to conduct audits, claiming technical limitations or confidentiality concerns. Shea pushes back against this, emphasizing, “There is no excuse for any company to say they can’t do an AI audit. If they haven’t done it, it’s because they haven’t tried.”


Investors Are Paying Attention

A surprising yet powerful voice in the RightsCon discussions came from investors. As AI regulations tighten and enforcement ramps up, venture capitalists and private equity firms are recognizing the importance of AI risk management.


This shift signals a major change in AI development: AI ethics is no longer just a theoretical concern—it’s becoming a business necessity. Startups that prioritize responsible AI and risk assessments will have a competitive edge in securing funding.


AI Literacy: The Key to Ethical AI

Beyond audits and compliance, the discussion at RightsCon repeatedly returned to the theme of AI literacy. Nations looking to keep up with AI leaders in the U.S. and China recognize that education is the first step in building an AI-literate workforce.


The Path Forward: AI Governance & Risk Mitigation

As the discussion of RightsCon 2025 wraps up, one thing is clear: AI is at a crossroads. The decisions made today about transparency, education, and governance will shape how AI affects human rights and global economies in the years to come.


Where to Find Episodes

Lunchtime BABLing can be found on YouTube, Simplecast, and all major podcast streaming platforms.


Need Help?

For more information and resources on AI assurance, visit BABL AI’s website and stay tuned for future episodes of Lunchtime BABLing.
