AI isn’t just reshaping industries—it’s reshaping workers. In the latest episode of Lunchtime BABLing, BABL AI CEO Dr. Shea Brown returns with COO Jeffery Recker and Chief of Staff Emily Brown to dig into one of the biggest questions people are asking right now: how do you actually bring AI into your career without losing your value in the process?
The New Challenge: Adapting Without Overloading
Workers today are caught between two pressures: the expectation to use AI and the fear of using it incorrectly. Many companies still hesitate to adopt generative tools because of data security, reputation risks, or regulatory uncertainty. Shea acknowledges that this fear is often valid. But he also points out that avoiding AI doesn’t remove the risks—it simply removes your control over them.
The first step, he argues, is to learn how these tools work in your personal life. Experiment with consumer-grade language models and agents. Build small workflows at home. Understand the mechanics before trying to deploy them at work. Those who wait for permission will eventually find themselves behind colleagues who’ve already built competency. Jeffery echoes that advice, noting that most people are already surrounded by AI without realizing it. From email filtering to LinkedIn summaries to phone apps, AI is embedded everywhere. Creating an “AI inventory” of the tools already touching your life can make the technology feel less foreign—and reveal how much you already understand.
Filtering Is the Skill That Survives
One of the strongest themes in the discussion is the rise of what Shea bluntly calls “AI slop.” With so many people pasting unedited model outputs into emails, documents, and presentations, quality is dropping fast. The result: the people who can sift, refine, verify, and simplify information are becoming indispensable. Professionals who survive the shift will not be the ones who generate the most content. They will be the ones who know what’s correct, what’s safe, and what should be thrown out entirely.
Emily describes this as the return to judgment. AI can draft text, but it cannot understand context, read a room, or identify when a recommendation subtly undermines a company’s values or exposes it to risk. That responsibility still belongs to the human in the loop—and the people who embrace that role become the ones organizations trust.
When Your Company Is Scared of AI
For many workers, the barrier isn’t skill—it’s leadership hesitancy. Teachers, journalists, and public-sector employees in particular are facing organizations that worry about data breaches, hallucinations, and loss of integrity. Shea’s advice: treat that fear as your opportunity. The fastest way to become invaluable is to address the concerns your leadership can’t resolve on its own. That could mean researching compliant ways to use AI tools, documenting safe workflows, or identifying where human oversight must remain. Employees who can explain not just how to use AI, but how to use it safely, often become the people their organizations rely on. Instead of hiding your use of AI, demonstrate that you’re the one thinking critically about its risks.
Laid Off? Start a Dual-Track Plan
Amid ongoing layoffs, the episode doesn’t sugarcoat how difficult the job market is becoming. But it does offer a clear path forward. Shea recommends what he calls a dual-track strategy: continue applying for roles in your existing field while simultaneously building AI literacy in the background. You do not need to pivot overnight. But you do need to begin laying the foundation for a future where AI experience is expected in almost every role.
Emily speaks candidly about her own transition. She wasn’t an engineer. She wasn’t a data scientist. She moved into responsible AI because she recognized that her existing skills—ethics, operations, communication—were exactly the ones companies needed to adopt AI responsibly. Her story is a reminder that AI governance isn’t a technical career track—it’s a knowledge and judgment career track.
Building Proof You Can Work in the AI Era
The episode stresses that breaking into AI-aligned work doesn’t require advanced math or coding. It requires proof of thinking. That can include:
- Documenting a workflow you improved with AI
- Writing short posts that demonstrate your reasoning and judgment
- Creating small case studies based on real problems in your domain
- Joining working groups, volunteer communities, or professional networks
Jeffery notes that people rarely get hired because they blasted out 500 applications. They get hired because someone remembers them, someone saw their work, or someone trusted their judgment. Showcasing your thinking publicly—or within communities—helps build that trust long before a job interview.
Why This Episode Matters
AI will continue to reshape organizations, but this conversation reframes the moment. The people who succeed in the AI era aren’t the ones who know the most tools. They’re the ones who stay adaptable, who build confidence through small experiments, and who protect the human qualities technology can’t replace: clarity, empathy, judgment, and the ability to ask better questions. This episode offers something rare in today’s AI discourse: realism with optimism. It acknowledges the fear—but doesn’t let fear set the limits.
Where to Find Episodes
Lunchtime BABLing can be found on YouTube, Simplecast, and all major podcast streaming platforms.
Need Help?
Looking to explore a career in AI governance beyond the headlines? Visit BABL AI’s website for more resources on AI governance, risk, algorithmic audits, and compliance.