UPDATE — AUGUST 2025: Since hosting its 2024 symposium on the future of generative AI, New York has begun shaping state-level AI regulation, with legislators proposing an AI Accountability Bill that would require impact assessments, disclosure of AI use, and even a state registry for high-risk systems. The Attorney General’s office has also launched investigations into AI-generated deepfakes in political ads, echoing concerns raised at the symposium. Nationally, the Biden administration’s AI Executive Order and new OMB guidance now require agencies to disclose AI use and mandate human oversight, while Congress debates disclosure and deepfake bills. Globally, rules like the EU AI Act are pressing companies — including those in New York — to align with new transparency and accountability standards. The risks highlighted in 2024 around bias, misinformation, and healthcare AI are now driving real legislative and enforcement actions in 2025.
ORIGINAL NEWS STORY:
NY Attorney General Unveils Key Findings from Generative AI Symposium on Risks and Opportunities
The Office of the New York State Attorney General (OAG) recently published the findings of a symposium it hosted, offering a window into how New York and other jurisdictions may regulate artificial intelligence (AI) going forward. The symposium, held on April 12, 2024, was titled “The Next Decade of Generative AI: Fostering Opportunities While Regulating Risks.”
The event brought together academics, policymakers, advocates, and industry representatives to discuss the opportunities and risks posed by AI, especially generative models. Generative AI is a subset of AI that creates new content, such as text, images, audio, and video, offering substantial potential while also raising significant concerns.
Exploring Generative AI’s Potential
Speakers highlighted how generative AI could transform fields such as healthcare, education, and communications while also creating serious ethical and legal challenges. In healthcare, experts described AI tools that can detect cancer risk in mammograms up to five years earlier than traditional methods. They agreed that human oversight remains essential, warning that AI should complement, not replace, medical professionals.
Addressing Misinformation and Deepfakes
Participants warned that generative AI also carries risks for public trust and democracy. Chatbots and automated assistants can streamline communication, but they can also “hallucinate” and produce false information. Panelists noted that deepfakes—realistic but fabricated videos or images—pose a growing threat, especially ahead of upcoming elections. Speakers stressed the need for transparency in AI-generated media and stronger rules against malicious uses. They argued that governments must act quickly to prevent AI tools from spreading misinformation or manipulating voters.
Tackling Bias and Black-Box Algorithms
The symposium also examined AI use in government decision-making. Agencies increasingly rely on algorithms to review applications and deliver services faster. Yet these same tools can unintentionally reinforce discrimination. Experts cautioned that “black-box” algorithms—systems whose inner workings are hidden—make it difficult to detect or fix bias. They urged agencies to include bias testing, auditing, and clear documentation in every stage of deployment.
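Where decision outcomes can be broken down by demographic group, even a lightweight screening audit can surface the disparities panelists warned about. The Python sketch below is a minimal illustration, assuming a hypothetical decision log with made-up "group" and "approved" columns; it compares each group's approval rate to the highest-rate group and flags anything below the common four-fifths screening threshold. It is one simple screening heuristic, not a full fairness audit.

```python
import pandas as pd

def impact_ratios(df: pd.DataFrame, group_col: str, outcome_col: str) -> dict:
    """Return each group's favorable-outcome rate divided by the
    highest-rate group's rate (the 'four-fifths rule' heuristic)."""
    rates = df.groupby(group_col)[outcome_col].mean()
    reference = rates.max()
    return {group: rate / reference for group, rate in rates.items()}

# Hypothetical decision log: 1 = application approved, 0 = denied.
decisions = pd.DataFrame({
    "group":    ["A", "A", "A", "A", "B", "B", "B", "B"],
    "approved": [1,   1,   0,   1,   1,   0,   0,   0],
})

for group, ratio in impact_ratios(decisions, "group", "approved").items():
    flag = "needs review" if ratio < 0.8 else "ok"  # 80% screening threshold
    print(f"group {group}: impact ratio {ratio:.2f} ({flag})")
```

A check like this is only a first pass: it cannot explain why a disparity exists inside a black-box system, which is why the panelists paired it with auditing and documentation requirements across the full deployment lifecycle.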
Data Quality and Fairness in AI Models
Another central theme was data governance. Generative AI systems require massive datasets, but many rely on copyrighted or biased material. Participants noted that creators deserve fair compensation and that underrepresented groups must be better reflected in training data. They also discussed “model collapse,” a phenomenon where AI systems trained on synthetic data degrade in performance over time. To counter these risks, speakers promoted data democratization, encouraging open and ethical data access that protects privacy.
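The model-collapse dynamic is easy to demonstrate in miniature. The Python sketch below is a toy illustration, not any system discussed at the symposium: a stand-in "model" (a fitted Gaussian) is repeatedly retrained on its own synthetic samples, and because each fit uses finitely many samples, estimation noise compounds and the fitted spread tends to drift toward zero across generations, loosely analogous to generative models degrading when trained on synthetic data.

```python
import numpy as np

rng = np.random.default_rng(seed=42)

# "Real" data: draws from a standard normal distribution.
data = rng.normal(loc=0.0, scale=1.0, size=200)

# Each generation, the stand-in "model" (a fitted Gaussian) is trained only
# on the previous generation's synthetic samples, then sampled from again.
# With finite samples, the fitted spread tends to shrink over generations.
for generation in range(301):
    mu, sigma = data.mean(), data.std()
    if generation % 50 == 0:
        print(f"gen {generation:3d}: mean = {mu:+.3f}, std = {sigma:.3f}")
    data = rng.normal(loc=mu, scale=sigma, size=200)
```

The shrinking standard deviation mirrors the loss of diversity speakers described, and it is one technical argument for the data-democratization measures they promoted: continued access to fresh, real, representative data.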
Building Public Trust Through Education and Transparency
To build trust, experts recommended public education initiatives to improve AI literacy. Teaching people how to identify and understand AI-generated content can reduce misinformation and promote responsible use. Speakers also called for clear labeling standards, transparent audits, and visible disclosure requirements to help the public distinguish between human and AI-generated material.
Regulatory Approaches for AI Oversight
The symposium also took up the question of regulation. Some attendees favored a comprehensive national AI framework similar to the EU AI Act, while others preferred a sector-specific approach that would let individual agencies tailor rules to their domains. Across both camps, participants underscored the need for ongoing government oversight to ensure AI technologies are developed and used in ways that align with societal values and legal standards.
Need Help?
If you’re trying to understand how New York’s AI initiatives—or any state or global policies—could affect your organization, contact BABL AI. Their Audit Experts can help you assess compliance risks, prepare for new regulations, and build responsible AI systems.