NY Attorney General Unveils Key Findings from Generative AI Symposium on Risks and Opportunities

Written by Jeremy Werner

Jeremy is an experienced journalist, skilled communicator, and constant learner with a passion for storytelling and a track record of crafting compelling narratives. He has a diverse background in broadcast journalism, AI, public relations, data science, and social media management.
Posted on 09/12/2024
In News

The Office of the New York State Attorney General (OAG) recently released a summary of the discussions at a symposium it hosted, offering insight into how New York and other jurisdictions might regulate artificial intelligence (AI) moving forward. The symposium, held on April 12, 2024, was titled “The Next Decade of Generative AI: Fostering Opportunities While Regulating Risks.”


This event, which focused on the rapidly evolving field of generative AI, brought together academics, policymakers, advocates, and industry representatives to discuss the opportunities and risks posed by AI, especially generative models. Generative AI is a subset of AI that creates new content, such as text, images, audio, and video, offering substantial potential but also raising significant concerns.


Generative AI is not only a tool for creative content generation but also a transformative force in sectors like healthcare, where it holds promise for early disease detection, drug discovery, and even administrative efficiency in medical settings. For example, one speaker at the symposium highlighted an AI tool designed to review mammograms and identify abnormalities that may indicate cancer risk up to five years in advance. The discussions also emphasized that while AI tools could enhance medical processes, they must be used in conjunction with human oversight to avoid privacy and ethical risks, particularly concerning sensitive healthcare data.


The symposium also tackled the broader impacts of AI on information dissemination and the challenges associated with misinformation. Chatbots powered by AI, often used in customer service and public assistance roles, have proven valuable for streamlining information delivery. However, AI’s potential to “hallucinate” and provide incorrect or misleading information remains a significant flaw. Participants stressed that generative AI could be exploited by bad actors to spread misinformation or create deepfakes—realistic but false digital media designed to deceive, a concern that is particularly pressing with the upcoming U.S. elections. The risk that deepfakes could sow confusion and disrupt democratic processes was highlighted as a growing threat.


Another focus of the event was AI’s role in administrative and decision-making processes. AI is increasingly used in government agencies to streamline application reviews and distribute public services more efficiently. While this can speed up bureaucratic processes, the symposium participants warned about the risks of algorithmic bias. AI tools, particularly those used in hiring or application screening, could inadvertently amplify existing biases, further entrenching discriminatory practices. The opaque nature of many AI models—commonly referred to as “black-box” algorithms—complicates efforts to understand their decision-making processes and ensure fairness. The consensus was that any deployment of AI in decision-making must include mechanisms to mitigate bias and ensure transparency.


Data quality and accessibility also emerged as critical themes. Generative AI models rely on vast datasets for training, but concerns about the use of copyrighted content without compensating creators have sparked legal battles. Moreover, the underrepresentation of minority groups in training data raises the risk that AI models will serve only certain populations effectively, leaving others behind. Participants at the symposium called for “data democratization” to foster innovation while balancing privacy concerns. They also warned of “model collapse,” where over-reliance on AI-generated synthetic data in training could degrade model accuracy and reliability over time.


To address these risks, speakers proposed several mitigation strategies, including the need for greater public education on AI. Increasing AI literacy among the public would empower individuals to better understand AI’s capabilities and limitations, as well as recognize AI-generated misinformation. Symposium participants also urged the development of transparency standards, such as clear labeling of AI-generated content and transparent auditing of AI models. These measures, they argued, are crucial for fostering trust in AI technologies.


The symposium also explored potential regulatory frameworks for AI. While some advocated for comprehensive federal legislation similar to the EU AI Act, others preferred a sector-specific regulatory approach, allowing individual agencies to tailor regulations to their domains. The event underscored the need for ongoing government oversight to ensure AI technologies are developed and used in ways that align with societal values and legal standards.



Need Help?


If you’re wondering how New York’s approach to AI, another state’s AI approach, or a global AI bill could impact you, don’t hesitate to reach out to BABL AI. Their Audit Experts are ready to provide valuable assistance and answer your questions and concerns.

