The Trump Presidency and Its Potential Impact on AI Regulation

Written by Jeremy Werner

Jeremy is an experienced journalist, skilled communicator, and constant learner with a passion for storytelling and a track record of crafting compelling narratives. He has a diverse background in broadcast journalism, AI, public relations, data science, and social media management.
Posted on 01/21/2025

The 2024 U.S. presidential election has ushered in a period of renewed debate about the direction of artificial intelligence (AI) regulation. The return of Donald Trump to the Oval Office has raised questions about how his administration will influence AI policy and governance. While Trump has expressed enthusiasm for technological innovation, the implications for regulation remain nuanced, particularly in light of his administration’s broader deregulatory approach. This blog explores what a Trump presidency might mean for AI regulation, from potential federal rollbacks to increased state-level and international activity.


A History of Deregulatory Policies


One of the hallmarks of Trump’s first term was his focus on deregulation. His administration sought to reduce the regulatory burden on industries to promote economic growth and innovation. This approach often included rolling back Obama-era regulations in areas such as environmental protections and financial oversight. If this approach is mirrored in his second term, it is likely that AI regulation will see similar treatment, with an emphasis on reducing federal oversight to encourage technological advancement.


A key indicator of this potential trajectory is Trump’s repeal of Biden’s executive order on AI governance. Biden’s order had tasked federal agencies with implementing best practices outlined by the National Institute of Standards and Technology (NIST). While NIST itself enjoys bipartisan support, the removal of the executive order signals a shift away from federal involvement in AI governance.


Federal Deregulation: Implications and Challenges


Trump’s approach to AI regulation is expected to focus on deregulation at the federal level. This could mean fewer top-down mandates for compliance and a greater reliance on market-driven solutions. The administration has also indicated a preference for prioritizing innovation over precaution, which may result in a regulatory environment that emphasizes economic competitiveness rather than risk mitigation.


However, deregulation comes with its own set of challenges. The absence of uniform federal guidelines could lead to a fragmented regulatory landscape. Companies operating across multiple states may face compliance challenges as they navigate varying state-level regulations. This lack of consistency could create a compliance quagmire, particularly for smaller companies without the resources to manage complex regulatory requirements.


The Rise of State-Level Regulation


As federal oversight potentially diminishes, state governments are likely to fill the void. States such as California, New York, Colorado, and Illinois have already begun implementing their own AI-related laws. For example, Colorado's AI insurance regulation and Illinois' Biometric Information Privacy Act demonstrate how states are taking the lead in specific sectors.


California, with its robust economy and history of consumer protection laws, is poised to be a significant player in state-level AI regulation. The state’s existing laws, such as the California Consumer Privacy Act (CCPA), provide a foundation for addressing AI-related issues. Similarly, New York City’s Local Law 144, which requires bias audits of automated employment decision tools, highlights a focus on ensuring fairness and transparency in AI systems.


The rise of state-level regulation, while filling gaps left by federal deregulation, may lead to a patchwork of laws. Companies will need to adapt to these variations, creating potential inefficiencies and compliance risks.


International Considerations: The EU AI Act and Beyond


While the U.S. grapples with its regulatory approach, international frameworks such as the European Union’s AI Act continue to set the standard for global governance. The EU AI Act emphasizes risk-based regulation, mandating rigorous oversight for high-risk applications while allowing greater flexibility for low-risk uses.


For U.S. companies operating internationally, compliance with the EU AI Act will remain a priority. The act’s extraterritorial scope means that U.S. firms must adhere to its provisions if they wish to do business in the EU. This dynamic could create a de facto global standard, influencing U.S. practices despite the absence of equivalent federal regulations.


Other countries, including Canada, Australia, and Japan, are also advancing their AI regulatory frameworks. Canada’s proposed Artificial Intelligence and Data Act (AIDA) and Australia’s AI ethics guidelines highlight the growing international focus on ethical AI development and deployment. These frameworks may serve as models or points of reference for U.S. states seeking to craft their own regulations.


Litigation as a De Facto Regulator


In the absence of robust federal oversight, litigation may emerge as a primary mechanism for addressing AI-related harms. Lawsuits over biased algorithms, discriminatory practices, and data privacy violations are already becoming more common. High-profile cases, such as those involving facial recognition technology, underscore the potential for legal challenges to shape industry practices.


California, in particular, has signaled its readiness to pursue litigation as a regulatory tool. The state Attorney General’s office has indicated a willingness to enforce existing laws in ways that address emerging AI risks. This approach could have a deterrent effect, prompting companies to adopt more rigorous compliance measures to avoid legal liability.


The Role of Corporate Responsibility


Amid regulatory uncertainty, corporate responsibility will play a crucial role in shaping the future of AI governance. Large enterprises such as Microsoft and Google have already begun implementing internal policies and frameworks to address ethical and legal concerns. Microsoft’s requirement for vendors to obtain ISO certifications, for example, demonstrates how companies are taking proactive steps to manage AI risks.


Corporate leadership in AI governance is not just a matter of ethics but also a strategic imperative. By adopting best practices, companies can mitigate legal and reputational risks while gaining a competitive advantage in the marketplace. This dynamic is likely to drive greater investment in areas such as bias audits, risk assessments, and compliance training.


AI Safety and National Security


Trump’s previous statements about AI have included references to its potential for national security applications. His administration’s focus on defense-related AI projects, such as autonomous weapons systems and cybersecurity initiatives, suggests that this area will remain a priority. The idea of a “Manhattan Project” for AI in the military underscores the administration’s interest in leveraging AI for strategic advantage.


However, the emphasis on AI safety in national security contexts may not translate into broader regulatory safeguards. While military applications may benefit from rigorous testing and oversight, commercial and consumer uses of AI could face fewer guardrails under a deregulatory regime.


Educational and Workforce Implications


Another area of potential impact is education and workforce development. The Department of Education, under Trump’s administration, may face significant changes, including the possibility of reduced funding or even dissolution. This could affect efforts to integrate AI into educational curricula and to develop guidelines for using AI in schools.


Despite these challenges, there is likely to be continued interest in workforce development initiatives related to AI. Public-private partnerships and corporate-led training programs may fill gaps left by federal retrenchment. Companies investing in reskilling and upskilling programs could play a pivotal role in preparing workers for an AI-driven economy.


The Global AI Race


Trump’s emphasis on economic competitiveness is likely to influence his administration’s approach to AI regulation. Framing AI as a critical component of the global technology race, the administration may prioritize investments in research and development to maintain U.S. leadership. This could include incentives for private-sector innovation, such as tax breaks and grants for AI startups.


However, the focus on competitiveness may come at the expense of ethical considerations. Balancing innovation with responsibility will require careful navigation, particularly as other countries implement more comprehensive regulatory frameworks.


Conclusion: Navigating Uncertainty


The return of Donald Trump to the presidency marks a new chapter in the ongoing debate over AI regulation. While federal deregulation may create opportunities for innovation, it also introduces significant uncertainties for businesses and consumers. State-level regulations, international frameworks, and corporate responsibility will play increasingly important roles in shaping the AI landscape.


Need Help? 


If you want to have a competitive edge when it comes to AI regulations and laws, don’t hesitate to reach out to BABL AI. Their team of Audit Experts can provide valuable insights on implementing AI.

