U.S. Department of Education Releases Essential AI Guide for Developers to Enhance Teaching and Learning
The U.S. Department of Education has unveiled a new guide titled “Designing for Education with Artificial Intelligence: An Essential Guide for Developers,” aimed at fostering innovation and ensuring safe and effective use of AI in educational settings. This comprehensive guide is set to support developers in creating AI-driven educational tools that enhance teaching and learning while maintaining safety, security, and trust.
The guide, released in response to President Joe Biden’s October 2023 Executive Order on AI, underscores the federal commitment to promoting the responsible development and deployment of AI in education. The order mandates the development of resources, policies, and guidance to address safe, responsible, and nondiscriminatory uses of AI in education, particularly concerning the impact on vulnerable and underserved communities.
Building on Previous Insights
This guide builds on the Department’s earlier report, “Artificial Intelligence and the Future of Teaching and Learning: Insights and Recommendations.” Unlike that broad overview, the new guide targets product leads, developers, designers, and legal teams. It urges developers to align AI tools with educational goals. By doing so, products can improve student outcomes and support teachers while meeting ethical and pedagogical standards.
Five Key Recommendations
The guide provides five central recommendations, each with questions, next steps, and resources:
1. Design for Teaching and Learning: Developers should anchor products in educational values and gather feedback from teachers and students throughout design and testing to ensure tools meet classroom needs.
2. Provide Evidence for Impact: The guide calls for strong evidence that AI tools improve outcomes, in line with the Elementary and Secondary Education Act of 1965 (ESEA), which mandates evidence of effectiveness.
3. Advance Equity and Protect Civil Rights: Developers must guard against bias, algorithmic discrimination, and accessibility gaps. Inclusive design ensures tools work for students with diverse needs, including those with disabilities.
4. Ensure Safety and Security: Given AI's rapid evolution, developers should safeguard data, prevent misuse, and protect system integrity, reducing risks to both students and technology.
5. Promote Transparency and Earn Trust: Trust requires openness. Developers should adopt transparent practices, engage educators in dialogue, and publicly commit to responsible AI development.
By following these five recommendations, developers can create AI educational tools that are safe, effective, and aligned with the education system's ethical standards. The guide offers a comprehensive framework for responsible AI development and deployment, ensuring these technologies serve the best interests of students, educators, and the broader community.
Opportunities and Risks
The Department highlights AI's dual nature: it can transform education but also introduces risks such as data privacy breaches and misuse. To address this, the guide calls for a balanced approach, urging developers to embrace innovation while managing risks proactively.
Broad Stakeholder Input
This guide reflects insights gathered through public listening sessions with students, parents, educators, industry groups, and nonprofits. These sessions highlighted safety concerns, identified risks, and uncovered opportunities to build trust in educational AI tools.
Need Help?
If you are wondering how this guide and other global AI regulations could affect your company, reach out to BABL AI. One of their audit experts will gladly provide assistance.

