AI Standards in Education:
The Plain-English Hub for District Leaders.
Your district has to build an AI policy. Maybe by July 1. You shouldn't have to read 50-page NIST documents to do it. This is the working translation: written by an educator, sourced from the people who write the standards.
If you read nothing else, read this.
- 26+ states have published AI guidance for K-12 schools as of 2026; your state likely has something already.
- Two states (Ohio, Tennessee) currently require districts to adopt formal AI policies. Ohio's deadline is July 1, 2026.
- NIST AI Risk Management Framework (AI RMF 1.0) is the foundational US framework. Voluntary, but cited by nearly every state guidance document.
- 53 bills across 25 states are pending in 2026 sessions on AI in classroom instruction. Expect the regulatory landscape to shift this year.
- Most state guidance shares 5 common priorities: data privacy, academic integrity, AI literacy, equity, and educator professional development.
- The biggest risks for districts without policy: FERPA violations, equity gaps, plagiarism disputes, and parent backlash from unclear use.
The Four Levels
AI policy in education operates on four levels. Knowing which level applies to you tells you who to listen to and what your district must comply with.
International – Reference Layer
OECD, UNESCO, and ISO/IEC standards. These set global norms; most US districts won't act on them directly, but state frameworks reference them.
Federal – Foundation
NIST AI RMF 1.0 (voluntary), federal executive orders, and FERPA/CIPA implications for AI tools handling student data.
State – Where Action Happens
26+ states have AI guidance documents. Two states (Ohio, Tennessee) have legal mandates. Your starting point is YOUR state.
District / Local – Your Job
Your district's actual policy, informed by all the above plus local context: community values, infrastructure, board priorities, resources.
Federal Foundation
The NIST AI Risk Management Framework, explained for districts.
Released January 2023 by the National Institute of Standards and Technology, AI RMF 1.0 is the federal government's voluntary framework for managing AI risk. It's referenced in nearly every state's AI guidance document, making it the de facto starting point even for K-12.
The RMF is built around four core functions. Each one maps to something a district AI policy should address. Here's how to translate them.
Govern
Establish a culture of AI risk management. Define roles, responsibilities, accountability, and ongoing oversight processes.
Map
Establish the context in which AI risks operate. Understand your specific use cases, data flows, and stakeholders.
Measure
Use quantitative and qualitative methods to analyze, assess, benchmark, and monitor AI risk and related impacts.
Manage
Act on the risks you've mapped and measured: prioritize them, allocate resources, and respond on the cadence and terms defined under Govern.
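One way to make the translation concrete: treat the four functions as a checklist and flag the ones a draft policy hasn't answered yet. A minimal Python sketch; the function names come from AI RMF 1.0, but the district-facing questions are illustrative assumptions, not NIST language.

```python
# Map each NIST AI RMF function to one district-level question.
# The questions are illustrative, not official NIST wording.
RMF_CHECKLIST = {
    "Govern": "Who owns AI oversight, and how often do they meet?",
    "Map": "Which AI tools are in use, and what student data do they touch?",
    "Measure": "How do we assess accuracy, bias, and privacy impact?",
    "Manage": "Who acts on flagged risks, and with what resources?",
}

def unanswered(answers: dict) -> list:
    """Return the RMF functions a draft policy has not yet addressed."""
    return [fn for fn in RMF_CHECKLIST if not answers.get(fn)]

# Example: a draft that covers Govern but leaves Map blank.
draft_policy = {"Govern": "AI working group, meets monthly", "Map": ""}
gaps = unanswered(draft_policy)  # -> ["Map", "Measure", "Manage"]
```

The point of the sketch is the shape, not the code: a policy that can't answer one of the four questions has a named gap to close.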
Source: Download the full NIST AI RMF 1.0 (PDF, 48 pages) · See also: AI RMF Playbook
Where Your State Stands
State-by-state AI policy tracker.
As of April 2026, 26+ states have published K-12 AI guidance. Two have legal mandates. Many more have task forces issuing reports this year. Below is a working snapshot of states making notable moves.
Don't see your state? It's likely on the map; most states have at least task force activity. Check Education Commission of the States for the most current tracker.
Your Action Plan
What to do this week.
If your district doesn't yet have an AI policy, here's the leanest path forward, in the order that creates real momentum without paralyzing your team.
Find your state's guidance
Pull up your state Department of Education website or the Ballotpedia state guidance tracker. Even if your state hasn't issued a mandate, the guidance shapes what "reasonable" looks like.
Form your governance team
Per the first NIST RMF function (Govern): a working group of 5-8 people. Educators, IT, special ed lead, board liaison, optional student/parent voice. Set the first meeting within 2 weeks.
Inventory current AI use
Survey staff anonymously: what AI tools are people already using? You probably have shadow AI use happening right now. You can't govern what you don't see.
Draft a one-page interim policy
Don't wait for the perfect 30-page policy. Start with a one-pager covering: what's allowed, what's not, who to ask. Iterate from there.
Plan PD for educators
Most state mandates require educator training. Build the PD calendar now; even informal sessions count. Documentation matters when audits come.
Set a quarterly review cycle
AI tools and standards evolve fast. Build review into the policy itself: "This policy will be reviewed every 90 days." That meta-decision is half the battle.
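The 90-day cadence above is easy to operationalize as an actual calendar rather than a good intention. A small sketch, assuming a hypothetical adoption date of July 1, 2026:

```python
from datetime import date, timedelta

def review_dates(adopted: date, cycle_days: int = 90, count: int = 4) -> list:
    """Generate the next `count` policy review dates on a fixed-day cycle."""
    return [adopted + timedelta(days=cycle_days * i) for i in range(1, count + 1)]

# Hypothetical example: policy adopted July 1, 2026, reviewed every 90 days.
schedule = review_dates(date(2026, 7, 1))
# First review lands September 29, 2026; the fourth, June 26, 2027.
```

Dropping those four dates onto the board calendar at adoption time is what turns "reviewed every 90 days" from policy language into practice.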
Risk Awareness
The five risks districts overlook.
These are the gaps lawyers, parents, and auditors point to when an AI incident happens. Most are preventable โ but only if you're looking for them.
⚠️ FERPA Violations
Teachers pasting student work into ChatGPT for grading. Every AI tool's privacy policy matters. Free tools often train on input. This is the #1 quiet risk.
⚖️ Academic Integrity Disputes
Without a clear "what counts as cheating" policy, disputes go to court. AI detection tools have false positives; innocent students get accused. Document the process.
📊 Equity Gaps
Students with paid AI access at home (ChatGPT Plus, Claude Pro) can outpace peers without it. If you don't address access, you widen achievement gaps invisibly.
🤖 Algorithmic Bias
AI systems reflect their training data. Without bias monitoring, AI grading and feedback can systematically disadvantage students of color, ELLs, and students with IEPs.
👨‍👩‍👧 Parent Backlash
Parents discovering "AI is being used on my kid" without prior notification = lawsuits and board meetings. Communication strategy is part of policy, not separate from it.
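For the FERPA risk in particular, some districts script a crude redaction pass before any student text reaches an external AI tool. A rough illustrative sketch, not a compliance tool; the roster names and the student-ID pattern here are assumptions, and real redaction still needs human review.

```python
import re

# Hypothetical roster and an assumed 6-9 digit student-ID format.
ROSTER = ["Jordan Lee", "Priya Patel"]
ID_PATTERN = re.compile(r"\b\d{6,9}\b")

def redact(text: str) -> str:
    """Replace roster names and ID-like numbers before text leaves the district."""
    for name in ROSTER:
        text = text.replace(name, "[STUDENT]")
    return ID_PATTERN.sub("[ID]", text)

sample = "Jordan Lee (ID 483920) wrote a strong thesis."
clean = redact(sample)
# -> "[STUDENT] (ID [ID]) wrote a strong thesis."
```

Even a pass this crude catches the common failure mode (a name and ID pasted wholesale into a free tool); the policy question is who maintains the roster and patterns.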
Need AI literacy resources for your teachers?
The College & Career Launch Room is the resource library that makes implementing your district's AI policy feasible โ research-grounded lessons, plug-and-play prompt libraries, and curriculum alignment. Founding members lock in $19/mo for life.
Tour The Launch Room →
📚 Sources & Further Reading
- NIST AI Risk Management Framework (AI RMF 1.0). National Institute of Standards and Technology, January 2023. Download PDF
- NIST AI Standards Zero Drafts Pilot Project. NIST, 2026. View page
- FutureEd 2026 Legislative Tracker: AI in Education Bills (53 bills, 25 states). Updated March 2026. View tracker
- Education Commission of the States: How States Are Responding to AI in Education. Read report
- Ballotpedia: AI guidance issued by state departments of education. State-by-state tracker
- MultiState: 2026 State Policy Trends in AI Education (134 bills across 31 states). Read summary
- NIST AI RMF Playbook. View playbook
- AI for Education state guidance compilation. Browse
This page is updated as new state guidance, federal policy, and frameworks are released. Last refreshed April 28, 2026.