📚 The Knowledge Hub

AI Standards in Education: The Plain-English Hub for District Leaders.

Your district has to build an AI policy. Maybe by July 1. You shouldn't have to read 50-page NIST documents to do it. This is the working translation: written by an educator, sourced from the people who write the standards.

Last updated April 28, 2026 · Updated as standards evolve

If you read nothing else, read this.

  • 26+ states have published AI guidance for K-12 schools as of 2026; your state likely has something already.
  • Two states (Ohio, Tennessee) currently require districts to adopt formal AI policies. Ohio's deadline is July 1, 2026.
  • NIST AI Risk Management Framework (AI RMF 1.0) is the foundational US framework. Voluntary, but cited by nearly every state guidance document.
  • 53 bills on AI in classroom instruction are pending across 25 states in 2026 sessions. Expect the regulatory landscape to shift this year.
  • Most state guidance shares 5 common priorities: data privacy, academic integrity, AI literacy, equity, and educator professional development.
  • The biggest risks for districts without policy: FERPA violations, equity gaps, plagiarism disputes, and parent backlash from unclear use.

The Four Levels

AI policy in education operates on four levels.

Knowing which level applies to you tells you who to listen to and what your district must comply with.

Level 1: International (Reference Layer)

OECD, UNESCO, ISO/IEC standards. Sets global norms; most US districts won't act on these directly, but state frameworks reference them.

Level 2: Federal (Foundation)

NIST AI RMF 1.0 (voluntary), federal executive orders, and FERPA/CIPA implications for AI tools handling student data.

Level 3: State (Where Action Happens)

26+ states have AI guidance documents. Two states (Ohio, Tennessee) have legal mandates. Your starting point is YOUR state.

Level 4: District / Local (Your Job)

Your district's actual policy, informed by all the above plus local context: community values, infrastructure, board priorities, resources.

Federal Foundation

The NIST AI Risk Management Framework, explained for districts.

Released January 2023 by the National Institute of Standards and Technology, AI RMF 1.0 is the federal government's voluntary framework for managing AI risk. It's referenced in nearly every state's AI guidance document, making it the de facto starting point even for K-12.

The RMF is built around four core functions. Each one maps to something a district AI policy should address. Here's how to translate them.

Function 1: Govern

Establish a culture of AI risk management. Define roles, responsibilities, accountability, and ongoing oversight processes.

For your district: Form an AI governance team. Include educators, IT, special ed, legal, and at least one student/parent voice. Meet quarterly. Document decisions.

Function 2: Map

Establish the context in which AI risks operate. Understand your specific use cases, data flows, and stakeholders.

For your district: Inventory every AI tool currently in use (yes, including teachers using ChatGPT for lesson planning). Map data flows. Identify where AI touches student records.

Function 3: Measure

Use quantitative and qualitative methods to analyze, assess, benchmark, and monitor AI risk and related impacts.

For your district: Define what "good" looks like. Set measurable goals (e.g., "100% of staff using AI tools complete privacy training by August"). Track outcomes, not just usage.

Function 4: Manage

Allocate resources to the risks you've mapped and measured, on a regular basis and as defined under Govern.

For your district: When something goes wrong (a privacy breach, a plagiarism dispute, a flawed AI output), who responds? What's the protocol? Document this BEFORE you need it.
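Taken together, Map and Measure amount to keeping a structured inventory and computing simple metrics over it. A minimal sketch in Python; the tool names, fields, and entries are hypothetical illustrations, not a prescribed schema:

```python
from dataclasses import dataclass

@dataclass
class AIToolRecord:
    """One row in a district AI-tool inventory (the Map function)."""
    name: str
    used_by: str                 # e.g. "teachers", "students", "admin"
    touches_student_data: bool   # FERPA-relevant?
    privacy_training_done: bool  # a Measure-style checkpoint

# Hypothetical inventory entries, for illustration only.
inventory = [
    AIToolRecord("ChatGPT (free tier)", "teachers", True, False),
    AIToolRecord("Adaptive math platform", "students", True, True),
    AIToolRecord("Slide generator", "admin", False, True),
]

# Map: flag tools that touch student records for privacy review.
needs_review = [t.name for t in inventory if t.touches_student_data]

# Measure: share of inventoried tools covered by privacy training.
trained = sum(t.privacy_training_done for t in inventory) / len(inventory)

print("Needs privacy review:", needs_review)
print(f"Training coverage: {trained:.0%}")
```

Even a spreadsheet with these four columns gives the governance team something concrete to review each quarter.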

Source: Download the full NIST AI RMF 1.0 (PDF, 48 pages) · See also: AI RMF Playbook

Where Your State Stands

State-by-state AI policy tracker.

As of April 2026, 26+ states have published K-12 AI guidance. Two have legal mandates. Many more have task forces issuing reports this year. Below is a working snapshot of states making notable moves.

Ohio (Required)
Mandate: Every public, community, and STEM school must adopt an AI policy by July 1, 2026. The state has released a model policy template.

Tennessee (Required)
Mandate: Districts have been required since March 2024 to publicly post AI use policies. Includes pilot programs and university partnerships.

Idaho (Enacted)
SB 1227: statewide K-12 AI framework; mandates local policies, AI literacy standards, and educator training; prohibits AI from replacing teachers.

Maryland (Passed 1st)
SB 720 / HB 1057 (AI Ready Schools Act): annual AI guidance, district-aligned policies, designated AI coordinators, certified AI tools.

Vermont (Guidance)
January 2026: 50-page guidance with a developmental framework (PreK-2 no chatbots; grades 3-5 curriculum-embedded; 6-8 structured; 9-12 AI fluency).

California (Guidance)
CDE released TK-12 guidance emphasizing "human-centered" implementation: AI as enhancement, not replacement. Aligns with state academic standards.

Alaska (Guidance)
Framework of 7 guiding principles, with a strong cultural-responsiveness emphasis. Aligns with FERPA/CIPA. Designed as recommendations, not mandates.

Connecticut (Enacted)
Public Act 24-151: comprehensive structured rollout. A national model for teacher-preparation pathways and PD investment.

Utah (Guidance)
USBE released a PreK-12 framework plus an AI steering committee. Educator AI summits with stipends. The Utah Education Network offers an AI toolkit.

New Jersey (Proposed)
A 4352 / S 2862: requires K-12 AI instruction including ethical use; higher ed must offer AI degree and certificate programs.

Georgia (Passed 2nd)
SB 179: makes computer science (including AI) a graduation requirement starting in 2031-2032. A 2024 task force report on AI literacy is already published.

Mississippi (Passed 2nd)
SB 2294: high school students starting in 2029-2030 must earn a CS or CTE credit including AI instruction. AI task force established (SB 2426).

Arizona (Proposed)
HB 4040 + HB 4005: require K-12 AI use policies, detection of unauthorized use, and instruction on ethical and educational use.

Florida (Proposed)
HB 1503 / SB 1694: AI instruction within computer science coursework, plus responsible-AI application opportunities in general education.

Don't see your state? It's likely on the map; most states have at least task force activity. Check Education Commission of the States for the most current tracker.

Your Action Plan

What to do this week.

If your district doesn't yet have an AI policy, here's the leanest path forward, in the order that creates real momentum without paralyzing your team.

1. Find your state's guidance

Pull up your state Department of Education website or the Ballotpedia state guidance tracker. Even if your state hasn't issued a mandate, the guidance shapes what "reasonable" looks like.

2. Form your governance team

Per NIST RMF Function 1: a working group of 5-8 people: educators, IT, a special ed lead, a board liaison, and an optional student/parent voice. Set the first meeting within 2 weeks.

3. Inventory current AI use

Survey staff anonymously: what AI tools are people already using? You probably have shadow AI use happening right now. You can't govern what you don't see.

4. Draft a one-page interim policy

Don't wait for the perfect 30-page policy. Start with a one-pager covering what's allowed, what's not, and who to ask. Iterate from there.

5. Plan PD for educators

Most state mandates require educator training. Build the PD calendar now; even informal sessions count. Documentation matters when audits come.

6. Set a quarterly review cycle

AI tools and standards evolve fast. Build review into the policy itself: "This policy will be reviewed every 90 days." That meta-decision is half the battle.
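The 90-day cadence in step 6 can be pinned down the day the policy is adopted. A small sketch; the adoption date is a placeholder, not a recommendation:

```python
from datetime import date, timedelta

def review_dates(adopted: date, cycle_days: int = 90, count: int = 4) -> list[date]:
    """Return the next `count` scheduled policy-review dates."""
    return [adopted + timedelta(days=cycle_days * i) for i in range(1, count + 1)]

# Placeholder adoption date, for illustration only.
for d in review_dates(date(2026, 7, 1)):
    print(d.isoformat())
```

Putting the resulting dates on the board calendar up front is what turns "reviewed every 90 days" from aspiration into practice.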

Risk Awareness

The five risks districts overlook.

These are the gaps lawyers, parents, and auditors point to when an AI incident happens. Most are preventable, but only if you're looking for them.

⚠️ FERPA Violations

Teachers pasting student work into ChatGPT for grading. Every AI tool's privacy policy matters. Free tools often train on input. This is the #1 quiet risk.

⚖️ Academic Integrity Disputes

Without a clear "what counts as cheating" policy, disputes go to court. AI detection tools have false positives; innocent students get accused. Document the process.
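The false-positive danger is a base-rate problem, and the arithmetic is worth running once. A sketch with illustrative numbers; every rate below is an assumption for the example, not a measured figure for any detector:

```python
# Base-rate arithmetic for AI-writing detectors.
# All rates are illustrative assumptions, not product benchmarks.
students = 1000      # essays submitted
cheat_rate = 0.10    # fraction actually violating the AI policy
fpr = 0.02           # detector false-positive rate on honest work
tpr = 0.80           # detector true-positive rate on AI-assisted work

honest = students * (1 - cheat_rate)        # 900 honest students
false_accusations = honest * fpr            # 18 honest students flagged
true_catches = students * cheat_rate * tpr  # 80 genuine cases flagged

# Of everyone flagged, what share is innocent?
innocent_share = false_accusations / (false_accusations + true_catches)
print(f"{false_accusations:.0f} false accusations; "
      f"{innocent_share:.0%} of flagged students are innocent")
```

Under these assumptions a detector that sounds accurate on paper still flags 18 honest students, and nearly 1 in 5 accusations is wrong, which is why the process around the tool matters as much as the tool.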

🌍 Equity Gaps

Students with paid AI access (parents who pay for ChatGPT Plus, Claude Pro) outperform peers. If you don't address access, you widen achievement gaps invisibly.

🤖 Algorithmic Bias

AI systems reflect their training data. Without bias monitoring, AI grading and feedback can systematically disadvantage students of color, ELLs, and students with IEPs.

👨‍👩‍👧 Parent Backlash

Parents discovering "AI is being used on my kid" without prior notification = lawsuits and board meetings. Communication strategy is part of policy, not separate from it.

Need AI literacy resources for your teachers?

The College & Career Launch Room is the resource library that makes implementing your district's AI policy feasible: research-grounded lessons, plug-and-play prompt libraries, and curriculum alignment. Founding members lock in $19/mo for life.

Tour The Launch Room →

📚 Sources & Further Reading

  • NIST AI Risk Management Framework (AI RMF 1.0). National Institute of Standards and Technology, January 2023. Download PDF
  • NIST AI Standards Zero Drafts Pilot Project. NIST, 2026. View page
  • FutureEd 2026 Legislative Tracker: AI in Education Bills (53 bills, 25 states). Updated March 2026. View tracker
  • Education Commission of the States: How States Are Responding to AI in Education. Read report
  • Ballotpedia: AI guidance issued by state departments of education. State-by-state tracker
  • MultiState: 2026 State Policy Trends in AI Education (134 bills across 31 states). Read summary
  • NIST AI RMF Playbook. View playbook
  • AI for Education state guidance compilation. Browse

This page is updated as new state guidance, federal policy, and frameworks are released. Last refreshed April 28, 2026.