
California’s AI Guidance Is Here | Many L.A. County Teachers Say the Support Isn’t.
On December 31, 2025, the California Department of Education (CDE) updated and published its statewide guidance on artificial intelligence in public schools, positioning AI as both a literacy imperative and a practical classroom tool, while emphasizing guardrails around privacy, equity, and human-centered instruction.
But in Los Angeles County, home to the nation’s second-largest school district, educators’ feedback is sending a different signal: the state may have clarified what “responsible AI” should look like, yet many teachers still do not feel equipped to deliver it.
The disconnect is measurable. A needs assessment tied to Los Angeles County’s generative AI work found that only 16% of teachers and 23% of administrators rated themselves as having a “good or deep understanding” of generative AI in educational settings. At the same time, the same LA County effort documented a surge in demand for concrete support, professional learning, use cases, and guidance on tool selection and student-data protections.
This is not just a training problem. It is an implementation problem—one that raises a policy question California has not yet answered convincingly: When the state issues AI guidance that districts are not required to follow, who is accountable for ensuring teachers are actually prepared to use AI safely, effectively, and equitably?
What California Officially Released—and What It Actually Requires
CDE’s AI materials are explicit about their intent: they are “helpful guidance” and “not mandatory” for local educational agencies (LEAs). But the documents draw a sharp line between guidance and legal obligation: privacy and child-protection laws still apply, including FERPA and COPPA, regardless of whether a district adopts the guidance.
CDE’s core framing is “learning with AI” and “learning about AI,” encouraging schools to integrate AI literacy across subject areas while establishing clear guardrails. The guidance explicitly warns against treating AI as a substitute for the educator’s role, emphasizing that AI should reduce routine burden so educators can focus on the “deeply human” aspects of teaching—connection, ethical reflection, and personalized support.
On paper, the north star is clear:
AI should augment, not replace, educators.
Implementation must prioritize equity and access, including guardrails and AI literacy training.
Data protection is non-negotiable: LEAs must treat student privacy as foundational, and the guidance warns against entering personally identifiable information into “open” AI systems.
The problem is not what the documents say. The problem is that guidance does not automatically become capacity.
LA County’s Survey Signal: Teachers Want AI Support—Now
Los Angeles County has not been idle. LACOE’s generative AI guidelines describe a research-based needs assessment conducted with Project Tomorrow and Arizona State University’s Learning Transformation Studios, focused on stakeholder familiarity, attitudes, and concerns about generative AI in K–12.
The needs assessment summary is revealing:
The survey was open from February 13, 2024, to May 31, 2024, and reported 1,100+ responses early in the collection window.
Respondents said they needed:
professional learning for educators,
use cases and best-practice recommendations,
leadership training for managing GenAI learning environments,
guidance on which tools are appropriate,
and clarity on student data protections and regulations.
The document later quantifies the demand: 77% want professional learning on using GenAI effectively; 68% want use cases; 66% want guidance on which tools are appropriate; and 55% want information on how student data is protected in GenAI tools.
This is not a “wait and see” posture. It is a direct request for operational support.
And it aligns with broader survey evidence beyond LA County. In Project Tomorrow’s 2025 Speak Up release, only 13% of teachers reported being “very confident” in their ability to use AI tools, and only 15% said their district provides ample professional development for classroom AI use. (Project Tomorrow)
The Policy Gap: “Human-Centered AI” Meets Unfunded Implementation
CDE’s documents repeatedly emphasize “human-centered” implementation: protect relationships, preserve student well-being, and keep professional judgment central. Yet teachers are describing the opposite lived reality: rapid tool exposure without consistent training, clarity, or time.
LACOE’s own AI Implementation Plan acknowledges the scale of the capacity challenge, setting targets that implicitly admit present readiness is not where leaders want it to be, such as establishing a staff-confidence baseline and targeting 60% reporting confidence in responsible AI use. (LACOE) The plan also explicitly ties countywide progress measurement to Project Tomorrow survey data, signaling that “perception data” is being treated as a core readiness indicator, not a side note. (LACOE)
In other words, leadership knows confidence is a bottleneck. Teachers are saying the same thing. The difference is that teachers are living the bottleneck daily, while still being expected to manage academic integrity, redesign assessments, and protect student data in a fast-moving AI environment.
Where the Tension Shows Up First: Classroom Integrity and Student Data
Two friction points rise to the top when you compare the policy language to educator concerns:
1) Academic integrity is no longer a “plagiarism” problem
CDE’s academic integrity framing acknowledges that unauthorized AI use may not always fit traditional plagiarism definitions, yet still undermines learning goals and fairness, necessitating clear, consistent policies and shared understanding with students and families.
Teachers are being asked to hold that line while many districts still lack:
defined classroom norms,
approved tool lists,
consistent expectations across sites,
and training on how to design “AI-resilient” assignments.
2) Data privacy and procurement require expertise many educators don’t have
CDE’s privacy guidance places responsibility on LEAs to review and approve AI tools and explicitly warns against putting personally identifiable student information into open AI systems. But this becomes operationally messy when:
teachers experiment with tools independently,
districts do not provide vetted alternatives,
and schools lack clear procurement pathways for AI-enabled products.
In LA County survey feedback, “how student data is protected” and “which tools are appropriate” emerge as top needs, precisely the areas where confusion can create compliance risk.
What Teachers Are Actually Asking For (and What “Support” Has to Mean)
The LA County needs assessment does not read like a request for another webinar. It reads like a request for an implementation system:
Role-based professional learning (teachers, administrators, support staff)
District-approved use cases by grade span and subject area
Tool vetting and clear procurement pathways to reduce shadow adoption
Student data protection guidance that is practical, not just legalistic
Alignment to equity goals so AI does not deepen the digital divide
This is the part of the conversation that often gets obscured: teachers are not necessarily rejecting AI. Many are asking for the conditions that make responsible AI possible.
The Education Media Advantage
Education Media works with districts to translate AI policy into instructional practice—without compromising academic integrity, equity, or student data privacy.
We support schools and districts through:
AI-resilient curriculum and assessment redesign that preserves instructional rigor in an AI-enabled classroom
District AI implementation systems that provide oversight, documentation, and audit-ready governance
Teacher implementation support and professional learning aligned to district policy, academic integrity standards, and classroom realities: AI in Teaching Practice | From Policy to Classroom Implementation
Our approach is not tool-driven. It is curriculum-centered, policy-aligned, and built to protect both educators and institutions as AI becomes part of everyday instruction.
If your school or district is:
navigating AI use without consistent guidance,
concerned about academic integrity or data privacy,
redesigning instruction to remain meaningful in the age of AI, or
seeking a defensible, human-centered approach to AI implementation,
we invite you to continue the conversation.
Learn how Education Media helps move from AI uncertainty to classroom-ready solutions—without giving away control or risking compliance.
Learn more about Education Media
🔗 https://edmedia.productions
Explore Our Professional Training Programs
🔗 https://launch.edmedia.productions
Request a consultation or start a conversation
🔗 Launch With Ease™ Consultation
Schedule a district consultation
Request an AI readiness discussion for your leadership team
General inquiries
📧 Email or Chat Below
📞 +1 877-583-6207
Works Cited
California Department of Education. Artificial Intelligence in California Public Schools: Guidance for the Safe and Effective Use of Artificial Intelligence. California Department of Education, last reviewed 31 Dec. 2025, https://www.cde.ca.gov/ci/pl/aiincalifornia.asp.
California Department of Education. Human-Centered Artificial Intelligence: Professional Learning Guidance. California Department of Education, 2025, https://www.cde.ca.gov/ci/pl/aiincalifornia.asp.
California Department of Education. AI Literacy: Professional Learning Guidance. California Department of Education, 2025, https://www.cde.ca.gov/ci/pl/aiincalifornia.asp.
California Department of Education. Academic Integrity and Responsible Use of Artificial Intelligence. California Department of Education, 2025, https://www.cde.ca.gov/ci/pl/aiincalifornia.asp.
California Department of Education. Data Privacy, Security, and Procurement: Artificial Intelligence Guidance. California Department of Education, 2025, https://www.cde.ca.gov/ci/pl/aiincalifornia.asp.
California Department of Education. Equitable Access to Artificial Intelligence in Teaching and Learning. California Department of Education, 2025, https://www.cde.ca.gov/ci/pl/aiincalifornia.asp.
Los Angeles County Office of Education. Generative Artificial Intelligence in K–12 Education: Guidelines and Needs Assessment. LACOE, 2024–2025, https://www.lacoe.edu.
Project Tomorrow. Speak Up Research: Educator Views on Artificial Intelligence in Education. Project Tomorrow, 2025, https://www.projecttomorrow.org.
Arizona State University Learning Transformation Studios and Project Tomorrow. Generative AI in K–12 Education: Research Brief. Arizona State University, 2024, https://learningtransformation.asu.edu.
