Schools Should Teach AI Literacy Before They Write AI Guidance Policies
Authors: Ally Barlow and Stephen Taylor
Across the country, school boards are under pressure to write AI policies. Parents want answers. Teachers want clarity. Administrators want to demonstrate that they are taking action. The instinct to move fast is understandable. But when districts rush to codify rules before building genuine understanding of generative AI technology, they often end up with policies that are unenforceable, obsolete within months, or both.
We have seen this pattern repeatedly. The districts that emerge with workable, defensible AI policies are not the ones that moved fastest. They are the ones that spent a few months on literacy before they drafted anything permanent.
The Case for Speed (and Why It Falls Short)
The argument for moving quickly on AI policy is reasonable. Parents deserve to know how their children’s schools are handling a technology that is already in students’ pockets. Boards want a clear governance framework. Teachers need guidance now, not in six months. Waiting to act feels negligent when the stakes involve academic integrity, student data, and the distribution of learning resources.
Some experts also argue that literacy and policy development should happen at the same time, and that building understanding while drafting rules is more efficient than waiting for perfect knowledge before acting. That is a fair point. Arlington Public Schools in Virginia initially pursued a traditional policy approach, then abandoned it in favor of living, regularly revised guidance when it became clear that rigid policy could not keep pace with such a fast-moving technology [1].
There is a meaningful difference between building literacy alongside policy and rushing to publish something permanent before anyone in the room has spent real time with the tools.
Why Policy Without Understanding Fails
Here is what we know from the field: 38% of teachers feel that they do not have enough support for AI integration [2]. Only 45% of school principals and 34% of teachers reported that their schools provided any guidance on AI use at all [3]. Those numbers tell us where most districts are starting.
When policy authors do not understand what they are governing, they tend to write rules based on misconceptions. A common example: many districts include language about using AI detection tools to catch student cheating. The problem is that AI detection tools do not work reliably, and they grow less accurate as models become more advanced, to the point that MIT Sloan's teaching and learning group recommends against using AI detectors at all [4]. A policy built on that misunderstanding creates a false sense of security while consuming resources and potentially damaging student trust.
Major organizations including UNESCO, ISTE, CoSN, and the Center on Reinventing Public Education have all published guidance pointing districts toward building understanding of AI before or alongside writing formal policy [5].
What “Literacy First” Actually Looks Like
We are not talking about delaying action indefinitely. We are talking about two to three months of structured learning before you develop a living guidance document.
This can look like staff workshops where teachers and administrators actually use the tools. Hands-on exploration of ChatGPT, image generators, and the AI-powered features already embedded in tools your district is paying for is a great place to start. Leadership briefings where board members hear directly from administrators about what they have learned help build shared understanding across the community.
The goal is for the people authoring a living guidance document to have enough personal experience to serve as knowledgeable guides. They do not need to become AI researchers; they need enough working knowledge that the guidance they write is grounded in what is actually possible, not in fear or marketing hype.
When a principal has spent an hour writing with ChatGPT, they understand why a blanket ban on student use is not realistic and why a policy that allows unlimited access without any framework is not responsible. When a teacher has tested an AI detection tool and seen how often it misclassifies human writing, they stop recommending it as a solution. When a board member has talked to a group of students about how they are already using AI in their learning, the guidance they help draft tends to be more nuanced and enforceable.
This is the dividend of literacy: guidance that fits the actual technology and the actual lives of the people using it.
What Happens Next
We would encourage your district to take a straightforward step: invest two to three months in building genuine AI literacy among staff and board leadership before you finalize any formal AI guidance document. Run workshops, invest in curriculum, and play with the technology. Create space for teachers and administrators to ask questions and learn together.
The living guidance document you write after that investment will be sharper, more grounded in evidence, and more likely to hold up as the technology continues to change. That is the kind of guidance that actually serves a school community.
[1] https://www.apsva.us/digital-learning-innovation/aps-generative-ai-guidance/
[2] https://online.ysu.edu/degrees/education/msed/teacher-ai-usage-statistics/
[3] https://www.rand.org/pubs/research_reports/RRA4180-1.html
[4] https://mitsloanedtech.mit.edu/ai/teach/ai-detectors-dont-work/