A Response to 'How AI is Changing the Way You Teach'

AI is an emerging technology with varied applications, and its consumer-facing tools have thrown education into disarray as students discovered they could outsource their thinking entirely. The education sector has responded predictably. One camp shuns AI and technology wholesale, reasoning that these tools are best wielded by well-adjusted, well-socialized, and well-learned people. The other camp consists of AI maximalists who tout transformative benefits and insist AI will revolutionize—or replace—teaching. Both approaches are being explored by private schools and homeschooling families around the country, and both have their pros and cons.

Public schools, as usual, have found the worst possible middle ground. New technology brings risks, and education has been burned repeatedly by products claiming to be panaceas for failing schools. Teachers in public schools receive no meaningful training on this, and the result is students who are simultaneously dependent on AI and incapable of leveraging it meaningfully. I saw something similar happen firsthand when my district gave every elementary school 3D printers in 2017 without a single hour of training. The printers gathered dust. Even if schools purchased AI tools tomorrow, who would actually use them?

This comes down to a fundamental structural flaw: public school teachers have no incentive to develop technology skills beyond surface-level familiarity. Union-negotiated pay scales ensure that a teacher who masters AI integration earns exactly the same as one who refuses to touch it. Step-and-lane compensation rewards seniority and graduate credits, not demonstrated skills or student outcomes. A teacher who spends hundreds of hours becoming proficient with emerging technology cannot be compensated for that expertise—job protections explicitly prevent differentiated pay for differentiated performance.

The predictable result? Teachers engage with technology at the most superficial level possible. They attend the mandatory professional development, check the compliance boxes, and return to familiar methods. This is rational behavior in a system that offers no reward for excellence and few consequences for mediocrity. When the NEA publishes articles about AI in education, it is providing cover for the fact that its members are a workforce structurally prevented from developing genuine expertise, even as they are asked to integrate tools they barely understand.

The Article

I won’t bury the lede: you can read the NEA’s attempt at exploring AI in education here. There are a few positives, but a general trend emerges as we move through the age brackets of students. Simply put, the more student-facing AI applications we permit, the less authentic students’ praxis will be. The less emphasis we put on reality, the more disconnected education becomes from reality—and right now, public school education is quite disconnected.

The Elementary Example: Right Approach, Wrong Emphasis

Brenda Álvarez starts in elementary school, examining an elementary teacher’s use of AI. The first note is a strict boundary—no unsupervised student access to AI. This is a good start, but the article’s treatment of actual elementary applications is frustratingly superficial. Despite noting that “private data shouldn’t be given to AI”, it is clear that the teacher is giving data to AI, which should trigger a review of FERPA and student data privacy concerns. How can you get personalized assessments without using names? You can, but it’s not something I would expect a teacher to consider before they start using AI. The catch is that AI is best at using exactly this kind of data, so a responsible AI practitioner needs to sanitize it before handing it over. This omission exemplifies the article’s broader problem: enthusiasm for AI adoption without rigorous consideration of pedagogical and legal safeguards. The article also never names a specific tool or product, which suggests this teacher is working in the chat window. The chat window has its purposes, but for data-intensive projects it’s like trying to shovel a driveway with a spoon.
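As a sketch of what that sanitization could look like in practice (the roster, field names, and salt below are all hypothetical; a real deployment would need a district-approved approach), a simple pass can replace names with stable pseudonyms before any data reaches an AI service:

```python
import hashlib

def pseudonymize(name: str, salt: str) -> str:
    """Derive a stable pseudonym from a student name.

    The salt stays local and is never sent to the AI service, so the
    pseudonyms cannot be mapped back to students without it.
    """
    digest = hashlib.sha256((salt + name).encode()).hexdigest()[:8]
    return f"Student-{digest}"

def sanitize_records(records: list[dict], salt: str) -> list[dict]:
    """Drop direct identifiers and keep only the fields a prompt needs."""
    return [
        {"id": pseudonymize(r["name"], salt), "reading_level": r["reading_level"]}
        for r in records
    ]

# Hypothetical roster; never paste real student data into examples or prompts.
roster = [
    {"name": "Jane Doe", "reading_level": "M"},
    {"name": "John Roe", "reading_level": "J"},
]
clean = sanitize_records(roster, salt="keep-this-secret-locally")
```

Because the pseudonyms are stable, the teacher can map AI output back to students locally, while the AI service only ever sees opaque identifiers.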

The Middle School Fallacy: AI as a Thinking Substitute

The middle school example epitomizes everything wrong with shallow AI integration. The instructional coach suggests students could “use AI to come up with a theme” from Of Mice and Men, but then “have to tie it to themselves.” This is presented as personalization, but it’s actually the opposite—it’s outsourcing the most critical thinking to AI while reserving only the surface-level personal connection for the student.

How does this help connect a book to student life? It doesn’t. It invites students to find reasons not to think deeply about literature. The cognitive work of identifying themes—wrestling with the text, making connections, developing interpretations—is precisely where learning happens. Why is AI needed here at all? This approach treats AI as a shortcut around intellectual struggle rather than a tool to enhance it.

The High School Paradox: AI’s Limitations as Evidence

The high school scriptwriting example is particularly revealing. A student struggled to get AI to generate a simple image of a corpse, eventually settling for a “barely-breathing body” with an ax “coming out of the person’s mouth like a flower”. Think about what just happened: the student’s creative output was artificially constrained by AI. But the constraint itself isn’t the real issue. The fact that AI cannot generate certain kinds of images that make complete artistic sense is evidence for why we should actually learn the real techniques, not why we should accept AI’s limitations.

Expertise as Prerequisite, Not Product

The TPACK framework—Technological, Pedagogical, and Content Knowledge—has long been the theoretical gold standard for technology integration in education. The idea is elegant: effective teaching with technology requires the intersection of knowing your subject, knowing how to teach it, and knowing which tools can enhance both. In practice, however, the “T” in TPACK has always been held hostage by software companies fundamentally disconnected from classroom realities.

This is the primary thorn blocking meaningful technology adoption: technology divorced from curriculum is useless. Teachers don’t need another tool; they need tools that integrate seamlessly with what they’re already required to teach. Software companies understand this, which is why they don’t just sell platforms—they sell entire curriculum packages. The problem is that not all curricula are equal, and districts have repeatedly abandoned proven approaches in favor of flashy, technology-forward alternatives.

Look at iReady. My daughter’s school uses it. I’ve heard endless complaints about both the curriculum and the platform—the rigid pacing, the soul-crushing adaptive exercises, the disconnect between what the program demands and what students actually need. But here’s the uncomfortable question: even if I were the most technologically proficient teacher in my building, what could I actually do differently? The answer is: not much. When curriculum is locked inside a proprietary platform, teacher expertise becomes irrelevant. You deliver what the software dictates.

This is where AI represents a genuine departure—but only for those willing to go deeper. AI gives us the tools to design curriculum enhancements based on our individual content knowledge and pedagogical expertise. A teacher who deeply understands their subject can use AI to generate differentiated materials, create assessments aligned to specific learning objectives, and build resources that respond to the actual students in front of them rather than some algorithmic average.

But you cannot develop this capability from a ChatGPT window. The chat interface is a shallow entry point—useful for brainstorming, perhaps, but fundamentally limited for serious curriculum work. That’s not to say it’s impossible, just highly inefficient. Genuine technological expertise in AI requires diving into file systems, understanding context windows, learning prompt engineering, and developing new methods of organizing information. It means learning to code (even if it’s just reading it). It means treating AI as a development environment, not a magic answer box. The teachers who will actually leverage AI effectively are the ones willing to move past the surface and build real technical fluency—and as I’ve already noted, our current system offers zero incentive to do so, and often active roadblocks.
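As one small illustration of that shift (the file name, task text, and the four-characters-per-token rule of thumb are assumptions for the sketch, not any vendor’s API), a teacher comfortable with basic scripting could assemble prompts from organized course files instead of pasting into a chat window:

```python
import tempfile
from pathlib import Path

def build_prompt(task: str, sources: list[Path]) -> str:
    """Concatenate labeled source files into one structured prompt."""
    sections = [f"--- {p.name} ---\n{p.read_text()}" for p in sources]
    return task + "\n\n" + "\n\n".join(sections)

def rough_token_estimate(text: str) -> int:
    """Crude context-window check: roughly four characters per token."""
    return len(text) // 4

# Demo with a temporary stand-in file so the sketch runs anywhere;
# a real setup would point at actual standards and lesson documents.
with tempfile.TemporaryDirectory() as tmp:
    demo = Path(tmp) / "standards.txt"
    demo.write_text("RL.8.2: Determine a theme of a text.")
    prompt = build_prompt("Draft three tiered discussion questions.", [demo])

print(rough_token_estimate(prompt))  # short prompt, far under any context window
```

The point is not this particular script but the habit it represents: materials live in files, context is assembled deliberately, and the teacher controls what the model sees rather than retyping it each session.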

A Better Vision for AI in Education

I’m not against AI integration—but I believe children must master certain foundational elements of learning before AI can serve them rather than diminish them. AI tools should be used to enhance the classroom and empower teachers, not to bypass the cognitive development that is education’s core purpose.

The true power of AI in education lies in granting specific, unique tools to fit niche situations—customizing materials for a variety of learners, generating practice problems, providing teachers with administrative support. But these applications work in service of human expertise and judgment, not as replacements for them.

Conclusion

The NEA article’s examples reveal a troubling pattern: AI is being integrated into classrooms without sufficient attention to whether it enhances or undermines genuine learning. Before we ask students to use AI, we must first ensure they have the foundational skills and critical thinking abilities to use it effectively. Otherwise, we’re not preparing students for an AI-augmented future—we’re teaching them to be dependent on tools they don’t understand and cannot evaluate.