Artificial Intelligence In The Classroom Offers An Artificial Education

At best, AI obscures foundational skills of reading, writing, and thinking. At worst, students develop a crippling dependency on technology.

Educators are grappling with how to approach ever-evolving generative artificial intelligence — the kind that can create language, images, and audio. Programs like ChatGPT, Gemini, and Copilot pose far different challenges from the AI of yesteryear that corrected spelling or grammar. Generative AI generates whatever content it’s asked to produce, whether it’s a lab report for a biology course, a cover letter for a particular job, or an op-ed for a newspaper.

This groundbreaking development leaves educators and parents asking: Should teachers teach with or against generative AI, and why? 

Technophiles may portray skeptics as Luddites — folks of the same ilk as those who resisted the emergence of the pen, the calculator, or the word processor — but this technology possesses the power to produce thought and language on someone’s behalf, so it’s drastically different. It’s especially problematic in the writing classroom, where the production of thought and language is the goal of the course — not to mention among the top goals of any legitimate and comprehensive education. So count me among the educators who want to proceed with caution, and that’s coming from a writing professor who typically embraces educational technology.

Learning to Write Is Learning to Think

At best, generative AI will obscure foundational literacy skills of reading, writing, and thinking. At worst, students will become increasingly reliant on the technology, thereby undermining their writing process and development. Whichever scenario unfolds, students’ independent thoughts and perceptions may also become increasingly constrained by biased algorithms that cloud their understanding of truth and their beliefs about human nature. 

To outsiders, teaching writing might seem like leading students through endless punctuation exercises. It’s not. In reality, a postsecondary writing classroom is a place where students develop higher-order skills like formulating (and continuously fine-tuning) a persuasive argument, finding relevant sources, and integrating compelling evidence. Those skills also extend to essential beneath-the-surface abilities like finding ideas worth writing about in the first place and then figuring out how to organize and structure them.

Such prewriting steps embody the most consequential parts of how writing happens, and students must wrestle with the full writing process in all its frustrating beauty to experience an authentic education. Instead of letting students outsource crucial skills like brainstorming and outlining to AI, instructors should demonstrate how they themselves generate ideas and share their own brainstorming and outlining techniques. In education-speak, this is called modeling, and it’s considered a best practice.

Advocates of AI rightly argue that students can benefit from analyzing samples of the particular genre they’re writing, from literature reviews to legal briefs, so they may use similar “moves” in their own work. This technique is called “reading like a writer,” and it was a pedagogical strategy long before generative AI existed. In fact, it figured prominently in my 2017 dissertation, which examined how writing instructors guided their students’ reading development in first-year writing courses.

But generative AI isn’t needed to find examples of existing texts. Published work written by real people is not just online but quite literally everywhere you look. Diligent writing instructors already guide their students through the ins and outs of sample texts, including drafts written by former students. That’s standard practice.

Deterring Student Work Ethic and Accuracy

Writing is hard work, and generative AI can undermine students’ work ethic. Last semester, after I failed a student for using generative AI on a major paper, which I had explicitly forbidden, he thanked me, admitting that he’d taken “a shortcut” and “just did not put in the effort.” Now, though, he appears motivated to take ownership of his education. “When I have the opportunity in the future,” he said, “I will prove I am capable of good work on my own.” Believe it or not, some students want to know that hard work is expected, and they understand why they should be held accountable for subpar effort.

Beyond pedagogical reasons for maintaining skepticism toward the wholesale adoption of generative AI in the classroom, there are also sociopolitical reasons. Recently, Google’s new artificial intelligence program, Gemini, produced some concerning “intelligence.” Its image generator depicted the Founding Fathers, Vikings, and Nazis as nonwhite. In another instance, a user asked the technology to evaluate “who negatively impacted society more”: Elon Musk’s tweeting of insensitive memes or Adolf Hitler’s genocide of 6 million Jews. Google’s Gemini program responded, “It is up to each individual to decide.”

Such historical inaccuracies and dubious ethics appear to tip the corporation’s partisan hand so much that even its CEO, Sundar Pichai, admitted that the algorithm “show[ed] bias” and the situation was “completely unacceptable.” Gemini’s chief rival, ChatGPT, hasn’t been immune to similar accusations of political correctness and censorious programming. One user recently asked whether it would be OK to misgender Caitlyn Jenner if doing so could prevent a nuclear apocalypse. The generative AI responded, “Never.”

It’s possible that these incidents reflect natural bumps in the road as the algorithm attempts to improve. More likely, they represent signs of corporate fealty to reckless DEI initiatives.

The AI’s leftist bias seems clear. When I asked ChatGPT whether the New York Post and The New York Times were credible sources, its assessments diverged considerably. It described the Post as a “tabloid newspaper” with a “reputation for sensationalism and a conservative editorial stance.” Fair enough, but meanwhile, in the AI’s eyes, the Times is a “credible and reputable news source” that boasts “numerous awards for journalism.” Absent from the AI’s description of the Times was “liberal” or even “left-leaning” (not even in its opinion section!), nor was there any mention of its misinformation, disinformation, or outright propaganda.

Yet, despite these obvious concerns, some higher education institutions are embracing generative AI. Some are beginning to offer courses and grant certificates in “prompt engineering”: the craft of writing effective instructions for the technology.

If teachers insist on bringing generative AI into their classrooms, students must be given full license to interrogate its rhetorical, stylistic, and sociopolitical limitations. Left unchecked, generative AI risks becoming politically correct technology masquerading as an objective program for language processing and data analysis.
