AI and Information Literacy Instruction in the Composition Class and Beyond: Part 2

How can we prepare students for the shortcomings of AI?

In our previous installment, we—an instruction librarian and a first-year composition instructor—drew out some of the similarities between our pedagogical goals. The highlight? We encouraged colleagues to teach students to treat AI the way Star Trek: The Next Generation’s Chief Engineer, Geordi La Forge, treats the ship’s computer.

And now… the conclusion.

While generative AI has complicated the lives of instruction librarians and first-year composition instructors, many of the challenges we face are similar to those posed by earlier (and still-relevant) tools and technologies. In this piece, we will illuminate some of the challenges student AI use presents and offer suggestions for helping students use this technology in the most felicitous manner.

Garbage In, Garbage Out

Whether digging through databases or conversing with ChatGPT, students must remember a concept that originated near the dawn of computing: GIGO (garbage in, garbage out). GIGO was perhaps first used in print in the Hammond, Indiana, Times with respect to Specialist Third Class William D. Mellin’s work for the Army Signal Corps. Working with computers, “some of which are as big as four basketball courts,” Mellin learned that computer output suffers when the human input is insufficient. Likewise, students who use shallow search terms in databases or vague AI prompts will simply not receive the help they need. 

GIGO also comes into play when you consider the innate nature of large language models. ChatGPT and other AI chatbots can only create responses from material that they have been fed or to which they have access. For obvious reasons, AI can’t draw on material that lives behind a paywall. That includes the scholarly articles we tell students to search for in the library databases. (At least, that seems to be the case; OpenAI is not fully transparent about the proprietary data it has fed into ChatGPT.)

Additionally, writers have resisted (on copyright grounds) having their works fed into AI’s gaping maw. In September 2023, the Authors Guild and several prominent authors sued OpenAI and Microsoft for unlicensed use of their creative works. And when it comes to very recent or developing events, AI often can’t give students up-to-date information, as the latest news may not yet have been added to the language model.

Trouble with nuance

At the moment, AI can have some trouble distinguishing nuance. Case in point: while pretending to be a student completing a common first-year composition assignment, we asked Google’s Bard for claims related to World War II. Most of the results were simple factual statements, so we borrowed a word common in writing prompts and asked Bard to make the claims more “argumentative,” as a student might do. Instead of crafting arguable claims, however, Bard merely recast the factual statements in more combative language. Like a drunk uncle at Thanksgiving dinner, it swiftly went from factual to outright aggressive, bypassing the more nuanced skill of coherent argumentation altogether.

Hallucinations

Perhaps the biggest challenge that librarians and instructors (and students) face with respect to the current state of generative AI is its proclivity to simply make things up. As most of us know by now, the term presently used for these mistakes is “hallucinations.”

Again, pretending to be a student, we asked ChatGPT and Bard for two books and two scholarly articles related to a specific claim—an extremely common assignment in a first-year composition class. Initially, the results looked good and made sense. The authors were real and had published on the topic. The journals were also real. Unfortunately, the articles themselves simply didn’t exist. And when asked to provide links to supplied sources, the AI was either unable to do so (with apologies, in Bard’s case), or provided a link that was only tangentially relevant.

Note: this fabrication problem is further compounded when the AI uses the hallucinated information to create future responses, resulting in flawed circular reasoning over time.

Citation challenges

Speaking of challenges for librarians and composition instructors, how in the world does a student cite an AI chat or response? The material is hidden behind the user’s password and the unreproducible response was generated according to black-box algorithms that produce information drawn from… who knows where? Interestingly, the folks at MLA (every first-year student’s favorite) produced a guide on how to cite AI, but it turned out that one of the resources in MLA’s example chat had been completely fabricated.

So what can we do?

As professionals charged with helping students learn to conduct research and write, how can we use these emerging (and sometimes frustrating) technologies to reinforce much-needed critical thinking and information literacy skills and guide beginning scholars? The answers are a blend of reinforcing the basics and adapting them to address generative AI’s pros and cons.

Librarians and other instructors would do well to offer students more substantial training in the process of source evaluation. Students must learn how to think through whether content is credible and whether the authors have the requisite authority to speak on an issue in a scholarly capacity.

We would be similarly wise to learn about and explain the evolving limitations of AI and the limited (and sometimes out-of-date) material from which it draws. Importantly, students must be taught that AI, like a five-year-old who stole from the cookie jar, will sometimes make things up and create fabulist stories (i.e., hallucinations). Thus, it’s a good idea to require students to include permalinks or DOIs in their citations. This has the dual benefit of demonstrating that the resource does, in fact, exist and that it can be traced back to a real item record in library databases.
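Requiring DOIs only helps if the DOIs are plausible, so librarians who collect citations at scale might start with a quick format check before trying to resolve anything. Below is a minimal sketch in Python; the function name and pattern are our own illustration, loosely based on Crossref’s published guidance that modern DOIs begin with “10.” plus a four-to-nine-digit registrant code. A genuine verification would still resolve the identifier at doi.org to confirm the record exists.

```python
import re

# Loose pattern for modern DOIs: "10." + 4-9 digit registrant code,
# a slash, then a non-empty suffix. (Assumption: based on Crossref's
# commonly cited guidance, not an official validator.)
DOI_PATTERN = re.compile(r"^10\.\d{4,9}/\S+$")

def looks_like_doi(identifier: str) -> bool:
    """Cheap syntax check only; a real check should also confirm that
    https://doi.org/<identifier> actually resolves to a record."""
    return bool(DOI_PATTERN.match(identifier.strip()))

print(looks_like_doi("10.1000/182"))  # a well-formed DOI
print(looks_like_doi("not-a-doi"))    # fails the syntax check
```

A check like this catches the most common hallucinated citations (made-up identifiers that don’t even follow DOI syntax), but only resolution against doi.org proves the article is real.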

Finally, we need to encourage students to understand both research and writing as narrative processes driven by inquisitive, layered questioning that builds on human thought. In keyword selection (for searching databases) and in prompt engineering (for using AI), the primacy of the human mind simply cannot be overlooked.

Final thought

Though our students have new tools at their disposal, the secret to good research and writing has remained the same for centuries. As partner educators, first-year composition teachers and librarians must guide budding scholars through a nonlinear process full of fits and starts as well as dead ends. Instead of seeing generative AI as a shortcut to an easy finish, students should consider the technology a partner whose vast memory frees them to focus on critical thinking and innovation (especially if they don’t start their papers at 3 a.m. on the day they’re due).

🔥 Sign up for LibTech Insights (LTI) new post notifications and updates.

✍️ Interested in contributing to LTI? Send an email to Deb V. at Choice with your topic idea.