What (Else) to Read about ChatGPT

Readings on labor, bias, AI policies, and more!

To put it mildly, the internet has not quieted down about generative AI since I wrote my first reading list on ChatGPT. Backlash continues. AI figures prominently in the ongoing Hollywood labor strikes, Congress is set to deliberate on two bills for regulating AI, and AI has even become an insulting epithet. After six or so months of worry about what AI could possibly do, these struggles seek to set much-needed boundaries on what AI should be allowed to do.

With the fall semester starting in a little more than a month (I’m so sorry to remind you!), we will once again be immersed in questions about the uses and limits of generative AI in coursework, research, and labor at the university. I gathered some articles below that might help address these issues and offer practical guidance for librarians engaged with them.

“If you’re not using ChatGPT for your writing, you’re probably making a mistake,” Dylan Matthews (Vox)

This slightly meandering essay explores how some people have recentered their jobs on the use of generative AI tools. Though it defies easy summary, the essay offers a few practical uses for AI in academic contexts. My favorite is to have it summarize a scholar’s entire oeuvre, highlighting key themes, strengths and weaknesses, and—I’d add—suggestions for background reading. This can be a great tool to recommend to students who are still learning the lay of a field or the contributions of its major figures.

Another takeaway is an experiment that you can conduct yourself. Much of the fear around generative AI is the possibility that it will take your job. But as tech librarian Nicole Hennig emphasized in our illuminating interview with her on building AI literacy, “AI will not replace you. A person using AI will.” Matthews’s article describes a business class conducted in spring 2023 with the explicit aim of using generative AI for more or less all aspects of the class. The goal was to teach students AI and AI literacy in a trial by fire. If you can, try using generative AI for as many of your job duties as possible for a day or even a week. Not only will you gain a better understanding of how you can use AI effectively to lighten your workload, but you will also come away with a clearer sense of what AI can’t do. (One proviso: Until generative AI’s privacy concerns are ironed out, be careful not to upload proprietary organizational information to it.)

“Teaching originality: an essential skill in the age of ChatGPT,” Alastair Bonnett (Times Higher Education)

In this article, Bonnett highlights one scholarly skill that ChatGPT can’t easily replicate: originality. He argues, “Academic originality is not about chance, genius or magic. It is about engagement and a clear sense of scholarly contribution. Its aim is not a revolution in thought but something modest and do-able, namely adding value to the literature.” When we teach undergraduates about scholarship, we often look for their mastery of an issue. Have you read the appropriate books? Are you using reputable sources? Perhaps what we need to stress as a part of information literacy is evaluating the originality of material. In prizing originality as not only a value but also a skill that students should strive to achieve, we distinguish academic work from mere AI output.

🌟 Making a LibGuide on generative AI? Consider adding these favorites:

“11 Tips to Take Your ChatGPT Prompts to the Next Level,” David Nield (Wired)

Understandably, most people in academia default to thinking that ChatGPT’s only outputs are essays and prose. This article highlights several other outputs ChatGPT can provide. For instance, you can role-play scenarios with it—a useful professional tool for preparing for a difficult conversation with a coworker. You can describe the specific issue at stake and even specify the personality ChatGPT should use to shape its responses. Nield’s article outlines several different formats ChatGPT responses might take, signaling a wide array of professional uses.

“AI automated discrimination. Here’s how to spot it,” Abby Ohlheiser (Vox)

An under-discussed problem (except here on LTI!) is the bias baked into the algorithms underlying generative AI. Scholars have roundly rejected the techno-utopian belief that machine processing, by bypassing human intervention, can eliminate bias. Many have already devised strategies for helping students become aware of algorithmic biases. Ohlheiser offers a helpful primer on the subject, outlining common areas where bias manifests and some preliminary strategies for recognizing and countering it.

“GPT-4 Is Here. But Most Faculty Lack AI Policies,” Susan D’Agostino (Inside Higher Ed)

I include D’Agostino’s article on this list because she points to a practical problem that librarians could help solve. D’Agostino writes that individual faculty members, in writing their syllabi, may want to include specific policies about the use of AI for research and coursework but lack the right language or a holistic enough understanding of generative AI to do so effectively. This seems like precisely the sort of systemic problem that savvy, AI-literate, ethically minded librarians could solve. Just putting that out there.

🔥 Sign up for LibTech Insights (LTI) new post notifications and updates.

✍️ Interested in contributing to LTI? Send an email to Deb V. at Choice with your topic idea.


Gale partners with librarians and educators to create positive change and outcomes for researchers and learners. The company empowers libraries to be active collaborators in the success of their institutions and communities by providing essential content that leads to discovery and knowledge, and user-friendly technology that delivers engaging learning experiences. For more information, please visit gale.com/academic.