What (Else Else) to Read about ChatGPT

Your guide to new research and top stories

We backed off a bit from generative AI coverage over the past couple of months, lest we give the impression that it’s the only interesting thing happening in the library tech field. Clearly it isn’t, and yet the “meantime” has hardly been idle. We outlined the new AI regulations that recently came out in the US, the EU, and the UK. For people still mourning the end of Succession, OpenAI supplied a dramatic “saga” of its own a few weeks ago: the board fired CEO Sam Altman, then was pressured into rehiring him later that weekend. People are already talking about “wearable AI” (holiday season 2024’s must-have item?). And of course, new research, new AI models, and new concerns continue to bubble forth.

To begin catching up, I put together a follow-up to a follow-up to my original “What to Read about ChatGPT” post. Here are a few stories that I think librarians and other academics should take note of. Most are short enough to read while your coffee is cooling. The others might get you through an entire pot. At any rate, onward!


“Our approach to responsible AI innovation,” YouTube (YouTube Official Blog)

Many people get news and information from YouTube (for better or worse). For that reason, I encourage all information professionals to read YouTube’s description of its policies on AI-generated content and guardrails. Specifically, the platform will now require creators, when uploading a new video, to disclose whether “they’ve created altered or synthetic content that is realistic, including using AI tools.” If they have, YouTube will apply a label in the content description or, in especially sensitive cases, on the video player itself. Viewers will also be able to request the removal of content that artificially replicates their voice or likeness, a measure intended to curb the distribution of deepfakes.

I have my doubts about whether the voluntary disclosure of AI-generated content is an effective strategy. (Consider, for instance, that people lie.) Weaknesses notwithstanding, it’s a good policy to know.

“Generative AI in Scholarly Communications” (STM)

This white paper offers guidance on the use of generative AI in scholarly communications. It evaluates many different use cases, from preparing manuscripts to disseminating research publications. The report is short and immensely readable, but if you want the two-line takeaway: “Like all tools, [generative AI] should be used only for assistance. Human oversight is always necessary.” I would also direct attention to the appendices, which include a great compendium of various publishers’ policies and guidance, as well as a resource list.

“Generative AI and libraries: 7 contexts,” Lorcan Dempsey (LorcanDempsey.net)

University of Washington professor and librarian Lorcan Dempsey has written a four-part blog series on the implications of AI for the library and information science field and its workers. The third installment examines seven different concerns that generative AI raises for librarians, including staffing, skill acquisition, and copyright issues. Dempsey distills a lot of the ongoing discourse, and his analysis is admirably careful. But I think the main contribution of this piece is its agenda-setting potential. Readers will walk away from it with a clear sense of the specific areas where further research, policy, and advocacy are needed.

“Worldwide AI ethics: A review of 200 guidelines and recommendations for AI governance,” Nicholas Kluge Corrêa et al. (ScienceDirect)

As I said, what I appreciate about Dempsey’s article is its careful parsing of a large swell of discourse. In a similar vein, what I find useful about this academic study by Corrêa et al. is the scale of its analysis of guidelines and recommendations put forth for AI governance. The researchers analyzed 200 guidelines from governments, NGOs, corporations, and the like to identify common ethical principles (and to sort those principles by institution type). Perhaps unsurprisingly, the top principles relate to transparency, reliability, and justice. The researchers also draw attention to missing or undervalued principles, such as labor rights and copyright. The paper is worth reading in full, but if you want only a quick glimpse of the findings, figure 5 will give you that.

“Embrace AI tools to improve student writing,” Pamela Bourjaily (Times Higher Education)

In these roundups, I always try to include at least one piece that deals directly with students. (I would be very curious to learn what research has been done on college students’ perceptions and uses of generative AI, especially at scale.) Bourjaily teaches business writing at the University of Iowa, but her advice in this short article offers sound exercises and pedagogical strategies for just about any area of instruction. Throughout, she stresses prompt engineering, a topic we happen to have covered in a webinar.

In particular, I like her strategy of having students examine the outputs of generative AI and identify what is unclear, what needs improvement, and so on. For Bourjaily, the goal of this exercise is to help students refine their prompts, which is a useful skill, to be sure. But it’s also an important exercise for cultivating students’ awareness of AI’s shortcomings. Equally important, it teaches students the value of their own creativity, voice, and skill sets.

*

🗣 PS: In case you missed it, LTI’s sister blog, Toward Inclusive Excellence, held a webinar on “Inclusive and Ethical AI for Academic Libraries.” The recording of it is free on YouTube. Everybody loved it. You will too!


🔥 Sign up for LibTech Insights (LTI) new post notifications and updates.

📅 Join us for a free webinar on AI citations and ethics for librarians.

✍️ Interested in contributing to LTI? Send an email to Deb V. at Choice with your topic idea.