What ARL’s Quick Poll Reveals about AI in the Library

A Q&A with the authors discusses takeaways and areas for exploration

Attitudes toward AI have changed since OpenAI first unveiled ChatGPT to the public in late 2022. With more time to digest the implications of this new technology, librarians, along with everyone else, have sorted through complex and often conflicting feelings—some doubling down on the problems with generative AI, others looking for ways to integrate it into their workflows, and still others mulling over how they feel.

Leo S. Lo and Cynthia Hudson Vitale have captured this complexity in their recent quick poll for the Association of Research Libraries (ARL). Though not intended as a comprehensive survey, their poll offers a brief snapshot of the perspectives, adoption plans, and institutional conversations on generative AI based on the responses of a handful of directors from ARL’s member institutions. Their report is well worth reading in full. Below is a brief conversation we had with Lo and Vitale to clarify and reflect on their findings.

Can you give us some background about this survey? How did you design it, and what was your aim for it?

In the fall of 2022, ChatGPT and other generative AI technologies (such as DALL-E and Bard) disrupted higher education from both an academic and a research perspective. The survey was designed to gauge research libraries' engagement with, interest in, and perceptions of generative AI technologies. Our objective was to capture a snapshot of the current landscape—how research libraries are exploring, implementing, or planning to utilize generative AI. We also wanted to understand the perceived potential of these technologies in enhancing library services in the near future. The brief survey was structured around these key areas, with questions tailored to gather both quantitative data (e.g., the percentage of libraries actively implementing AI solutions) and qualitative insights (e.g., views on the potential of AI in library services).

You report a number of key findings about attitudes, possible implementations, and impacts, among other topics. What themes do you see running through these findings?

Several key themes emerged from the survey findings. First, there is a clear recognition of the potential of generative AI technologies in enhancing library services, with many libraries exploring or planning to explore these applications. Second, libraries see the value in leveraging AI for various functions, with a particular interest in using chatbots for user support and automating cataloging and metadata generation. Third, libraries acknowledge the importance of educating both library employees and library users about AI, with an emphasis on incorporating AI literacy into information literacy programs. Lastly, while there is enthusiasm, there is also a degree of caution and uncertainty, signaling the need for more discussions, learning, and collaborations to navigate potential challenges.

Did any of the results surprise you?

The level of uncertainty about the potential of AI in library services stands out. While most respondents see the potential benefits, a significant proportion remains neutral, suggesting that the library field is still figuring out how to best harness and integrate these technologies. This underlines the importance of ongoing dialogue, education, and collaborative learning in our sector as we navigate this evolving landscape. That being said, a relatively high percentage of responding libraries are actively implementing or exploring generative AI technologies. Given that these technologies are relatively new and complex, it's encouraging to see that many libraries are already engaging with them. However, given our small sample of 19 respondents and its self-selection bias, we suspect that many other libraries haven't started exploring generative AI yet.

One idea reflected in the quick poll results reads: "Affirm how the information is generated (AI or otherwise) is less important than the ability to recognize what is reliable, what is not, and how those decisions are made." Why make this distinction?

That was simply to convey the responses we received in the quick poll. Understanding the origin of information—knowing whether it's human-generated or AI-generated—still plays a critical role in the evaluation of sources. This understanding forms a key component of the overall context in which the information is situated. It gives us insight into the potential biases, limitations, and accuracy of the content.

When we think about transparency there are generally three categories of information that are prerequisites for accountability: (1) procedures or standards about the information itself; (2) content, such as data inputs and outputs; and (3) outcomes and impacts of the information on such things as the environment, finances, human quality of life, etc. Libraries and other scholarly communications stakeholders should advocate for more transparency from tech companies in this regard. Tech companies and policy makers must work together to foster an information environment where users can confidently and effectively navigate the sea of both human- and AI-generated content.

Research libraries may also act as a key partner in developing training and education for our communities on understanding any future transparency disclosure statements and information. This aligns with the core values of libraries and information literacy programs, which promote access to information, research integrity, and critical thinking.

The ability to recognize reliable information and how those decisions are made is undoubtedly crucial. However, understanding the source of information, especially in the age of AI, remains a significant element of information literacy. Therefore, a balanced emphasis on both these aspects would be beneficial to promote a comprehensive approach to information literacy in the era of AI.

Your piece highlights several really interesting projects libraries are currently undertaking to evaluate possible uses of AI. Where do you see the need for more work to be done?

Currently, the use of AI in libraries presents an exciting opportunity for innovation and service enhancement. However, it’s not enough to simply implement AI technologies; we need to ensure that those interacting with them understand their underlying mechanisms and potential implications. This understanding will foster informed and conscious use of these technologies, helping to mitigate the risk of misinformation and manipulation.

While some libraries are already experimenting with AI and its various applications, there’s a clear need for an expanded focus on AI literacy. This should go beyond merely teaching users how to interact with AI tools and delve into the underpinnings of AI, including its strengths, weaknesses, and potential biases. By teaching library users to critically evaluate AI-generated information, we can equip them with the tools they need to navigate the increasingly AI-driven information landscape.

For library employees, AI literacy is equally, if not more, important. Employees need to be prepared not just to use AI tools, but also to guide users in their interactions with these tools, answer their questions, and alleviate their concerns. They also need to be able to make informed decisions about the deployment of AI technologies in library services.

To put it succinctly, an investment in AI literacy is an investment in the future of libraries. This will involve collaboration among libraries, academic institutions, and other stakeholders to develop effective AI literacy programs, share best practices, and jointly address the challenges that come with this new era of information.

📄 Read full ARL report on generative AI in libraries.

🔥 Sign up for LibTech Insights (LTI) new post notifications and updates.

✍️ Interested in contributing to LTI? Send an email to Deb V. at Choice with your topic idea.