ChatGPT as a Tool for Library Research – Some Notes and Suggestions

Can ChatGPT make research tasks easier?

It has been just over a year since OpenAI made ChatGPT available to the public. Since then, there has been a vast amount of news coverage and academic debate about how to use it. In the middle of the year, I decided to learn about ChatGPT and prompt engineering as part of my ongoing professional development. ChatGPT and similar products already have plenty of value as tools for writing original content, but almost nothing seemed to have been written about them as tools for library or database searching, which I found curious. Whilst learning about prompt engineering, I collected information and ideas about ChatGPT and library and database searching that I considered worth sharing with a small audience; now I am offering it to a wider one.

I must preface the rest of this article by stating that AI use and prompt engineering are rapidly evolving areas. (Witness the apparent sacking and re-hiring of Sam Altman in late November.) Part or all of what I say here may be completely out of date in three months' time. I should also state that I am discussing ChatGPT 3.5, the current free version of OpenAI's chatbot, not ChatGPT 4, which is more advanced (and requires payment).

I see ChatGPT and its alternatives as having partial value as tools for library searching. You can use them without any training, but they will perform better when you know some details about them. The details I will cover here are:

  • its limits
  • how to treat it
  • why specificity is key
  • why it works best when you converse with it
  • why you should roleplay with it

Then I will explain the four concrete tasks I believe it can currently help academic librarians and academic library users with.

The limits of ChatGPT

The first step to understanding ChatGPT is to understand what it can’t do: to know its limits.

ChatGPT is not a guaranteed source of accurate, truthful information. I think many of us have already heard plenty of anecdotes in the general media about this! "Hallucinations" are ChatGPT's dreams of electric sheep, and students have contacted academic librarians seeking hallucinated references from its outputs. OpenAI are upfront about accuracy and truth, though. Here are two screenshots, one full and one edited, of their own declarations about this:

Pop-up message on ChatGPT emphasizing that ChatGPT isn't always truthful
Figure 1: Pop-up message
Banner at the bottom of ChatGPT telling users to check important information
Figure 2: Reminder along the bottom of the screen when logged in

A second limit to consider is that ChatGPT prefers continuity and focus. It can become confused if you change topics in the middle of a chat. When you want to change the subject, or substantially alter a chat's topic, you are better off starting a whole new chat.

A third limit is that ChatGPT 3.5 is not connected to the internet and does not function like a standard search engine. It was trained on a finite block of data with a cut-off in 2021. It is not collecting new information the way Google's search-engine spiders do every second, and it is not cross-referencing academic databases, Google Scholar, and the wider web in real time. It will therefore be weak in a number of subject areas and useless for information published after 2021. This limits its value to STEM academics and researchers, or anyone at the cutting edge of a field.

That said, a wonderful thing about ChatGPT is that it has a memory within a chat. It will remember what you've said earlier in the chat, so you don't have to repeat yourself. It retains everything even if you log out and return to continue the same chat a week later: just scroll to the bottom of the chat and give it your next prompt.


Improving your use of ChatGPT

Know how to treat it

ChatGPT works more effectively when you know how to treat it. The most straightforward advice I've read about this is: "Think of ChatGPT as your personal intern. They need very specific instructions, and they need you to verify the information." Adopt this mindset and treat ChatGPT as you would a person. You could simply treat it like a robot, or like a Star Trek-style AI assistant, and direct it to respond to your commands. But the advice above is an exhortation not just to direct it, but also to give it feedback on its responses (I'll use the term outputs instead of responses for the remainder of this article). An example of feedback during a chat about engineered hydrophobic surfaces could look like this: "Thanks for that. It's very helpful. Are you able to tell me about engineered hydrophobic surfaces derived from surfaces seen in nature?"

The outputs improve when you give periodic feedback as the first part of your subsequent prompts. Of these two components of your interactions, instructions and feedback, your instructions are the more important: how you build the instructions in your prompt largely determines the quality of the output.

Be specific

Specificity is the key to excellent prompts. Be very specific about the instruction and the task you want it to perform. Providing one or more examples as part of the prompt, along with additional context, further improves its outputs. Without well-thought-out and specific prompts, you will experience "garbage in, useless output out."


At this point, I must mention a technological barrier. You want to be clear; you want to be specific; you can suggest steps to take; you can provide an example or two. However, prompt length matters because it is limited. X/Twitter allows 280 characters per post; ChatGPT seems to have a limit of about 4,096 characters, which is roughly 500 words.


Converse and roleplay with it

Good practice with ChatGPT is to give commands and then give feedback on the outputs; better practice, as stated before, is to treat it as you would a person and have a conversation with it. Best practice takes this a step further: don't just converse with ChatGPT, roleplay with it. Here is a roleplay scenario that gave me a favorable output (and increased my personal knowledge of construction management and project management at the same time):

“I am a civil engineer but have no experience in construction management. You are an experienced construction management engineer with additional experience in project management. Give me a basic plan and timeline to manage a construction project for a new small high school.”

I gave both myself and ChatGPT roles; I stated each of our pre-existing levels of understanding; I gave specific information; and I outlined an expectation of the level of detail of the output.

Four research library tasks most suited to ChatGPT

I’ll now discuss the four tasks that I firmly believe ChatGPT can assist with in library research and search construction. They are:

  1. Gathering keywords to use in searches
  2. Suggesting synonyms for keywords
  3. Summarizing text
  4. Truncating keywords

The first two are closely related.

Firstly, ChatGPT can be very helpful for gathering keywords to use in searches. You can prompt it to provide a list of keywords related to a subject, e.g. "Give me a list of keywords related to innovation in biomedicine." You can then ask for more detail. An alternative verb to use in prompts is brainstorm, e.g. "Brainstorm a list of keywords related to 'green hydrogen.'" (Note that quotation marks are required when phrase searching.)

Here is an example incorporating roleplay and setting limits and subject knowledge (I am aware that anything published in 2022 or 2023 will not be found):

I am an undergraduate student researching the recycling of bricks and building debris into concrete. I am interested in finding out if spent tea leaves could be included in a concrete mix. You are an academic librarian. Brainstorm 20 keywords for me to try using in academic databases to see if there are already publications about this topic.

Secondly, if you or a library patron don’t have access to a thesaurus, ChatGPT can work as one. It can suggest synonyms for select keywords you’ve identified as best for your research. Continuing with the subject matter of the previous example, here is an example of a prompt: “Suggest 10 synonyms relevant to engineering that I can use for the following words: manufacture; properties; debris; bricks.” The output will have 40 synonyms, ten for each of the four words.
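Once you have the synonyms, each group can be OR-ed together and the groups AND-ed into a single database-ready search string. Here is a minimal sketch in Python; the function name and the sample synonym lists are my own illustrations, not actual ChatGPT output:

```python
def build_search_string(groups):
    """OR the synonyms within each group, then AND the groups together.

    Multi-word phrases are wrapped in double quotation marks, as most
    academic databases expect for phrase searching.
    """
    clauses = []
    for words in groups:
        quoted = [f'"{w}"' if " " in w else w for w in words]
        clauses.append("(" + " OR ".join(quoted) + ")")
    return " AND ".join(clauses)

# Illustrative synonym groups for the bricks-and-debris example
print(build_search_string([
    ["bricks", "masonry", "blocks"],
    ["debris", "rubble", "building waste"],
]))
# (bricks OR masonry OR blocks) AND (debris OR rubble OR "building waste")
```

The same string can usually be pasted straight into a database's advanced-search box, though the exact phrase and wildcard syntax varies by vendor.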

The third task is to ask ChatGPT to summarize a portion of text for you. Select the portion carefully, because of ChatGPT's roughly 500-word limit per prompt. Alternatively, cut the text into 500-word chunks and ask ChatGPT to summarize each chunk one by one. You can add a sentence limit to the summary in your prompt, and further specify a reading-comprehension level, e.g. "Summarize this text in five sentences or less. Summarize it for me as if I am a first-year undergraduate student."
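Cutting a long document into prompt-sized chunks is easy to script. A minimal sketch, assuming the rough 500-word limit mentioned above (it is not an official figure):

```python
def chunk_text(text, max_words=500):
    """Split text into chunks of at most max_words words, breaking on whitespace."""
    words = text.split()
    return [" ".join(words[i:i + max_words])
            for i in range(0, len(words), max_words)]

# A 1,200-word document becomes three prompt-sized chunks (500, 500, and 200 words),
# each of which can be pasted into its own "Summarize this text..." prompt.
chunks = chunk_text("lorem " * 1200)
```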

The fourth and final task is one that I'm still not very happy with. I've experimented with it but have sometimes been disappointed by the outputs. It is to truncate a list of single keywords so that each ends in a wildcard symbol (for example, *), ready to be dropped into search strings. Here is an example (note that in my prompts I loosely called the wildcard a "Boolean operator"):

Please truncate this list of keywords and ensure that the Boolean Operator * is the last character in each word: Production; Manufacture; Fabrication; Specifications; Rubble; Remnants; Fragments; Waste; Residue; Masonry; Blocks; Construction; Structural.

ChatGPT output of keywords
Figure 3: Output of keyword prompt

At an online workshop today (December 8, 2023), a colleague in New Zealand suggested a solution by reminding me that I should have added a couple of examples to my prompt. Here is the new prompt, with feedback and examples:

Thanks. It’s not quite what I wanted. What I want is the Boolean operator * placed as the last character in each word but I want it placed where the core part of the word is retained and all the possible endings are still able to be considered, for example manufac*, fabricat*. Here is the list: Production; Manufacture; Fabrication; Specifications; Rubble; Remnants; Fragments; Waste; Residue; Masonry; Blocks; Construction; Structural.

Figure 4 shows the output:

Output from ChatGPT prompt asking for boolean variables added to keyword list
Figure 4: Output from ChatGPT prompt

This is certainly an improvement, though I will need to experiment further. A downside is that the prompt needs more intensive engineering. This fourth task is not perfect, but it is somewhat helpful, and that may make a difference to a time-poor user.
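For comparison, the stemming itself can be done deterministically, without ChatGPT at all: list a few inflected forms of each keyword, take their longest common prefix, and append the wildcard. A minimal sketch (the min_stem fallback threshold is my own arbitrary choice):

```python
import os.path

def truncation_term(variants, min_stem=4):
    """Return the shared stem of related word forms with a trailing *,
    falling back to the shortest form if the common prefix is too short
    to be a useful stem."""
    stem = os.path.commonprefix([v.lower() for v in variants])
    if len(stem) < min_stem:
        stem = min(variants, key=len).lower()
    return stem + "*"

print(truncation_term(["Manufacture", "Manufacturing", "Manufactured"]))
# manufactur*
```

This sidesteps the prompt-engineering problem entirely, at the cost of having to supply the word variants yourself.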

One final useful thing: ChatGPT automatically titles each chat and dates it to the day the chat began. You can change those titles, and keep changing them, when you come back in subsequent days or weeks to continue different chats (or start new ones). This helps with the personal organization of the chats in your account, for fast retrieval and viewing later.


Currently, ChatGPT 3.5 is a partially useful tool for three of the four tasks in library research and search construction, with truncation requiring more investigation on my part. I see it as best for generating keywords, and synonyms for keywords, to try in searches. It can also create summaries and explanations aimed at certain audiences, which may help in digesting particularly dense journal articles (especially those from journals that place heavy emphasis on management theory). I realize that others may disagree with my analysis, and I am aware of one article that declares ChatGPT beneficial in constructing complex systematic review searches. As these LLMs and chatbots continue their hypersonic evolution and growth, I can only comment on this snapshot in time.

🔥 Sign up for LibTech Insights (LTI) new post notifications and updates.

✍️ Interested in contributing to LTI? Send an email to Deb V. at Choice with your topic idea.