Librarians’ Guide to Answering Students’ Ethical Questions about AI

AI raises many ethical questions for students – here's how to answer them


What questions are library users asking about generative AI and how can we be prepared to answer them? To help with that, we’ve drafted sample questions and answers about ChatGPT and similar tools.

This is the second part of a three-part series. Last week’s post focused on answers to technical questions. This week, we turn to ethical questions, and next week we’ll cover practical questions.

Of course, sometimes students aren’t yet asking the questions we would like them to know the answers to! This is why we, at the University of Arizona Libraries, decided to include these questions and answers in our LibAnswers FAQ system. This gives us a central place where staff can direct students, and we can reuse the answers in other contexts, such as LibGuides, tutorials, and workshops.

As always, make sure these answers align with your campus policies. Sometimes, these policies come from a writing center or other groups on campus. At the University of Arizona, it’s up to individual instructors to decide on classroom policies, so we always tell students to find out the policy for each class they are in.

How are generative AI models biased, and how can I avoid biased results?

Generative AI models like ChatGPT will often output biased results. For example, if you ask ChatGPT to write a short story about a boy and a girl choosing their careers, the story will likely portray the boy choosing engineering, while the girl chooses nursing. If you ask an image creation model to depict a doctor, it will likely portray the doctor as a man. Why does this happen?

These models are often trained on large amounts of data from the internet, and that data likely overrepresents particular countries, languages, and cultures, so it isn’t representative of the entire world. The model learns what to output from that data.

Many developers of these large AI models have implemented some guardrails to address this problem. But those developers may not have thought of every type of biased data to address, since like all of us, they see the world from their own viewpoint. 

Some models have behind-the-scenes instructions telling the model to depict different ethnic groups and genders with equal probability when generating images of people. Other models estimate the skin tone distribution of a user’s country and apply it randomly to any image of a human they generate. But neither approach solves the problem in all situations.

So what can you do? Keep an eye out for biased outputs, and modify your prompts to correct for them. For example, you could say, “Write a story about a boy and a girl choosing their careers. Choose careers that avoid gender stereotypes. For example, the boy should not choose engineering or computer science, and the girl should not choose teaching or nursing.”
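If you interact with a model through code rather than a chat window, you can make this habit systematic by wrapping every base prompt with anti-stereotype guidance before sending it. The sketch below is only illustrative: the helper function and its wording are our own invention, not part of any AI tool’s API, and the phrasing that works best will vary by model.

```python
def debias_prompt(task: str, stereotypes_to_avoid: list[str]) -> str:
    """Append explicit anti-stereotype guidance to a base prompt.

    Illustrative sketch only: the wording here is an assumption,
    and its effectiveness will vary from model to model.
    """
    guidance = (
        "Avoid stereotypes in your response. In particular, do not rely on: "
        + "; ".join(stereotypes_to_avoid) + "."
    )
    return f"{task}\n\n{guidance}"


# Build a prompt that echoes the career-story example above.
prompt = debias_prompt(
    "Write a story about a boy and a girl choosing their careers.",
    ["gendered career choices, e.g., the boy choosing engineering "
     "or the girl choosing nursing"],
)
print(prompt)
```

You would then pass the resulting prompt to whatever chat tool or API you normally use; the point is simply to make the anti-bias instruction a routine part of every request rather than an afterthought.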

💡 Learn more

What is a “deepfake”? How can I recognize images that have been created with generative AI?

A deepfake usually refers to a highly realistic but fake image, video, or audio of a person saying or doing something they never actually said or did.

Deepfakes were around long before generative AI tools became popular. Some were made with Photoshop, and some merely changed the caption of an image to make it appear to be from a different context. But now there is more concern about the harms of deepfakes, since generative AI tools make them so easy to create.

Here are a few tips for recognizing an image that may have been generated with AI:

  • Look for missing or extra fingers, or hands that are deformed in some way.
  • Look at the faces. Small faces, such as those of people in the background, are often distorted.
  • Look for misspelled text in the image.
  • Look for inconsistent reflections.

Keep in mind that AI image generators are continually getting better at generating realistic images, so these tips won’t always work.

Here’s another option: use the Content Credentials website to look for metadata that indicate the origin of the image. Upload an image, and it will tell you whether the image was generated with tools from Adobe, Microsoft, or OpenAI. However, not all image-generation tools include these metadata. So if the site shows a Content Credential, you know the image was AI-generated, but if it doesn’t, you can’t tell either way.

Another way to investigate is to use a reverse image search tool to find other copies of the image in question. Try uploading the image to Google Images. The results will show websites that contain that exact image, along with similar images. This can help you determine whether or not an image is AI-generated.

If it’s not AI-generated, you will likely find it on more than one website, such as news sites reporting on the story the image depicts, and you may also find photos of the event from different angles.

If it was AI-generated, you might find other copies of it that are clearly marked as made with AI. There are many social media groups where people share AI-generated works and you might find your image there, confirming that it’s AI-generated.

💡 Learn more




In the US, can text or images created with generative AI be copyrighted?

Currently, the answer is no: to be copyrighted, a work must have a human author. However, the US Copyright Office decided that the selection and arrangement of AI-generated images in the graphic novel Zarya of the Dawn could be copyrighted as a whole work, even though the individual images (generated with Midjourney) could not be.

In another case, an author who used ChatGPT extensively while writing a novel filed a copyright registration for it, aiming to get the US Copyright Office to overturn its policy on work made with AI. The office made a similar decision: the “selection, coordination, and arrangement of the text that was generated by AI” could be copyrighted, so no one can copy the book without permission. But the actual sentences and paragraphs are not copyrighted, so in theory, someone could rearrange them and publish them as a different book.

Courts in other countries may reach different conclusions. A Chinese court awarded copyright protection to AI-generated images in one case, ruling that the human intellectual input involved in prompting and selecting images “reflects the plaintiff’s personalized expression.”

💡 Learn more

In the US, is it legal for developers to use copyrighted material to train generative AI tools?

There isn’t a clear answer yet. Some say no, it’s unlawful. There are several lawsuits underway against companies like OpenAI, Microsoft, and Stability AI. Many artists and writers feel that AI is appropriating their work without consent or compensation, threatening their creative livelihoods. Others say that training AI models on copyrighted works is fair use. They argue that AI models learn from these works to generate transformative original content, so no infringement occurs.

Many scholars and librarians agree that training AI language models on copyrighted works is fair use and essential for research. If restricted to public domain materials, AI models would lack exposure to newer works, limiting the scope of inquiries and omitting studies of modern history, culture, and society from scholarly research.

This issue is complex, and it will likely take a long time before the lawsuits are settled. Some courts have thrown out parts of the lawsuits, but kept others. Some cases may be settled out of court.

In the meantime, companies like Adobe, Google, Microsoft, and Anthropic have offered to cover users’ legal costs if they are sued for copyright infringement over content created with these companies’ tools.

💡 Learn more


Feel free to share and modify these questions for your own use. Since generative AI products change often, we’ll be working to keep these answers current. You can find all of our AI-related FAQs at the University of Arizona LibGuide. And look for next week’s post for our third set of questions, focusing on the practical side of AI.


🔥 Sign up for LibTech Insights (LTI) new post notifications and updates.

✍️ Interested in contributing to LTI? Send an email to Daniel P. at Choice with your topic idea.