ChatGPT: Fast, But Not Foolproof

Since its release to the public in November 2022, ChatGPT-3 has captured the attention of people around the world. Students, in particular, have been drawn to the chatbot because of the way it seems to effortlessly generate answers to questions and assist with other coursework. As several people have noted, however, it’s not perfect. It can, and does, make mistakes that may be difficult to spot at first glance.

For example, when asked to find 10 studies in PubMed that address the question “Is music therapy effective in reducing insomnia among adults?”, ChatGPT-3 quickly generated a list that looks accurate:

A quick title search for all these articles in PubMed and Google Scholar, however, confirms that none of them are real, although one comes close:

There is a study with that title in PubMed, and it was published in 2014, but the author listed is incorrect. It was actually written by Chun-Fang Wang, Ying-Li Sun, and Hong-Xin Zang. ChatGPT-3 may have produced the name Harmat because there is a researcher with that last name who has published a study on the impact of music on sleep, but his first name is Laszlo.
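
A title search like this can even be scripted. Below is a minimal Python sketch, offered only as an illustration, that uses NCBI’s public E-utilities interface to count how many PubMed records match a given article title; the title shown is a placeholder, and a count of zero simply means the citation deserves a closer look, not proof that it is fabricated.

# Spot-check a citation by counting PubMed records that match its title.
# Uses NCBI's public E-utilities "esearch" endpoint; no API key is needed
# for light, occasional use.
import requests

ESEARCH_URL = "https://eutils.ncbi.nlm.nih.gov/entrez/eutils/esearch.fcgi"

def pubmed_title_hits(title: str) -> int:
    """Return the number of PubMed records whose title field matches the query."""
    params = {
        "db": "pubmed",
        "term": f"{title}[Title]",  # restrict the search to the article title field
        "retmode": "json",
    }
    response = requests.get(ESEARCH_URL, params=params, timeout=30)
    response.raise_for_status()
    return int(response.json()["esearchresult"]["count"])

if __name__ == "__main__":
    # Placeholder title -- substitute the citation you want to verify.
    title = "Music therapy and insomnia in adults"
    hits = pubmed_title_hits(title)
    print(f"{hits} PubMed match(es) found" + ("" if hits else " -- worth double-checking this citation"))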

Another interesting quirk in these results is that the same fake citation is essentially generated twice:

There are several differences:

  1. The word “effectiveness” in reference #6 is changed to “effect” in reference #7
  2. The word “older” is added to reference #7
  3. The date is changed from 2016 in reference #6 to 2015 in reference #7

Another overall difference is that the article title in reference #6 is in title case (the first letter of each word is capitalized), while the title in reference #7 is entirely lower case.

The fact that ChatGPT-3 makes up references is not a new finding. OpenAI, the creator of ChatGPT-3, readily admits this is the case. And while the next generation of the chatbot, ChatGPT-4, does a better job of avoiding these mistakes, it’s still not perfect either. That’s why it’s always important to verify the information you get from ChatGPT by checking another source, something a librarian can help you with. Sure, it takes a little more time, but it’s more important to be accurate than fast.