Core responsibilities, from U-M GENAI:
"When using generative AI tools, it is important to avoid entering sensitive information to protect privacy and ensure data security. These tools may not guarantee the confidentiality of the information shared. Please refer to the University of Michigan Safe Computing guidelines for AI tool use.
This list is for purely informational purposes. The University of Michigan does not endorse or support any of these specific external AI tools."
The universe of AI resources and tools is constantly growing and changing. Our computers, phones, and e-book readers come bundled with AI apps, and people bring strong feelings about AI into discussions around almost everything else. There are real pros and cons. The U-M AI resources were designed to protect users from many of the risks associated with public GenAI tools, but there may be tasks for which you want an AI tool that does something the U-M tools don't yet do. You'll want to be aware of some of the most common risks so that you can keep your information and reputation safe. The Medical School and Michigan Medicine have guidelines that you will want to follow, both to keep yourself and the organizations safe. There is even a special term for using AI without permission in a professional environment whose policies prohibit it: Shadow AI.
Here are just a few of the risks that 3rd-party AI tools pose for individual users:
Bias and discrimination
Cybersecurity
Intellectual property
Privacy
Some of these overlap. Have you heard of "training data extraction attacks"? This is where someone uses special tools to discover private information that was uploaded to AI tools for analysis and later incorporated into their training data. If you upload it, you can't assume that no one else will find it. People may do this for malicious reasons, to harm someone, or they may do it to protect their organization. This touches on privacy, security, and intellectual property all at once.
Here, "[the Library] advises against uploading licensed PDFs from U-M collections into 3rd party AI tools for analysis because of the potential license implications." Simply put, this means that you may use the PDFs for personal study and research, but you don't have the legal right to post these copyrighted PDFs where other people can find them. That includes uploading them to ChatGPT, for example.
Similarly, before uploading any content into a public or 3rd-party AI tool, you should ask yourself who owns it. Is this content that was created by a colleague, teacher, or student? Do you have permission to upload their content? Does the upload include private or protected data, such as information covered by HIPAA or FERPA, or personal identifiers, such as Social Security numbers or personal phone numbers? Some 3rd-party AI tools operate in ways that are legal but not entirely ethical, such as persuading users to share private information or monetizing user data. There are even malicious AI tools.
When you use a non-U-M AI tool, pay special attention to the information you share with it: is it legal, is it safe, is it wise?
Now, let's take a closer look at some of the 3rd-party tools we are seeing used in our work. As always, make your own decisions about whether to use any of these.
Elicit
Elicit is an AI-powered research assistant designed to streamline and enhance the research process, particularly for systematic reviews and academic investigation. Key features of Elicit include the ability to automatically extract and summarize data from scientific papers, identify key concepts and research gaps, and generate literature reviews that align closely with user-defined research questions. The tool is designed to save time and reduce the cognitive load on researchers by quickly filtering through large amounts of information to highlight essential findings.
*Important Note: "[The Library] advises against uploading licensed PDFs from U-M collections into 3rd party AI tools for analysis because of the potential license implications." If articles are uploaded into Elicit, they should come from an open access publication to avoid potential publisher license infringement.
OpenEvidence
OpenEvidence is an AI tool intended specifically for healthcare providers. It was created with funding or collaboration from Mayo, JAMA, and NEJM, and it marks high-quality information resources in its responses with a blue star.
OpenEvidence has a limited (three-question) open access version, after which users are asked to create a free account as a medical professional or trainee. Account creation requires that you attest to your professional medical credentials and validate them with your NPI number.
The video below provides a brief demonstration of OpenEvidence for clinicians and students from Dr. Tanisha Jowsey of Bond University, Australia.