
UMMS Artificial Intelligence

Policy & Ethics Overview

Responsible AI and ethical guidance for the use of AI are active, unsettled topics, and the AI landscape continues to evolve rapidly. Policy formation in this area is likewise a moving target. We have selected a small number of high-level resources here, but we strongly encourage users of this guide to explore more deeply and to avoid blanket assumptions of benefit or harm in favor of more nuanced assessments. Some grant organizations and journals encourage the use of AI in writing, while others ban AI tools outright. We encourage you to make a habit of checking for applicable policies early on and to consider the risks carefully.

U-M Guidance

U-M: Generative Artificial Intelligence: Resources for Research

  • Principles Related to Using GenAI in Research
    • Responsible Use
    • Documentation
    • Account for and Limit Bias
    • Privacy Protection

U-M: MIDAS: Using Generative AI for Scientific Research

U-M: Safe Computing: Sensitive Data Guide to IT Services

Federal Guidance

Promoting the Use of Trustworthy Artificial Intelligence in the Federal Government (A Presidential Document by the Executive Office of the President on 12/08/2020) 

NIH: The Use of Generative Artificial Intelligence Technologies is Prohibited for the NIH Peer Review Process (NOT-OD-23-149) 

NIH: Frequently Asked Questions (FAQs): Use of Generative AI in Peer Review

NSF: Notice to research community: Use of generative artificial intelligence technology in the NSF merit review process 

  • "NSF reviewers are prohibited from uploading any content from proposals, review information and related records to non-approved generative AI tools.
  • Proposers are encouraged to indicate in the project description the extent to which, if any, generative AI technology was used and how it was used to develop their proposal." 

International Guidance

GLOBAL 

Ethics and governance of artificial intelligence for health: Guidance on large multi-modal models. Geneva: World Health Organization; 2024. Licence: CC BY-NC-SA 3.0 IGO.

AFRICA

Continental Artificial Intelligence Strategy: Harnessing AI for Africa’s Development and Prosperity. Accra, Ghana: African Union, July 2024.  

EUROPE

Living guidelines on the Responsible Use of Generative AI in Research. (ERA Forum Stakeholders’ document) European Commission, March 2024. 
 

Other Expert Guidance

Hosseini, M., Gordijn, B., Kaebnick, G. E., & Holmes, K. (2025). Disclosing generative AI use for writing assistance should be voluntary. Research Ethics, 0(0). 

  • The article examines the evolving role of generative AI (GenAI) in manuscript writing and challenges the need for mandatory disclosure of its use. The authors now advocate for voluntary disclosure, arguing that GenAI’s contribution is often minimal and hard to delineate, and that mandatory disclosure policies may unfairly disadvantage non-native English speakers and undermine peer review integrity.

Lucian Leape Institute. Patient Safety and Artificial Intelligence: Opportunities and Challenges for Care Delivery. Boston: Institute for Healthcare Improvement; 2024. 

Kostick-Quenet, K. (2024).  Ethical, Legal, and Social Implications of Generative AI (GenAI) in Healthcare. In: ELSIhub Collections. Center for ELSI Resources and Analysis (CERA). 

  • A brief curated bibliography organized into four sections: Accountability and Liability; Bias, Error, and Hallucinations in GenAI & LLMs; GenAI, Bioethics, and Medical Ethics Education; and Privacy & Consent.

McCormack, Leigh. (2024). Improving Health Equity Through AI. In: Federation of American Scientists’ AI Legislation Policy Sprint, June 27, 2024.

Spector-Bagdady, Kayte. Generative-AI-Generated Challenges for Health Data Research. The American Journal of Bioethics, October 2023, 23(10):1-5. https://doi.org/10.1080/15265161.2023.2252311
