
UMMS Artificial Intelligence

What is U-M GPT?

U-M GPT is an AI tool developed by the University of Michigan that leverages advanced language model technology to assist with various tasks. The tool is designed to provide support in areas such as education, research, administrative functions, and more by generating human-like text based on user input. Developed with a focus on the specific needs and standards of the university environment, U-M GPT can be used to draft documents, answer questions, create educational content, and assist with a variety of other academic and operational purposes.

Differences Between ChatGPT and U-M GPT

The primary differences between ChatGPT and U-M GPT lie in their customization, use cases, and development contexts.

  1. Customization and Context:

    • ChatGPT: Developed by OpenAI, ChatGPT is a general-purpose AI language model designed to serve a broad audience across various domains and tasks. It offers a wide range of functionalities without being tailored to the specific needs of any one institution or sector.
    • U-M GPT: Developed by the University of Michigan, this version is fine-tuned to align with the university’s academic standards, administrative processes, and campus-specific terminology.
  2. Use Cases:

    • ChatGPT: Generally used for a wide variety of applications including casual conversation, content creation, answering questions on myriad topics, and more. It serves businesses, individual users, educators, and developers among others.
    • U-M GPT: Aimed specifically at supporting the University of Michigan’s diverse needs. It can assist with educational content, research inquiries, administrative help, and other tasks.
  3. Development and Oversight:

    • ChatGPT: Overseen by OpenAI, which is responsible for the continual development, ethical considerations, and updates to the model.
    • U-M GPT: Developed and managed by the University of Michigan, ensuring that it adheres to the university’s guidelines, ethical standards, and data privacy requirements.

U-M AI Tools Security Policy

U-M GPT's security policy is designed to ensure the confidentiality, integrity, and availability of information while adhering to relevant legal, regulatory, and institutional requirements.

ITS AI Services Privacy Notice


Does U-M GPT Need Biosecurity?

It's important to clarify that "U-M GPT" refers to language models or AI systems developed or used by the University of Michigan for purposes such as research, education, or administrative tasks. Biosecurity typically pertains to the handling of biological materials and the containment of potential biological threats. Because AI systems like GPT deal with digital information rather than biological materials, traditional biosecurity measures are not directly applicable to them.

However, if you're asking whether U-M has security measures in place for the use of AI technologies, the answer is likely yes. Here are some general types of security measures that an institution like U-M might implement for any AI system:

  1. Data Security: Ensuring that data used to train and operate AI systems are stored securely and that access is controlled.

  2. Access Controls: Limiting access to the AI models and associated data to authorized personnel only (a brief illustrative sketch follows this list).

  3. Ethical Guidelines: Ensuring that AI research and applications comply with ethical standards to prevent misuse.

  4. Regulatory Compliance: Adhering to legal requirements and guidelines related to data privacy and security.

  5. Incident Response: Having protocols in place to respond to any security breaches or misuse of AI systems.
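
To make the access-controls point above concrete, here is a minimal, hypothetical sketch of a role-based check in Python. The role names, resources, and example users are invented for illustration and do not describe U-M's actual systems or configuration.

    # Hypothetical illustration: roles, resources, and users below are made up
    # and do not reflect U-M's actual access-control configuration.
    AUTHORIZED_ROLES = {
        "model_weights": {"ml-engineer", "ai-admin"},
        "training_data": {"ai-admin", "data-steward"},
    }

    def can_access(user_roles: set[str], resource: str) -> bool:
        """Return True only if the user holds a role authorized for the resource."""
        return bool(user_roles & AUTHORIZED_ROLES.get(resource, set()))

    print(can_access({"student"}, "model_weights"))   # False: no authorized role
    print(can_access({"ai-admin"}, "training_data"))  # True: role is authorized

In practice this kind of check is enforced by identity and access management systems rather than application code, but the principle is the same: a request succeeds only when the requester holds an authorized role.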

If you have specific questions about security measures for U-M's AI systems, you might want to reach out to the university's Information Assurance Office or a similar department responsible for IT and data security.

Can U-M GPT be Hacked?

As with any digital tool, there is always a potential for security vulnerabilities, but the University of Michigan likely takes robust measures to protect U-M GPT from hacking attempts. Here are some key points regarding the security considerations for such AI tools:

  1. Security Measures:

    • Encryption: Data transmission between users and the AI system is typically encrypted to prevent interception.
    • Access Controls: Strict access controls and authentication mechanisms are likely in place to ensure that only authorized users can interact with U-M GPT (a brief illustrative sketch follows this list).
    • Regular Audits: Regular security audits and vulnerability assessments are likely conducted to identify and mitigate potential risks.
  2. Data Privacy:

    • The university would have policies and protocols to ensure data privacy and compliance with regulations like FERPA (Family Educational Rights and Privacy Act) and other applicable laws.
  3. Monitoring and Incident Response:

    • Continuous monitoring of the system for any unusual activity or potential breaches.
    • An incident response plan to quickly address and mitigate any security incidents.
  4. Updates and Patches:

    • Regular updates and security patches are applied to keep the system protected against new threats.
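
As a rough illustration of the encryption and access-control points above, the sketch below sends an authenticated request over HTTPS in Python. The endpoint URL, environment variable, and token scheme are placeholders, not U-M GPT's actual interface.

    # Hypothetical sketch: the URL and token handling are illustrative only,
    # not U-M GPT's real API. It shows encrypted transport (HTTPS/TLS) plus
    # token-based authentication, the general pattern described above.
    import os
    import requests

    API_URL = "https://example.umich.edu/ai/v1/chat"  # placeholder endpoint
    token = os.environ["UM_AI_TOKEN"]                 # credential kept out of source code

    response = requests.post(
        API_URL,
        headers={"Authorization": f"Bearer {token}"},  # only authorized tokens accepted
        json={"prompt": "Summarize FERPA in one sentence."},
        timeout=30,
    )
    response.raise_for_status()
    print(response.json())

Transport encryption (TLS) protects the request in transit, while the bearer token is what an access-control layer would verify before returning a response.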

Despite these measures, no system is completely immune to hacking. Therefore, it's crucial for users to follow best practices, such as using strong passwords and reporting any suspicious activity to the university's IT department.

If you have specific concerns or need more detailed information on the security measures in place for U-M GPT, it would be best to contact the University of Michigan's IT department or the team responsible for managing U-M GPT.

Last Updated: Oct 12, 2025 11:05 PM