Use of Artificial Intelligence Policy

Effective: 1/8/2026

To whom this policy applies

UNR Med administrative faculty, academic faculty, and staff.

Definitions

Generative AI: Generative AI is a type of artificial intelligence that can create new content, such as text, images, videos, audio, or software code, in response to user prompts. These systems use generative models, such as large language models, to statistically sample new content based on the data set used to train them.

Purpose/Background

  1. The purpose of this policy is to provide a framework for the responsible use of Artificial Intelligence (AI) at UNR Med.
  2. For purposes of this policy, Artificial Intelligence includes all stand-alone or integrated software functions that use AI to generate new content. AI creates content from user-supplied data, which means the information a user supplies may be retained within the model, posing an unintended risk of exposing sensitive data.
  3. The use of AI does not relieve the user of the responsibility to check and edit AI-generated content.
  4. This policy provides guidance on compliance, security, and operations and is not intended to limit academic freedom.

Policy

  1. Checking AI Content: It is the responsibility of the end user to ensure that AI-generated content is accurate and acceptable. Users should check for the following:
    1. AI Hallucination: This occurs when an AI generates plausible but incorrect information. For example, an AI might create a false summary of a news story or generate incorrect details about an event.
    2. Bias: AI systems can exhibit biases based on the data they were trained on. This can lead to unfair or discriminatory outcomes, such as favoring certain groups over others in decision-making processes.
    3. Misinterpretations: These errors happen when an AI system incorrectly understands or processes the input it receives. This can result in inappropriate or irrelevant responses.
    4. Entity Recognition Errors: These occur when an AI fails to correctly identify and categorize entities such as names, dates, or locations within the text.
    5. Context Handling Failures: AI systems sometimes struggle to maintain context over long conversations or complex tasks, leading to responses that are out of context or irrelevant.
    6. Unreliable Sourcing: AI systems may not differentiate between reliable and unreliable source materials.
  2. Open-Source Language Model Sharing: End users will not supply regulated, sensitive, or confidential information to any open-source generative AI model.
    1. UNR Med's preferred generative AI platform is Microsoft Co-Pilot for Office 365, which does not supply end-user data to its large language model and can be used for regulated, sensitive, or confidential information.
    2. Use of non-enterprise, commercially available generative AI platforms (e.g., ChatGPT Free or Plus) is discouraged due to their potential to retain and use submitted data.
    3. Users purchasing a subscription to non-enterprise AI platforms must complete the 'Restricted Use of Commercial AI Tools' agreement available from Med IT.
    4. AI must not be used for the purpose of deidentifying protected health information, since datasets deidentified in this manner may be reverse engineered.
  3. Use of AI for Original Content: To the extent that the end user relies substantially on AI to produce original content, such use will be disclosed with the following statement: "AI was used to generate this content, and the content was reviewed and edited for validity."
  4. AI for Transcribing Meeting Minutes: Use of transcription services must comply with confidentiality requirements, and recordings or AI-generated transcripts must be deleted following conversion to official minutes, unless retention is otherwise required. Meetings with legal counsel must not be transcribed.
  5. Software Containing AI: Any software purchases that include the use of AI must be reviewed by UNR Med IT to ensure that sensitive information is not being shared with an external large language model.
  6. Training for AI: All users intending to use generative AI for academic, clinical, or administrative tasks are encouraged to complete a brief institutional AI orientation module.