Staff and Administrative

This guidance applies to staff and leadership (operational and committee leaders) on the appropriate use of AI tools in operational, analytical, and administrative work at WMed. AI tools can improve efficiency and support decision-making when used responsibly, but they must be applied in ways that protect data, maintain institutional trust, and ensure accountability. This guidance is intended to support effective use of AI while managing risk and aligning with institutional governance. Individuals who also engage in teaching, research, or clinical work should consult the appropriate role-based guidance.

  • AI tools may be used to support administrative and operational activities such as drafting or organizing internal communications; summarizing meetings or documents; supporting analysis or reporting; and assisting with planning or workflow design. AI tools are intended to support administrative work, not replace professional judgment, institutional processes, or accountability structures. All AI-generated content must be reviewed and validated before use.

  • Staff and administrators must only use vetted and approved AI tools for institutional work. Information Technology leadership coordinates AI tool approval and may involve additional stakeholders as appropriate. Explicit approval is required before using AI tools that:

    • Access or process WMed systems or institutional data.
    • Integrate with enterprise platforms, such as Microsoft Teams.
    • Automatically record, summarize, or analyze meetings or communications.
    • Are deployed broadly across departments or units.

  • Administrative work often involves sensitive or protected information, which must be safeguarded. Staff and leaders must not enter personally identifiable information (PII), FERPA-protected student information, protected health information (PHI), confidential review materials, or non-public institutional documents or analyses into third-party AI tools. When exploring AI use cases, use de-identified or hypothetical data whenever possible.

  • AI-generated outputs must not be treated as final decisions without human oversight. Staff and leaders remain responsible for the accuracy and completeness of information, the interpretation and application of AI-assisted analysis, and the decisions informed by AI-generated content. Higher-impact decisions require correspondingly higher levels of review, validation, and stakeholder involvement.

  • Transparency and disclosure are required when AI tools contribute to significant administrative tasks, including hiring decisions, performance summaries, or policy drafting; analyses or recommendations; external-facing communications; and decisions with ethical, legal, or reputational implications. Disclosure helps ensure shared understanding of how information was developed and reviewed.

  • Administrative AI use should support equitable and inclusive outcomes. Staff and leaders should review AI-generated content for bias or exclusionary language; accessibility for diverse audiences; and clarity and appropriateness of tone. These considerations are especially important for public-facing or institution-wide communications.

  • When considering new AI tools or expanded uses, staff and leaders are encouraged to conduct pilot testing, document intended use and limitations, and share lessons learned and feedback. Prior approval is required before any pilot begins. Responsible adoption supports institutional learning and reduces unintended risk.

  • Staff and leaders can find support through Information Technology leadership and the related institutional AI guidance on Approved Tools and Governance and on Data Protection and Safe Use.

When You Are Unsure

If you are uncertain whether an AI use is appropriate in an administrative context, pause before proceeding. Follow institutional guidance for when you are unsure and consider consultation with appropriate leadership, such as supervisors or Information Technology. Requests for assistance can also be sent to Support+AI@wmed.edu.