This guidance is relevant to clinicians and clinical educators who are not students, residents, or fellows. It applies to members of the WMed community who provide clinical care, serve as clinical faculty or preceptors, teach or supervise learners in clinical settings, or participate in clinical education outside formal GME training programs, including attending physicians and other licensed clinicians involved in patient care and education. Clinicians and clinical educators may encounter AI tools in clinical care, teaching, supervision, scholarship, and administrative work. This guidance is intended to clarify expectations, boundaries, and applicable resources across these roles.
-
AI tools may be used to support aspects of clinical work when appropriate and approved. They do not replace clinical judgment, professional responsibility, or individualized patient care.
Clinicians remain fully accountable for all clinical decisions, recommendations, and documentation, regardless of whether AI tools are used as part of the workflow. Clinical use of AI may be subject to additional requirements beyond general institutional guidance. These requirements may be defined by the Office of Clinical Affairs, departmental leadership, or applicable regulatory standards.
-
Only vetted and approved AI tools may be used in clinical contexts. AI tools that access or process patient information, integrate with clinical systems, or influence clinical decision-making require explicit approval and may have additional safeguards. Clinicians should not assume that tools approved for administrative, educational, or research use are appropriate for clinical care.
-
Patient information must always be protected. Clinicians must not enter protected health information (PHI), identifiable patient data, or confidential clinical materials into unvetted or unapproved third-party AI tools. Use of de-identified information, hypothetical examples, or institutionally approved clinical tools is expected when exploring AI-supported clinical tasks.
-
AI-generated content may appear authoritative but can contain inaccuracies or omissions. Clinicians are responsible for reviewing, verifying, and validating any AI-assisted output before it is used in patient care, documentation, or clinical teaching. AI tools may support efficiency or information synthesis, but final decisions must be based on clinical expertise, evidence-based practice, and patient-specific considerations.
-
Clinicians should ensure learners understand appropriate and inappropriate uses of AI; model responsible, transparent AI use; and provide oversight consistent with learner level, competency, and institutional expectations. Additional information for residents and fellows is provided in their role-based guidelines.
-
Transparency and disclosure of AI use are required when AI tools meaningfully contribute to clinical documentation, educational materials, or patient-facing decisions. Disclosure expectations may vary based on clinical context, institutional guidance, and professional standards.
-
Clinicians and clinical educators should review and follow the overall institutional guidance with specific attention to Approved Tools and Governance, Data Protection and Safe Use, and Human Review and Accountability. Additional clinical-specific requirements may apply and are addressed in resident and fellow role-based guidance or other clinical affairs documentation.
When You Are Unsure
If you are uncertain whether a particular AI use is appropriate in a clinical or clinical education context, pause before proceeding. Follow the institutional guidance for when you are unsure and consider consulting appropriate leadership, the Office of Clinical Affairs, or departmental contacts. Requests for assistance can also be sent to Support+AI@wmed.edu.