AI Tools

This overview of AI tools is provided for information and awareness only. As the AI tool landscape changes rapidly, this is not an exhaustive listing. These AI tools are not necessarily approved or endorsed by WMed. Only vetted and approved AI tools may be used for institutional work (with institutional data), in accordance with policy IT18 Artificial Intelligence (AI) Use, institutional AI use guidance, data protection requirements, and role-based expectations. 

General-Purpose AI Assistants (Large Language Models)

    • Typical Use: Drafting, summarization, explanation, brainstorming, code assistance 
    • Fit with Academic Medicine: High interest, high risk. Widely used by faculty, learners, clinicians, and staff. Requires strong governance and clear boundaries in an academic medicine environment. 
    • Tool Examples: ChatGPT (public/enterprise), Claude (public/enterprise), Google Gemini, Microsoft Copilot (LLM layer), Meta Llama (open-weight variants) 
    • Data Types That May Be Used: Public information; de-identified or hypothetical scenarios; non-sensitive institutional content explicitly approved for use. 
    • Security Guidance: Public versions remain high risk. Enterprise or education tenants offer improved controls but still require approved use, strict data protections, human review and accountability, and disclosure when AI contributes meaningfully. Output is not authoritative and must be validated. 

Research and Literature Tools

    • Typical Use: Literature discovery, citation context, research synthesis 
    • Fit with Academic Medicine: Strong fit for research and scholarship when used as support tools, not authoritative sources. Aligns well with library-supported workflows. 
    • Tool Examples: Elicit, Scite, Semantic Scholar, Consensus, ResearchRabbit 
    • Data Types That May Be Used: Publicly available literature; citation metadata; high-level research questions that do not include confidential review materials or unpublished data. 
    • Security Guidance: Lower inherent data risk, but confidential review materials must remain protected. Citations and summaries must be independently verified. Disclosure is increasingly expected by publishers and funders. 

Productivity and Writing Tools

    • Typical Use: Writing, editing, summarization, organization, task support 
    • Fit with Academic Medicine: Good fit when embedded in institutionally governed platforms and used for non-sensitive academic or administrative work. 
    • Tool Examples: Microsoft Copilot (M365), Google Workspace AI, Grammarly, Notion AI 
    • Data Types That May Be Used: Non-sensitive drafts; general communications; materials without PII, PHI, FERPA, or confidential institutional content. 
    • Security Guidance: Risk depends on platform governance and configuration. Even when institutionally managed, sensitive data must still be protected and outputs reviewed before use in decisions or publications. 

Meeting Transcription and Summarization Tools

    • Typical Use: Transcription, meeting summaries, action items 
    • Fit with Academic Medicine: Limited fit. Elevated privacy and compliance risk in healthcare and education environments; appropriate only in narrowly approved scenarios. 
    • Tool Examples: Otter.ai, Fireflies.ai, Fathom, Zoom AI Companion, Microsoft Teams transcription 
    • Data Types That May Be Used: Non-sensitive meetings with informed participants; operational discussions explicitly approved for recording. 
    • Security Guidance: Highest privacy and compliance risk. Likely to capture PHI or sensitive information. Consent, notification, retention controls, and explicit approval are required. 

Data Analytics and Business Intelligence Tools

    • Typical Use: Data analysis, dashboards, natural-language querying of datasets 
    • Fit with Academic Medicine: Strong fit when aligned with institutional analytics governance, data stewardship, and reporting standards. 
    • Tool Examples: Power BI Copilot, Tableau Pulse/AI, Looker, ThoughtSpot 
    • Data Types That May Be Used: Approved institutional datasets; de-identified or aggregated data; analytics environments governed by IT and Data Analytics. 
    • Security Guidance: Risk is driven by the underlying data, not the AI interface. AI-generated insights must be validated and contextualized and do not replace data governance or human interpretation. 

Clinical AI and Ambient Documentation Tools

    • Typical Use: Documentation support, decision support, triage, ambient scribing 
    • Fit with Academic Medicine: Highly constrained fit. Requires close coordination across Clinical Affairs, compliance, legal, IT, and risk management. 
    • Tool Examples: Nuance DAX, Abridge, Suki, Nabla, Freed, Heidi Health 
    • Data Types That May Be Used: Limited PHI as permitted under formal agreements, business associate agreements, and clinical policy; documented workflows only. 
    • Security Guidance: High regulatory and patient safety risk. Requires formal approval, contractual safeguards, clinician oversight, patient communication, and ongoing monitoring. AI does not replace clinical judgment. 

EHR-Embedded AI

    • Typical Use: Embedded clinical decision support, documentation, predictive analytics 
    • Fit with Academic Medicine: Best-fit clinical AI category when vendor-integrated and governed within the EHR environment. 
    • Tool Examples: Epic-integrated AI features (ambient documentation, CDS modules, predictive risk tools) 
    • Data Types That May Be Used: PHI as permitted within the EHR under existing governance, audit, and security controls. 
    • Security Guidance: Still high risk, but mitigated by EHR governance, vendor accountability, and auditability. Requires clinical validation, monitoring, and adherence to clinical policies.

AI Features in Epic and Epic-Connected Modules

  • Users may encounter AI-enabled features through Epic or Epic-connected modules used by partner organizations. These capabilities may appear embedded or enabled by default, but they should not be assumed to be approved for use at WMed: availability within Epic does not equal approval. Use requires alignment with the WMed Office of Clinical Affairs, Information Technology, compliance, and legal requirements. Examples include ambient or assisted documentation, predictive alerts or risk stratification, In Basket or workflow prioritization, and population health or operational analytics. If you encounter a new AI-enabled Epic feature and are unsure, pause before proceeding: do not enable or rely on the feature for patient care, and consult appropriate leadership or IT for evaluation.