Strategic Framework

Brown Health has established a multi-layered governance structure featuring an Executive Oversight Council and an AI Governance Council that bring together organizational leaders to define responsible AI practices. The framework prioritizes six core components: robust governance, workforce education, technology and data alignment, high-impact use cases, strategic partnerships, and research-driven innovation.

The organization's approach centers on four guiding principles: security, safety, responsibility, and ethics. These principles are put into practice through secure data handling, mandatory human oversight, value-driven implementation, and continuous cost monitoring. Specialized subcommittees focus on enabling different portfolios while ensuring knowledge sharing across departments.

Implementation Approach

Brown Health employs a tiered risk assessment protocol through its AI Center of Excellence, which evaluates new AI use cases before they are scaled. The framework categorizes solutions by risk level: low-risk tools require basic validation, while medium- and high-risk applications undergo extensive testing and monitoring.
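
To make the tiering concrete, the sketch below shows one way such a protocol could be encoded. The tier criteria, review steps, and names are illustrative assumptions, not Brown Health's actual rubric.

```python
from dataclasses import dataclass, field
from enum import Enum


class RiskTier(Enum):
    LOW = "low"
    MEDIUM = "medium"
    HIGH = "high"


# Hypothetical mapping of risk tier to required review steps; the actual
# Center of Excellence criteria are not described in detail in the source.
REQUIRED_REVIEWS = {
    RiskTier.LOW: ["basic validation"],
    RiskTier.MEDIUM: ["basic validation", "extended testing", "bias review"],
    RiskTier.HIGH: ["basic validation", "extended testing", "bias review",
                    "clinical pilot", "ongoing monitoring plan"],
}


@dataclass
class UseCase:
    name: str
    touches_phi: bool          # handles protected health information
    influences_care: bool      # output affects clinical decisions
    completed_reviews: list[str] = field(default_factory=list)


def classify(use_case: UseCase) -> RiskTier:
    """Assign a risk tier from simple, assumed heuristics."""
    if use_case.influences_care:
        return RiskTier.HIGH
    if use_case.touches_phi:
        return RiskTier.MEDIUM
    return RiskTier.LOW


def ready_to_scale(use_case: UseCase) -> bool:
    """A use case scales only after every review for its tier is complete."""
    tier = classify(use_case)
    return all(step in use_case.completed_reviews for step in REQUIRED_REVIEWS[tier])


if __name__ == "__main__":
    draft_tool = UseCase("discharge instruction drafting",
                         touches_phi=True, influences_care=True,
                         completed_reviews=["basic validation", "extended testing"])
    print(classify(draft_tool).value, ready_to_scale(draft_tool))
```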

The enablement strategy follows a phased approach: starting with secure access to tools like Microsoft Copilot for personal productivity, advancing to paid subscription tools for specific use cases, and ultimately developing custom AI agents trained on department-specific policies and procedures.
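
The final phase, custom agents grounded in department policies, is commonly built by retrieving relevant policy text and constraining the model's answer to it. The minimal sketch below illustrates that pattern with made-up policy data and a toy keyword retriever; it is not Brown Health's implementation.

```python
def retrieve_policies(question: str, policies: dict[str, str], k: int = 2) -> list[str]:
    """Crude keyword-overlap retrieval standing in for a real vector search."""
    terms = set(question.lower().split())
    scored = sorted(
        policies.items(),
        key=lambda item: len(terms & set(item[1].lower().split())),
        reverse=True,
    )
    return [f"{title}: {text}" for title, text in scored[:k]]


def build_prompt(question: str, policies: dict[str, str]) -> str:
    """Assemble a prompt that instructs the model to answer only from policy."""
    context = "\n".join(retrieve_policies(question, policies))
    return (
        "Answer using only the department policies below. "
        "If the policies do not cover the question, say so.\n\n"
        f"Policies:\n{context}\n\nQuestion: {question}"
    )


if __name__ == "__main__":
    # Hypothetical department policies used only for illustration.
    dept_policies = {
        "PTO requests": "Submit time-off requests at least 14 days in advance.",
        "Badge access": "Report lost badges to security within 24 hours.",
    }
    print(build_prompt("How far ahead do I request vacation?", dept_policies))
```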

Practical Applications

Current implementations include generative AI tools like Ambient and Epic Assistant for clinical documentation, patient discourse summarization, in-basket navigation, and appointment scheduling. A prototype discharge assistant demonstrates AI's capability to analyze patient cases, recommend discharge dispositions, draft instructions, and arrange referrals—all while adhering to Brown Health's policies.
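
A discharge assistant of this kind can be pictured as a small pipeline with a hard clinician sign-off gate at the end. The sketch below is a hypothetical illustration of that flow, with toy rules standing in for the model's recommendations.

```python
from dataclasses import dataclass


@dataclass
class PatientCase:
    summary: str
    mobility_ok: bool
    home_support: bool


@dataclass
class DischargePlan:
    disposition: str
    instructions: str
    referrals: list[str]
    clinician_approved: bool = False


def recommend_disposition(case: PatientCase) -> str:
    """Toy rule standing in for the model's disposition recommendation."""
    if case.mobility_ok and case.home_support:
        return "home with outpatient follow-up"
    return "skilled nursing facility evaluation"


def draft_plan(case: PatientCase) -> DischargePlan:
    """Draft instructions and referrals from the recommended disposition."""
    disposition = recommend_disposition(case)
    instructions = (f"Patient summary: {case.summary}. "
                    f"Recommended disposition: {disposition}.")
    referrals = [] if "home" in disposition else ["case management", "SNF liaison"]
    return DischargePlan(disposition, instructions, referrals)


def finalize(plan: DischargePlan, clinician_signed_off: bool) -> DischargePlan:
    """Nothing leaves the draft stage without explicit human sign-off."""
    plan.clinician_approved = clinician_signed_off
    if not plan.clinician_approved:
        raise ValueError("Draft plan requires clinician review before release.")
    return plan


if __name__ == "__main__":
    case = PatientCase("72-year-old post hip replacement",
                       mobility_ok=False, home_support=True)
    plan = draft_plan(case)
    print(finalize(plan, clinician_signed_off=True).disposition)
```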

Clinical tools such as Dragon/Nuance and Epic Gen AI are deployed with clinical oversight and patient feedback loops to ensure quality and safety. The organization uses secure versions of large language models, including Copilot and Claude, and maintains data privacy and security through InfoSecurity reviews and an approved tool registry.
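
One way an approved tool registry can be enforced is a simple pre-flight check before any request leaves the organization. The sketch below assumes hypothetical tool names and registry fields; the real registry's structure is not described in the source.

```python
# Illustrative registry of tools that have passed InfoSecurity review.
APPROVED_TOOLS = {
    "copilot-enterprise": {"phi_allowed": False},
    "claude-secure": {"phi_allowed": False},
}


def check_tool(tool_name: str, contains_phi: bool) -> None:
    """Raise if the tool is unapproved or the data class is not permitted."""
    entry = APPROVED_TOOLS.get(tool_name)
    if entry is None:
        raise PermissionError(f"{tool_name} has not passed InfoSecurity review.")
    if contains_phi and not entry["phi_allowed"]:
        raise PermissionError(f"{tool_name} is not approved for PHI.")


if __name__ == "__main__":
    check_tool("claude-secure", contains_phi=False)   # permitted
    try:
        check_tool("random-chatbot", contains_phi=False)
    except PermissionError as exc:
        print(exc)
```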

Key Takeaways

  1. Human-Centered Design: AI implementation focuses on empowering healthcare workers rather than replacing them, with HR involvement in governance to ensure thoughtful deployment and reduce burnout.
  2. Responsible Governance: Comprehensive oversight structures, risk assessment protocols, and continuous monitoring ensure safe, ethical AI deployment while preventing bias and maintaining equity.
  3. Phased Implementation: A structured approach moves from personal productivity tools to custom solutions, allowing for proper validation and staff education at each stage.
  4. Clinical Integration: AI tools address real challenges including documentation burden, administrative tasks, and care coordination while maintaining human oversight.
  5. Continuous Learning: Ongoing education, nursing champions, and regional innovation workshops promote responsible adoption and collaboration across departments.
  6. Risk Mitigation: Controls reduce the risk of AI hallucinations and errors through validation processes, bias monitoring, and secure data handling practices.

Brown Health's initiative demonstrates that successful AI integration in healthcare requires balancing innovation with responsibility, ensuring technology enhances rather than replaces human expertise while maintaining patient safety and data security.