Side Meetings

SMB117

Generative AI for Health for All: Guiding Responsible Implementation through Continuous Evaluation

29 Jan
  • 14:00 - 17:30 hrs (BKK)

  • Contact Person: Kailey Seiler, kailey.seiler@yale.edu

Organizers
  • Laboratory for intelligent Global Health and Humanitarian Response Technologies (LiGHT), Yale, EPFL, CMU-Africa
  • World Health Organization, Department of Digital Health and Innovation

Generative AI, particularly large language models (LLMs), has the potential to reshape healthcare and accelerate progress toward the Sustainable Development Goals and global health equity. However, if poorly governed or implemented, AI could exacerbate existing inequities rather than alleviate them. Current global frameworks often overlook the unique challenges and disparities faced by low- and middle-income countries (LMICs). A cohesive global response is essential to support countries in addressing priority issues in AI governance and regulation. Developers of evaluation criteria and systems should strive to create standards that not only measure performance but also promote the co-creation and deployment of these standards worldwide.

Continuous, comprehensive evaluation is essential for evidence-based governance and successful AI implementation. Such evaluation should not only measure outcomes but also integrate user feedback to refine the AI systems themselves. A participatory approach that involves end users in the co-creation process can build trust, ensure accountability, and enhance user vigilance. This strategy allows for localized, real-time validation and contextually adapted models that align with global safety and efficacy standards.

These massive-scale models offer a significant opportunity to streamline and unify the digital health landscape through coordinated evaluation and iterative development. Realizing this vision necessitates collaborative efforts and open dialogue among key stakeholders, including public and private sector leaders, academia, and non-governmental organizations, to foster a global community for shared expertise and collective action.

This session explores evidence-based implementation pipelines for generative AI, emphasizing continuous evaluation as a cornerstone for successful deployment. We will feature perspectives from leaders in digital health across various sectors and showcase an approach called MOOVE (Massive Open Online Validation and Evaluation), concluding with practical guidance on engaging with evaluation frameworks for impactful outcomes.

  • Cross-Sector Collaboration: Create a dialogue among public, private, and academic sectors to build robust, evidence-based AI evaluation frameworks and promote shared ownership.

  • Participatory Continuous Evaluation Frameworks: Recognize continuous evaluation as essential for guiding ethical AI use, aligning with WHO’s standards for AI ethics and governance.

  • Localization for Impact: Advocate for AI models adapted to the unique health needs and cultural contexts of LMICs to support inclusive healthcare.

  • Community of Action for MOOVE: Build a dedicated network of stakeholders committed to guiding and promoting collaborative expert evaluation.