WHO Releases Guidelines on AI Ethics and Governance

Written by Jeremy Werner

Jeremy is an experienced journalist, skilled communicator, and constant learner with a passion for storytelling and a track record of crafting compelling narratives. He has a diverse background in broadcast journalism, AI, public relations, data science, and social media management.
Posted on 01/31/2024
In News

The World Health Organization (WHO) is weighing in on AI and its impact on healthcare. Its new guidance, “Ethics and Governance of Artificial Intelligence for Health,” offers insights and recommendations on integrating large multi-modal models (LMMs) into the healthcare sector.


The document states that one of the primary advantages of LMMs is their ability to streamline administrative tasks, thereby allowing healthcare professionals to allocate more time to patient care. For instance, LMMs can facilitate communication between clinicians and patients by simplifying medical jargon and generating more comprehensible dialogue. They can also assist in filling in missing information in electronic health records and drafting clinical notes following patient visits, whether in person or virtual.


Moreover, LMMs hold the potential to automate various administrative functions, such as drafting prescriptions and discharge summaries, scheduling appointments, and assigning billing codes. Automating these processes could significantly reduce the administrative burden on healthcare providers and improve overall efficiency in healthcare delivery.


Despite the promising applications, there are inherent risks. Errors, inaccuracies, and misinterpretations can occur, leading to serious consequences for patients and healthcare organizations. The guidance underscores the importance of maintaining human oversight over AI systems to mitigate the risk of errors and ensure patient safety.


Transparency, accountability, fairness, and data privacy emerge as critical principles guiding the ethical use of LMMs in healthcare. Healthcare organizations are urged to be transparent about the use of LMMs, including how they are deployed, what data is collected, and how it is utilized. Transparency is vital in building trust among healthcare professionals and patients and fostering a clear understanding of AI systems’ capabilities and limitations.


Furthermore, accountability mechanisms must be in place to hold healthcare organizations responsible for the ethical use of LMMs. Clear lines of responsibility should be established for the development, implementation, and oversight of AI systems, ensuring accountability for any errors that may occur.


Fairness in the deployment of LMMs is paramount to prevent discrimination and ensure equitable healthcare outcomes for all patient populations. Healthcare organizations must ensure that AI systems are developed and utilized in a manner that upholds principles of fairness and does not perpetuate biases or inequalities within the healthcare system.


Data privacy and security are also fundamental considerations in the use of LMMs. Healthcare organizations must implement robust policies and procedures to safeguard patient data against unauthorized access or misuse, ensuring compliance with relevant privacy regulations and standards.


Lastly, stakeholder engagement emerges as a key component of responsible AI governance in healthcare. Healthcare professionals, patients, policymakers, and other stakeholders must be actively involved in the development, implementation, and evaluation of AI systems to ensure those systems meet their diverse needs and interests.

As global entities strive to harness the economic benefits of AI, feel free to contact BABL AI if you need assistance. Their team of audit experts is ready to provide valuable guidance and support.
