Why Your Medical AI Model Might Not Work in Africa: A Python Guide to Measuring Bias

Track

AI / ML

Type

Talk

Level

Intermediate

Language

English

Duration

20 minutes

Abstract

Medical AI models often fail in African settings due to hidden dataset and model biases. This talk shows how to detect and measure bias in medical imaging using Python. We will walk through real examples with MRI and X-ray data, perform subgroup performance analysis, apply fairness improvements, and discuss practical lessons for building more reliable and equitable healthcare AI. A minimal sketch of the kind of subgroup analysis covered in the talk follows below.
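As a taste of the subgroup performance analysis mentioned above, here is a minimal sketch using scikit-learn on synthetic data. The data, subgroup names (e.g. "site_A", "site_B"), and the simulated site effect are hypothetical stand-ins, not the datasets used in the talk; the point is simply that an overall metric can hide large gaps between subgroups.

```python
import numpy as np
from sklearn.metrics import roc_auc_score, recall_score

rng = np.random.default_rng(0)

# Synthetic stand-in for a chest X-ray classifier's outputs:
# y_true - ground-truth labels (1 = disease present)
# y_prob - model's predicted probabilities
# group  - subgroup attribute per image (e.g. acquisition site)
n = 1000
group = rng.choice(["site_A", "site_B"], size=n, p=[0.8, 0.2])
y_true = rng.binomial(1, 0.3, size=n)

# Simulate a model that is noisier (worse) on the under-represented site.
noise = np.where(group == "site_A", 0.15, 0.35)
y_prob = np.clip(y_true + rng.normal(0, noise), 0, 1)

# The overall metric can look fine while a subgroup lags behind.
print(f"overall AUC: {roc_auc_score(y_true, y_prob):.3f}")
for g in np.unique(group):
    mask = group == g
    auc = roc_auc_score(y_true[mask], y_prob[mask])
    sens = recall_score(y_true[mask], (y_prob[mask] >= 0.5).astype(int))
    print(f"{g}: n={mask.sum()}, AUC={auc:.3f}, sensitivity={sens:.3f}")
```

Replacing the synthetic arrays with a real model's predictions and a real subgroup attribute (scanner site, sex, age band, country of acquisition) gives the per-group metrics that the talk uses to surface hidden bias.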

Speakers

Ilerioluwakiiye Abolade