Mitigating Bias in AI: Holding Innovation to a Higher Standard

By Ed Ikeguchi & Lei Guan

When it comes to building equitable, quality AI, prioritizing diverse datasets needs to be embedded in a developer’s DNA – not a “nice to have,” but a deliberate framework within which AI is built. Before deploying an algorithm to patients, it should be rigorously evaluated under both common and rare scenarios to ensure it performs as intended and sufficiently accounts for the variety of real-world populations. For AI innovation to advance and earn trust across the industry, diversity must be the linchpin of AI development – a non-negotiable standard.

Our Own Journey to Unbiased AI

Admittedly, achieving diversity is a monumental challenge when building AI. When designing computer vision algorithms in particular, there are many dimensions of diversity to take into account – from skin color to tongue pigmentation to a room’s lighting. As I mentioned in my last blog, diversity is a lesson learned deep in our company’s history – long before bias in AI was a commonplace discussion.

When we first built our AI, we used readily available and widely adopted open-source datasets. While these were an essential tool in kickstarting our development, we quickly realized that our algorithm wasn’t working properly for darker-skinned patients because the open-source dataset we used to train it was built largely from fair-skinned people. We knew building a quality, equitable product was the only path forward we were willing to take, so we built our own diverse dataset. We recruited diverse volunteers from sources like Craigslist to contribute videos – including people wearing hats, sunglasses, artificial nails, and more – to train the AI to recognize a patient no matter their appearance or environment. Now, with over one million dosing interactions recorded, we are proud of how far AiCure’s algorithm has come and of its ability to work with all patients.

We knew unbiased data capture was the only way to provide the objective insights our customers needed. That meant envisioning our algorithm not in an isolated testing environment but in real-world, diverse settings. It took time and effort to get it right, but it was a critical lesson we carried with us for the next decade: diversity isn’t going to happen on its own – it has to be built in proactively from the start.

Governance Means Progress

The same technology that holds such influence remains largely ungoverned. As AI innovation and adoption grow, there needs to be accountability and robust peer review of how an algorithm will perform in the real world. As an industry, we need to ask the hard questions and encourage transparency, with companies readily able to explain exactly how their algorithms were trained.

Ultimately, the market will hold AI to a higher standard than it does today. More sophisticated buyers will want to know exactly how an AI solution can be expected to work with their diverse patients, and algorithms that aren’t trained on diverse data will soon be outperformed. With more scrutiny comes a higher bar for developers to meet – and a higher quality end product.

Building a Foundation for Equitable Care

Emphasizing the role of diverse datasets in building AI goes far beyond having good technology or pursuing diversity for diversity’s sake. Ultimately, these healthcare tools are just that: tools to support the health and well-being of patients. Ensuring that AI is built to reflect the populations who will use it is a critical element of delivering equitable care. In clinical trials, your patient population needs to be representative of the people who will actually take the medication; similarly, the datasets used to build AI need to be representative. The makeup and needs of a trial for sickle cell anemia are drastically different from those for cystic fibrosis – and the algorithms need to be trained accordingly.

Embedding this notion of diversity and representation in innovation has a long way to go. But the mere fact that we are starting to have more honest conversations about AI’s implicit biases is a step in the right direction – not only in advancing AI, but in building a common language for talking about these sensitive issues. Together, we can hold each other to a higher standard so we can do right by our patients and realize the true promise of these technologies.