The Inclusive Design Team at Microsoft has written a thoughtful piece about artificial intelligence (AI) and inclusion. It asks: can AI be racist? If a facial recognition algorithm were trained mostly on light-skinned people, how well would it recognise dark-skinned people? From these questions the authors explore how bias in a system can cause design “missteps”, and how those missteps erode trust between the system and its users. As the digital world expands, we need to be able to trust the technology and the programming behind it for it to deliver social and economic benefit to us all.
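The facial recognition question comes down to dataset bias: a model calibrated on a skewed sample can look accurate overall while failing an underrepresented group almost completely. Here is a deliberately toy simulation of that effect (the groups, the single “brightness” feature, and the threshold rule are all hypothetical, invented for illustration, not Microsoft's example):

```python
import random

random.seed(0)

# Toy feature: faces from group "A" photograph brighter than group "B".
# This is an invented stand-in for whatever real features a model learns.
def sample(group, n):
    mean = 0.7 if group == "A" else 0.3
    return [random.gauss(mean, 0.05) for _ in range(n)]

# Dataset bias: the training set is 95% group A.
train = sample("A", 950) + sample("B", 50)

# "Learned" rule: accept anything at least as bright as the 5th
# percentile of the training data. On the skewed training set this
# looks like a 95%-accurate detector.
threshold = sorted(train)[int(0.05 * len(train))]

def recall(faces):
    return sum(b >= threshold for b in faces) / len(faces)

recall_a = recall(sample("A", 1000))  # close to 1.0
recall_b = recall(sample("B", 1000))  # far lower: the bias surfaces here
print(f"group A recall: {recall_a:.2f}, group B recall: {recall_b:.2f}")
```

The headline training accuracy hides the misstep: the 5% of training faces the rule rejects are almost exactly the group B faces, so the finished system barely recognises group B at all. That gap between aggregate performance and per-group performance is what erodes trust.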
Microsoft says its first inclusive design principle is to recognise exclusion and identify bias, advice that could apply to any design professional. The article goes on to describe five kinds of bias: Association, Dataset, Interaction, Automation, and Confirmation bias. It is an interesting read, because the digital world touches all of us. If it appeals, you might also be interested in Weapons of Math Destruction, which discusses the role that software and its algorithms play in our lives without us realising it.