Overcoming bias in AI

Artificial Intelligence (AI) is entering our everyday lives with increasing speed, sometimes without our knowledge. But AI is only as good as the data it is fed, and bias is a particular concern for marginalised groups. AI has the potential to enhance life for everyone, but that requires overcoming bias in AI development. In his article, Christopher Land argues for more advocacy and transparency in AI.

The power of machine learning comes from pattern recognition within vast quantities of data. Using statistics, AI reveals new patterns and associations that human developers might miss or lack the processing power to uncover.
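To make "pattern recognition using statistics" concrete, here is a toy sketch with invented data (not from the article): a simple correlation calculation of the kind that, scaled up to millions of examples and variables, is what machine learning systems do to surface associations humans might miss.

```python
# Toy sketch (invented data): machine learning is, at heart, statistics
# over many examples. Here we surface one pattern -- a correlation
# between hours of practice with a tool and errors made.
hours_of_use = [1, 2, 3, 4, 5, 6]
errors_made  = [9, 8, 6, 5, 3, 2]

n = len(hours_of_use)
mean_x = sum(hours_of_use) / n
mean_y = sum(errors_made) / n

# Covariance and variances, computed by hand for transparency.
cov = sum((x - mean_x) * (y - mean_y)
          for x, y in zip(hours_of_use, errors_made)) / n
var_x = sum((x - mean_x) ** 2 for x in hours_of_use) / n
var_y = sum((y - mean_y) ** 2 for y in errors_made) / n

correlation = cov / (var_x ** 0.5 * var_y ** 0.5)
print(correlation)  # strongly negative: more use, fewer errors
```

The catch, as the rest of the article explains, is that the patterns found are only as representative as the data supplied.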

Image: a background of computer code with a female face overlaid.

Designing for the average is fraught with problems: a statistical average does not correspond to any real “average human”, because averages flatten out human diversity. This is why AI systems risk leaving some people behind. And gathering the richer data needed to capture that diversity raises privacy issues of its own.
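A toy illustration (invented numbers, not from the article) of why a statistical average is not a human average: in a diverse population, a design tuned to the mean can fail every single individual.

```python
# Toy illustration (invented data): a "design for the average" that
# fits nobody. The population has two distinct groups of heights.
heights_cm = [150, 152, 154, 186, 188, 190]

average = sum(heights_cm) / len(heights_cm)  # 170.0

# A one-size product built for the average, with a +/- 5 cm tolerance:
fits = [h for h in heights_cm if abs(h - average) <= 5]

print(average)    # 170.0
print(len(fits))  # 0 -- the "average" design fits no one here
```

The average (170 cm) sits in the gap between the two groups, so optimising for it serves no actual person — the same failure mode, writ small, that the article warns about for AI systems trained on aggregate data.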

AI shows great promise, with robot assistants helping people with disability and older people with everyday tasks. AI imaging and recognition tools help nonvisual users understand video and pictures.

Christopher Land outlines how AI and machine learning work, and how bias creeps into AI systems unless it is actively prevented. He also makes recommendations for strengthening legal protections for people with disability. The paper is not technical; rather, it clearly explains how AI works, where it is used, and what needs to be done.

The title of the article is Disability Bias & New Frontiers in Artificial Intelligence. It explains the “Black Box” problem and makes the case for a “Glass Box” instead.

From the abstract

Bias in artificial intelligence (AI) systems can cause discrimination against marginalized groups, including people with disabilities. This discrimination is most often unintentional and due to a lack of training and awareness of how to build inclusive systems.

This paper has two main objectives: 1) provide an overview of AI systems and machine learning, including disability bias, for accessibility professionals and related non-development roles; and 2) discuss methods for building accessible AI systems inclusively to mitigate bias.

Worldwide progress on establishing legal protection against AI bias is provided, with recommendations on strengthening laws to protect people with disabilities from discrimination by AI systems. When built accessibly, AI systems can promote fairness and enhance the lives of everyone, in unprecedented ways.

Diversity and inclusion in AI

An Australian book chapter takes a comprehensive and practical approach to embedding equity and inclusion throughout AI development. It argues this should happen at both the governance and development levels, by applying inclusive design and human-centred design across the AI ‘ecosystem’.

The title of the chapter is Diversity and Inclusion in Artificial Intelligence.