AI Data Bias and Digital Ageism

An insightful National Institutes of Health article defines bias as “any trend or deviation from the truth in data collection, data analysis, interpretation and publication which can lead to false conclusions”. Bias elimination, then, is a goal and a concern at every stage of the data cycle, from collection through inference.

Given advances in technology, many components of high-stakes decision-making data cycles are now governed by artificial intelligence (AI) technologies. And although we can understand AI in general as machine intelligence, it is an intelligence designed to simulate human intelligence and understanding. It therefore makes sense to ponder: if human thinking and processes can introduce bias into the various data cycle stages, what’s to stop bias from materializing in an AI-governed data process?

AI Data Bias

Well, it turns out that the potential for AI data bias is a real issue. The many talented folks who work with these data and technologies recognize the need to identify and address AI data bias with the same urgency that would be applied to investigating a human-driven data decision cycle. And although AI data bias has been shown to have potential adverse impacts on many different groups, my personal interest in this blog post is to explore what the introduction of such biases may mean for our seasoned adult population.

Digital Ageism

There is an interesting phrase in use, digital ageism, which is something we should all likely want to become more familiar with. We of course understand ageism historically as a systematic societal bias directed, intentionally or unintentionally, toward older adult populations. Digital ageism, however, is defined as “age bias in technology”. Who knew we’d ever have to be concerned with such? And yes, this is an issue with technologies, AI technologies included. For me this NIH article is a fairly straightforward read that summarizes digital ageism concerns as they relate to AI. One area of concern shared in the article, which I found particularly interesting, is the potential for AI data bias due to data underrepresentation as it relates to seasoned adults.

Everything else being equal, there appears to be considerably less diversity in the seasoned adult data available to train AI models than in the data available for other age groups. This matters because much of the functional benefit realized from AI is grounded in the technology’s ability to learn from prior data. If we as a group have very little prior data available, then there is less learning about us that can take place.
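To make the underrepresentation idea a bit more concrete, here is a minimal Python sketch. The dataset, the age groupings, and the record counts are all invented for illustration; the point is simply how a quick tally can reveal how small a share of the training data an older age group contributes:

```python
from collections import Counter

# Hypothetical training records, each labeled with the age group of the
# person the record describes (numbers invented for illustration).
training_ages = ["18-34"] * 500 + ["35-54"] * 400 + ["55+"] * 50

counts = Counter(training_ages)
total = sum(counts.values())

# Share of the training data contributed by each age group.
for group, n in sorted(counts.items()):
    print(f"{group}: {n} records ({n / total:.0%} of training data)")
```

In this made-up example the 55+ group supplies only about 5% of the records, so a model trained on this data simply has fewer examples from which to learn anything about that group.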

Additionally, it appears that much of the available seasoned adult AI data gathering has been focused on the areas of health management and medical needs. Yes, these are important areas of consideration for us, but health and medical issues do not completely define us as a group. We like to socialize, be entertained, have fun, and enjoy leisure activities, among other things. The potential for AI data bias due to data underrepresentation, and how it impacts us, needs to be a concern for our seasoned adult community.

So what can we do about this? What kinds of actions can we take? I’ll write about this in a follow-up post.