Algorithmic Bias in Learning Analytics

posted in: Learning Analytics

You may have heard some discussion about bias in analytics, but what exactly is it? And surely we have fixed it by now? Human beings build analytic models and algorithms, and therefore human bias makes it into the code of these models and algorithms in various ways. This article looks at four causes of bias in machine learning, all of which apply to learning analytics as well:

- Sample Bias: the model is trained on data that is not an accurate sample of the target population.
- Prejudice Bias: the training data reflects sociocultural stereotypes, in ways both small and large.
- Measurement Bias: the devices used to collect or measure the data are flawed or faulty.
- Algorithm Bias: the model strikes a poor balance between complexity and noise when analyzing the data.

Completely removing bias of any kind from our algorithms is not currently realistic, but the article covers some quick ways to start addressing these biases in our data. Other types of bias are possible, and the four listed here are far more complex and nuanced than a short article can cover.

Why is this so important in learning analytics? One recent report puts the current learning analytics market size at USD 2.6 billion, projected to grow to USD 7.1 billion by 2023. This means that learning analytics is already used in a significant (and rapidly growing) part of the learning field, and any bias that goes ignored can spread to and affect a massive number of learners. Recognizing and addressing every type of bias should be a core goal of any company or department developing any kind of analytic product.
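To make the first category concrete, here is a minimal sketch of one quick check for sample bias: comparing each group's share of the training data against its share of the target learner population. All group names and figures below are hypothetical, invented purely for illustration; they are not from the article or any real dataset.

```python
# Hypothetical sketch: flag sample bias by comparing how often each group
# appears in the training data against its share of the target population.
# All names and numbers here are illustrative assumptions.

def sample_bias_report(training_counts, population_shares):
    """Return each group's training-data share minus its population share.

    A large negative gap means the group is underrepresented in training.
    """
    total = sum(training_counts.values())
    return {
        group: training_counts.get(group, 0) / total - share
        for group, share in population_shares.items()
    }

# Illustrative numbers: adult learners make up 40% of the target learner
# population but only 10% of the training sample -- a likely sample bias.
training_counts = {"traditional": 900, "adult_learner": 100}
population_shares = {"traditional": 0.60, "adult_learner": 0.40}

report = sample_bias_report(training_counts, population_shares)
for group, gap in sorted(report.items()):
    print(f"{group}: {gap:+.2f}")  # adult_learner: -0.30, traditional: +0.30
```

A check like this only catches representation gaps for groups you already know to measure; prejudice, measurement, and algorithm bias require different diagnostics.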

(image by Gertrūda Valasevičiūtė on Unsplash)
