
Towards more ‘fair’ AI: algorithmic intent in a disparate society — part 8

Linda Margaret
3 min read · Jan 22, 2024

Can ‘fair’ algorithms help address unfair policies offline? Part 8 of 9 (because I want to let this go for a little bit, and I’m sure you do too).

Quick review of previous posts.

Oh, just go read them.

We can (choose to) learn from algorithms.

Sonia Sotomayor: “Until we get equality in education, we won’t have an equal society.”

Many algorithms are trained and applied according to metrics that we humans choose to classify and prioritize, either deliberately or ‘innocently,’ through the data we select and use when training them.

If the data is biased, the algorithm is biased, unless we compensate: we can place additional demands on the algorithm and mathematically revise how the data is weighted to account for the biases we do or don’t want.
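To make that concrete: one standard way to “mathematically revise how the data is weighted” is the reweighing scheme of Kamiran and Calders, which gives each training example a weight so that, in the weighted data, group membership and outcome look statistically independent. Here is a minimal sketch, assuming a hypothetical two-group dataset (the groups, labels, and numbers are all made up for illustration):

```python
from collections import Counter

# Hypothetical training data: each example has a protected-group
# label ("A" or "B") and an observed outcome (1 = favorable).
groups = ["A", "A", "A", "B", "B", "B", "B", "B"]
labels = [1, 1, 0, 1, 0, 0, 0, 0]

n = len(labels)
group_counts = Counter(groups)               # counts per group
label_counts = Counter(labels)               # counts per outcome
joint_counts = Counter(zip(groups, labels))  # counts per (group, outcome) cell

# Reweighing (Kamiran & Calders): weight each (group, outcome) cell by
# its expected frequency under independence divided by its observed
# frequency, so the weighted data carries no statistical link between
# group membership and outcome.
weights = {
    (g, y): (group_counts[g] / n) * (label_counts[y] / n) / (joint_counts[(g, y)] / n)
    for (g, y) in joint_counts
}

for (g, y), w in sorted(weights.items()):
    print(f"group={g}, label={y}: weight={w:.2f}")
```

An example in an under-favored cell gets a weight above 1, and one in an over-favored cell a weight below 1, so a learner trained on the weighted data is nudged away from reproducing the original imbalance.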

When this occurs within the context of ‘what is fair,’ it usually involves judging the parity of the results of an algorithm’s implementation: how close the algorithm gets to the desired ‘predicted outcome,’ and/or how far it moves from the less desirable ‘ground truth,’ the measured status quo.

With fairness, those desired outcomes fall along two very human axes: demographic parity and conditional demographic parity.
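Here is a minimal sketch of the difference, again with hypothetical data (the “qualified” attribute is an invented stand-in for whatever legitimate conditioning variable a policy allows). Demographic parity compares favorable-outcome rates across groups overall; conditional demographic parity makes the same comparison within strata of the conditioning attribute:

```python
# A toy check of the two parity notions, assuming binary predictions
# (1 = favorable decision) for two groups. All data is hypothetical.
records = [
    {"group": "A", "qualified": True,  "pred": 1},
    {"group": "A", "qualified": True,  "pred": 1},
    {"group": "A", "qualified": False, "pred": 0},
    {"group": "B", "qualified": True,  "pred": 1},
    {"group": "B", "qualified": False, "pred": 0},
    {"group": "B", "qualified": False, "pred": 0},
]

def positive_rate(rows):
    """Share of favorable predictions in a subset of records."""
    return sum(r["pred"] for r in rows) / len(rows) if rows else 0.0

# Demographic parity: P(pred=1 | group=A) should equal P(pred=1 | group=B).
for g in ("A", "B"):
    rate = positive_rate([r for r in records if r["group"] == g])
    print(f"P(pred=1 | group={g}) = {rate:.2f}")

# Conditional demographic parity: the same comparison, but made
# separately within each stratum of the conditioning attribute.
for q in (True, False):
    for g in ("A", "B"):
        rows = [r for r in records if r["group"] == g and r["qualified"] == q]
        print(f"P(pred=1 | group={g}, qualified={q}) = {positive_rate(rows):.2f}")
```

In this toy data the overall favorable rates differ (roughly 0.67 vs 0.33) yet match perfectly within each stratum, so the same predictions pass one fairness test and fail the other. Which attributes we allow ourselves to condition on is exactly the ‘very human’ choice.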

Demographic parity is…

Written by Linda Margaret

I write academic grants etc. in Europe's capital. Current work: cybersecurity, social science. https://www.linkedin.com/in/lindamargaret/
