
Towards more ‘fair’ AI: algorithmic intent in a disparate society — part 4

Linda Margaret
5 min read · Jan 18, 2024

Can ‘fair’ algorithms help address unfair policies offline? Part 4 of nine (I think).

Quick review of posts 1 to 3.

Post 1: we live in a WEIRD digital world.

Post 2: terms and conditions for my discussions.

  • Algorithms are apathetic math deployed and fostered by emotional humans.
  • Each human embraces a ‘ground truth,’ or what s/t/he/y think(s) is measurably true right now.
  • Humans use algorithms to aim for ‘predicted outcomes,’ or what s/t/he/y would like to be able to measure and thus perceive as reality in some specifically defined future.

Post 3: Reality is built on the ever-shifting sands of what we think we know right now, which will change depending on the individual and the timing.

So if we can’t really ‘know’ anything, what can we remember?

Context and perspective are critical.

Chris Rock: “A white boy that made Cs in college can make it to the White House.”

I’ve mentioned the need for comparison when assessing how ‘fair’ outcomes are. This means that any humans and/or tech doing the measuring must first concretely define, then compare and contrast, variables identified via evocative labels like gender or ethnicity.
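To make that comparison step concrete, here is a minimal sketch. The helper function, the group labels, and the loan-decision records are all hypothetical (not from this series): it simply shows what “define the variable, then compare outcomes across its values” looks like as a measurement.

```python
# Hypothetical sketch: compare a measurable outcome rate across
# concretely defined groups, the comparison step described above.

def outcome_rate_by_group(records, group_key, outcome_key):
    """Return {group_label: fraction of records with a positive outcome}."""
    totals, positives = {}, {}
    for r in records:
        g = r[group_key]
        totals[g] = totals.get(g, 0) + 1
        positives[g] = positives.get(g, 0) + (1 if r[outcome_key] else 0)
    return {g: positives[g] / totals[g] for g in totals}

# Hypothetical loan decisions, labelled by one concretely defined variable.
decisions = [
    {"gender": "A", "approved": True},
    {"gender": "A", "approved": True},
    {"gender": "A", "approved": False},
    {"gender": "B", "approved": True},
    {"gender": "B", "approved": False},
    {"gender": "B", "approved": False},
]

rates = outcome_rate_by_group(decisions, "gender", "approved")
# The gap between the best- and worst-off group is one common
# (and contested) fairness measure, sometimes called demographic parity.
gap = max(rates.values()) - min(rates.values())
```

Note that the number you get depends entirely on which labels you chose to define and measure in the first place, which is exactly the point of the paragraph above.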


I write academic grants etc. in Europe's capital. Current work: cybersecurity, social science. https://www.linkedin.com/in/lindamargaret/
