Towards more ‘fair’ AI: algorithmic intent in a disparate society — part 2
Can ‘fair’ algorithms help address unfair policies offline? Part 2 of 9 (I think.)
Yesterday we considered how deceptively WEIRD data is — and, by extension, the algorithms that feed off WEIRD data and the regulations that attempt to constrain them.
Today, let’s consider what an algorithm is and define a few terms.
All algorithms share some basic characteristics.
Much bile and bemusement have spilled as coders work with legal experts, ethical philosophers, and analytical sociologists, amongst many others, to try and determine what ‘fair’ treatment looks like online.
Much of this research focuses on algorithms: real-world biases and aspirations writ large in digital form, with arguably less reason to change than the average human.
Algorithms are mathematical calculations that weigh what individual variables (and the people those variables may represent) are trying to achieve against what is permissible and likely within the existing community or framework the algorithm is familiar with.
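To make that a little more tangible before we go on, here is a deliberately toy sketch of my own (in Python; every name, weight, and threshold is invented for illustration, not drawn from any real system): an algorithm that weighs an individual's variables against what its historical data says is 'likely'.

```python
# A tiny, hypothetical illustration of the definition above.
# The algorithm weighs what an individual's variables ask for against what
# the historical data it was built on says is permissible or likely.
# Every name, weight, and threshold here is invented for illustration.

HISTORICAL_APPROVAL_RATE = {   # the 'community or framework' the algorithm is familiar with
    "salaried": 0.80,
    "self_employed": 0.35,
}

def score_loan_applicant(income: float, requested_amount: float, employment: str) -> bool:
    """Approve only if the request fits what past data deems likely."""
    affordability = income / requested_amount                 # the individual's side
    prior = HISTORICAL_APPROVAL_RATE.get(employment, 0.10)    # the framework's side
    return affordability * prior > 0.25                       # arbitrary cut-off

# Two applicants with identical finances, but different histories in the data:
print(score_loan_applicant(40_000, 100_000, "salaried"))       # True
print(score_loan_applicant(40_000, 100_000, "self_employed"))  # False
```

Two applicants with identical finances get different answers, purely because of what the data the algorithm grew up with considers normal.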
What does this mean in practice?