Impossible Fairness
Rants about algorithmic bias are a dime a dozen, but this one actually taught me something! Let me relay it, in story form.
Suppose you have a decision in front of you! You have some gamble you can make – make a microloan to somebody far away, not send somebody to jail, admit or employ somebody, whatever. You want to know whether this is a good idea.
Fortunately, to help you, you have a carefully tuned “scoring function,” into which you plug information about the opportunity, and it tells you how likely it is to be a good idea:
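Something like this, maybe (a minimal sketch – the feature names, the weights, and the logistic shape are all invented for illustration, not from any real system):

```python
# A minimal sketch of the setup. The feature names, weights, and the
# logistic shape are all invented for illustration.
import numpy as np

def score(income: float, repayment_history: float) -> float:
    """The 'carefully tuned' model: opportunity in, 0-100 score out."""
    logit = 0.8 * income + 1.2 * repayment_history - 3.0
    return 100.0 / (1.0 + np.exp(-logit))

def calibration_curve(scores, outcomes, n_bins=10):
    """Bucket past gambles by score; report how often each bucket paid off.

    A well-calibrated scorer hugs the diagonal: gambles scored ~70
    turn out to be good ideas ~70% of the time.
    """
    scores, outcomes = np.asarray(scores), np.asarray(outcomes)
    bins = np.linspace(0, 100, n_bins + 1)
    idx = np.clip(np.digitize(scores, bins) - 1, 0, n_bins - 1)
    return [(f"{bins[i]:.0f}-{bins[i + 1]:.0f}", outcomes[idx == i].mean())
            for i in range(n_bins) if (idx == i).any()]
```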
You do a bunch of analysis and determine that you can afford to take any opportunity with a score of at least 70.
Of course, you know algorithmic bias is a thing, and you don’t want to unfairly discriminate against anybody, so to make sure your algorithm isn’t doing anything nasty under the covers, you take the data you tuned your algorithm on; partition it into two piles based on, say, what color hair your counterparty had; and re-draw your calibration plot for each hair color.
(“Red? Blue? Hair color?” We’re in an anime, duh.)
Hmm, your scoring algorithm is slightly miscalibrated in a way that depends on hair color. That means, if you stick with your strategy of “take any opportunity with a score of at least 70”…
You’re kinda prejudiced against redheads. You’re willing to make-loans-or-whatever with blue-dos who are only 65% reliable, while redheads have to be something like 75% reliable. Yikes.
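If you want to see it concretely, here’s that audit rerun on synthetic data – a toy sketch, with the ±5-point miscalibration baked in by hand to match the story’s numbers:

```python
# A toy reproduction of the audit on synthetic data. The +/-5-point
# miscalibration is invented to match the story's numbers.
import numpy as np

rng = np.random.default_rng(0)
n = 100_000
hair = rng.choice(["red", "blue"], size=n)
scores = rng.uniform(40, 100, size=n)

# The scorer understates redheads and overstates blue-dos:
true_p = np.where(hair == "red", scores / 100 + 0.05, scores / 100 - 0.05)
outcome = rng.random(n) < np.clip(true_p, 0, 1)

for color in ("red", "blue"):
    near_bar = (hair == color) & (scores >= 70) & (scores < 72)
    print(f"{color} hair, scored ~70: good ideas "
          f"{outcome[near_bar].mean():.0%} of the time")
# red hair, scored ~70: good ideas ~75% of the time
# blue hair, scored ~70: good ideas ~65% of the time
```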
The next day, in response to the Cerulean City Herald’s vicious exposé on your discriminatory practices – apparently some insider leaked your graph to them – you publicly promise to make it right by integrating hair color into your decision algorithm, correcting for your scoring algorithm’s bias against redheads. Now you will make your decisions based on the actual probability of the gamble being a good idea, rather than on this “score” proxy you’d developed. Surely this will be un-hashtag-problematic!
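In sketch form, the fix amounts to undoing each group’s calibration error before applying the same 70% bar (offsets again invented to match the story):

```python
# Sketch of the fix: undo each group's (invented) calibration error, then
# apply the same 70% bar to the estimated probability, not the raw score.
CALIBRATION_OFFSET = {"red": +0.05, "blue": -0.05}  # per the audit above

def estimated_probability(score: float, hair_color: str) -> float:
    return score / 100 + CALIBRATION_OFFSET[hair_color]

def take_the_gamble(score: float, hair_color: str) -> bool:
    return estimated_probability(score, hair_color) >= 0.70

print(take_the_gamble(66, "red"))   # True:  a score-66 redhead is in...
print(take_the_gamble(74, "blue"))  # False: ...a score-74 blue-do is out.
```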
The next day, you make the front page of the Tokyo-3 Times: you’re accepting score-65 redheads but not score-75 blue-dos – what’s more, you’ve deliberately incorporated anti-blue-do bias into your decisions. In these times of etc., it’s hardly surprising to find such a blatantly etc., and it just goes to show you stuff.
You scratch your head for a bit – you can’t make decisions based on score, you can’t make decisions based on predicted outcome… maybe you need a different score-function? One that isn’t simultaneously biased against redheads and blue-dos? But after some careful thought, you find that unless you can predict the outcome almost unerringly (or unless redheads’ and blue-dos’ gambles succeed at exactly the same base rate, which they don’t), there will always be some reasonable fairness metric you’re violating: a score calibrated for both groups must hand them different error rates, and vice versa.
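This is, essentially, the Kleinberg–Mullainathan–Raghavan / Chouldechova impossibility result, and you can watch it bite in a few lines of toy simulation (all numbers invented): give two groups different base rates of success, hand them a score that’s perfectly calibrated for both, and the error rates come apart all by themselves.

```python
# Toy demo of the bind: a score that is perfectly calibrated for BOTH
# groups still yields unequal error rates whenever the groups' base
# rates differ. All numbers invented.
import numpy as np

rng = np.random.default_rng(1)

def error_rates(mean_p, n=200_000, bar=0.70):
    # Each gamble has a true success probability p, and the score reports
    # p exactly -- perfectly calibrated by construction.
    p = np.clip(rng.normal(mean_p, 0.15, size=n), 0.0, 1.0)
    succeeded = rng.random(n) < p
    accepted = p >= bar
    fpr = (accepted & ~succeeded).sum() / (~succeeded).sum()  # duds taken
    fnr = (~accepted & succeeded).sum() / succeeded.sum()     # gems refused
    return fpr, fnr

for color, mean_p in [("red", 0.72), ("blue", 0.60)]:  # unequal base rates
    fpr, fnr = error_rates(mean_p)
    print(f"{color}: false-positive rate {fpr:.0%}, "
          f"false-negative rate {fnr:.0%}")
# The higher-base-rate group gets a higher false-positive rate and a lower
# false-negative rate, so by equalized-odds lights you're "unfair" to
# somebody -- even though calibration is perfect for everyone.
```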
So, in response to the Times, you publicly promise to make it right, and towards that end, you replace your algorithm-based decision-making process with a fractal flow-chart of black-box human decisions about which nothing can ever be definitively proved. When the Shiganshina Inquisitor collects and publishes statistics about some of the few publicly-observable actions that you’ve been regretfully unable to obfuscate, and it turns out that you’re still being unfair (as you already knew you must be, by some reasonable metric), you declare in no uncertain terms that you absolutely stand with the courageous mumble mumble pileal justice, and wait for it to blow over.