THE LATEST THINKING
The opinions of THE LATEST’s guest contributors are their own.

We Need A Collective Reckoning With AI
Posted on May 24, 2021 04:39
Examples of biased algorithms are myriad, but they're allowed to run largely unchecked, making decisions in crucial areas ranging from healthcare to the justice system while their flaws are retroactively worked out.
Artificial intelligence has a problem — in fact, a lot of them.
AI algorithms learn from our data, and so inevitably propagate and entrench existing inequalities in areas ranging from healthcare to the justice system. Attempts at bias mitigation, whether through synthetic data or the construction of models that are discrimination-conscious by design, are hampered by a debate over what counts as ‘fair’ in an algorithm when society itself is not.
And bias isn’t the only problem.
Companies with proprietary concerns at best, and a desire to hide unethical practices at worst, can rely on intellectual property protections to avoid exposing the workings of their AI. Even where the willingness to be transparent exists, the algorithm’s process isn’t always clear. Of course, algorithms designed by outside groups can approximate the methods of black box AI. But designing AI with explainability in mind doesn't really solve the problem either.
We tend to over-trust decisions made by technology. It’s called automation bias, and explainable AI has been shown, in some cases, to make us more trusting of incorrect conclusions. If nothing else, more work must go into crafting effective explanations.
The black box problem isn’t unmanageable, and neither is the problem of making its explanations count. None of these problems is unmanageable. But our society collectively suffers from another form of bias, this one without a name: the idea that speed and efficiency outweigh efficacy and thoughtfulness in implementation. The results of this bias can be seen in our fragile supply chains, which faltered under the weight of COVID-19 stockpiling. And they can be seen today, in the architecture of a society being steadily redesigned by algorithms whose workings are imperfect, whose methods are widely unknown, and which have already begun to revise our society while still needing revisions of their own.
Solving the AI problem requires answers to hard questions: how much fairness we’ll sacrifice in the name of accuracy, how to weigh transparency against innovation, how much we can rely on machines, and which societal trends we want to propagate — or not.
We can answer these questions. AI isn’t as alien as it appears; we’ve always had algorithms. Laws are essentially algorithms, a system of instructions and commands that determines our behavior. A sometimes retroactive, iterative process of establishing and reworking regulations and laws has mostly worked to date. But technology moves at warp speed, and we shouldn’t be figuring out AI’s flaws by studying its outcomes in retrospect.
Fundamental societal restructuring demands democratic consent. We need to decide what we want from AI, in which areas we’ll allow it, and what kind of result we’d call good. The standards to which AI will be held should have been determined preemptively; they need to be determined now. And there needs to be a comprehensive system of monitoring after the fact.
We need to ensure that any changes wrought by AI are the ones we wanted to see, not the ones we didn't know to look out for.