Our Machines Now Have Knowledge We’ll Never Understand
By Josh Clark
Published Apr 22, 2017
David Weinberger considers what it means that machines now construct their own models for understanding data, quite divorced from our own (more simplistic) models. “The nature of computer-based justification is not at all like human justification. It is alien,” Weinberger writes. “But ‘alien’ doesn’t mean ‘wrong.’ When it comes to understanding how things are, the machines may be closer to the truth than we humans ever could be.”
The complexity of this alien logic often makes it completely opaque to humans—even those who program it. If we can’t understand the basis of machine-delivered “truths,” Weinberger suggests, they become categorically different from what we’ve always considered to be “knowledge”:
Clearly our computers have surpassed us in their power to discriminate, find patterns, and draw conclusions. That’s one reason we use them. Rather than reducing phenomena to fit a relatively simple model, we can now let our computers make models as big as they need to. But this also seems to mean that what we know depends upon the output of machines the functioning of which we cannot follow, explain, or understand. … If knowing has always entailed being able to explain and justify our true beliefs — Plato’s notion, which has persisted for over two thousand years — what are we to make of a new type of knowledge, in which that task of justification is not just difficult or daunting but impossible? …
One reaction to this could be to back off from relying upon computer models that are unintelligible to us so that knowledge continues to work the way that it has since Plato. This would mean foreswearing some types of knowledge. We foreswear some types of knowledge already: The courts forbid some evidence because allowing it would give police an incentive for gathering it illegally. Likewise, most research institutions require proposed projects to go through an institutional review board to forestall otherwise worthy programs that might harm the wellbeing of their test subjects.
This is super-intriguing: what are the circumstances where the stakes are so high that we simply can’t allow ourselves to trust the conclusions of our machines, no matter how confident we may be in the algorithm? When it comes to “forbidden” areas of machine-learning models, Weinberger points out that credit agencies are already barred from tying certain predictive models to credit scores. If the machines decide that certain races, religions, or ethnicities are prone to lower or higher credit scores, for example, credit agencies are legally forbidden from acting on that info.
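In practice, keeping a forbidden attribute out of a model starts with keeping it out of the inputs. Here’s a minimal sketch of that idea; the field names and the scrub_features() helper are hypothetical, invented for this illustration, not how any real credit agency does it:

```python
# A minimal, hypothetical sketch of keeping legally protected attributes
# out of a scoring model's inputs. Field names are invented for illustration.

PROTECTED_ATTRIBUTES = {"race", "religion", "ethnicity"}

def scrub_features(applicant: dict) -> dict:
    """Return a copy of the applicant record with protected fields removed,
    so the scoring model never sees them."""
    return {k: v for k, v in applicant.items() if k not in PROTECTED_ATTRIBUTES}

applicant = {
    "income": 52_000,
    "debt_ratio": 0.31,
    "payment_history_score": 710,
    "ethnicity": "redacted",  # may exist for auditing; must not reach the model
}

print(scrub_features(applicant))
# {'income': 52000, 'debt_ratio': 0.31, 'payment_history_score': 710}
```

Of course, dropping a column is the easy part. Correlated features (a zip code, say) can smuggle the same signal back in, which is exactly where the training-data problem comes in.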
This is dangerous territory because the machines’ conclusions are only as valuable as the training data we feed them. And that training data reflects the perspectives (and biases) of the folks who collect it:
For example, a system that was trained to evaluate the risks posed by individuals up for bail let hardened white criminals out while keeping in jail African Americans with less of a criminal record. The system was learning from the biases of the humans whose decisions were part of the data. The system the CIA uses to identify targets for drone strikes initially suggested a well-known Al Jazeera journalist because the system was trained on a tiny set of known terrorists. Human oversight is obviously still required, especially when we’re talking about drone strikes instead of categorizing cucumbers.
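Weinberger’s bail example is worth making concrete. Here’s a deliberately tiny sketch with fabricated data (nothing like a real risk-assessment system) showing how a model that merely echoes historical decisions reproduces the bias baked into its labels:

```python
# A toy sketch with fabricated data: two groups with identical prior
# records, but historically different human bail decisions. Any model
# that finds group membership predictive will inherit that bias.

from collections import defaultdict

# (group, prior_offenses, human_decision) -- the bias lives in the labels
history = [
    ("A", 1, "release"), ("A", 2, "release"), ("A", 3, "release"),
    ("B", 1, "detain"),  ("B", 2, "detain"),  ("B", 3, "release"),
]

# "Training": memorize the most common human decision per group, a stand-in
# for any learner that latches onto group membership as a feature.
decisions = defaultdict(list)
for group, _, decision in history:
    decisions[group].append(decision)

model = {g: max(set(ds), key=ds.count) for g, ds in decisions.items()}

# Identical records, different predictions: the model learned the bias,
# not the risk.
print(model["A"], model["B"])  # release detain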
We’re still in the early days of figuring out what this oversight and machine-human partnership might look like, but we’re going to have to learn fast. Machine learning has suddenly become inexpensive and accessible to a whole range of organizations and uses, and we see it everywhere. This revolution has revealed the complexity of everyday systems at the same time that the capacity and speed of modern computing have let us cut right through that complexity, even if we don’t understand how we got there.
Where once we saw simple laws operating on relatively predictable data, we are now becoming acutely aware of the overwhelming complexity of even the simplest of situations. Where once the regularity of the movement of the heavenly bodies was our paradigm, and life’s constant unpredictable events were anomalies — mere “accidents,” a fine Aristotelian concept that differentiates them from a thing’s “essential” properties — now the contingency of all that happens is becoming our paradigmatic example.
This is bringing us to locate knowledge outside of our heads. We can only know what we know because we are deeply in league with alien tools of our own devising. Our mental stuff is not enough.