Peering into the black box of machine learning, part 1

Author’s note: This is a multi-part series talking about current work on demystifying the black box nature of machine learning.

Part 1. Confidence: Probability vs Trust

As a computer scientist working with data classification, I often get the question "what's your confidence, as a percentage, in this classification result from the software?" It's taken me some time and a number of false starts to work out what the word "confidence" means in the real context of this question.

It’s tempting – but mistaken – to think the user’s confidence question is about a confidence level in inferential statistics. Confidence levels in statistics provide a quantitative way to describe a set of outcomes from an experiment; they describe a fact about already-collected data, as a probability. For example, “95% of these data points fall into this range of values.”
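To make that descriptive, backward-looking sense of "95%" concrete, here is a minimal sketch in Python. It uses NumPy (an assumed dependency) and synthetic data, invented purely for illustration, to compute the range that contains 95% of a set of already-collected measurements:

```python
import numpy as np

# Hypothetical sample of 1,000 already-collected measurements
# (synthetic data, for illustration only).
rng = np.random.default_rng(seed=0)
data = rng.normal(loc=50.0, scale=5.0, size=1000)

# A statement like "95% of these data points fall into this range"
# is a fact about the past: the 2.5th and 97.5th percentiles
# bracket the middle 95% of the observed values.
low, high = np.percentile(data, [2.5, 97.5])
print(f"95% of these data points fall between {low:.1f} and {high:.1f}")
```

Note that this says nothing about whether the next measurement, or a machine's next answer, can be trusted; it only summarizes data we already have.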

But I’ve found most often the question “what’s your confidence, as a percentage?” is not trying to describe a picture of the past, but of the future. People use computer classifiers to help work out “should I do X or Y”, or even just “should I do X”, and users want to know if they can trust the answer coming from the machine. When the machine is a black box to you, it’s often unclear how much you should trust the answer the machine produces.

So the question a decision maker has is "Should I trust the machine's answer?" Trust, in this sense, involves some understanding of what the machine is doing, and why it's doing it.

Some classification techniques offer a fairly straightforward logic path to follow to establish the trust the user is looking for. For these methods, there's a clear connection between the logic the technique is using and the resulting bucket or category a given blob of data falls in. For rule-based systems and decision trees, you can consult the rule base or the decision tree, and see the path of reasoning the classifier is using.
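For a decision tree, that path of reasoning can literally be printed out. Here is a minimal sketch using scikit-learn (an assumed dependency) and the bundled Iris dataset, chosen only as a stand-in for a real classification problem. The `decision_path` method reports exactly which feature tests the tree applied to one sample:

```python
from sklearn.datasets import load_iris
from sklearn.tree import DecisionTreeClassifier

# Fit a small tree on the Iris dataset (illustrative data).
X, y = load_iris(return_X_y=True)
clf = DecisionTreeClassifier(max_depth=3, random_state=0).fit(X, y)

sample = X[0:1]                               # one blob of data to explain
node_indicator = clf.decision_path(sample)    # the nodes this sample visits
leaf = clf.apply(sample)[0]                   # the leaf it ends up in

feature = clf.tree_.feature                   # feature tested at each node
threshold = clf.tree_.threshold               # threshold used at each node

# Walk the visited nodes and print the test applied at each one.
for node_id in node_indicator.indices:
    if node_id == leaf:
        print(f"leaf {node_id}: predicted class {clf.predict(sample)[0]}")
    else:
        value = sample[0, feature[node_id]]
        op = "<=" if value <= threshold[node_id] else ">"
        print(f"node {node_id}: feature[{feature[node_id]}] "
              f"= {value:.2f} {op} {threshold[node_id]:.2f}")
```

Each printed line is one step of the classifier's reasoning: a feature, a threshold, and which side of it the sample fell on. That transparency is exactly what the more opaque techniques discussed later in this series lack.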

Lou Glassy