Explainable AI again

John Haman

2020/12/15

Tags: Python Pythonista AI Statistics Machine Learning

Judging by the many, largely non-overlapping options for explaining a machine learning model, I am happy to report that the explainable AI problem is still kicking.

It will not be resolved. Factors interact, factors are correlated, and factors affect responses non-linearly. How can anyone claim to boil these supremely complicated machines down into heuristics that don't deceive decision makers?

Meanwhile, machine learners are still not using analysis of variance to explain their models. Did we forget Fisher?
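
To make the idea concrete, here is a minimal sketch of what ANOVA-as-explanation could look like: fit a black-box model, evaluate it over a full-factorial grid of inputs, and decompose the variance of its predictions into main effects and interactions. The data, factor levels, and choice of `GradientBoostingRegressor` are all illustrative assumptions, not anyone's canonical recipe.

```python
# Sketch: explain a black-box model with a classical ANOVA on its predictions.
# Everything here (data, model, levels) is illustrative.
import itertools

import numpy as np
import pandas as pd
import statsmodels.formula.api as smf
from sklearn.ensemble import GradientBoostingRegressor
from statsmodels.stats.anova import anova_lm

rng = np.random.default_rng(0)

# Synthetic training data with a non-linear effect and an interaction.
n = 500
X = rng.uniform(-1, 1, size=(n, 3))
y = X[:, 0] + np.sin(3 * X[:, 1]) + X[:, 0] * X[:, 2] + rng.normal(0, 0.1, n)

model = GradientBoostingRegressor().fit(X, y)

# Full-factorial design: 5 levels per factor, 125 runs.
levels = np.linspace(-1, 1, 5)
grid = pd.DataFrame(
    list(itertools.product(levels, repeat=3)), columns=["x1", "x2", "x3"]
)
grid["yhat"] = model.predict(grid[["x1", "x2", "x3"]].to_numpy())

# Treat each level as categorical and attribute the variance of the
# predictions to main effects and two-way interactions, Fisher-style.
fit = smf.ols("yhat ~ (C(x1) + C(x2) + C(x3)) ** 2", data=grid).fit()
print(anova_lm(fit, typ=2))
```

Because the predictions are deterministic, the resulting table is a functional variance decomposition of the fitted surface: the sums of squares show how much of the model's behavior is attributable to each factor and each pairwise interaction.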

Why?