Currently, we can build Artificial Intelligence (AI) approaches that explain their actions through the use of goal trees [1]. By moving up and down the tree, the system keeps track of its previous and next moves, thereby giving the machine the ability to "explain" its actions.
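To make this mechanism concrete, the following is a minimal Python sketch of a goal tree; the class and method names are illustrative assumptions, not an API from [1]. Moving up the tree answers *why* an action is taken (its parent goal), and moving down answers *how* a goal is achieved (its subgoals).

```python
# Minimal sketch of action explanation via a goal tree (hypothetical
# names; illustrative only, not the method of the cited work [1]).

class GoalNode:
    """A goal with optional subgoals; the tree links actions to purposes."""

    def __init__(self, name, subgoals=None):
        self.name = name
        self.subgoals = subgoals or []
        self.parent = None
        for child in self.subgoals:
            child.parent = self

    def explain_why(self):
        """Move *up* the tree: an action serves its parent goal."""
        if self.parent is None:
            return f"'{self.name}' is the top-level goal."
        return f"'{self.name}' is done in order to achieve '{self.parent.name}'."

    def explain_how(self):
        """Move *down* the tree: a goal is achieved via its subgoals."""
        if not self.subgoals:
            return f"'{self.name}' is a primitive action."
        steps = ", then ".join(child.name for child in self.subgoals)
        return f"'{self.name}' is achieved by: {steps}."


# Example: a small goal tree for a delivery robot.
pick_up = GoalNode("pick up package")
drive = GoalNode("drive to address")
drop_off = GoalNode("hand over package")
deliver = GoalNode("deliver package", [pick_up, drive, drop_off])

print(drive.explain_why())    # "why": look one level up the tree
print(deliver.explain_how())  # "how": look one level down the tree
```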
Explainability at the human level requires cognitive effort, such as self-awareness, memory retrieval, and a theory of mind [2]. Humans are adept at selecting a few causes, out of a potentially infinite set, to serve as the explanation. However, this selection is influenced by certain cognitive biases. The idea of explanation selection is not new in eXplainable Artificial Intelligence (XAI) [3, 4], but, as far as we are aware, no existing studies use human cognitive biases as a way to select explanations from a set of causes.
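As a hypothetical illustration of such bias-driven selection, the sketch below scores candidate causes using two bias-inspired heuristics discussed in the social-science literature [2], recency and abnormality. The Cause fields, weights, and example scores are assumptions made for illustration, not a method proposed in the cited works.

```python
# Hypothetical sketch: selecting one explanation from a set of causes
# using bias-inspired heuristics (recency, abnormality). All fields,
# weights, and scores are illustrative assumptions.

from dataclasses import dataclass


@dataclass
class Cause:
    description: str
    recency: float      # 1.0 = occurred just before the event
    abnormality: float  # 1.0 = highly unusual given the context


def select_explanation(causes, w_recency=0.5, w_abnormality=0.5):
    """Return the cause a biased human-like selector would favour.

    Humans tend to prefer recent and abnormal causes; this models that
    preference as a simple weighted score (the weights are assumptions).
    """
    return max(causes, key=lambda c: w_recency * c.recency
               + w_abnormality * c.abnormality)


causes = [
    Cause("engine wear accumulated over years", recency=0.1, abnormality=0.2),
    Cause("driver braked suddenly on ice", recency=0.9, abnormality=0.8),
    Cause("road was built on a slope", recency=0.0, abnormality=0.1),
]

print(select_explanation(causes).description)
# -> "driver braked suddenly on ice": recent, abnormal causes are preferred.
```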
Despite clear definitions and descriptions of the XAI field, several questions remain open. These issues can be summarized in a single research question:
How can we create and build XAI?
References
[1] Hadoux, E. and Hunter, A., 2017. Strategic sequences of arguments for persuasion using decision trees. In Proceedings of the AAAI Conference on Artificial Intelligence.
[2] Miller, T., 2018. Explanation in artificial intelligence: Insights from the social sciences. Artificial Intelligence.
[3] Gunning, D., 2017. Explainable artificial intelligence (XAI). Defense Advanced Research Projects Agency (DARPA).
[4] Samek, W., Wiegand, T. and Müller, K.R., 2017. Explainable artificial intelligence: Understanding, visualizing and interpreting deep learning models. arXiv preprint arXiv:1708.08296.