
Currently, we can build Artificial Intelligence (AI) approaches that explain their actions with the use of goal trees [1]. By moving up and down the tree, the system keeps track of its previous and next moves, which gives the machine the ability to "explain" its actions.
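This up/down traversal can be sketched in a few lines of Python. This is only an illustrative toy, not an existing API: the `Goal` class and the `why`/`how` methods are assumed names, following the classic idea that moving *up* the tree answers "why?" and moving *down* answers "how?".

```python
class Goal:
    """A node in a goal tree; parent links go up, subgoals go down."""

    def __init__(self, name, subgoals=None):
        self.name = name
        self.parent = None
        self.subgoals = subgoals or []
        for sub in self.subgoals:
            sub.parent = self

    def why(self):
        # Moving up the tree explains the purpose of this action.
        if self.parent is None:
            return f"'{self.name}' is the top-level goal."
        return f"'{self.name}' is done in order to '{self.parent.name}'."

    def how(self):
        # Moving down the tree explains how the goal is achieved.
        if not self.subgoals:
            return f"'{self.name}' is a primitive action."
        steps = ", ".join(s.name for s in self.subgoals)
        return f"'{self.name}' is achieved by: {steps}."


boil = Goal("boil water")
steep = Goal("steep tea bag")
make_tea = Goal("make tea", [boil, steep])

print(boil.why())      # → 'boil water' is done in order to 'make tea'.
print(make_tea.how())  # → 'make tea' is achieved by: boil water, steep tea bag.
```

Asking `why()` on a leaf or `how()` on the root walks one level in the corresponding direction, which is exactly the "last and next movements" tracking described above.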

Explainability at the human level requires some cognitive effort, such as self-awareness, memory retrieval, a theory of mind, and so on [2]. Humans are adept at selecting a few causes, out of an infinite number of candidates, to serve as the explanation. However, this selection is influenced by certain cognitive biases. The idea of explanation selection is not new in eXplainable Artificial Intelligence (XAI) [3, 4], but, as far as we are aware, there are currently no studies that use human cognitive biases as a way to select explanations from a set of causes.

Despite clear definitions and descriptions of the XAI field, several open questions remain. They can be condensed into a single question, stated below.

That said, our question is:

How can we create and build XAI?

References

[1] Hadoux, E. and Hunter, A., 2017. Strategic sequences of arguments for persuasion using decision trees. In Proceedings of the AAAI Conference on Artificial Intelligence.

[2] Miller, T., 2018. Explanation in artificial intelligence: Insights from the social sciences. Artificial Intelligence.

[3] Gunning, D., 2017. Explainable artificial intelligence (XAI). Defense Advanced Research Projects Agency (DARPA).

[4] Samek, W., Wiegand, T. and Müller, K.R., 2017. Explainable artificial intelligence: Understanding, visualizing and interpreting deep learning models. arXiv preprint arXiv:1708.08296.

Comments
  • Can you confirm that this (your) question is a duplicate of [this one](https://ai.stackexchange.com/q/12870/2444)? If you think it's not a duplicate, can you explain why? That supposed duplicate is asking for XAI techniques, which is exactly what you're asking, i.e. "How can we create XAI", i.e. the techniques are suggestions/answers for your answer. – nbro Nov 17 '20 at 16:46
  • While it is true that this question is similar to [that one](https://ai.stackexchange.com/questions/12870/which-explainable-artificial-intelligence-techniques-are-there), that question was published on **Jun 15 '19**, while this question was published on **Jan 25 '19**, making it the earlier contribution. Please consider my polite claims and try to understand the reasoning here. – Francisco Maria Calisto Dec 15 '20 at 21:36
  • 1
    Yes, but that question has 2 objectively good answers. It would not make sense to mark the other question as a duplicate of this one, when this does not have any good answer. Do you agree? So, if you think it's a duplicate, I will mark this as a duplicate of the other, even though this is a slightly older post. Btw, I upvoted your question, which will just be marked as a duplicate, i.e. it will not be deleted, so you will not lose the reputation points. – nbro Dec 15 '20 at 21:42
  • Although I do not agree with this level of justice, I can understand your logic. However, I would like to slightly specify the question for the topic of medical imaging, which will make the extent of questions/answers totally different. Please give me some time to change the question and reconsider not flagging it as duplicate. – Francisco Maria Calisto Dec 15 '20 at 21:53
  • 1
    I suggest that you ask a completely different question than changing this one. A new question can become "hot" and attract a lot of users, while an older question cannot. – nbro Dec 15 '20 at 21:56

0 Answers