Questions tagged [explainable-ai]

For questions related to explainable artificial intelligence (XAI), also known as interpretable AI: AI techniques whose behavior humans can understand and therefore trust. Explainability is particularly important in areas like healthcare or self-driving cars. Several concepts are closely related to XAI, such as accountability, fairness, and transparency.

See e.g. https://en.wikipedia.org/wiki/Explainable_artificial_intelligence.

50 questions
97 votes, 7 answers

Do scientists know what is happening inside artificial neural networks?

Do scientists or research experts know, behind the scenes, what is happening inside a complex "deep" neural network with at least millions of connections firing at any instant? Do they understand the process behind this (e.g. what is happening inside and…
72 votes, 9 answers

Why do we need explainable AI?

If the original purpose of developing AI was to help humans with some tasks and that purpose still holds, why should we care about its explainability? For example, in deep learning, as long as the intelligence helps us to the best of its ability…
malioboro
18 votes, 3 answers

Which explainable artificial intelligence techniques are there?

Explainable artificial intelligence (XAI) is concerned with the development of techniques that can enhance the interpretability, accountability, and transparency of artificial intelligence and, in particular, machine learning algorithms and models,…
nbro
9 votes, 1 answer

Why does nobody use decision trees for visual question answering?

I'm starting a project that will involve computer vision, visual question answering, and explainability. I am currently choosing what type of algorithm to use for my classifier - a neural network or a decision tree. It would seem to me that, because…
9 votes, 2 answers

How is the "right to explanation" reasonable?

There has been a recent uptick in interest in eXplainable Artificial Intelligence (XAI). Here is XAI's mission as stated on its DARPA page: The Explainable AI (XAI) program aims to create a suite of machine learning techniques that: Produce more…
9 votes, 1 answer

How would one debug, understand or fix the outcome of a neural network?

It seems fairly uncontroversial to say that NN-based approaches are becoming quite powerful tools in many AI areas - whether recognising and decomposing images (faces at a border, street scenes in automobiles, decision making in uncertain/complex…
6 votes, 0 answers

Has anyone attempted to take a bunch of similar neural networks to extract general formulae about the focus area?

When a neural network learns something from a data set, we are left with a bunch of weights which represent some approximation of knowledge about the world. Although different data sets or even different runs of the same NN might yield completely…
Lawnmower Man
5 votes, 2 answers

What do the neural network's weights represent conceptually?

I understand how neural networks work and have studied their theory well. My question is: On the whole, is there a clear understanding of how an input is transformed within a neural network from the input layer to the output layer, for both supervised and…
4 votes, 1 answer

How can I interpret the way the neural network is producing an output for a given input?

I'm using a small neural network (2 hidden layers, 60 neurons apiece) for a rather complex binary classification problem. The network works well, but I'd like to know how it is using the inputs to perform the classification. Ultimately, I would like…
asheets
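
A model-agnostic way to probe which inputs such a classifier relies on is permutation importance: shuffle one feature at a time and measure the drop in accuracy. A minimal sketch, assuming a hypothetical stand-in model and dataset rather than the asker's own:

    import numpy as np
    from sklearn.datasets import make_classification
    from sklearn.neural_network import MLPClassifier

    # Hypothetical stand-in for the asker's binary classifier and data.
    X, y = make_classification(n_samples=500, n_features=8, random_state=0)
    model = MLPClassifier(hidden_layer_sizes=(60, 60), max_iter=2000,
                          random_state=0).fit(X, y)

    baseline = model.score(X, y)
    rng = np.random.default_rng(0)
    for j in range(X.shape[1]):
        X_perm = X.copy()
        rng.shuffle(X_perm[:, j])   # destroy feature j's information
        drop = baseline - model.score(X_perm, y)
        print(f"feature {j}: importance ~ {drop:.3f}")

Features whose shuffling barely hurts accuracy are ones the network largely ignores.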
4 votes, 1 answer

Are these visualisations the filters of the convolution layer or the convolved images with the filters?

There are several images related to convolutional networks on the Internet, an example of which I have given below. My question is: are these images the weights/filters of the convolution layer (the weights that are learned in the learning process),…
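
For readers facing the same ambiguity, the distinction is easy to make concrete: the filters are the learned weight tensors themselves, while the convolved images (feature maps) are what those filters produce for a given input. A minimal PyTorch sketch, using a stock pretrained model as a stand-in:

    import torch
    from torchvision import models

    model = models.resnet18(weights="IMAGENET1K_V1").eval()

    # (1) The filters themselves: the first conv layer's weight tensor,
    #     shape (64, 3, 7, 7) -- this is what "filter grid" images show.
    filters = model.conv1.weight.detach()
    print(filters.shape)

    # (2) The convolved image: feature maps from applying those filters
    #     to an input, shape (1, 64, 112, 112).
    image = torch.randn(1, 3, 224, 224)  # placeholder input
    with torch.no_grad():
        feature_maps = model.conv1(image)
    print(feature_maps.shape)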
4 votes, 1 answer

Is tabular Q-learning considered interpretable?

I am working on a research project in a domain where other related works have always resorted to deep Q-learning. The motivation of my research stems from the fact that the domain has an inherent structure to it, and should not require resorting to…
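
Tabular Q-learning is usually called interpretable in the narrow sense that the learned policy is a finite lookup table you can read directly. A toy sketch with made-up state and action counts:

    import numpy as np

    n_states, n_actions = 5, 2           # made-up toy MDP sizes
    Q = np.zeros((n_states, n_actions))  # the entire learned model
    alpha, gamma = 0.1, 0.9

    # One Q-learning update for a hypothetical transition (s, a, r, s').
    s, a, r, s_next = 0, 1, 1.0, 2
    Q[s, a] += alpha * (r + gamma * Q[s_next].max() - Q[s, a])

    # Interpretability: the greedy policy is read off row by row.
    for state in range(n_states):
        print(f"state {state}: best action = {Q[state].argmax()}, "
              f"values = {Q[state]}")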
3 votes, 1 answer

In GradCAM, why is activation strength considered an indicator of relevant regions?

In section 3 of the Grad-CAM paper, the authors implicitly propose that two things are needed to understand which areas of an input image contribute most to the output class (in a multi-label classification problem). That is: $A^k$, the final feature…
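
For reference, the two ingredients combine as in the Grad-CAM paper: the channel weights are spatially averaged gradients of the class score $y^c$, and the map is their weighted sum passed through a ReLU,

$$\alpha_k^c = \frac{1}{Z}\sum_i\sum_j \frac{\partial y^c}{\partial A_{ij}^k}, \qquad L^c_{\text{Grad-CAM}} = \mathrm{ReLU}\Big(\sum_k \alpha_k^c A^k\Big),$$

so the activation strength $A^k$ matters only insofar as it is weighted by how strongly channel $k$ influences the class score.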
3 votes, 1 answer

What exactly is an interpretable machine learning model?

According to this page of the Interpretable ML book and this article on Analytics Vidhya, interpretability means knowing what has happened inside an ML model to arrive at the result/prediction/conclusion. In linear regression, new data will be multiplied with the weights and…
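
The linear-regression case the excerpt begins with can be made concrete: every prediction decomposes exactly into an intercept plus per-feature contributions, which is why such models are called intrinsically interpretable. A minimal sketch with made-up numbers:

    import numpy as np
    from sklearn.linear_model import LinearRegression

    # Made-up training data: y depends on two features.
    X = np.array([[1.0, 2.0], [2.0, 1.0], [3.0, 3.0], [4.0, 5.0]])
    y = np.array([5.0, 4.0, 9.0, 14.0])
    model = LinearRegression().fit(X, y)

    x_new = np.array([2.0, 3.0])
    contributions = model.coef_ * x_new          # weight * feature value
    prediction = model.intercept_ + contributions.sum()

    # The "explanation" is just this additive breakdown.
    print("intercept:", model.intercept_)
    print("per-feature contributions:", contributions)
    print("prediction:", prediction)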
3 votes, 1 answer

What needs to be done to make a fair algorithm?

What needs to be done to make a fair algorithm (supervised and unsupervised)? In this context, there is no consensus on the definition of fairness, so you can use the definition you find most appropriate.
2 votes, 1 answer

Is there any interpretation method suitable for CNNs which do regression tasks?

I mainly tackle regression problems with CNNs, and want to find a reliable method to calculate heatmaps for the network's results. However, I find that almost all interpretation methods, including CAM, are used for classification networks but not for regression networks. Is…
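
One commonly suggested workaround (not from the question itself) is that gradient-based methods such as Grad-CAM never actually need a class score: backpropagating from the scalar regression output yields channel weights in exactly the same way. A minimal PyTorch sketch with a hypothetical regression CNN:

    import torch
    import torch.nn as nn

    # Hypothetical regression CNN: conv features -> one scalar output.
    class RegCNN(nn.Module):
        def __init__(self):
            super().__init__()
            self.features = nn.Sequential(
                nn.Conv2d(3, 16, 3, padding=1), nn.ReLU(),
                nn.Conv2d(16, 32, 3, padding=1), nn.ReLU())
            self.head = nn.Sequential(
                nn.AdaptiveAvgPool2d(1), nn.Flatten(), nn.Linear(32, 1))

        def forward(self, x):
            self.maps = self.features(x)   # keep A^k for the heatmap
            self.maps.retain_grad()
            return self.head(self.maps)

    model = RegCNN().eval()
    x = torch.randn(1, 3, 64, 64)
    y = model(x)        # scalar regression output
    y.backward()        # gradients of the output w.r.t. A^k

    weights = model.maps.grad.mean(dim=(2, 3), keepdim=True)  # alpha_k
    cam = torch.relu((weights * model.maps).sum(dim=1))       # heatmap
    print(cam.shape)    # (1, 64, 64)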