I am reading *The Book of Why: The New Science of Cause and Effect* by Judea Pearl, and on page 12 I see the following diagram.

Diagram from The Book of Why by Pearl, page 12

The box to the right of box 5, "Can the query be answered?", comes before box 6 and box 9, which are the processes that actually answer the question. I took this to mean that telling whether we can answer a question is easier than actually answering it.

Questions

  1. Do we need less information to tell whether we can answer a problem (epistemic uncertainty) than to actually answer it? (See the sketch after this list for what I mean.)
  2. Or do we need to try to answer the problem first and only then realize that we cannot?
  3. Or do we answer the problem and provide an uncertainty estimate at the same time?
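
For concreteness, here is a minimal sketch of what I mean by question 1 (my own illustration in Python with networkx, not Pearl's actual inference engine; the graph, node names, and helper functions are hypothetical). Checking whether the query P(Y | do(X)) can be answered, via Pearl's backdoor criterion, consults only the causal diagram, while actually answering it (boxes 6 and 9) would additionally require data.

```python
# Toy illustration (not Pearl's engine): deciding *whether* the causal
# query P(Y | do(X)) is answerable via the backdoor criterion uses only
# the graph; *answering* it would additionally require data.
import networkx as nx

def path_is_blocked(G, path, Z):
    """d-separation along one path: a non-collider in Z blocks the path;
    a collider blocks it unless it, or one of its descendants, is in Z."""
    for i in range(1, len(path) - 1):
        prev, mid, nxt = path[i - 1], path[i], path[i + 1]
        if G.has_edge(prev, mid) and G.has_edge(nxt, mid):  # collider
            if mid not in Z and not (nx.descendants(G, mid) & Z):
                return True
        elif mid in Z:  # chain or fork that is conditioned on
            return True
    return False

def satisfies_backdoor(G, X, Y, Z):
    """True if Z satisfies the backdoor criterion for P(Y | do(X)):
    no member of Z descends from X, and Z blocks every backdoor path
    (every path from X to Y that starts with an arrow into X)."""
    if nx.descendants(G, X) & Z:
        return False
    skeleton = G.to_undirected()
    for path in nx.all_simple_paths(skeleton, X, Y):
        if G.has_edge(path[1], X) and not path_is_blocked(G, path, Z):
            return False
    return True

# Classic confounding: C -> X, C -> Y, X -> Y.
G = nx.DiGraph([("C", "X"), ("C", "Y"), ("X", "Y")])
print(satisfies_backdoor(G, "X", "Y", {"C"}))  # True: query is answerable
print(satisfies_backdoor(G, "X", "Y", set()))  # False: confounding path open
```

Both calls inspect only the graph structure; no data set enters until we actually compute the adjustment formula P(y | do(x)) = Σ_c P(y | x, c) P(c). So the "Can the query be answered?" box seems to need strictly less input than the boxes that answer it, which is what the questions above are getting at.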
  • This is more of a statistics question than an AI question. I suggest this be migrated to stats.stackexchange. – The Pointer May 01 '21 at 16:17
  • @ThePointer As soon as I saw "Inference Engine" I was persuaded otherwise. We love theoretical questions such as this! This also involves higher maths, specifically formal logic, which is a form of computing related to artificial intelligence (complexity classes, theorem solving, etc.) – DukeZhou May 05 '21 at 00:41
  • @Lerner At a high level my sense is no: complexity classes tell us which problems can be solved and the degree of "hardness". We can determine this by looking at the structure of the problem. For instance, we can determine if a problem is solvable but intractable, or undecidable (a coinflip), merely by knowing the rules of the "game or puzzle". I'll think about a formal answer – DukeZhou May 05 '21 at 00:53
  • A concept from [Bloom's Taxonomy](https://www.coursemapguide.com/bloom-s-taxonomy) may be relevant here: the metacognitive level. @DukeZhou – Lerner Zhang Oct 13 '21 at 15:23

0 Answers