
I am reading the book titled Artificial Intelligence: A Modern Approach by Stuart Russell and Peter Norvig (4th edition) and came across this sentence about depth-first search (page 79, line 12):

> For acyclic state spaces it may end up expanding the same state many times via different paths, but will (eventually) systematically explore the entire space.

My question is: how is this possible? Can you please show me some examples?


1 Answer


First, note that the verb to expand has a specific meaning in this context: when you expand a node/state $s$, you try each action $a_1, \dots, a_n$ available from $s$, and each of these actions $a_i$ leads to another state.
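As a rough sketch (the function and parameter names here are mine, not from the book), expanding a state just means generating every state reachable from it by one action:

```python
# Minimal sketch of "expanding" a state s (names are illustrative):
#   actions(s)   -> the actions a_1, ..., a_n applicable in s
#   result(s, a) -> the state reached by doing action a in s
def expand(s, actions, result):
    return [result(s, a) for a in actions(s)]
```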

Second, if the graph is a DAG, then the edges have a direction and there are no cycles; so, if you follow the edges, you will never get back to a node you have already visited. However, this alone is not sufficient to understand that statement and why or when it is true.

Third, that statement is true only if you use the tree search version of depth-first search, i.e. you don't keep track of the nodes that you have already expanded. In other words, you don't use graph search, which is a tree search that additionally keeps track of the already expanded nodes in a set called the explored set (or closed list), so as to avoid expanding them again. In graph search, once you have expanded a node/state $s$, it is added to the explored set, so it is never expanded again.

So, why is it possible to expand a node more than once in the tree search version of DFS? Well, you don't keep track of the nodes that you have already expanded, so if you reach a node again via another path, you will expand it again. Given that the graph is a DAG (and assuming it's finite), you will eventually try all possible paths, so you will systematically explore the entire space and find the goal if one exists.
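To make the difference between the two variants explicit, here is a minimal sketch of both (my own code, not the book's pseudocode); the only difference is whether an explored set is kept:

```python
def dfs_tree_search(state, goal, successors):
    """Tree search DFS: no explored set, so a state reachable via several
    paths may be expanded several times. Terminates only on finite
    acyclic state spaces."""
    if state == goal:
        return True
    for child in successors(state):  # expanding `state`
        if dfs_tree_search(child, goal, successors):
            return True
    return False


def dfs_graph_search(state, goal, successors, explored=None):
    """Graph search DFS: a state in the explored set is never expanded again."""
    if explored is None:
        explored = set()
    if state == goal:
        return True
    explored.add(state)  # remember that `state` has been expanded
    for child in successors(state):
        if child not in explored:
            if dfs_graph_search(child, goal, successors, explored):
                return True
    return False
```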

Now, here's an example of when you expand a node more than once in the tree search version of DFS.

Let's say we have this search space, where you start at $A$ and the goal is $E$.

- $A \to B \to C \to D$
- $A \to C \to D$
- $A \to E$

If the nodes are at the same depth, let's assume that you expand them alphabetically; so, in the search space above, you expand first $B$ and then $C$, because $B$ comes before $C$ (in the English alphabet).

Now, note that we can go to $C$ directly from $A$, or by first going to $B$. $C$ has a single child. When you expand $C$, you try the actions available from $C$; in this case, there's only one action, i.e. going to $D$.

So, without keeping track of the fact that you already expanded $C$ when you reached it from $B$, once you backtrack from $D$ the first time, you will expand $C$ again (this time reaching it directly from $A$). In graph search, this would not be possible because, the first time we expand $C$, we add it to the explored set, so we never expand it again.
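Here is a small runnable sketch of this example (the expansion counter is mine, just to make the effect visible): tree search DFS expands $C$ (and $D$) twice, while graph search DFS expands every state at most once.

```python
from collections import Counter

# The search space above: start at A, goal is E; children listed alphabetically.
GRAPH = {"A": ["B", "C", "E"], "B": ["C"], "C": ["D"], "D": [], "E": []}

def dfs(state, goal, expansions, explored=None):
    """DFS on GRAPH; pass explored=None for tree search, explored=set() for graph search."""
    if state == goal:
        return True
    expansions[state] += 1                      # `state` is being expanded now
    if explored is not None:
        explored.add(state)
    for child in GRAPH[state]:
        if explored is not None and child in explored:
            continue                            # graph search: skip already expanded states
        if dfs(child, goal, expansions, explored):
            return True
    return False

tree_counts = Counter()
dfs("A", "E", tree_counts)                      # tree search
print(dict(tree_counts))    # {'A': 1, 'B': 1, 'C': 2, 'D': 2}: C is expanded twice

graph_counts = Counter()
dfs("A", "E", graph_counts, set())              # graph search
print(dict(graph_counts))   # {'A': 1, 'B': 1, 'C': 1, 'D': 1}: each state expanded at most once
```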

  • Thank you for your reply. I did not consider directed state space graphs at all. In addition, this cannot be inferred from the context of that section of the book. So, can we conclude that the statement is correct only for directed graphs and not in general? – user153245 Nov 08 '21 at 07:15
  • @user153245 I didn't think about the case of undirected graphs. I'm also not sure how you would define cycles in an undirected graph, as every edge is a cycle (in a way), but, of course, this is probably not the definition of a cycle in an undirected graph. In the context of undirected graphs, you probably also need to keep track of the edges that you have already visited. – nbro Nov 08 '21 at 12:54
  • The explored set could be some internal flag in every node. Look at garbage collection algorithms. Read the [GC handbook](https://gchandbook.org/) – Basile Starynkevitch Dec 30 '21 at 10:22