Based on the mathematics of Go and the machine learning algorithms used to play it, is there a mathematical limit as to how much of the game-tree space even an AI could learn, because of the inherent complexity / computational bounds (number of possible states of the board) of the game Go? Are we approaching a technological limit given current computer hardware (even massive data servers)?
-
You need to define a timeframe or the question turns comically irrelevant - given the pace of technological development, there will likely come a time when we can just solve Go with brute force. Not today, not tomorrow, but what about in 100 years, even without a singularity event? Also, even if AI is complexity-bound (i.e. does not learn much more above a certain size), it will likely get FASTER, enabling an AI to do more in a given timeframe and to be smaller, meaning you are suddenly talking about AI swarms. – TomTom Jun 22 '23 at 20:02
2 Answers
Consider that solving any problem, with AI or not, is a computational matter. Simply put: you can't break the rules of computational complexity.
Let's consider the game of Go: the number of legal board positions on the standard $19\times 19$ board is about $10^{170}$, which is already far larger than the number of atoms in the observable universe (roughly $10^{80}$!), and the full game tree is larger still, on the order of $10^{360}$. The point here is that Go and similar games have exponential complexity, meaning that the number of possibilities multiplies at every step. In computer-science terms these are intractable combinatorial problems - no polynomial-time algorithm is known for them (they are at least as hard as NP-hard problems) - when aiming for a perfect (or globally optimal) solution.
In practice, a brute-force approach would take forever to solve Go optimally, no matter how many processors you have and how fast they are: the computational complexity is just intractable, and the situation won't change even in 1000 years! (Even assuming a machine with infinite memory.)
- The effect of increasing computational power is limited. For example, say that in 10 years you can get a 100x faster machine, or even more. Maybe you'll then be able to solve Go perfectly, but only for small boards, say $N\times N$, and not for $(N+1)\times (N+1)$ ones, due to the exponential nature of the problem.
- In other words, you need an exponential increase in computation just to increase the size of the solvable state space by $1$, or so.
- Time buys you better equipment, but for this class of problems it won't let you solve every problem size $N$ in an acceptable time.
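The points above can be made concrete with a crude state count: treating each of the $N^2$ board points as empty, black, or white gives an upper bound of $3^{N^2}$ configurations (legality ignored), so growing the board from $N\times N$ to $(N+1)\times(N+1)$ multiplies the bound by $3^{2N+1}$ - far more than any realistic hardware speedup:

```python
# Crude state-count upper bound for an N x N Go board: each point is
# empty, black, or white, so at most 3**(N*N) configurations (legality ignored).
def states(n: int) -> int:
    return 3 ** (n * n)

for n in [5, 9, 13, 19]:
    growth = states(n + 1) // states(n)   # factor 3**(2n+1) for one more row+column
    print(f"{n}x{n} -> {n+1}x{n+1}: bound grows by ~10^{len(str(growth)) - 1}")
```

Already at $9\times 9 \to 10\times 10$ the bound grows by a factor of $3^{19} \approx 10^9$: a billion-fold speedup buys you a single extra board size.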
For such combinatorial problems, the solution is to lower the computational complexity at the cost of introducing inaccuracies: you no longer compute a perfect (optimal) solution but an approximate (locally optimal) one. You can achieve this, for example, with search heuristics that cut down the search space (by pruning some sub-trees), allowing you to solve larger $N$, or by approximating more aggressively, e.g. sampling a single path (or a few) in the full tree of possibilities, as MCTS (Monte Carlo tree search, which also powers AlphaGo) does.
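The pruning idea can be illustrated on a toy game tree (a synthetic example, not Go): alpha-beta search returns exactly the same game value as a full minimax search while skipping sub-trees that provably cannot change the result:

```python
# Toy comparison: full minimax vs. alpha-beta pruning on a synthetic
# game tree with random leaf values. Same answer, fewer nodes visited.
import random

BRANCH, DEPTH = 4, 6
visited = {"full": 0, "pruned": 0}

def leaf_value(path):
    # Deterministic pseudo-random leaf score derived from the move path.
    return random.Random(str(path)).uniform(-1, 1)

def minimax(path=(), depth=DEPTH, maximizing=True):
    visited["full"] += 1
    if depth == 0:
        return leaf_value(path)
    children = (minimax(path + (i,), depth - 1, not maximizing) for i in range(BRANCH))
    return max(children) if maximizing else min(children)

def alphabeta(path=(), depth=DEPTH, alpha=-2.0, beta=2.0, maximizing=True):
    visited["pruned"] += 1
    if depth == 0:
        return leaf_value(path)
    for i in range(BRANCH):
        v = alphabeta(path + (i,), depth - 1, alpha, beta, not maximizing)
        if maximizing:
            alpha = max(alpha, v)
        else:
            beta = min(beta, v)
        if beta <= alpha:   # remaining siblings cannot change the result: prune
            break
    return alpha if maximizing else beta

full_value, pruned_value = minimax(), alphabeta()
assert abs(full_value - pruned_value) < 1e-12   # identical game value
print(visited)                                   # alpha-beta visits far fewer nodes
```

Pruning helps enormously in practice, but the search is still exponential in the worst case - which is why MCTS-style sampling (giving up the exactness guarantee entirely) is what actually scales to Go.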
- Another example is deep learning, which solves problems that are hard, and also difficult to code by hand, by learning from data. DL models yield an approximate solution that is usually (in practice, essentially always) a local optimum.
- You can control the quality of the solution through, for example, the number of training epochs. A better model may reach the same solution in fewer epochs, or even a better one - but the point is that it's still an approximation.
- Thus, DL can be seen as a smart way to approximate a brute-force search.
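A minimal sketch of the "more iterations, better approximation" point, using plain gradient descent on a one-dimensional toy objective (not a real DL model - just the same local-refinement principle):

```python
# Gradient descent on f(x) = (x - 3)^2, whose exact minimum is x = 3.
# More steps give a better approximation; no enumeration of candidates occurs.
def descend(steps: int, lr: float = 0.1) -> float:
    x = 0.0
    for _ in range(steps):
        grad = 2 * (x - 3)   # derivative of (x - 3)^2
        x -= lr * grad
    return x

for steps in [1, 10, 100]:
    print(steps, descend(steps))   # approaches 3 as steps increase
```

The answer improves monotonically toward the optimum, but at any finite budget it is an approximation - the same trade-off a DL model makes, just in millions of dimensions.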
So, to conclude: better (learning) algorithms let you compute better solutions in less time, but the problem remains intractable. It is therefore best to invest time in inventing better/faster/more accurate (approximate) algorithms rather than waiting for a better CPU.
(As a side note: in theory and under ideal conditions, quantum computers can process an exponential amount of information, potentially giving large speedups on some combinatorial problems - be aware that I'm not a big expert. In practice, current machines have few qubits that are only partially interconnected, and you have to deal with errors, external interference, and approximate operations in the quantum gates. Maybe future quantum computers will allow solving for very large $N$, with good solutions too, but I think it would still be possible to hit a hardware limit by picking a sufficiently large $N$ that is just too much to solve.)

In the world of AI, 100% accuracy is almost impossible; it is always continuous learning. Models that learn a game typically fall under reinforcement learning, which learns by making mistakes: if the model makes a mistake it penalizes itself with a quantitative measure for that mistake, and rewards itself for a right move.
In a simulated setting, 100% accuracy would only be possible if the model could make all possible mistakes in every kind of setting (placement of stones) on the board, and it is quite impossible to generate that many combinations. With today's computational power we may have reached almost 70% to 75% (a guess) of the combinations, but computing the remaining combinations is still a challenge.
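The reward/penalty loop described above can be sketched with tabular Q-learning on a tiny toy environment (a hypothetical 5-cell corridor, not Go; real game-playing systems combine RL with deep networks and tree search):

```python
# Tabular Q-learning on a 5-cell corridor: +1 reward for reaching the goal
# on the right, -1 penalty for stepping off the left edge. The agent learns
# from its own mistakes via the reward signal. (Illustrative toy only.)
import random

random.seed(42)
N_STATES, GOAL = 5, 4
ACTIONS = [-1, +1]                     # move left / move right
Q = {(s, a): 0.0 for s in range(N_STATES) for a in ACTIONS}
alpha, gamma, eps = 0.5, 0.9, 0.2      # learning rate, discount, exploration

for _ in range(500):                   # episodes of trial and error
    s = 2                              # start in the middle of the corridor
    while True:
        if random.random() < eps:      # explore occasionally
            a = random.choice(ACTIONS)
        else:                          # otherwise act greedily
            a = max(ACTIONS, key=lambda x: Q[(s, x)])
        s2 = s + a
        if s2 < 0:
            r, done = -1.0, True       # "mistake": penalized
        elif s2 >= GOAL:
            r, done = +1.0, True       # "right move": rewarded
        else:
            r, done = 0.0, False
        target = r if done else r + gamma * max(Q[(s2, b)] for b in ACTIONS)
        Q[(s, a)] += alpha * (target - Q[(s, a)])
        if done:
            break
        s = s2

# After training, the greedy policy moves right from every interior cell.
print([max(ACTIONS, key=lambda a: Q[(s, a)]) for s in range(1, GOAL)])
```

The key point, matching the answer above: the agent never enumerates all positions; it improves a value estimate from the (small) subset of situations it actually experiences.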

-
Take a look at the combinatorial complexity of Go. You may want to re-assess that 70%-75% guess a **lot**: https://en.wikipedia.org/wiki/Go_and_mathematics#Legal_positions – Neil Slater Jun 22 '23 at 12:35