11

I was reading about John McCarthy and his orthodox vision of Artificial Intelligence. To me, it seems like he was not very much in favour of resources (like time and money) being used to make AIs play games like Chess. Instead, he wanted to focus more on passing the Turing test and on AIs imitating human behavior.

I have also read many articles about major companies, like IBM, Google, etc., spending millions of dollars on making AIs play games like Chess, Go, etc.

To what extent is this justified?

nbro
  • 39,006
  • 12
  • 98
  • 176
Suraj Shah
  • 155
  • 11
  • 2
    My short answer is that games like Chess and Go have complexity akin to nature (by which I mean the universe) and are useful to study, particularly in their unsolved states, b/c, like looking out into the universe, you never know what you're going to find. Simple combinatorial models, of which games are the most useful for AI, can be infinitely expansive. Pure mathematics often takes a while to find applications, but it has a very good track record in this respect. Even where such games are solved, solutions can still be refined. – DukeZhou Jul 06 '17 at 21:40
  • 2
    To illustrate my point see [A topological approach to solving Tic-Tac-Toe](https://math.stackexchange.com/questions/1854525/is-there-a-winning-strategy-for-this-tic-tac-toe). This may also be of interest: [Solving Tic-Tac-Toe, Part II: A Better Way](http://catarak.github.io/blog/2015/01/07/solving-tic-tac-toe/). These are just a couple of basic examples of what people are doing and thinking about and how games, in this case combinatorial games, relate to AI and problem solving. – DukeZhou Jul 06 '17 at 22:08
  • The paper [Chess as the Drosophila of AI](http://jmc.stanford.edu/articles/drosophila/drosophila.pdf) by John McCarthy himself is probably something that people interested in this question want to read too. – nbro Jan 31 '21 at 00:21

3 Answers

11

In the book Artificial Intelligence: A Modern Approach (section 5.7, p. 185), Russell and Norvig write

> In 1965, the Russian mathematician Alexander Kronrod called chess "the Drosophila of artificial intelligence." John McCarthy disagrees: whereas geneticists use fruit flies to make discoveries that apply to biology more broadly, AI has used chess to do the equivalent of breeding very fast fruit flies. Perhaps a better analogy is that chess is to AI as Grand Prix motor racing is to the car industry: state-of-the-art game programs are blindingly fast, highly optimized machines that incorporate the latest engineering advances, but they aren't much use for doing the shopping or driving off-road. Nonetheless, racing and game-playing generate excitement and a steady stream of innovations that have been adopted by the wider community.

So, although these games (like Chess, Go, and Bridge) may not appear to be directly useful or beneficial to many people, the AI programs developed to play them have introduced techniques, such as null-move heuristics, futility pruning, combinatorial game theory, finessing and squeezing, and meta-reasoning, which can potentially be useful to a wider spectrum of Computer Science (and not just Artificial Intelligence).
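To make one of those techniques concrete, here is a minimal, hypothetical sketch of the null-move heuristic inside a plain negamax/alpha-beta search. The `Position` interface (`evaluate`, `in_check`, `legal_moves`, `make`/`unmake`, `make_null`/`unmake_null`) is assumed purely for illustration and is not taken from any particular engine.

```python
# Hypothetical sketch: null-move pruning in a fail-hard negamax search.
# The Position interface below is assumed for illustration only.

R = 2  # depth reduction applied to the null-move search

def search(pos, depth, alpha, beta):
    if depth == 0:
        return pos.evaluate()              # static score from the side to move

    # Null-move heuristic: give the opponent a free move. If a reduced-depth
    # search still fails high, a real move would almost certainly fail high
    # too, so the whole subtree can be pruned.
    if depth > R and not pos.in_check():
        pos.make_null()
        score = -search(pos, depth - 1 - R, -beta, -beta + 1)
        pos.unmake_null()
        if score >= beta:
            return beta                    # fail-hard beta cutoff

    for move in pos.legal_moves():
        pos.make(move)
        score = -search(pos, depth - 1, -beta, -alpha)
        pos.unmake(move)
        if score >= beta:
            return beta                    # beta cutoff
        alpha = max(alpha, score)
    return alpha
```

The point is only to show the flavour of the trick: a cheap, speculative search is used to avoid a much more expensive exact one, an idea that generalizes well beyond Chess.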

You can compare this to the space missions of NASA, ISRO, JAXA and other space agencies. These missions don't seem to benefit citizens directly, but they have many indirect benefits: they pave the way for technological innovations (GPS, 3D printing, crash-test technology, clean energy, LEDs), the creation of jobs, and so on. Advance storm and hurricane detection is an output of space exploration, and it has saved millions of lives worldwide.

AI in games has helped to develop not just software but also hardware: many innovations in highly optimised and powerful hardware have come out of this work.

nbro
  • 39,006
  • 12
  • 98
  • 176
Ugnes
  • 2,023
  • 1
  • 13
  • 26
  • 6
    Also, games like chess are highly standardized, so it's easier to compare different solutions and approaches. However, the Turing test doesn't have any formal base for comparison that is consistent over multiple runs (AFAIK), so comparing different approaches gets a lot harder (and possibly dependent on measuring methodology). – hoffmale Jul 05 '17 at 10:01
  • This answer was partially a copy-and-paste from AIMA (with minor changes) without explicitly quoting (although you had mentioned the book). Please, next time you copy something from another source, quote it using `>`, otherwise, that's considered [plagiarism, which means that your answer could have been deleted](https://ai.stackexchange.com/help/referencing). I edited this answer to clarify that you're quoting from AIMA. – nbro Jan 30 '21 at 13:05
3

Why is Game Playing R&D a Focus of Resource Allocation?

When examining the apparent obsession with game playing among researchers attempting to simulate portions of human problem-solving ability, the orthodoxy of the views of John McCarthy (1927 – 2011) may be misleading.

Publication editorial bias and popular science fiction themes may obscure the primary forces that lead to the appearance of obsession with developing winning board game software. When examining the allocation of funds and human resources within the many fields of intelligence research and development, some historical background is necessary to circumvent distortions typical of answers to questions in this social net.

Historical Background

The ability to place ourselves out of our own time and into the mindset of other periods is helpful when analyzing history, including scientific and technological history.

Consider that McCarthy's vision was not orthodox in his time. It quickly became orthodox because of an array of emerging trends in thought about automation among scientists and mathematicians in times immediately following western industrialization. This thinking was the natural extension of the mechanization of the printing, textile, agriculture, and transportation industries and of war.

By the mid-twentieth century, some of these trends combined to conceptualize the digital computer. Others became orthodoxy within the community of people investigating aspects of intelligence via digital systems. The technical backdrop included theoretical work and electro-mechanical work, some of which has since achieved a degree of public fame. But it was generally either secret or too abstract (and therefore obscure) to be considered items of national security interest at the time.

  • Cybernetics theory, largely developed by Norbert Wiener (1894 – 1964)
  • The work done on automating arithmetic (extending George Boole's theory and Blaise Pascal's calculator), with primary funding originating from the U.S. military's interest in guiding anti-aircraft weaponry by calculating probable trajectories of enemy aircraft and determining the spherical coordinates of a probable intercepting ballistic trajectory
  • The often-dismissed work of Alonzo Church (1903 – 1995) on the lambda calculus, which led to the idea of functional programming, a key aspect of the emergence of LISP in Cambridge, which McCarthy leveraged for early AI experimentation
  • The birth of information theory, primarily through the work of Claude Shannon (1916 – 2001), funded through Bell Labs in the interest of automating communications switching
  • The early cryptanalysis work of Church's doctoral student, Alan Turing, funded entirely by Allied Forces with the R&D goal of defeating the Enigma cryptography device so that Nazi forces could be stopped prior to the complete annihilation of London and other Allied targets
  • The work of John von Neumann (1903 – 1957) toward centralizing the implementation of arbitrary Boolean logic together with integer arithmetic into a single unit (currently called a CPU) and storing the program that controlled the implementation in electronic flip-flops along with the data to be processed and the results (the same general architecture employed by almost all contemporary computing devices today)

All of these were concepts surrounding the vision of automata, the simulation of functional aspects of mammalian neurology. (A monkey or elephant can successfully plan and execute the swatting of a fly, but a fly is incapable of planning and executing an attack on a monkey or elephant.)

Experimentation into intelligence and its simulation via symbolic manipulation using a new programming language, LISP, was a primary focus of John McCarthy and his role in the creation of the MIT AI Laboratory. But whatever orthodoxy may have existed around rule-based (production) systems, neural nets, and genetic algorithms has largely diversified into a cloud of ideas that makes the term orthodoxy somewhat nebulous. A few examples follow.

  • Richard Stallman resigned from the MIT AI Lab and began a philosophical shift away from many of the economic philosophies that dominated that time period. The result was GNU software and Linux, followed by open hardware and creative commons, concepts largely opposed to the philosophic orientation of those that funded AI hotbeds.
  • Many proprietary (and therefore company confidential) systems use Bayesian methods or adaptive components that stem more from Norbert Wiener's work than anything that was considered mainstream AI research in the 1970s.

The Birth of Game Theory

The key event that answers the question most directly in this parade of historical events is some other work of von Neumann's. His book Theory of Games and Economic Behavior, coauthored with Oskar Morgenstern, is perhaps the strongest factor among the historical conditions that led to the persistence of Go and Chess as test scenarios for problem solving software.

Although there were many earlier works on how to win in Chess or Go, never before was there a mathematical treatment and a presentation as compelling as that in Theory of Games and Economic Behavior.

The privileged members of the scientific community were well aware of von Neumann's success with raising the temperature and pressure of fissile material to critical mass and his work in deriving classical thermodynamics from quantum theory. The foundation of mathematics he presented in that book was quickly accepted (by some of the same people that funded research at MIT) as a potential predictive tool for economics. Predicting economics was the first step in controlling it.

Theory Meets Geopolitical Philosophy

The dominant philosophy that drove western policy during that period was Manifest Destiny, essentially the fatalist view of a New World Order, the head of which would be in the seats of U.S. power. Declassified documents indicate that it is highly likely that leaders of that time saw economic domination achieved through the application of game theory as considerably less risky and expensive than military conquest followed by the maintenance of bases of operations (high tech garrisons) near every populated area overseas.

The highly publicized challenges to develop Chess and Go automatons are simply dragnets that corporations and governments use as a first cut in the acquisition of personnel assets. The game results are like resumes. A winning game playing program is a piece of evidence of the existence of programming skill that would likely also succeed in the development of more important games that move billions of dollars or win wars.

Those who can write winning Chess or Go code are considered high value assets. Funding game playing research has been seen as a way of identifying those assets. Even in the absence of immediate return on investment, the identification of these assets, because they can be tucked away in think tanks to plot out the domination of the world, has become a primary consideration when research funds are allocated.

Slow and Fast Paths to Return on Investment

In contrast to this geopolitical thinking, seeking institutional prestige on the back of some crafty programmer or team is another factor. In this scenario, any progress in simulating intelligence that had the potential for geometric improvement in some important industrial or military application was sought.

For instance, programs like Maxima (a forerunner of mathematical problem solving applications such as Mathematica) were funded with the hope of developing mathematics using symbolic computing.

This path to success conceptually rested on determinism as an overarching natural philosophy. In fact, it was the epitome of determinism. It was proposed that, if a computer could not only do arithmetic but develop mathematical theorems of super-human complexity, models of human endeavors could be reduced to equations and solved. The predictability for a wide variety of important economic, military, and political phenomena could then be used in decision making, permitting significant gain.

To the surprise of many, the success of Maxima and other mathematics programs was very limited in its positive impact on the ability to reliably predict economic and geopolitical events. The emergence of Chaos Theory explained why.

Beating a human master with a program turned out to be within the reach of twentieth century R&D. Use of software to experiment on various computer science approaches to winning a game was achievable and therefore more attractive for institutions as a way of gaining prestige, much like a winning basketball team.

Let's Not Forget Discovery

Sometimes appearances are in direct opposition to actuality. The various above-mentioned applications of thinking machines have not been forgotten, and the expense in time and money required to simulate aspects of mammalian abilities will not lose funding to board game automaton development.

Technology is largely occupied with solving communications, military, geopolitical, economic, and financial problems that far exceed the complexity of games like Chess and Go. Game theory includes elements of random moves made by non-players as far back as its inception. Therefore, the obsession with Chess and Go is merely a signature of the actual focus of funding and activity in the many fields of simulating intelligence.

Software that can play a mean game of Chess or Go is deployed to neither NSA global modelling computers nor Google's indexing machinery. The big dollars are spent to develop what IS deployed into such places.

You will never see details on or even an overview of that R&D described online, except in the case of people who, for some personally compelling reason, violate their company confidential agreements or commit treason.

Douglas Daseeco
  • 7,423
  • 1
  • 26
  • 62
1

I find the statement troubling, as the first confirmed algorithmic intelligence may have been a NIM automaton, so from my perspective, the development of Algorithmic Intelligence is inseparable from combinatorial games. It would also seem that McCarthy did not hold the opinion that games are useful, which leads me to suspect he never seriously studied the history of games.

Combinatorial Game Theory, an applied field in mathematics and computing, was formalized in the decades after the Sprague-Grundy Theorem, which was a mathematical analysis of the game of NIM. More recently, the protein folding game Foldit produced real results in an applied field.
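For the curious, the Sprague-Grundy result for NIM itself is compact enough to show directly: the player to move loses under perfect play exactly when the XOR (the "nim-sum") of the heap sizes is zero. Below is a minimal Python sketch of that fact; the function names are mine, not from any of the sources above.

```python
# Minimal sketch of optimal NIM play via the nim-sum (Sprague-Grundy for NIM).
from functools import reduce
from operator import xor

def nim_sum(heaps):
    """XOR of heap sizes; 0 means the player to move loses under perfect play."""
    return reduce(xor, heaps, 0)

def winning_move(heaps):
    """Return (heap_index, new_size) reaching a zero nim-sum, or None if losing."""
    s = nim_sum(heaps)
    if s == 0:
        return None                       # every move loses against perfect play
    for i, h in enumerate(heaps):
        target = h ^ s
        if target < h:                    # we may only remove objects, never add
            return i, target
    return None

print(nim_sum([3, 4, 5]))                 # 2 -> the first player can force a win
print(winning_move([3, 4, 5]))            # (0, 1): reduce the 3-heap to 1
```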

  • The answer I usually give is that games such as Chess and Go provide complexity akin to nature using extremely simple parameters. (In essence, combinatorial games and puzzles, like Sudoku, are complexity engines.)

But games, unlike puzzles, which are solo endeavors, require a type of strategic decision-making that is quite useful. (@Ugnes' answer lists many of the resulting techniques.)

  • Combinatorial games in particular provide a useful benchmark for the capability of algorithms to manage intractable problems (see the sketch below).
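As a tiny illustration of using a combinatorial game as an exact benchmark, here is a hypothetical memoized negamax solver that proves tic-tac-toe is a draw under perfect play; all names are mine, chosen only for this sketch.

```python
# Hypothetical sketch: exhaustively solving tic-tac-toe with memoized negamax.
from functools import lru_cache

LINES = [(0, 1, 2), (3, 4, 5), (6, 7, 8),      # rows
         (0, 3, 6), (1, 4, 7), (2, 5, 8),      # columns
         (0, 4, 8), (2, 4, 6)]                 # diagonals

def winner(board):
    for a, b, c in LINES:
        if board[a] != '.' and board[a] == board[b] == board[c]:
            return board[a]
    return None

@lru_cache(maxsize=None)
def solve(board, player):
    """+1 if `player` (to move) can force a win, 0 for a draw, -1 for a loss."""
    if winner(board) is not None:
        return -1                               # opponent just completed a line
    if '.' not in board:
        return 0                                # board full: draw
    opponent = 'o' if player == 'x' else 'x'
    best = -1
    for i, cell in enumerate(board):
        if cell == '.':
            child = board[:i] + player + board[i + 1:]
            best = max(best, -solve(child, opponent))
    return best

print(solve('.' * 9, 'x'))                      # 0: perfect play ends in a draw
```

Scaling the same brute-force idea to Chess or Go is hopeless, which is precisely why those games remain good stress tests for smarter algorithms.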

There is also a PR factor. Algorithmic language translation has gotten extremely good in recent years, but you never hear the press making a big deal about it. Compare that to Deep Blue vs. Kasparov, or AlphaGo vs. Sedol. (This stack exploded with ML questions after the AlphaGo result.) This is similar to the US moon landings, which were a great, if not strictly necessary, engineering feat that inspired generations of budding scientists.


Postscript: It's notable that until recently, the term "strong" was reserved for Artificial General Intelligence, which is still highly theoretical. After AlphaGo, I'm starting to see scholars use the term "Strong Narrow AI."

The use of strong in relation to Artificial General Intelligence is purely philosophical. By contrast, the way the term is used in Combinatorial Game Theory (see Solved Game) is purely practical and involves mathematical proofs.

Chess remains unsolved, and therefore it is still useful for study. [See Giraffe Chess below.]

The fields of Game Theory and Combinatorial Game Theory include names like von Neumann, Nash, and Conway, and more recently Demaine at MIT. And if you want to include combinatorial puzzles like Sudoku, we can stretch this back to Euler. For these reasons, as well as those listed above, I have a hard time seeing analysis of games as a trivial pursuit.


Giraffe Chess was a recent result by an individual mathematician/programmer, Matthew Lai, who used a Neural Network approach to create a chess algorithm that taught itself to play at an international master level in 72 hours.

One of Lai's goals was to create an algorithm that produced more "human-like play". (Compare to the "inhuman" play of algorithms like AlphaGo.) Giraffe is not AGI, but it certainly could be taken to be a piece of the puzzle.

Computer games are arguably the deepest type of interactions shared by humans and automata, and this type of interaction goes back almost to the inception of modern computing.

DukeZhou
  • 6,237
  • 5
  • 25
  • 53