Why is Game Playing R&D a Focus of Resource Allocation?
When examining the apparent obsession with game playing as researchers attempt to simulate portions of human problem-solving ability, treating the views of John McCarthy (1927 – 2011) as orthodoxy may be misleading.
Publication editorial bias and popular science fiction themes may obscure the primary forces that lead to the appearance of obsession with developing winning board game software. When examining the allocation of funds and human resources within the many fields of intelligence research and development, some historical background is necessary to circumvent distortions typical of answers to questions in this social net.
Historical Background
The ability to place ourselves out of our own time and into the mindset of other periods is helpful when analyzing history, including scientific and technological history.
Consider that McCarthy's vision was not orthodox in his time. It quickly became orthodox because of an array of emerging trends in thought about automation among scientists and mathematicians in times immediately following western industrialization. This thinking was the natural extension of the mechanization of the printing, textile, agriculture, and transportation industries and of war.
By the mid-twentieth century, some of these trends combined to conceptualize the digital computer. Others became orthodoxy within the community of people investigating aspects of intelligence via digital systems. The technical backdrop included theoretical and electro-mechanical work, some of which has since achieved a degree of public fame, but which at the time was generally either secret or too abstract (and therefore obscure) to be considered a matter of national security interest.
- Cybernetics theory, largely developed by Norbert Wiener (1894 – 1964)
- The work done on automating arithmetic (extending George Boole's logic and Blaise Pascal's calculator), with primary funding originating from the U.S. military's interest in guiding anti-aircraft weaponry by calculating the probable trajectories of enemy aircraft and determining the spherical coordinates that would produce a probable intersecting ballistic trajectory
- The often-dismissed work of Alonzo Church (1903 – 1995) on the lambda calculus, which led to the idea of functional programming, a key ingredient in the emergence of LISP in Cambridge, which McCarthy leveraged for early AI experimentation
- The birth of information theory, primarily through the work of Claude Shannon (1916 – 2001), funded through Bell Labs in the interest of automating communications switching
- The early cryptanalysis work of Church's doctoral student, Alan Turing, funded entirely by Allied Forces with the R&D goal of defeating the Enigma cryptography device so that Nazi forces could be stopped prior to the complete annihilation of London and other Allied targets
- The work of John von Neumann (1903 – 1957) toward centralizing the implementation of arbitrary Boolean logic together with integer arithmetic into a single unit (currently called a CPU) and storing the program that controlled the implementation in electronic flip-flops along with the data to be processed and the results (the same general architecture employed by almost all contemporary computing devices today)
All of these were concepts surrounding the vision of automata, the simulation of functional aspects of mammalian neurology. (A monkey or elephant can successfully plan and execute the swatting of a fly, but a fly is incapable of planning and executing an attack on a monkey or elephant.)
Experimentation into intelligence and its simulation via symbolic manipulation using a new programming language, LISP, was a primary focus of John McCarthy and his role in the creation of the MIT AI Laboratory. But whatever orthodoxy may have existed around rule-based systems (production systems), neural nets, and genetic algorithms has largely diversified into a cloud of ideas that makes the term orthodoxy somewhat nebulous. A few examples follow.
- Richard Stallman resigned from the MIT AI Lab and began a philosophical shift away from many of the economic philosophies that dominated that time period. The result was GNU software and Linux, followed by open hardware and Creative Commons, concepts largely opposed to the philosophical orientation of those who funded AI hotbeds.
- Many proprietary (and therefore company-confidential) systems use Bayesian methods or adaptive components that stem more from Norbert Wiener's work than from anything that was considered mainstream AI research in the 1970s.
The Birth of Game Theory
The key event in this parade of historical events that answers the question most directly is some other work of von Neumann's. His book Theory of Games and Economic Behavior, coauthored with Oskar Morgenstern, is perhaps the strongest factor among the historical conditions that led to the persistence of Go and Chess as test scenarios for problem-solving software.
Although there were many earlier works on how to win at Chess or Go, never before had there been a mathematical treatment and a presentation as compelling as that in Theory of Games and Economic Behavior.
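To make the connection concrete, the book's minimax idea for two-player zero-sum games is the same principle that underlies essentially every Chess and Go engine's search. What follows is a minimal sketch, not any particular engine's code, illustrated on a toy take-away game invented here only so the example is self-contained; real engines layer alpha-beta pruning, evaluation heuristics, and move ordering on top of the same idea.

```python
# Minimal minimax sketch: the game-theoretic idea behind Chess/Go engines.
# Toy game (hypothetical, for illustration only): players alternately remove
# 1 or 2 stones from a pile; whoever takes the last stone wins.

def minimax(stones, maximizing):
    """Best achievable outcome (+1 = win for the maximizing player,
    -1 = loss) assuming both sides play optimally."""
    if stones == 0:
        # No stones left: the previous mover took the last stone and won.
        return -1 if maximizing else +1
    outcomes = (minimax(stones - take, not maximizing)
                for take in (1, 2) if take <= stones)
    return max(outcomes) if maximizing else min(outcomes)

def best_move(stones):
    """Choose the take whose guaranteed (worst-case) outcome is best."""
    return max((take for take in (1, 2) if take <= stones),
               key=lambda take: minimax(stones - take, maximizing=False))

print(best_move(4))  # -> 1: leaving 3 stones is a losing position for the opponent
```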
The privileged members of the scientific community were well aware of von Neumann's success in compressing fissile material to criticality and of his work in deriving classical thermodynamics from quantum theory. The mathematical foundation he presented in the book was quickly accepted (by some of the same people who funded research at MIT) as a potential predictive tool for economics. Predicting economic behavior was the first step toward controlling it.
Theory Meets Geopolitical Philosophy
The dominant philosophy that drove western policy during that period was Manifest Destiny, essentially the fatalist view of a New World Order, the head of which would be in the seats of U.S. power. Declassified documents indicate that it is highly likely that leaders of that time saw economic domination achieved through the application of game theory as considerably less risky and expensive than military conquest followed by the maintenance of bases of operations (high tech garrisons) near every populated area overseas.
The highly publicized challenges to develop Chess and Go automatons are simply dragnets that corporations and governments use as a first cut in the acquisition of personnel assets. The game results are like resumes. A winning game-playing program is evidence of programming skill that would likely also succeed in the development of more important games, the kind that move billions of dollars or win wars.
Those who can write winning Chess or Go code are considered high-value assets. Funding game-playing research has been seen as a way of identifying those assets. Even in the absence of immediate return on investment, the identification of these assets, because they can be tucked away in think tanks to plot out the domination of the world, has become a primary consideration when research funds are allocated.
Slow and Fast Paths to Return on Investment
In contrast to this geopolitical thinking, seeking institutional prestige on the back of some crafty programmer or team is another factor. In this scenario, any progress in simulating intelligence that had the potential for geometric improvement in some important industrial or military application was sought.
For instance, programs like Macsyma (the ancestor of Maxima and a forerunner of mathematical problem-solving applications such as Mathematica) were funded with the hope of developing mathematics using symbolic computing.
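As a rough illustration of what symbolic (as opposed to numerical) computing means in practice, here is a small sketch using Python's sympy library as a stand-in for Macsyma; the specific computations are arbitrary examples.

```python
# A small taste of symbolic computing in the Macsyma/Maxima tradition,
# shown with Python's sympy purely for illustration.
import sympy as sp

x, t = sp.symbols('x t')

# Exact, symbolic manipulation of expressions rather than numeric approximation:
print(sp.solve(x**2 - 2, x))                             # [-sqrt(2), sqrt(2)]
print(sp.integrate(sp.exp(-t**2), (t, -sp.oo, sp.oo)))   # sqrt(pi)
print(sp.series(sp.sin(x), x, 0, 6))                     # x - x**3/6 + x**5/120 + O(x**6)
```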
This path to success conceptually rested on determinism as an overarching natural philosophy. In fact, it was the epitome of determinism. It was proposed that, if a computer could not only do arithmetic but also develop mathematical theorems of super-human complexity, models of human endeavors could be reduced to equations and solved. The resulting predictability of a wide variety of important economic, military, and political phenomena could then be used in decision making, permitting significant gain.
To the surprise of many, the success of Macsyma and other mathematics programs had very limited positive impact on the ability to reliably predict economic and geopolitical events. The emergence of Chaos Theory explained why.
Beating a human master with a program turned out to be within the reach of twentieth century R&D. Use of software to experiment on various computer science approaches to winning a game was achievable and therefore more attractive for institutions as a way of gaining prestige, much like a winning basketball team.
Let's Not Forget Discovery
Sometimes appearances are in direct opposition to actuality. The various above-mentioned applications of thinking machines have not been forgotten, and the expense in time and money required to simulate aspects of mammalian abilities will not lose funding to board game automaton development.
Technology is largely occupied with solving communications, military, geopolitical, economic, and financial problems that far exceed the complexity of games like Chess and Go. Game theory has included elements of random moves made by non-players since its inception. Therefore, the obsession with Chess and Go is merely an outward signature of the actual focus of funding and activity in the many fields of simulating intelligence.
Software that can play a mean game of Chess or Go is deployed to neither NSA global modelling computers nor Google's indexing machinery. The big dollars are spent to develop what IS deployed into such places.
You will never see details of, or even an overview of, that R&D described online, except in the case of people who, for some personally compelling reason, violate their company confidentiality agreements or commit treason.