9

In the mid-1980s, Rodney Brooks famously created the foundations of "the new AI". The central claim was that the symbolist approach of 'Good Old Fashioned AI' (GOFAI) had failed by attempting to 'cream cognition off the top', and that embodied cognition was required, i.e. intelligence built from the bottom up in a 'hierarchy of competencies' (e.g. basic locomotion -> wandering around -> actively foraging, and so on).

I imagine most AI researchers would agree that the 'embodied cognition' perspective has now (at least tacitly) supplanted GOFAI as the mainstream.

My question takes the form of a thought experiment and asks: "Which (if any) aspects of 'embodied' can be relaxed/omitted before we lose something essential for AGI?"

NietzscheanAI
  • 7,206
  • 22
  • 36

1 Answer

4

This is something of an orthogonal answer, but I think Brooks didn't go about his idea the right way. That is, the subsumption architecture is one in which the 'autopilot' is replaced by a more sophisticated system when necessary. (All pieces receive the raw sensory inputs and output actions, some of which turn other systems on or off.)
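To make that concrete, here is a minimal sketch of the subsumption idea in Python; the sensor names, behaviours and numbers are invented purely for illustration, not taken from Brooks' actual robots:

    # All behaviours see the same raw sensor readings; the most sophisticated
    # behaviour that has something to say wins, effectively switching the
    # simpler 'autopilot' layers off for that time step. (This is a
    # deliberately crude arbitration scheme, just to show the structure.)

    def forage(sensors):
        """Higher competence: steer toward food whenever it is visible."""
        if sensors.get("food_bearing") is not None:
            return {"velocity": 0.5, "turn": sensors["food_bearing"]}
        return None  # no opinion -> defer to a simpler layer

    def avoid(sensors):
        """Middle competence: turn away when something is too close."""
        if sensors.get("range", float("inf")) < 0.3:
            return {"velocity": 0.2, "turn": 0.8}
        return None

    def cruise(sensors):
        """The basic 'autopilot': just keep moving forward."""
        return {"velocity": 0.5, "turn": 0.0}

    LAYERS = [forage, avoid, cruise]  # most to least sophisticated

    def step(sensors):
        for behaviour in LAYERS:
            command = behaviour(sensors)
            if command is not None:
                return command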

But a better approach is ordinary hierarchical control, in which the target of a lower-level system is the output of a higher-level system. That is, the targeted joint angle of a robot leg is determined by the system that is trying to optimize the velocity, which is determined by a system that is trying to optimize the trajectory, which is determined by a system that is trying to optimize the target position, and so on.

This allows for increasing levels of complexity while maintaining detail and system reusability.
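As a rough illustration of that cascade (the gains, names and state variables below are invented, not a real robot-control stack), each level can be a simple proportional controller whose output becomes the target of the level below it:

    class PController:
        """Each level only drives its own error to zero; its output
        becomes the setpoint (target) of the level below it."""
        def __init__(self, gain):
            self.gain = gain

        def __call__(self, target, measured):
            return self.gain * (target - measured)

    position_loop = PController(gain=1.0)   # outputs a desired velocity
    velocity_loop = PController(gain=2.0)   # outputs a desired joint angle
    joint_loop    = PController(gain=5.0)   # outputs a motor command

    def control_step(target_position, state):
        """state holds the current position, velocity and joint angle."""
        desired_velocity = position_loop(target_position, state["position"])
        desired_joint    = velocity_loop(desired_velocity, state["velocity"])
        return joint_loop(desired_joint, state["joint_angle"])

The joint-angle loop neither knows nor cares whether its target came from the velocity loop or from a human operator, which is where the reusability comes from.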


That said, I don't think you actually need what one would naively call 'embodied cognition' in order to get the bottom-up hierarchy of competencies that Brooks is right to point towards. The core feature is the wide array of inputs and outputs, understood in a hierarchical fashion that allows systems to be chained together vertically. I think you could get a functional general intelligence whose only inputs and outputs go through an Ethernet cable, and which doesn't have anything like a traditional body that it actuates or senses through. (This is a claim that the hierarchical structure is what matters, not the content of what we use that structure for.)

(The main place to look for more, I think, is actually a book about human cognition: Behavior: The Control of Perception by William T. Powers.)

Matthew Gray
  • 4,252
  • 17
  • 27
  • Completely agree, and would like to add that most of those ideas were generated before the 1990s, before most modern IT technologies came into existence. Currently it is not unthinkable to imagine a rich virtual environment with extensively detailed surroundings, so that an AI would not be required to have a physical body. Even if it is not contained in a closed virtual world, it would be possible to create such a massive stream of information via multiple channels, like web-cams, microphones and numerous APIs, that the AI would have enough information to build a world model. – Alex Feb 23 '18 at 18:56