
I read an interesting essay about how far we are from AGI. It made quite a few solid points that made me revisit the foundations of AI today. A few interesting concepts stood out:

> imagine that you require a program with a more ambitious functionality: to address some outstanding problem in theoretical physics — say the nature of Dark Matter — with a new explanation that is plausible and rigorous enough to meet the criteria for publication in an academic journal.
>
> Such a program would presumably be an AGI (and then some). But how would you specify its task to computer programmers? Never mind that it’s more complicated than temperature conversion: there’s a much more fundamental difficulty. Suppose you were somehow to give them a list, as with the temperature-conversion program, of explanations of Dark Matter that would be acceptable outputs of the program. If the program did output one of those explanations later, that would not constitute meeting your requirement to generate new explanations. For none of those explanations would be new: you would already have created them yourself in order to write the specification. So, in this case, and actually in all other cases of programming genuine AGI, only an algorithm with the right functionality would suffice. But writing that algorithm (without first making new discoveries in physics and hiding them in the program) is exactly what you wanted the programmers to do!
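To make the contrast concrete, here is roughly what a specification of the temperature-conversion kind looks like in practice. This is a minimal sketch of my own; the function name and test values are purely illustrative:

```python
# Minimal sketch (illustrative names and values): for temperature conversion,
# the required behaviour can be pinned down by a formula, or even just by a
# list of acceptable input/output pairs.

def celsius_to_fahrenheit(celsius: float) -> float:
    """Convert a temperature from degrees Celsius to degrees Fahrenheit."""
    return celsius * 9 / 5 + 32

# Specification by examples works here:
assert celsius_to_fahrenheit(0) == 32
assert celsius_to_fahrenheit(100) == 212
assert celsius_to_fahrenheit(-40) == -40

# The analogous "list of acceptable outputs" for a program that must produce
# a genuinely new explanation of dark matter cannot be written, because
# writing it would already require having made the discovery.
```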

Creativity seems like the first thing to address when approaching a true AGI: the same kind of creativity that humans use to ask the initial question, or to generate radically new answers to long-standing questions like dark matter.

Is there current research being done on this?

I've seen work on generating art and music, but it seems like a different approach.

> In the classic ‘brain in a vat’ thought experiment, the brain, when temporarily disconnected from its input and output channels, is thinking, feeling, creating explanations — it has all the cognitive attributes of an AGI. So the relevant attributes of an AGI program do not consist only of the relationships between its inputs and outputs.

This is an interesting argument for why reinforcement learning is not the answer: without input from the environment, the agent has nothing to improve upon. The actual brain, by contrast, is still in a state of "thinking" even with no input or output.
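As a rough sketch of why that matters for reinforcement learning (generic pseudocode-style Python, not tied to any particular library; `env` and `agent` are placeholders): the standard RL interaction loop is defined entirely by inputs and outputs exchanged with an environment, so removing the environment leaves nothing to drive the computation.

```python
# Generic sketch of the standard RL interaction loop. `env` and `agent` are
# placeholders; every update is driven by input arriving from outside.

def rl_loop(env, agent, episodes: int = 100):
    for _ in range(episodes):
        observation = env.reset()            # input from the environment
        done = False
        while not done:
            action = agent.act(observation)               # output to the environment
            observation, reward, done = env.step(action)  # new input
            agent.learn(observation, reward)              # improvement signal
    # Without `env` there are no observations or rewards, and the loop has
    # nothing to compute on, unlike the disconnected brain, which keeps
    # "thinking" regardless.
```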

  • [Here](https://ai.stackexchange.com/q/5511/2444) and [here](https://ai.stackexchange.com/q/2820/2444) are two related questions. – nbro Dec 12 '21 at 12:50

2 Answers


I am focusing on what you have posted here without going through (or having read) the whole essay you linked:

> In the classic ‘brain in a vat’ thought experiment, the brain, when temporarily disconnected from its input and output channels, is thinking, feeling, creating explanations — it has all the cognitive attributes of an AGI. So the relevant attributes of an AGI program do not consist only of the relationships between its inputs and outputs.

The classic brain-in-a-vat experiment does not disconnect the brain from all inputs and outputs; rather, it replaces the "real" connections with its environment by "fake" ones, e.g. by connecting the brain to a computer. This is what Wikipedia says:

> In philosophy, the brain in a vat (BIV; alternately known as brain in a jar) is a scenario used in a variety of thought experiments intended to draw out certain features of human conceptions of knowledge, reality, truth, mind, consciousness, and meaning. It is an updated version of René Descartes's evil demon thought experiment originated by Gilbert Harman. Common to many science fiction stories, it outlines a scenario in which a mad scientist, machine, or other entity might remove a person's brain from the body, suspend it in a vat of life-sustaining liquid, and connect its neurons by wires to a supercomputer which would provide it with electrical impulses identical to those the brain normally receives. According to such stories, the computer would then be simulating reality (including appropriate responses to the brain's own output) and the "disembodied" brain would continue to have perfectly normal conscious experiences, such as those of a person with an embodied brain, without these being related to objects or events in the real world.

Given that, the author's argument based on the brain-in-a-vat scenario does not hold.

If you ignore that problem for a moment, the next inconsistency arises: the author assumes that because a brain acts "so and so", an AGI would need to act "so and so" as well. That, however, is not in line with how artificial intelligence is usually defined, even if you consider the broad range of definitions:

Examples of defining AI: thinking humanly, acting humanly, thinking rationally, acting rationally

(source: "Artificial Intelligence: A Modern Approach"; Russell & Norvig, 3rd ed., 2010)

None of these definitions directly ties AI to the brain. Therefore, the assumption that an AGI (as a subset of AI) would need to act like a brain in a vat is, without further assumptions, flawed.

Another problem with the author's argumentation lies here:

> to address some outstanding problem in theoretical physics — say the nature of Dark Matter — with a new explanation that is plausible and rigorous enough to meet the criteria for publication in an academic journal.
>
> Such a program would presumably be an AGI (and then some).

The assumption that "creativity" (in quotation marks since we actually need to define that precisely in the first place) requires an AGI does not hold either. Let's stick to the author's example related to dark matter:

  1. This is by definition a specialized field of application for an AI. Accordingly, there is not necessarily a need for an AGI, since specialization is exactly what an AGI does not have.
  2. Moreover, you could argue that coming up with new explanations related to dark matter is closely related to automated theorem proving (see the toy sketch below). Accordingly, we might already have AI which is, in principle, capable of "solving" this task today.
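To illustrate point 2, here is a toy sketch (my own, with purely illustrative propositions) of the mechanical core that automated theorem proving builds on: forward chaining over Horn clauses derives statements entailed by the axioms without anyone listing those conclusions in advance.

```python
# Toy forward chaining over Horn clauses (illustrative only): derive every
# statement entailed by the rules, without enumerating the conclusions up front.

rules = [
    ({"observed_rotation_curve", "visible_mass_known"}, "mass_discrepancy"),
    ({"mass_discrepancy"}, "needs_extra_mass_or_modified_gravity"),
]
facts = {"observed_rotation_curve", "visible_mass_known"}

changed = True
while changed:
    changed = False
    for premises, conclusion in rules:
        if premises <= facts and conclusion not in facts:
            facts.add(conclusion)
            changed = True

print(facts)
# Contains the two starting facts plus the derived statements
# 'mass_discrepancy' and 'needs_extra_mass_or_modified_gravity'.
```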
Jonathan

Task Specification

It's been proposed that novelty search may circumvent this problem. See: Abandoning Objectives: Evolution Through the Search for Novelty Alone. In this model, the agent has no goal or objective, but just messes around with the data to see what results. (This could be regarded as finding/forming patterns. Here's a recent popular article on the subject: Computers Evolve a New Path Toward Human Intelligence).
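A minimal sketch of the novelty-search idea from that paper (my own simplified, one-dimensional version): individuals are scored not against an objective but by how different their behaviour is from behaviours already seen, here measured as the mean distance to the k nearest entries in an archive plus the current population.

```python
import random

# Minimal novelty-search sketch (simplified from the Lehman & Stanley idea):
# selection pressure comes from behavioural novelty, not from an objective.

K = 5              # nearest neighbours used for the novelty score
ARCHIVE_ADD = 1.0  # novelty threshold for adding a behaviour to the archive

def behaviour(genome):
    # Placeholder "behaviour characterisation": here just the genome itself.
    return genome

def novelty(b, archive, population_behaviours):
    pool = archive + population_behaviours
    dists = sorted(abs(b - other) for other in pool if other is not b)
    nearest = dists[:K] or [0.0]
    return sum(nearest) / len(nearest)

archive = []
population = [random.uniform(-1, 1) for _ in range(20)]

for generation in range(50):
    behaviours = [behaviour(g) for g in population]
    scores = [novelty(b, archive, behaviours) for b in behaviours]
    # Archive sufficiently novel behaviours.
    for b, s in zip(behaviours, scores):
        if s > ARCHIVE_ADD:
            archive.append(b)
    # Select the most novel individuals and mutate them; no objective anywhere.
    ranked = [g for _, g in sorted(zip(scores, population), reverse=True)]
    parents = ranked[: len(population) // 2]
    population = [p + random.gauss(0, 0.1) for p in parents for _ in range(2)]

print(f"archive size after 50 generations: {len(archive)}")
```

The design point is that nothing in the loop encodes what a "good" solution looks like; the search keeps moving simply because standing still is never novel.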

A form of procedural generation may also be useful, specifically the capability of creating novel models/environments and processes/algorithms to analyze them. (See: AI-GAs: AI-generating algorithms).
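As a small illustration of what "creating novel environments" can mean in practice (this is not the AI-GAs method itself, just a toy generator of my own): sampling the parameters of the environment as well as its contents means the learner never runs out of fresh problems.

```python
import random

# Illustrative procedural environment generator (not the AI-GAs algorithm
# itself): each call produces a fresh grid world with sampled parameters.

def generate_gridworld(rng: random.Random):
    size = rng.randint(5, 15)                   # sampled environment size
    wall_density = rng.uniform(0.1, 0.4)        # sampled difficulty knob
    grid = [
        ["#" if rng.random() < wall_density else "." for _ in range(size)]
        for _ in range(size)
    ]
    grid[0][0] = "S"                 # start
    grid[size - 1][size - 1] = "G"   # goal
    return grid

rng = random.Random(0)
for i in range(3):
    world = generate_gridworld(rng)
    print(f"world {i}: {len(world)}x{len(world)} grid")
```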

In terms of programmers communicating a task to the AGI, that's a natural language problem if the task relates to mundane human activity or art and craft, and a math problem if the subject is physics. (In the former case, humans describe the problem in natural language; in the latter, they presumably feed all of the data that suggests dark matter into the algorithm. Natural language is challenging for computers, but math and logic are their two core functions.)

Re: dark matter, it may be a matter of asking the algorithm to find patterns in the data and build models from them. The patterns and models would be the output, which humans could then consider. The output would be mathematical.
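One heavily simplified version of "find patterns in the data and build models" is automated model selection: fit several candidate functional forms to the measurements and rank them by how well they balance fit against complexity. The data and candidate models below are synthetic placeholders of my own, not real dark-matter observations.

```python
import numpy as np

# Simplified "find patterns, propose models" sketch: fit candidate functional
# forms to synthetic data and rank them by AIC. The data here are fake.

rng = np.random.default_rng(0)
x = np.linspace(1, 10, 50)
y = 2.0 * np.log(x) + 0.5 + rng.normal(0, 0.1, size=x.size)  # synthetic "measurements"

candidates = {
    "linear":      lambda x: np.column_stack([x, np.ones_like(x)]),
    "logarithmic": lambda x: np.column_stack([np.log(x), np.ones_like(x)]),
    "quadratic":   lambda x: np.column_stack([x**2, x, np.ones_like(x)]),
}

results = {}
for name, design in candidates.items():
    A = design(x)
    coeffs, *_ = np.linalg.lstsq(A, y, rcond=None)
    residuals = y - A @ coeffs
    rss = float(residuals @ residuals)
    k = A.shape[1]
    aic = 2 * k + x.size * np.log(rss / x.size)  # Akaike information criterion
    results[name] = (aic, coeffs)

for name, (aic, coeffs) in sorted(results.items(), key=lambda kv: kv[1][0]):
    print(f"{name:12s} AIC={aic:8.1f} coefficients={np.round(coeffs, 2)}")
# The ranked models and their coefficients are the mathematical output that
# humans would then interpret.
```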

(Converting that mathematical output into metaphors, as is common on science programs like Nova and Cosmos, would be another goal of AGI.)

Brain in a Box

There needs to be stimulus/input to initiate the "thought" process/computation. In the brain-in-a-box scenario, the brain is providing its own internal stimulus. I'd argue that an RL algorithm engaged in self-play is not dependent on external stimulus but on internally generated inputs, so the process of model-based reinforcement learning is often a brain in a box, considering a subject or problem.
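To make that concrete, here is a small sketch in the spirit of Dyna-style model-based RL (a standard textbook idea, simplified here, with a toy chain environment of my own): after a handful of real interactions, the agent keeps improving its value estimates purely from transitions replayed out of its own learned model, i.e. from internally generated inputs.

```python
import random

# Dyna-style sketch (simplified): learn a model from a few real transitions,
# then keep doing Q-learning updates on transitions replayed from that
# internal model, i.e. computation driven by internally generated inputs.

N_STATES, GOAL = 5, 4          # tiny chain MDP: move right to reach the goal
ACTIONS = (-1, +1)
ALPHA, GAMMA = 0.5, 0.9

def step(state, action):
    nxt = min(max(state + action, 0), N_STATES - 1)
    reward = 1.0 if nxt == GOAL else 0.0
    return nxt, reward

Q = {(s, a): 0.0 for s in range(N_STATES) for a in ACTIONS}
model = {}  # (state, action) -> (next_state, reward), learned from experience

# Phase 1: a little real experience (external input).
state = 0
for _ in range(50):
    action = random.choice(ACTIONS)
    nxt, reward = step(state, action)
    model[(state, action)] = (nxt, reward)
    Q[(state, action)] += ALPHA * (
        reward + GAMMA * max(Q[(nxt, a)] for a in ACTIONS) - Q[(state, action)]
    )
    state = 0 if nxt == GOAL else nxt

# Phase 2: "brain in a box" phase, with no environment calls at all; the agent
# only replays transitions from its own internal model.
for _ in range(2000):
    s, a = random.choice(list(model))
    nxt, reward = model[(s, a)]
    Q[(s, a)] += ALPHA * (
        reward + GAMMA * max(Q[(nxt, b)] for b in ACTIONS) - Q[(s, a)]
    )

print({s: round(max(Q[(s, a)] for a in ACTIONS), 3) for s in range(N_STATES)})
```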

DukeZhou