I read an interesting essay, David Deutsch's "Creative Blocks", about how far we are from AGI. It made quite a few solid points that got me to revisit the foundations of AI today. A few interesting concepts stood out:
imagine that you require a program with a more ambitious functionality: to address some outstanding problem in theoretical physics — say the nature of Dark Matter — with a new explanation that is plausible and rigorous enough to meet the criteria for publication in an academic journal.
Such a program would presumably be an AGI (and then some). But how would you specify its task to computer programmers? Never mind that it’s more complicated than temperature conversion: there’s a much more fundamental difficulty. Suppose you were somehow to give them a list, as with the temperature-conversion program, of explanations of Dark Matter that would be acceptable outputs of the program. If the program did output one of those explanations later, that would not constitute meeting your requirement to generate new explanations. For none of those explanations would be new: you would already have created them yourself in order to write the specification. So, in this case, and actually in all other cases of programming genuine AGI, only an algorithm with the right functionality would suffice. But writing that algorithm (without first making new discoveries in physics and hiding them in the program) is exactly what you wanted the programmers to do!
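To make the essay's contrast concrete, here is a minimal sketch of my own (the function and the SPEC list are illustrative, not from the essay). A temperature-conversion program can be fully specified by a list of input-output pairs; "produce a new explanation of Dark Matter" cannot, because writing the list would mean creating the explanations yourself:

```python
# A functional specification: a list of (input, expected output) pairs
# pins down exactly what the program must do.
SPEC = [(0.0, 32.0), (100.0, 212.0), (-40.0, -40.0)]

def celsius_to_fahrenheit(c: float) -> float:
    return c * 9.0 / 5.0 + 32.0

# Any implementation that satisfies the spec is acceptable.
assert all(abs(celsius_to_fahrenheit(c) - f) < 1e-9 for c, f in SPEC)

# The same move fails for the Dark Matter program:
#
#   SPEC = [acceptable_explanation_1, acceptable_explanation_2, ...]
#
# To write that list you would have to create the explanations yourself,
# so a program that outputs one of them has produced nothing new. The
# requirement "output a *new* explanation" cannot be expressed as a set
# of input-output pairs at all.
```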
Creativity seems like the first thing to address when approaching true AGI: the same kind of creativity humans use to ask the initial question, or to generate radically new answers to long-standing questions like the nature of dark matter.
Is there current research being done on this? I've seen work on generating art and music, but that seems like a different kind of problem: those models appear to recombine patterns from their training data rather than conjecturing new explanations.
In the classic ‘brain in a vat’ thought experiment, the brain, when temporarily disconnected from its input and output channels, is thinking, feeling, creating explanations — it has all the cognitive attributes of an AGI. So the relevant attributes of an AGI program do not consist only of the relationships between its inputs and outputs.
This is an interesting argument for why reinforcement learning is not the answer. An RL agent is defined by the relationship between its inputs and outputs: without observations and rewards from the environment, it has nothing to improve upon. An actual brain with no input or output, by contrast, is still in a state of "thinking".
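As a rough sketch of why the objection bites for reinforcement learning specifically (this is my framing, not the essay's, and the `env` interface below is a hypothetical placeholder), note that every quantity a standard RL update touches arrives through the environment's observation and reward channels:

```python
import random

# Minimal tabular Q-learning. Assumes a hypothetical `env` with reset() -> obs
# and step(action) -> (next_obs, reward, done), and hashable observations.
def train(env, n_actions, episodes=100, alpha=0.1, gamma=0.99, eps=0.1):
    q = {}  # (state, action) -> estimated value
    for _ in range(episodes):
        obs = env.reset()                              # input channel
        done = False
        while not done:
            if random.random() < eps:                  # epsilon-greedy exploration
                action = random.randrange(n_actions)
            else:
                action = max(range(n_actions),
                             key=lambda a: q.get((obs, a), 0.0))
            next_obs, reward, done = env.step(action)  # input channel again
            best_next = max(q.get((next_obs, a), 0.0) for a in range(n_actions))
            # The update is a function of reward and observation alone.
            q[(obs, action)] = q.get((obs, action), 0.0) + alpha * (
                reward + gamma * best_next - q.get((obs, action), 0.0))
            obs = next_obs
    return q
```

Disconnect `env.reset()` and `env.step()` and this loop has nothing left to compute, whereas the brain in the vat keeps doing cognitive work.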