12

Everything related to Deep Learning (DL) and deep(er) networks seems "successful", or at least is progressing very fast, and cultivates the belief that AGI is within reach, at least in the popular imagination. DL is a tremendous tool for tackling many problems, possibly including the creation of AGIs. But it is not enough on its own: a tool is a necessary ingredient, yet often an insufficient one.

Leading figures in the domain are looking elsewhere to make progress. This report/claim gathers links to statements by Yoshua Bengio, Yann LeCun and Geoff Hinton. The report also explains:

> The main weaknesses of DL (as I see them) are: reliance on the simplest possible model neurons (“cartoonish” as LeCun calls them); use of ideas from 19th century Statistical Mechanics and Statistics, which are the basis of energy functions and log-likelihood methods; and the combination of these in techniques like backprop and stochastic gradient descent, leading to a very limited regime of application (offline, mostly batched, supervised learning), requiring highly talented practitioners (aka “Stochastic Graduate Descent”), large amounts of expensive labelled training data and computational power. While great for huge companies who can lure or buy the talent and deploy unlimited resources to gather data and crunch it, DL is simply neither accessible nor useful to the majority of us.
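To make that regime concrete, here is a minimal sketch (in NumPy, on random stand-in data; the layer sizes, learning rate and batch size are arbitrary choices for illustration) of the kind of offline, batched, supervised loop the report refers to: a fixed labelled dataset, a log-likelihood (cross-entropy) loss, and weights updated by backprop plus stochastic gradient descent.

```python
import numpy as np

# Illustrative only: a tiny supervised, offline, mini-batched training loop.
# The dataset here is random; in practice it would be expensive labelled data.
rng = np.random.default_rng(0)
X, y = rng.normal(size=(1000, 20)), rng.integers(0, 3, size=1000)   # features, labels
W1, b1 = rng.normal(scale=0.1, size=(20, 64)), np.zeros(64)
W2, b2 = rng.normal(scale=0.1, size=(64, 3)), np.zeros(3)
lr, batch = 0.1, 32

for epoch in range(10):                          # offline: fixed dataset, many passes
    for i in range(0, len(X), batch):            # batched updates
        xb, yb = X[i:i+batch], y[i:i+batch]
        h = np.maximum(xb @ W1 + b1, 0.0)        # forward pass through "cartoon" ReLU neurons
        logits = h @ W2 + b2
        p = np.exp(logits - logits.max(axis=1, keepdims=True))
        p /= p.sum(axis=1, keepdims=True)        # softmax: the log-likelihood view
        # Backprop: gradient of the negative log-likelihood w.r.t. every weight.
        d_logits = p.copy()
        d_logits[np.arange(len(yb)), yb] -= 1.0
        d_logits /= len(yb)
        dW2, db2 = h.T @ d_logits, d_logits.sum(axis=0)
        dh = (d_logits @ W2.T) * (h > 0)
        dW1, db1 = xb.T @ dh, dh.sum(axis=0)
        # Stochastic gradient descent step.
        W1 -= lr * dW1; b1 -= lr * db1
        W2 -= lr * dW2; b2 -= lr * db2
```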

Although interesting and relevant, this kind of explanation does not really address the heart of the problem: what is lacking?

The question may seem broad, but perhaps only because there is no simple answer. Is there a way to pinpoint what DL is lacking for an AGI?

Eric Platon
  • See also [Can machine learning be used to pass the Turing test?](https://ai.stackexchange.com/q/7548/2444). – nbro Nov 08 '19 at 18:11

5 Answers

7

Everyone dealing with neural networks misses an important point when comparing these systems with human-like intelligence. A human takes many months to do anything intelligible, let alone solve problems that adult humans can barely manage. On top of that, the size of the human brain is enormous compared to our neural networks. The direction might be right, but the scale is way off. The number of neurons in the human brain can be matched memory-wise, but the amount of parallelism needed to simulate it in real time cannot yet be achieved (at least not by a typical researcher). While a little old, this might give you an idea of how much processing power we still lack.
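As a rough back-of-envelope comparison (the figures below are commonly cited estimates, and treating one synapse as one learned parameter is a crude assumption, not a measurement):

```python
# Back-of-envelope scale comparison (illustrative figures, not measurements).
# Commonly cited estimates: ~8.6e10 neurons and ~1e14 synapses in the human brain.
brain_neurons = 8.6e10
brain_synapses = 1e14

# A large vision network of that era had on the order of 1e8 learned parameters
# (an AlexNet-scale model); treat one parameter as a rough analogue of a synapse.
model_params = 1.5e8

print(f"Synapse / parameter ratio: {brain_synapses / model_params:,.0f}x")

# Memory needed just to store the weights at 4 bytes each.
print(f"Brain 'weights' at 4 bytes each: {brain_synapses * 4 / 1e12:.0f} TB")
print(f"Model weights at 4 bytes each:  {model_params * 4 / 1e9:.1f} GB")
```

Even ignoring real-time dynamics entirely, the gap in stored parameters alone is several orders of magnitude.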

Cem Kalyoncu
6

Deep Learning is mostly successful in supervised learning, whereas the brain builds categories mostly in an unsupervised way. We don't yet know how to do that. (Take a look at Google Brain: 16,000 cores, and all it can do is recognise cats and human faces with pretty abysmal accuracy.)

Deep Learning uses highly unstructured activations, i.e. the high-level representations of "dog" and "cat" in a neural network classifier don't have to be similar at all. The brain, on the other hand, uses inhibitory neurons to create sparse distributed representations which are decomposable into their semantic aspects. That's probably important for abstraction and reasoning by analogy.
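As a toy illustration of the contrast, a k-winners-take-all step (a crude stand-in for lateral inhibition; the function name and the choice k = 20 are made up for this sketch) turns a dense activation vector into a sparse distributed one:

```python
import numpy as np

def k_winners_take_all(activations: np.ndarray, k: int = 20) -> np.ndarray:
    """Keep only the k strongest activations, zeroing the rest.

    A crude stand-in for lateral inhibition: the surviving units form a
    sparse distributed code, unlike a dense, entangled hidden layer.
    """
    sparse = np.zeros_like(activations)
    top_k = np.argsort(activations)[-k:]        # indices of the k largest values
    sparse[top_k] = activations[top_k]
    return sparse

dense = np.random.randn(1024)                   # a dense hidden-layer activation
print(np.count_nonzero(k_winners_take_all(dense)))  # -> 20 active units out of 1024
```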

The brain has many different parts which work together. Deep Learning researchers are only just beginning to integrate memory or attention mechanisms into their architectures.

The brain integrates information from many different senses. Most Deep Learning applications use just one type of input, like text or pictures.

The brain is capable of modelling sequences as categories. (Basically every verb names a sequential (i.e. temporal) category.) It can then arrange these categories into long-term hierarchical plans. So far I haven't seen anything in that direction in Deep Learning.

Also, neural networks can't yet operate on the same scale as the human brain. If you look at the answers to this question, the human brain will be ahead in neuron count for another couple of decades. A neural network might not need the same number of neurons as the brain to reach similar performance (because of higher accuracy), but right now, for example, video processing is still pretty limited in terms of input and throughput.

BlindKungFuMaster
3

IMHO the first hurdle is scale: even Google's largest DNN doesn't come close to the scale of the brain, missing it by several orders of magnitude...

Frank
2

I think it's still missing the aspect that makes a human brain what it is: having a lot of different networks working with each other.

Just as meditation improves cognitive abilities by making the brain work more synergistically, we could apply the same idea to machines.

For example, Google is teaching a computer to dream, just like we do, to reinforce what it has already learned. https://medium.com/@tannistho/why-is-google-teaching-its-ai-to-dream-e9ae9ecd0e3a#.gljal6pww

And here is PathNet, a network of neural networks. https://medium.com/@thoszymkowiak/deepmind-just-published-a-mind-blowing-paper-pathnet-f72b1ed38d46#.ed0f6pdq7

Create all these mechanisms, put them together with enough computing power, and we will get pretty close!

Alexander
  • Could you clarify what "it" is? It could be the process of deep learning, or equally deep networks. These are different. – Eric Platon Feb 16 '17 at 22:51
  • Both, but mainly having deep neural networks work with each other. I speculate that DNNs should also have good neural-plasticity features. But this is something we have only touched the basics of; we don't even know exactly how a human brain works – Alexander Feb 17 '17 at 09:58
1

Artificial intelligence proponents today are focused on the problem of computation: the ability to solve complex problems fast. It is my belief that any amount of success in this direction will not lead to human (general) intelligence, although it certainly will outperform humans in certain domains. Instead, efforts should be directed toward studying what neurological events cause sensation (the experience of qualia). Of course, this is the hard problem of philosophy, but I believe it is the unique key to general intelligence and its capabilities. Both reverse engineering and testable theories should be advanced toward this end.

user415738
  • Qualia might be an interesting trait for machines to have (especially if we want people to give rights to these machines), but aside from qualia itself being a *very* hard problem in philosophy, there are two main issues at play. (A) Intelligence itself may not require qualia; you can be smart without being able to have subjective first-person experiences, i.e., the philosophical zombie. – Left SE On 10_6_19 Feb 20 '17 at 06:55
  • (B) Industry only cares about solving complex problems fast and doesn't really worry about whether said fast machine can think or feel. AGI is only desired to the extent that it can solve complex problems fast...intelligence is only a means to an end. (In fact, industry may not want a thinking and feeling machine, as such machines may deserve rights...and rights are kinda like regulations, limiting what a business can do with their tool.) – Left SE On 10_6_19 Feb 20 '17 at 07:00
  • Right. There is a very limited economic potential in machines that think, feel, suffer, want, get angry, etc. You've essentially made a machine that you cannot ethically turn off nor command. Much more useful is modelling the useful parts of intelligence which can perform complex, fast, and limitless computation without creating machines with desires or selfhood. – MeatFlavourDev May 06 '21 at 11:45