
I know this is a very general question, but I'm trying to illustrate this topic to people who are not from the field, and my own understanding is limited since I'm just a second-year physics student with a basic knowledge of R and Python. My point is that I'm not trying to claim anything here; I'm genuinely asking.

So according to Wikipedia, after the second AI winter, which happened because expert systems didn't match expectations of the general public and of scientists, AI made a recovery "due to increasing computational power (see Moore's law), greater emphasis on solving specific problems, new ties between AI and other fields (such as statistics, economics and mathematics), and a commitment by researchers to mathematical methods and scientific standards".

What I'm trying to understand is whether the rise of AI is connected more to the greater computational power available to the public, or whether there have been fundamental mathematical advances that I'm not aware of. If the latter is the case, I would appreciate examples, because to my understanding the mathematical models behind neural networks are rooted in the 1970s and 1980s.

Again, please don't be offended by the general character of this question; I know it is probably hard to answer precisely. I'm just trying to give a short historical introduction to the field for a lay audience and want to be accurate in that regard.

nbro
  • [Why did ML only become viable after Nvidia's chips were available?](https://ai.stackexchange.com/a/13238/2444) might be helpful. –  Oct 24 '19 at 16:58
  • @DuttaA One more question regarding your remark on matrix multiplication: you write "when we started to see Deep Learning as just a set of matrix operations", did this happen only after the 90s or 2000s? – Jan Kleinow Oct 24 '19 at 17:52
  • 1
    i think people always "kinda knew" that neural networks (NN) (please don't say "deep learning" (DL) as a replacement for "neural networks", NN's are a type of architecture, DL is a way people use NN's by making them have a lot of hidden layers) can be viewed as matrix operations; but that framing of the problem wasn't particularly enlightening because NN's weren't deep enough for us to care about condensing the interpretation (but i might just be making this up) – k.c. sayz 'k.c sayz' Oct 24 '19 at 22:23
  • No significant mathematical advances directly connected to AI for now, but there are some advances in math which may connect in the future (automatic theorem proving, that is, homotopy type theory) – mirror2image Nov 27 '19 at 06:50

1 Answer


My reading of AI development (somewhat simplified here) is that the availability of large data sets, increased computing power, and the introduction of new machine learning algorithms (which require large data sets and massive computing power) contributed to the resurgence of AI.

However, as witnessed on this site, there has been a paradigm shift within the field: while previous approaches to AI were largely symbolic, with a bit of connectionism thrown in, the current AI mainstream is purely based on statistical models and machine learning.

Massive (distributed) computing power on its own is not sufficient, as there was a bottleneck with domain modelling/knowledge acquisition in traditional symbolic approaches. New algorithms (basically further developments of neural networks) on their own would also not be sufficient without massive amounts of training data. Only the combination of all three elements enabled the AI resurgence.
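As an aside, the "neural networks are just matrix operations" framing mentioned in the comments is easy to demonstrate. Here is a minimal sketch (layer sizes and random weights are arbitrary, chosen just for illustration): a two-layer network's forward pass is nothing but two matrix multiplications with an element-wise nonlinearity in between, which is exactly the kind of workload GPUs accelerate well.

```python
import numpy as np

def relu(x):
    # Element-wise rectified linear unit
    return np.maximum(0.0, x)

def forward(x, W1, b1, W2, b2):
    """Forward pass of a two-layer network:
    two matrix multiplications plus an element-wise nonlinearity."""
    h = relu(x @ W1 + b1)  # hidden layer: (batch, 3) @ (3, 5) -> (batch, 5)
    return h @ W2 + b2     # output layer: (batch, 5) @ (5, 2) -> (batch, 2)

rng = np.random.default_rng(0)
x = rng.normal(size=(4, 3))                    # batch of 4 inputs, 3 features each
W1 = rng.normal(size=(3, 5)); b1 = np.zeros(5)
W2 = rng.normal(size=(5, 2)); b2 = np.zeros(2)

y = forward(x, W1, b1, W2, b2)
print(y.shape)  # (4, 2)
```

Because the whole computation reduces to dense linear algebra, the same code scales naturally to many layers and large batches, which is why the combination of this algorithmic framing with massive hardware and data proved so effective.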

Oliver Mason