I've come across the theorem "Convergence theorem Simple Perceptron" for the first time, here: https://zaguan.unizar.es/record/69205/files/TAZ-TFG-2018-148.pdf, page 27 (the document is in Spanish).

Are there others like this one but for the Multilayer Perceptron?

Could someone please point me to them?
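
To make the statement explicit, here is my paraphrase of the standard Novikoff-style result (the thesis may word it differently; the notation $R$ for the data radius and $\gamma$ for the margin is mine):

```latex
\documentclass{article}
\usepackage{amsmath, amssymb, amsthm}
\newtheorem{theorem}{Theorem}
\begin{document}
% Paraphrase of the standard Novikoff-style statement; the notation
% (R for the data radius, gamma for the margin) is mine.
\begin{theorem}[Simple perceptron convergence]
Let $(x_1, y_1), \dots, (x_n, y_n)$ be a training set with
$x_i \in \mathbb{R}^d$, $y_i \in \{-1, +1\}$ and $\lVert x_i \rVert \le R$.
If there exist a unit vector $w^\ast$ and a margin $\gamma > 0$ such that
$y_i \langle w^\ast, x_i \rangle \ge \gamma$ for all $i$, then the
perceptron rule, which updates $w \leftarrow w + y_i x_i$ whenever
$y_i \langle w, x_i \rangle \le 0$, makes at most $(R/\gamma)^2$ updates
before it separates the training set.
\end{theorem}
\end{document}
```

And here is a minimal sketch in Python (NumPy) illustrating the finite-mistake behaviour; the toy data, seed, and variable names are mine, not from the thesis:

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical linearly separable toy data: labels come from a known
# separating vector, so the theorem's hypothesis holds by construction.
w_true = np.array([1.0, -2.0])
X = rng.uniform(-1, 1, size=(200, 2))
y = np.sign(X @ w_true)
y[y == 0] = 1  # avoid zero labels for points exactly on the boundary

w = np.zeros(2)  # perceptron weights (no bias, for simplicity)
mistakes = 0
converged = False
while not converged:  # the theorem guarantees this loop terminates
    converged = True
    for x_i, y_i in zip(X, y):
        if y_i * (w @ x_i) <= 0:  # misclassified (or on the boundary)
            w += y_i * x_i        # perceptron update
            mistakes += 1
            converged = False

print(f"Converged after {mistakes} updates; final weights: {w}")
```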

Thank you in advance.

  • Hi. Maybe you should at least provide the link to the source where you found this theorem "Convergence theorem Simple Perceptron". Moreover, in the title, you're mentioning MLPs, which are not exactly the same thing as "simple perceptrons", so you may want to clarify what you're really interested in. – nbro Apr 16 '21 at 16:32
  • Hi moderator @nbro. Ok, the document is in Spanish; does that matter? As for the latter (the MLP), I'll edit. – Verónica Rmz. Apr 16 '21 at 16:36
  • To have more context, a document in Spanish is probably better than no document at all. In any case, I think this question is somehow a duplicate of [this one](https://ai.stackexchange.com/q/13317/2444). Let me know if that's the case. – nbro Apr 16 '21 at 16:41
  • @nbro I don't think it's a duplicate; it's related. In any case, I think one can conclude _a_ theorem about the MLP from there. – Verónica Rmz. Apr 16 '21 at 17:10
  • I've read somewhere (maybe in an ML course by Ng on Coursera) that the main reason for convergence is stochastic gradient descent. – Aray Karjauv Apr 16 '21 at 23:27
  • @ArayKarjauv "Stochastic gradient descent", but that's not a theorem; or at least, that's not what they call it in the book Neural Network Design. – Verónica Rmz. Apr 16 '21 at 23:39
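
**Edit:** to make @ArayKarjauv's comment concrete: the closest thing I've found to a convergence theorem for MLP training are the classical stochastic-approximation results for SGD (Robbins–Monro type). As I understand them, under standard smoothness and bounded-variance assumptions they guarantee convergence to a _stationary point_ of the non-convex loss, not to a global minimum; the usual step-size conditions are:

```latex
\documentclass{article}
\usepackage{amsmath, amssymb}
\begin{document}
% Robbins--Monro step-size conditions for SGD; under standard smoothness
% and bounded-variance assumptions they yield convergence to stationary
% points of the non-convex MLP loss, not to a global minimum.
\[
  w_{t+1} = w_t - \eta_t \, g_t , \qquad
  \mathbb{E}[\, g_t \mid w_t \,] = \nabla L(w_t), \qquad
  \sum_{t \ge 1} \eta_t = \infty, \qquad
  \sum_{t \ge 1} \eta_t^2 < \infty .
\]
\end{document}
```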

0 Answers