
I'm currently a student learning about AI networks. I came across a statement in one of my professor's books that a feed-forward back-propagation (FFBP) neural network with a single hidden layer can model any mathematical function, with accuracy dependent on the number of hidden-layer neurons. Try as I might, I cannot find any explanation of why that is; could someone explain it?

Konrad Ł
Possible duplicate of [Where can I find the proof of the universal approximation theorem?](https://ai.stackexchange.com/questions/13317/where-can-i-find-the-proof-of-the-universal-approximation-theorem) You're asking two distinct questions: 1) how does the number of neurons in a hidden layer affect the accuracy of the model and 2) why an NN, with a single hidden layer with an arbitrary number of neurons, can approximate any function. Please, ask just one question per post. The answer to your 2nd question is already implicitly given in the linked post, so I suggest you ask the first question. – nbro Sep 07 '19 at 13:17

1 Answer


The claim that a neural network with a single hidden layer can approximate any continuous function is proven in Cybenko's paper *Approximation by Superpositions of a Sigmoidal Function*; the result is known as the universal approximation theorem.

https://link.springer.com/article/10.1007/BF02551274

See also: https://en.wikipedia.org/wiki/Universal_approximation_theorem
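Informally, the theorem says the following: for any continuous function $f$ on a compact set $K \subset \mathbb{R}^n$ and any $\varepsilon > 0$, there exist an integer $N$, vectors $w_i \in \mathbb{R}^n$, and scalars $\alpha_i, b_i \in \mathbb{R}$ such that

$$\left| f(x) - \sum_{i=1}^{N} \alpha_i \, \sigma\!\left(w_i^{\top} x + b_i\right) \right| < \varepsilon \quad \text{for all } x \in K,$$

where $\sigma$ is any sigmoidal function. The sum on the left is exactly a single-hidden-layer network with $N$ hidden neurons, and a smaller tolerance $\varepsilon$ generally requires a larger $N$, which is why the achievable accuracy depends on the number of hidden neurons.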

The key ingredient is that the network uses sigmoidal activation functions, which are non-linear. A network with only linear activations can represent nothing but linear maps, no matter how many neurons it has; with sigmoids, pairs of shifted units can be combined into localized "bumps", and a weighted sum of enough such bumps can trace any continuous function as closely as you like.
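As a rough illustration of that bump idea, here is a minimal NumPy sketch (my own toy construction, not Cybenko's proof), approximating an assumed target $f(x) = \sin x$ on $[0, 2\pi]$ with a single hidden layer of sigmoid units:

```python
import numpy as np

def sigmoid(z):
    # Clip to avoid overflow warnings in exp for large |z|.
    return 1.0 / (1.0 + np.exp(-np.clip(z, -50.0, 50.0)))

# Target to approximate on [0, 2*pi].
f = np.sin

n_bumps = 50                       # each bump uses 2 sigmoid units -> 100 hidden neurons
edges = np.linspace(0.0, 2.0 * np.pi, n_bumps + 1)
centers = (edges[:-1] + edges[1:]) / 2.0
k = 200.0                          # steepness: larger k -> sharper, more localized bumps

def net(x):
    """Single-hidden-layer network: a weighted sum of sigmoid 'bumps'."""
    x = np.asarray(x, dtype=float)[:, None]
    # bump_i(x) = sigmoid(k*(x - left_edge_i)) - sigmoid(k*(x - right_edge_i)),
    # which is ~1 inside the i-th interval and ~0 outside it.
    bumps = sigmoid(k * (x - edges[:-1])) - sigmoid(k * (x - edges[1:]))
    # Output weights are simply the target's value at each bump's center.
    return bumps @ f(centers)

xs = np.linspace(0.0, 2.0 * np.pi, 1000)
print("max abs error:", np.max(np.abs(net(xs) - f(xs))))
```

Increasing `n_bumps` (i.e., adding hidden neurons) makes the bumps narrower and drives the error down, which is exactly the "accuracy depends on the number of hidden neurons" behaviour your professor's book describes.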

Jim Kim