Questions tagged [function-approximation]
86 questions

For questions related to the concept of function approximation: for example, questions that involve using a neural network (which is a function approximator) in the context of RL to approximate a value function, or questions related to universal approximation theorems.
27 votes · 3 answers
Where can I find the proof of the universal approximation theorem?
The Wikipedia article for the universal approximation theorem cites a version of the universal approximation theorem for Lebesgue-measurable functions from this conference paper. However, the paper does not include the proofs of the theorem. Does…

Leroy Od (435)
24 votes · 2 answers
Are there other approaches to deal with variable action spaces?
This question is about Reinforcement Learning and variable action spaces for every/some states.
Variable action space
Let's say you have an MDP where the number of actions varies between states (for example, as in Figure 1 or Figure 2). We can…

Rikard Olsson (341)
22 votes · 3 answers
Why doesn't Q-learning converge when using function approximation?
The tabular Q-learning algorithm is guaranteed to find the optimal $Q$ function, $Q^*$, provided the following conditions (the Robbins-Monro conditions) regarding the learning rate are satisfied
$\sum_{t} \alpha_t(s, a) = \infty$
$\sum_{t} \alpha_t^2(s, a) < \infty$…

nbro (39,006)
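The Robbins-Monro conditions above can be satisfied in the tabular case by, for example, decaying each state-action pair's learning rate with its visit count, $\alpha_t(s, a) = 1/N(s, a)$: the harmonic series diverges while the sum of its squares is finite. A minimal sketch on a made-up two-state MDP (the environment, rewards, and step count are invented purely for illustration):

```python
import random
from collections import defaultdict

random.seed(0)

# Hypothetical 2-state, 2-action MDP, used only for illustration:
# action 1 in state 0 pays reward 1; every other choice pays 0.
def step(state, action):
    reward = 1.0 if (state == 0 and action == 1) else 0.0
    next_state = random.randint(0, 1)   # next state is uniform either way
    return next_state, reward

gamma = 0.9
Q = defaultdict(float)   # Q[(state, action)] estimates
N = defaultdict(int)     # visit counts per (state, action)

state = 0
for _ in range(20000):
    action = random.randint(0, 1)       # uniform exploratory behavior policy
    next_state, reward = step(state, action)
    N[(state, action)] += 1
    alpha = 1.0 / N[(state, action)]    # sum(alpha) = inf, sum(alpha^2) < inf
    target = reward + gamma * max(Q[(next_state, 0)], Q[(next_state, 1)])
    Q[(state, action)] += alpha * (target - Q[(state, action)])
    state = next_state

# Since the next-state distribution ignores the action, Q(0, 1) should
# exceed Q(0, 0) by roughly the immediate-reward gap of 1.
```

The same schedule fails once function approximation enters, because updates to one state's estimate perturb others; that interplay is what the answers to this question discuss.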
19 votes · 1 answer
What is the number of neurons required to approximate a polynomial of degree n?
I learned about the universal approximation theorem from this guide. It states that a network even with a single hidden layer can approximate any function within some bound, given a sufficient number of neurons. Or mathematically, $|g(x) − f(x)| < \epsilon$…

mark mark (753)
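The bound in the excerpt can be made concrete without any training: a single hidden layer of ReLUs can represent a piecewise-linear interpolant of the target exactly, and the interpolation error shrinks as the layer widens. A sketch of that construction (the target $f(x) = x^2$, the interval, and the neuron count are chosen for illustration, not taken from the question):

```python
# A single hidden layer of ReLU units that interpolates f(x) = x^2
# piecewise-linearly on [0, 1]. More neurons shrink the error bound,
# illustrating |g(x) - f(x)| < eps once the layer is wide enough.

def relu(z):
    return max(0.0, z)

def build_relu_approximator(f, n_neurons, lo=0.0, hi=1.0):
    """One hidden layer: g(x) = f(lo) + sum_i c_i * relu(x - x_i)."""
    h = (hi - lo) / n_neurons
    knots = [lo + i * h for i in range(n_neurons)]
    slopes = [(f(k + h) - f(k)) / h for k in knots]
    # c_i is the change in slope at knot i (the first slope counts fully)
    coeffs = [slopes[0]] + [slopes[i] - slopes[i - 1] for i in range(1, n_neurons)]
    def g(x):
        return f(lo) + sum(c * relu(x - k) for c, k in zip(coeffs, knots))
    return g

f = lambda x: x * x
g = build_relu_approximator(f, n_neurons=100)

# Linear interpolation of x^2 with grid spacing h has max error
# (h^2 / 8) * max|f''| = (0.01^2 / 8) * 2 = 2.5e-5.
worst = max(abs(g(i / 1000) - f(i / 1000)) for i in range(1001))
```

Halving the grid spacing (doubling the neuron count) quarters the worst-case error, which gives one concrete neuron-count-versus-accuracy trade-off for a degree-2 polynomial.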
14 votes · 3 answers
Is there a way to understand neural networks without using the concept of brain?
Is there a way to understand, for instance, a multi-layered perceptron without hand-waving about them being similar to brains, etc?
For example, it is obvious that what a perceptron does is approximating a function; there might be many other ways,…

Evgeniy (249)
10 votes · 3 answers
Are ReLUs incapable of solving certain problems?
Background
I've been interested in and reading about neural networks for several years, but I haven't gotten around to testing them out until recently.
Both for fun and to increase my understanding, I tried to write a class library from scratch in…

Benjamin Chambers (221)
8 votes · 2 answers
What is the relation between the context in contextual bandits and the state in reinforcement learning?
Conceptually, in general, how is the context being handled in contextual bandits (CB), compared to states in reinforcement learning (RL)?
Specifically, in RL, we can use a function approximator (e.g. a neural network) to generalize to other states.…

Maxim Volgin (183)
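On the relation itself: a contextual bandit can be read as an MDP whose episodes last a single step, so the learning target is the immediate reward alone and no next-state value is bootstrapped. A minimal ε-greedy sketch with a made-up two-context, three-arm problem (the contexts, reward means, and noise level are invented for illustration):

```python
import random

random.seed(1)
N_CONTEXTS, N_ARMS = 2, 3

# Hypothetical expected rewards: the best arm differs per context.
MEANS = [[0.1, 0.5, 0.9],   # context 0: arm 2 is best
         [0.8, 0.2, 0.3]]   # context 1: arm 0 is best

Q = [[0.0] * N_ARMS for _ in range(N_CONTEXTS)]  # per-context value estimates
N = [[0] * N_ARMS for _ in range(N_CONTEXTS)]    # per-context pull counts

for t in range(20000):
    ctx = random.randrange(N_CONTEXTS)        # context is drawn, not influenced
    if random.random() < 0.1:                 # epsilon-greedy exploration
        arm = random.randrange(N_ARMS)
    else:
        arm = max(range(N_ARMS), key=lambda a: Q[ctx][a])
    reward = MEANS[ctx][arm] + random.gauss(0, 0.1)
    N[ctx][arm] += 1
    # Bandit update: the target is the immediate reward alone, with no
    # bootstrapped next-state value, unlike the full RL case.
    Q[ctx][arm] += (reward - Q[ctx][arm]) / N[ctx][arm]

best = [max(range(N_ARMS), key=lambda a: Q[c][a]) for c in range(N_CONTEXTS)]
```

The key structural difference is visible in the update line: a full RL agent would add a discounted next-state value to the target, and the context would then have to carry everything the next state depends on.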
8 votes · 1 answer
Can supervised learning be recast as reinforcement learning problem?
Let's assume that there is a sequence of pairs $(x_i, y_i), (x_{i+1}, y_{i+1}), \dots$ of observations and corresponding labels. Let's also assume that the $x$ is considered as independent variable and $y$ is considered as the variable that depends…

TomR (823)
8 votes · 1 answer
Which machine learning models are universal function approximators?
The universal approximation theorem states that a feed-forward neural network with a single hidden layer containing a finite number of neurons can approximate any continuous function (provided some assumptions on the activation function are…

nbro (39,006)
7 votes · 1 answer
What makes multi-layer neural networks able to perform nonlinear operations?
As far as I know, a single-layer neural network can only perform linear operations, but multilayered ones can do more.
Also, I recently learned that finite matrices/tensors, which are used in many neural networks, can only represent linear operations.
However,…

KYHSGeekCode (173)
7 votes · 3 answers
Which functions can't neural networks learn efficiently?
There are a lot of papers that show that neural networks can approximate a wide variety of functions. However, I can't find papers that show the limitations of NNs.
What are the limitations of neural networks? Which functions can't neural networks…

user2674414 (199)
7 votes · 2 answers
Is it possible to implement reinforcement learning using a neural network?
I've implemented a reinforcement learning algorithm for an agent to play snappy bird (a shameless cheap ripoff of flappy bird), using a Q-table to store the history for future lookups. It works and eventually achieves perfect convergence…

Jeff Puckett (339)
7 votes · 1 answer
Why does reinforcement learning using a non-linear function approximator diverge when using strongly correlated data as input?
While reading the DQN paper, I found that randomly sampling experiences to learn from reduces divergence in RL with a non-linear function approximator (e.g. a neural network).
So, why does Reinforcement Learning using a non-linear function approximator…

강문주 (71)
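The mechanism this excerpt refers to is experience replay: transitions are stored in a buffer and minibatches are drawn uniformly at random, so consecutive, strongly correlated transitions do not arrive at the network back to back. A minimal sketch of the buffer alone (the class and names are illustrative, not from the DQN paper's code):

```python
import random
from collections import deque

class ReplayBuffer:
    """Fixed-capacity store; uniform sampling breaks temporal correlation."""
    def __init__(self, capacity):
        self.buffer = deque(maxlen=capacity)  # oldest transitions are evicted

    def push(self, transition):
        self.buffer.append(transition)

    def sample(self, batch_size):
        # Uniform sampling without replacement decorrelates the batch.
        return random.sample(list(self.buffer), batch_size)

    def __len__(self):
        return len(self.buffer)

random.seed(0)
buf = ReplayBuffer(capacity=1000)
# Push a strongly correlated stream: consecutive (step, step + 1) pairs.
for t in range(5000):
    buf.push((t, t + 1))

batch = buf.sample(32)
# The buffer holds only the 1000 most recent transitions, and the batch
# mixes time steps instead of returning one consecutive run.
```

Training on such shuffled batches approximates i.i.d. sampling from the buffer's distribution, which is closer to the setting in which stochastic gradient descent is well behaved than learning from the raw trajectory is.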
7 votes · 2 answers
Why don't people use projected Bellman error with deep neural networks?
Projected Bellman error has been shown to be stable with linear function approximation. The technique is not at all new, so I can only wonder why it is not adopted for use with non-linear function approximation (e.g. DQN)? Instead, a less…

Phizaz (510)
6 votes · 1 answer
Is there a way of converting a neural network to another one that represents the same function?
I have read the paper Neural Turing Machines and the paper On the Computational Power of Neural Nets about the computational power of neural networks. However, one thing still isn't clear to me.
Is there a way of converting a neural network to…

ViniciusArruda (169)