
When does a layer (either the first or a hidden layer) output negative values, such that the use of ReLU is justified?

As far as I know, features are never negative, nor are they converted to negative values in any other type of layer.

Is it that we can use ReLU with an "inflection" point other than zero, so that the neuron starts giving a linear response only after this "new zero"?

sujeto1

1 Answer


The fact that the features are always positive does not guarantee that the outputs of hidden layers are positive too.

Due to multiplication, the output of a hidden layer can contain negative values; that is, a hidden layer can contain weights whose signs are opposite to those of its input. Remember that only the layer outputs, not the weights, are passed through ReLU, so the weights of a model can contain negative values.
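As a minimal sketch of this (using NumPy, with made-up numbers), a strictly positive input multiplied by a weight matrix that happens to contain negative weights already produces negative pre-activations, which ReLU then zeroes out:

```python
import numpy as np

# Hypothetical positive input features (e.g. pixel intensities)
x = np.array([0.5, 1.0, 2.0])

# A weight matrix with mixed signs, as learned weights typically have
W = np.array([[ 0.3, -0.8,  0.1],
              [-0.5,  0.2,  0.4]])
b = np.array([0.0, -1.5])

# Pre-activation of the hidden layer: negative entries appear
# even though every input feature is positive
z = W @ x + b
print(z)            # [-0.45 -0.75]

# ReLU is applied to the layer output, not to the weights
relu = lambda v: np.maximum(v, 0.0)
print(relu(z))      # [0. 0.]
```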

SpiderRico