
If you switch a neural network from real weights to complex weights, you're roughly doubling the size of the network, and increasing the computational load by a factor of 2 to 4. My question is, in general, roughly how does the benefit of using complex weights stack up to those extra costs? E.g. Will a complex neural network with half the weights achieve worse/comparable/better performance than a regular network with real weights?

In audio signal processing, complex numbers make the theory much more elegant, which is why I imagine using complex numbers might be disproportionately beneficial. Though I can also imagine the complexity they introduce might overly hinder things as well.

As far as I know, no one uses complex weights in NNs (which must be for a reason), but I'd like a more definitive answer.

chausies
    related: https://ai.stackexchange.com/questions/7247/why-do-we-need-floats-for-using-neural-networks. Also see this paper: https://arxiv.org/abs/1705.09792 And it looks like someone has already asked the same question on Data Science Stack Exchange: https://datascience.stackexchange.com/questions/28676/are-there-neural-networks-packages-that-use-complex-numbers – SpiderRico Feb 20 '22 at 06:22

1 Answer


Note that using complex weights doubles the number of real parameters, since each complex weight stores a real and an imaginary part. In general, a network with $n$ complex nodes has a training cost roughly equivalent to that of a network with $2n$ real nodes, and each complex multiplication costs four real multiplications.
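A minimal sketch of this bookkeeping (the layer sizes here are arbitrary): one complex dense layer can be emulated entirely with real arithmetic, which makes the doubled parameter count and the four-real-multiplies cost of $(a+bi)(u+vi) = (au-bv) + (av+bu)i$ explicit.

```python
import numpy as np

rng = np.random.default_rng(0)
n_in, n_out = 8, 4

# Complex layer: n_out * n_in complex weights = 2 * n_out * n_in real parameters.
W = rng.standard_normal((n_out, n_in)) + 1j * rng.standard_normal((n_out, n_in))
x = rng.standard_normal(n_in) + 1j * rng.standard_normal(n_in)

# The same computation using only real arrays: four real matmuls in total.
A, B = W.real, W.imag          # real and imaginary weight matrices
u, v = x.real, x.imag          # real and imaginary input parts
y_real = A @ u - B @ v         # real part of W @ x
y_imag = A @ v + B @ u         # imaginary part of W @ x

assert np.allclose(W @ x, y_real + 1j * y_imag)
print("real parameters:", 2 * W.size)   # twice the complex weight count
```

The two formulations compute identical outputs; the complex version is just a more compact notation for a constrained pair of real layers.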

Regarding audio: it is true that much of the analysis is done in Fourier/Laplace domains, where complex numbers are unavoidable. However, the incoming signal is real-valued, and the filters can be kept in the real domain as well. A pipeline consisting of a DFT followed by a complex-valued NN is therefore uncommon.
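The key property behind this is that the DFT of a real signal is conjugate-symmetric, so half the spectrum carries all the information. A short sketch (the signal here is a made-up example) shows why real-valued features such as magnitudes are usually extracted before feeding an ordinary real-weight network:

```python
import numpy as np

# A real-valued test signal: two sinusoids sampled at 64 points.
t = np.linspace(0, 1, 64, endpoint=False)
signal = np.sin(2 * np.pi * 5 * t) + 0.5 * np.cos(2 * np.pi * 12 * t)

full = np.fft.fft(signal)    # length 64, complex
half = np.fft.rfft(signal)   # length 33: the non-redundant half-spectrum

# Conjugate symmetry of a real signal's DFT: X[k] == conj(X[N-k]).
assert np.allclose(full[1:], np.conj(full[1:][::-1]))

# Typical real features for a real-weight NN: magnitude (and optionally phase).
features = np.abs(half)
```

Because `rfft` already discards the redundant half, a real network on magnitude/phase features sees no less information than a complex network on the full spectrum would.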

Finally, note that one of the main advantages of NNs is their non-linear structure and capabilities. The usual transforms and related tools are restricted to linear operations; in this respect, NNs go a step further.

David Hoelzer
pasaba por aqui