In a comment on this question, user nbro writes:
As a side note, "perceptrons" and "neural networks" may not be the same thing. People usually use the term perceptron to refer to a very simple neural network that has no hidden layer. Maybe you meant the term "multi-layer perceptron" (MLP).
As I understand it, a simple neural network with no hidden layer is just a linear model with a non-linearity applied on top. That sounds exactly like a generalized linear model (GLM), with the non-linearity being the inverse of the GLM's link function.
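To make the correspondence concrete, here is a minimal sketch (using only NumPy, on made-up toy data) of a no-hidden-layer network with a sigmoid output trained by gradient descent on cross-entropy loss. This is exactly maximum-likelihood logistic regression, i.e. a GLM with the logit link, whose inverse link is the sigmoid:

```python
import numpy as np

# A "perceptron" with no hidden layer and a sigmoid output unit is
# logistic regression: a GLM with the logit link (sigmoid = inverse link).
# The toy data, learning rate, and iteration count are illustrative choices.

rng = np.random.default_rng(0)

# Toy binary classification data: two well-separated Gaussian blobs.
X = np.vstack([rng.normal(-2, 1, (50, 2)), rng.normal(2, 1, (50, 2))])
y = np.concatenate([np.zeros(50), np.ones(50)])

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

# Single-layer "network": output = sigmoid(X w + b).
w = np.zeros(2)
b = 0.0
lr = 0.1
for _ in range(500):
    p = sigmoid(X @ w + b)           # predicted P(y = 1 | x)
    grad_w = X.T @ (p - y) / len(y)  # gradient of mean cross-entropy...
    grad_b = np.mean(p - y)          # ...which is also the GLM score equation
    w -= lr * grad_w
    b -= lr * grad_b

preds = (sigmoid(X @ w + b) > 0.5).astype(float)
accuracy = np.mean(preds == y)
```

The gradient of the cross-entropy loss with respect to the weights has the same form as the score equations solved in GLM fitting, which is why the two procedures converge to the same parameters.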
Is there a notable difference between (single-layer) perceptrons and GLMs? Or is this simply another case of two equivalent methods acquiring different names in different research communities?