I am familiar with the neural networks that are popular in deep learning today: models with trainable weights that are fit by gradient descent.
However, I found many papers that were popular in the 1980s and 1990s with titles like "Neural networks for solving optimization problems". As far as I can tell, Hopfield was the first to use the name this way: he and Tank used a "neural network" to solve optimization problems [1]. Later, Kennedy et al. used a "neural network" to solve nonlinear programming problems [2].
Here is how I would summarize the differences between today's neural networks and these classical "neural networks":

- They have no weight or bias parameters to train or learn from data; the weights are fixed by the problem being solved.
- The model is presented as an analog circuit diagram.
- The model reduces to a system of ODEs and has a Lyapunov (energy) function as its objective (I try to write this out below).
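If I read Hopfield's 1984 "graded response" paper correctly, the model (assuming unit capacitances and resistances for simplicity) is

$$\frac{du_i}{dt} = -u_i + \sum_j T_{ij}\, g(u_j) + I_i, \qquad V_i = g(u_i),$$

with the energy

$$E = -\frac{1}{2}\sum_{i,j} T_{ij} V_i V_j - \sum_i I_i V_i + \sum_i \int_0^{V_i} g^{-1}(V)\, dV,$$

which is non-increasing along trajectories as long as $T$ is symmetric, so it acts as a Lyapunov function for the dynamics.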
Please take a look at these two papers from the 1980s:

- Neurons with graded response have computational properties like those of two-state neurons (J. J. Hopfield, 1984)
- Neural networks for nonlinear programming (M. P. Kennedy & L. O. Chua) [2]
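To check my understanding numerically, here is a minimal NumPy sketch of such a network (the 3-neuron instance, the tanh activation, and all names are my own illustrative choices, not from the papers):

```python
# Minimal sketch of a continuous ("graded response") Hopfield network,
# integrated with forward Euler. The instance below is illustrative.
import numpy as np

def g(u):
    # Graded-response activation (tanh as an example choice).
    return np.tanh(u)

def energy(T, I, v):
    # Hopfield-style energy: -1/2 v'Tv - I'v + sum_i integral_0^{v_i} g^{-1}(s) ds.
    # For g = tanh, integral_0^v arctanh(s) ds = v*arctanh(v) + 0.5*log(1 - v^2).
    leak = v * np.arctanh(v) + 0.5 * np.log(1.0 - v**2)
    return -0.5 * v @ T @ v - I @ v + leak.sum()

rng = np.random.default_rng(0)
n = 3
A = rng.normal(size=(n, n))
T = 0.5 * (A + A.T)            # symmetric weights: required for E to be a Lyapunov function
np.fill_diagonal(T, 0.0)
I = rng.normal(size=n)         # constant bias currents
u = 0.1 * rng.normal(size=n)   # internal neuron states

dt, steps = 0.01, 2000
energies = []
for _ in range(steps):
    v = g(u)
    u = u + dt * (-u + T @ v + I)   # du/dt = -u + T g(u) + I (unit time constant)
    energies.append(energy(T, I, g(u)))

# E should be non-increasing along the trajectory (up to float tolerance).
print("E non-increasing:", np.all(np.diff(energies) <= 1e-9))
print("final state:", g(u))
```

If I have this right, the energy only ever decreases as the circuit settles, which seems to be what these papers mean by the network "solving" an optimization problem.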
References:

[1]: J. J. Hopfield, D. W. Tank, "'Neural' computation of decisions in optimization problems," Biological Cybernetics 52 (3) (1985) 141–152.
[2]: M. P. Kennedy, L. O. Chua, "Neural networks for nonlinear programming," IEEE Transactions on Circuits and Systems 35 (5) (1988) 554–562.