For context, I am writing a series of neural network programs using nothing but NumPy, for educational purposes. I am trying to keep them as simple as possible, i.e. leaving out features that might help training or testing but that aren't strictly part of the algorithm.
I am currently writing a Perceptron program, and everywhere I look it seems to be implemented differently. The program I have written works; I am more concerned with whether it is faithful to what a Perceptron actually is (again, this is for learning purposes). For example, some implementations apply an activation function and others don't: does the Perceptron use an activation function? I know it will work either way, but which is correct?
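For reference, this is the sort of thing I mean: some of the implementations I've found pass the weighted sum through a step function, roughly like the sketch below. This is not my actual code, and the function name is just for illustration.

```python
import numpy as np

# Sketch of the variant I've seen elsewhere: the weighted sum is
# passed through a Heaviside step function so the prediction is 0 or 1.
def forward_with_step(weights, bias, input):
    weighted_sum = np.dot(input, weights) + bias
    return np.where(weighted_sum >= 0, 1, 0)  # step activation
```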
Here is the code I have right now. Please correct any and all mistakes I may have made.
```python
##### Imports #####
import numpy as np

##### Perceptron Class #####
class Perceptron:
    def __init__(self, input_size, num_epochs, learning_rate):
        # Hyperparameters
        self.learning_rate = learning_rate
        self.num_epochs = num_epochs
        # Network
        self.weights = np.zeros(input_size)
        self.bias = 0

    # Forward Propagation
    def forward(self, input):
        layer_output = np.dot(input, self.weights) + self.bias
        return layer_output

    # Back Propagation
    def backward(self, error, input):
        self.weights += self.learning_rate * error * np.array(input)
        self.bias += self.learning_rate * error

    # Train
    def train(self, inputs, labels):
        # Iterate Epochs
        for _ in range(self.num_epochs):
            # Iterate Pairs of Inputs and Labels
            for input, label in zip(inputs, labels):
                # Predict and Update
                prediction = self.forward(input)
                self.backward(label - prediction, input)

    # Test
    def test(self, inputs, labels):
        # Iterate Pairs of Inputs and Labels
        for input, label in zip(inputs, labels):
            # Predict
            prediction = self.forward(input)
            # Print
            print(f'Input: {input}, Prediction: {prediction}, Label: {label}')

# Initialize Perceptron
perceptron = Perceptron(input_size=2, num_epochs=1_000, learning_rate=0.01)

##### Training #####
training_inputs = [[1, 1], [1, 0], [0, 1], [0, 0]]
training_labels = [1, 0, 0, 0]
perceptron.train(training_inputs, training_labels)

##### Testing #####
testing_inputs = [[1, 1], [0, 1]]
testing_labels = [1, 0]
perceptron.test(testing_inputs, testing_labels)
```