By: Neil E. Cotter
Neural Networks

Perceptrons
Proportional increment training |
|
|
|
|
|
|
Tool: Proportional Increment Training (PIT) is a learning algorithm for updating the synaptic weights w_j in a perceptron (shown below). The PIT equation for weight updates is a modified form of gradient descent that treats the step-function squashing function as the identity function y = s when calculating a derivative:

  Δw_j = η (d − y) x_j
Derive: The derivation proceeds initially as in modified gradient descent learning:

  Δw_j = −η ∂E/∂w_j = η (d − y) u′(s) x_j

where

  E = ½(d − y)² = squared output error
  s = Σ_j w_j x_j = weighted sum of inputs
  y = u(s) = step-function output of the perceptron
Part way through the derivation, however, we replace the derivative of the step function, u(s), (which would be zero almost everywhere), with the derivative of an identity squashing function, y = s, which is unity:

  Δw_j = η (d − y) · 1 · x_j = η (d − y) x_j

where

  d = desired (target) output for the current training pattern
  y = actual perceptron output
  x_j = j-th input value
  η = learning-rate constant
Note: Typically, η = 1 is used.
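
As a concrete illustration, here is a minimal Python sketch of a single PIT weight update, assuming the common convention that x[0] = 1 so that w[0] acts as the bias weight (the function name and calling convention are illustrative, not from the notes above):

```python
import numpy as np

def pit_update(w, x, d, eta=1.0):
    """One PIT step for a single training pattern (illustrative sketch).

    w   : weight vector; w[0] is the bias weight
    x   : input vector with x[0] = 1 for the bias
    d   : desired output (0 or 1)
    eta : learning rate, typically 1
    """
    s = np.dot(w, x)               # weighted sum s = sum_j w_j x_j
    y = 1.0 if s >= 0 else 0.0     # step-function output y = u(s)
    return w + eta * (d - y) * x   # Delta w_j = eta (d - y) x_j
```

For example, with w = (0, −1, 1), x = (1, 2, 1), and d = 1, the weighted sum is s = −1, so y = 0 and the updated weights are w = (1, 1, 2).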
Initial weight values are chosen at random in the PIT algorithm. If the training patterns are linearly separable, the PIT algorithm is guaranteed to converge after a finite number of weight updates (the perceptron convergence theorem).
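
The finite-convergence claim can be checked empirically. The sketch below trains a perceptron with PIT on a small synthetic linearly separable data set; the data, random seed, and epoch cap are assumptions chosen for illustration. Training halts when a full pass over the patterns produces no errors, at which point the weights no longer change:

```python
import numpy as np

rng = np.random.default_rng(0)

# Illustrative linearly separable data: class 0 near (0, 0), class 1 near (3, 3).
X = np.vstack([rng.normal(0.0, 0.5, (20, 2)),
               rng.normal(3.0, 0.5, (20, 2))])
d = np.array([0.0] * 20 + [1.0] * 20)
X = np.hstack([np.ones((40, 1)), X])      # prepend bias input x_0 = 1

w = rng.normal(size=3)                    # random initial weights
eta = 1.0                                 # learning rate (eta = 1, as in the notes)

for epoch in range(1000):
    errors = 0
    for x, target in zip(X, d):
        y = 1.0 if w @ x >= 0 else 0.0    # step-function output
        if y != target:
            w += eta * (target - y) * x   # PIT update: Delta w = eta (d - y) x
            errors += 1
    if errors == 0:                       # boundary correct: updates cease
        print(f"Converged after {epoch + 1} epochs; w = {w}")
        break
```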
The first figure below shows the decision boundary (where the perceptron output changes from 0 to 1) before and after training to distinguish zeros (o's) from ones (x's). The second figure shows the decision boundaries that occur during training. Note that the final decision boundary also appears in the second figure, and once the decision boundary is in the correct position, weight updates cease because the errors are all zero.
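
For reference, the decision boundary drawn in such figures can be computed directly from the weights: with bias weight w_0 and two inputs, the output flips where w_0 + w_1 x_1 + w_2 x_2 = 0. A small hypothetical helper for finding two points on that line (assuming w_2 ≠ 0, i.e., a non-vertical boundary) might look like:

```python
def boundary_endpoints(w, x1_min, x1_max):
    """Endpoints of the line w[0] + w[1]*x1 + w[2]*x2 = 0 over [x1_min, x1_max].

    Illustrative helper; assumes w[2] != 0 so the line can be written
    as x2 in terms of x1.
    """
    def x2(x1):
        return -(w[0] + w[1] * x1) / w[2]
    return (x1_min, x2(x1_min)), (x1_max, x2(x1_max))
```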