COGS Q350 - Making Neural Networks Learn

To train a neural network, we pick a random row of a truth table and let the network compute the truth value for that row. If the network makes a mistake, we modify the weights; if no error is made, the weights stay the same. Then we randomly choose another row and repeat the process.

 x1   x2   Desired Output   Output   outError   What We Will Do      outError * (x1,x2)
  1    1         1            -1         1      Raise both weights        (1,1)
  1    1        -1             1        -1      Lower both weights        (-1,-1)
  1   -1         1            -1         1      Raise w1, lower w2        (1,-1)
  1   -1        -1             1        -1      Lower w1, raise w2        (-1,1)
 -1    1         1            -1         1      Lower w1, raise w2        (-1,1)
 -1    1        -1             1        -1      Raise w1, lower w2        (1,-1)
 -1   -1         1            -1         1      Lower both weights        (-1,-1)
 -1   -1        -1             1        -1      Raise both weights        (1,1)
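The last two columns of the table follow mechanically from the formula outError = (desiredOutput - output) / 2. A quick sketch (in Python rather than the handout's Mathematica-style notation) reproduces them:

```python
# Reproduce the table's outError and update-direction columns: for each
# (x1, x2, desiredOutput, output) combination, outError = (desired - output)/2,
# and the weight change points in the direction outError * (x1, x2).
rows = [
    (1, 1, 1, -1), (1, 1, -1, 1),
    (1, -1, 1, -1), (1, -1, -1, 1),
    (-1, 1, 1, -1), (-1, 1, -1, 1),
    (-1, -1, 1, -1), (-1, -1, -1, 1),
]
directions = []
for x1, x2, desired, output in rows:
    out_error = (desired - output) // 2          # always +1 or -1 here
    directions.append((out_error * x1, out_error * x2))
print(directions)
```

Note that raising a weight on a positive input and lowering it on a negative input are the same operation: adding outError * xi.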

Start with randomly generated weights, w1 and w2, in the range [-1, 1].
outError = (desiredOutput - output) / 2

Repeat the following:

Randomly choose a row in the truth table.
Calculate net = (w1,w2) * (x1,x2) = w1*x1 + w2*x2 (the dot product of the weight and input vectors).
If net is greater than the threshold value, then output 1, otherwise output -1. That is, output = If[net > threshold, 1, -1].
If output is not equal to desiredOutput, calculate new weights:
The formula for the new weights is:
(w1,w2) += learnrate*outError*(x1,x2)

where outError = (desiredOutput - output) / 2. So
(w1,w2) += learnrate * [(desiredOutput - output) / 2] * (x1,x2)

We continue iterating until the output equals the desired output for all 4 rows of the truth table.
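The procedure above can be sketched in Python. This is only an illustration under the handout's conventions; the 1000-iteration cap is our addition (not part of the procedure) so the sketch always terminates:

```python
import random

# Sketch of the training procedure, using the bipolar OR truth table from the
# example below. The 1000-iteration cap is an added safeguard, not part of
# the procedure itself.
truth_table = [((1, 1), 1), ((1, -1), 1), ((-1, 1), 1), ((-1, -1), -1)]
threshold, learnrate = 0.4, 0.2

random.seed(0)                                   # for reproducibility
w1, w2 = random.uniform(-1, 1), random.uniform(-1, 1)
start = (w1, w2)

def output_for(x1, x2):
    net = w1 * x1 + w2 * x2                      # net = (w1,w2) * (x1,x2)
    return 1 if net > threshold else -1

for _ in range(1000):
    if all(output_for(x1, x2) == d for (x1, x2), d in truth_table):
        break                                    # all rows correct: done
    (x1, x2), desired = random.choice(truth_table)   # pick a random row
    output = output_for(x1, x2)
    if output != desired:
        out_error = (desired - output) / 2       # +1 or -1
        w1 += learnrate * out_error * x1         # (w1,w2) += learnrate*outError*(x1,x2)
        w2 += learnrate * out_error * x2
```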

Example: Teach a neural network to output truth values for disjunction.

Let w1 and w2 be randomly chosen weights, say w1 = -0.1 and w2 = 0.4.
Let the threshold value = 0.4
Let the learnrate = 0.2
• Iteration 1:
Randomly choose a row in the truth table, say row 2. Then (x1,x2) = (1, -1). The desired output is 1.
Calculate net = (w1,w2) * (x1,x2) = (-0.1, 0.4) * (1, -1) = (-0.1)(1) + (0.4)(-1) = -0.5
The net value -0.5 is not greater than the threshold value 0.4, so output = -1.
Since output is not equal to the desiredOutput, we calculate new weights. In this case we want to raise w1 and lower w2.
(w1,w2) += learnrate * outError * (x1,x2)

(w1,w2) = (w1,w2) + (learnrate * [(desiredOutput - output) / 2] * (x1,x2))

(w1,w2) = (-0.1,0.4) + ( 0.2 * [(1 - (-1)) / 2] * (1,-1))

(w1,w2) = (-0.1,0.4) + ( 0.2 * 1 * (1,-1))

(w1,w2) = (-0.1,0.4) + ( 0.2 * (1,-1))

(w1,w2) = (-0.1,0.4) + ( 0.2, -0.2)

(w1,w2) = (0.1,0.2)
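The arithmetic of this iteration can be checked with a short sketch (again in Python, by our choice):

```python
# Check iteration 1: starting weights (-0.1, 0.4), row (x1, x2) = (1, -1)
# with desired output 1, threshold 0.4, learnrate 0.2.
w1, w2 = -0.1, 0.4
x1, x2, desired = 1, -1, 1
threshold, learnrate = 0.4, 0.2

net = w1 * x1 + w2 * x2                  # -0.1 - 0.4 = -0.5
output = 1 if net > threshold else -1    # -0.5 is not > 0.4, so output = -1
out_error = (desired - output) / 2       # (1 - (-1)) / 2 = 1.0
w1 += learnrate * out_error * x1         # -0.1 + 0.2 = 0.1
w2 += learnrate * out_error * x2         #  0.4 - 0.2 = 0.2
print(round(w1, 10), round(w2, 10))      # rounded to hide float noise
```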

• Iteration 2:
Randomly choose a row in the truth table, say row 1. Then (x1,x2) = (1, 1). The desired output is 1.
Calculate net = (w1,w2) * (x1,x2) = (0.1, 0.2) * (1, 1) = (0.1)(1) + (0.2)(1) = 0.3
The net value 0.3 is not greater than the threshold value 0.4, so output = -1.
Since output is not equal to the desiredOutput, we calculate new weights. In this case we want to raise w1 and raise w2.
(w1,w2) += learnrate * outError * (x1,x2)

(w1,w2) = (0.1,0.2) + ( 0.2 * [(1 - (-1)) / 2] * (1,1))

(w1,w2) = (0.1,0.2) + ( 0.2 * 1 * (1,1))

(w1,w2) = (0.1,0.2) + ( 0.2 * (1,1))

(w1,w2) = (0.1,0.2) + ( 0.2, 0.2)

(w1,w2) = (0.3,0.4)

• Iteration 3:
Randomly choose a row in the truth table, say row 4. Then (x1,x2) = (-1, -1). The desired output is -1.
Calculate net = (w1,w2) * (x1,x2) = (0.3, 0.4) * (-1, -1) = (0.3)(-1) + (0.4)(-1) = -0.7
The net value -0.7 is not greater than the threshold value 0.4, so output = -1.
Since output is equal to the desiredOutput, we do not calculate new weights.

• Continue with iterations 4, 5, 6, ...
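The three worked iterations can be replayed in code. This sketch fixes the same row order (row 2, then row 1, then row 4) instead of choosing rows at random, so the weight trajectory can be checked against the numbers above:

```python
# Replay iterations 1-3 with the same rows (row 2, row 1, row 4) to confirm
# the weight trajectory (-0.1, 0.4) -> (0.1, 0.2) -> (0.3, 0.4) -> (0.3, 0.4).
threshold, learnrate = 0.4, 0.2

def step(w1, w2, x1, x2, desired):
    """One iteration: update the weights only when the output is wrong."""
    net = w1 * x1 + w2 * x2
    output = 1 if net > threshold else -1
    if output != desired:
        out_error = (desired - output) / 2
        w1 += learnrate * out_error * x1
        w2 += learnrate * out_error * x2
    return w1, w2

w = (-0.1, 0.4)
trajectory = []
for x1, x2, desired in [(1, -1, 1), (1, 1, 1), (-1, -1, -1)]:
    w = step(*w, x1, x2, desired)
    trajectory.append(tuple(round(v, 10) for v in w))
print(trajectory)   # [(0.1, 0.2), (0.3, 0.4), (0.3, 0.4)]
```

The third entry is unchanged because iteration 3 classified its row correctly and made no update.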