I’m coding a neural network in C for an OCR project. Before testing it on character recognition, I’m making it learn the XOR operation. However, the results I’m getting always converge to 0.5 instead of 1 or 0, for all input combinations.
I’m using a learning rate of 0.35 and the standard sigmoid activation function, 1/(1+exp(-x)). The network has 2 inputs, 1 bias neuron, 1 hidden layer with 2 neurons, and 1 output neuron.
The learning algorithm looks like this:
initialise network with random weights
while the network hasn't learned all input combinations to within 5% error do:
    compute the output for each input combination
    run the backpropagation algorithm for each combination
endwhile
I’ve tried changing the learning rate and the activation function, but neither solved the issue. Why does the network keep converging to 0.5, and how can I fix that?