IMPLEMENTING ANN IN PYTHON
I was searching for Artificial Neural Network (ANN) implementations in Python and came across the following:
- FANN - C library with Python bindings
- PyBrain
- NeuroLab
- PyNN
- BPNN - not a library, but a standalone script by Neil Schemenauer
The XOR problem has some history in the evolution of ANN methods. The XOR function is not linearly separable, so it cannot be realised by a single-layer network.
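A quick illustration of why a single layer fails (my own sketch, not taken from any of the libraries above): the classic perceptron learning rule never converges on XOR, because no straight line separates {(0,0), (1,1)} from {(0,1), (1,0)}.

# A single-layer perceptron trained on XOR; since the classes are not
# linearly separable, the error count never reaches zero.
data = [((0, 0), 0), ((0, 1), 1), ((1, 0), 1), ((1, 1), 0)]
w, b, lr = [0.0, 0.0], 0.0, 0.1

for epoch in range(100):
    errors = 0
    for (x1, x2), target in data:
        out = 1 if w[0] * x1 + w[1] * x2 + b > 0 else 0
        err = target - out
        if err:
            errors += 1
            w[0] += lr * err * x1
            w[1] += lr * err * x2
            b += lr * err
    if errors == 0:
        break

print("misclassifications after training:", errors)  # stays above zero for XOR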
TINKERING WITH SCHEMENAUER'S CODE
Schemenauer's code has default training values for a 2-input XOR gate.
Schemenauer recommends using a (2,2,1) network (viz. a network with two input nodes, two hidden nodes, and one output node), and the output is very much as desired, within the error limits of the ANN.
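For the record, here is roughly how such a run is set up, assuming the interface of bpnn.py as I remember it (an NN(ni, nh, no) class with train() and test() methods); treat this as a sketch and check your copy of the script:

# Assumes Schemenauer's bpnn.py is on the path; interface recalled
# from the original script, not verified against every version.
from bpnn import NN

# XOR truth table in bpnn.py's [inputs, targets] pattern format
pat = [
    [[0, 0], [0]],
    [[0, 1], [1]],
    [[1, 0], [1]],
    [[1, 1], [0]],
]

n = NN(2, 2, 1)   # (2,2,1): two input, two hidden, one output node
n.train(pat)      # defaults, if memory serves: 1000 iterations, N=0.5, M=0.1
n.test(pat)       # prints each input pattern alongside the network's output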
XOR Output for a (2,2,1) Back Propagation Neural Network:
([0, 0], '==', [0.025608579041218795])
([0, 1], '==', [0.98184578447794768])
([1, 0], '==', [0.98170742564066216])
([1, 1], '==', [-0.021030064439813451])
However, playing around with the number of hidden nodes has interesting results.
XOR Output for a (2,1,1) Back Propagation Neural Network:
([0, 0], '==', [0.0020536886211772179])
([0, 1], '==', [0.68437587415369783])
([1, 0], '==', [0.68413753288547252])
([1, 1], '==', [0.6856616998850974])
The output of (2,1,1) clearly confirms the XOR problem! With only one hidden node, the output is a monotone function of a single weighted sum of the inputs, so the network cannot produce the low-high-high-low pattern that XOR demands.
Increasing the number of hidden nodes indiscriminately leads to anomalous output.
As an example, XOR Output for a (2,25,1) Back Propagation Neural Network:
([0, 0], '==', [0.99999643777993841])
([0, 1], '==', [0.99999911082329096])
([1, 0], '==', [0.99999280130316026])
([1, 1], '==', [0.99999824824488848])
Anomalous behaviour comes into play from about 12 hidden nodes.
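This observation comes from simply re-running the script with different hidden-node counts; a hedged sketch of that sweep, reusing the assumed bpnn.py interface from earlier:

from bpnn import NN

pat = [
    [[0, 0], [0]],
    [[0, 1], [1]],
    [[1, 0], [1]],
    [[1, 1], [0]],
]

# Sweep the hidden-node count and eyeball where the outputs saturate;
# NN(2, nh, 1) and train()/test() are assumed from Schemenauer's script.
for nh in (1, 2, 12, 25):
    print("hidden nodes:", nh)
    n = NN(2, nh, 1)
    n.train(pat)
    n.test(pat)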