**SCHEMENAUER AND THE XOR GATE**

__IMPLEMENTING ANN IN PYTHON__

I was searching for Artificial Neural Network (ANN) implementations in Python. I came across the following:

- FANN - C library with python bindings
- PyBrain
- NeuroLab
- PyNN
- BPNN - not a library, but a standalone script by Neil Schemenauer

__THE XOR PROBLEM__

The XOR problem has some history in the evolution of ANN methods: the XOR function is not linearly separable, so it cannot be realised by a network with no hidden layer.
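To see this concretely — a sketch of my own, not from the post — a brute-force scan over a coarse grid of weights finds a linear threshold unit that realises OR, but none that realises XOR. (The grid scan is only a sanity check; for XOR the impossibility can be proved exactly by adding the four inequalities.)

```python
from itertools import product

def separable(truth):
    """Check whether some linear threshold unit (w1*x1 + w2*x2 + b > 0)
    reproduces a 2-input truth table, by scanning a coarse weight grid."""
    grid = [i / 2 for i in range(-8, 9)]  # weights and bias in [-4, 4]
    for w1, w2, b in product(grid, repeat=3):
        if all((w1 * x1 + w2 * x2 + b > 0) == bool(t)
               for (x1, x2), t in truth.items()):
            return True
    return False

XOR = {(0, 0): 0, (0, 1): 1, (1, 0): 1, (1, 1): 0}
OR  = {(0, 0): 0, (0, 1): 1, (1, 0): 1, (1, 1): 1}

print(separable(OR))   # True - e.g. w1=1, w2=1, b=-0.5 works
print(separable(XOR))  # False - no single threshold unit realises XOR
```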

__TINKERING WITH SCHEMENAUER'S CODE__

Schemenauer's script ships with default training data for a two-input XOR gate.

Schemenauer recommends using a (2,2,1) network (viz. a network with two input, two hidden, and one output node), and the output is very much as desired, within the error limits of the ANN.

XOR Output for a (2,2,1) Back Propagation Neural Network:

([0, 0], '==', [0.025608579041218795])

([0, 1], '==', [0.98184578447794768])

([1, 0], '==', [0.98170742564066216])

([1, 1], '==', [-0.021030064439813451])
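For readers without Schemenauer's script to hand, here is a minimal from-scratch sketch of the same idea: a (2,2,1) sigmoid network trained with plain gradient-descent backpropagation. The function names, learning rate, and epoch count below are my own choices, not Schemenauer's (his script also uses momentum, which this sketch omits).

```python
import math
import random

# XOR training patterns: inputs and target output.
PATTERNS = [([0, 0], 0), ([0, 1], 1), ([1, 0], 1), ([1, 1], 0)]

def sigmoid(x):
    return 1.0 / (1.0 + math.exp(-x))

def train_xor(nh=2, lr=0.5, epochs=15000, seed=0):
    """Train a (2, nh, 1) sigmoid network on XOR; return a predict function."""
    rnd = random.Random(seed)
    wih = [[rnd.uniform(-1, 1) for _ in range(2)] for _ in range(nh)]  # input->hidden
    bh = [rnd.uniform(-1, 1) for _ in range(nh)]                       # hidden biases
    who = [rnd.uniform(-1, 1) for _ in range(nh)]                      # hidden->output
    bo = rnd.uniform(-1, 1)                                            # output bias

    def forward(x):
        h = [sigmoid(sum(wih[j][i] * x[i] for i in range(2)) + bh[j])
             for j in range(nh)]
        o = sigmoid(sum(who[j] * h[j] for j in range(nh)) + bo)
        return h, o

    for _ in range(epochs):
        for x, t in PATTERNS:
            h, o = forward(x)
            do = (t - o) * o * (1 - o)                        # output delta (sigmoid')
            dh = [do * who[j] * h[j] * (1 - h[j]) for j in range(nh)]
            for j in range(nh):
                who[j] += lr * do * h[j]
                bh[j] += lr * dh[j]
                for i in range(2):
                    wih[j][i] += lr * dh[j] * x[i]
            bo += lr * do

    return lambda x: forward(x)[1]

# Backprop on XOR can stall in a local minimum, so retry a few seeds.
predict = None
for s in range(10):
    p = train_xor(seed=s)
    if all(abs(p(x) - t) < 0.1 for x, t in PATTERNS):
        predict = p
        break
if predict is None:
    predict = p  # fall back to the last attempt

for x, t in PATTERNS:
    print(x, '->', round(predict(x), 3))
```

Passing `nh=1` or `nh=25` to `train_xor` reproduces the experiments below.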

However, playing around with the number of hidden nodes has interesting results.

The output of a (2,1,1) network clearly confirms the XOR problem!

([0, 0], '==', [0.0020536886211772179])

([0, 1], '==', [0.68437587415369783])

([1, 0], '==', [0.68413753288547252])

([1, 1], '==', [0.6856616998850974])

Increasing the number of hidden nodes indiscriminately leads to anomalous output.

As an example, XOR Output for a (2,25,1) Back Propagation Neural Network:

([0, 0], '==', [0.99999643777993841])

([0, 1], '==', [0.99999911082329096])

([1, 0], '==', [0.99999280130316026])

([1, 1], '==', [0.99999824824488848])

Anomalous behaviour sets in from about 12 hidden nodes.

