This demo visualizes the evolution of the sum-squared error during the training phase of a multilayer perceptron (MLP) with the online back-propagation learning algorithm. The training patterns have binary values (0 or 1), depicted as red and blue dots respectively; clicking on a dot toggles its value, so you can edit the training set at will. The demo also lets you select the number of hidden units and the learning rate. In this example the MLP has two input units and a single output unit, which makes it suitable for experimenting with binary Boolean problems (AND, OR, XOR, NOR, NAND, etc.). Please note that with a single hidden unit the MLP effectively degenerates to a simple perceptron, whose discriminant function is linear, so it cannot learn a nonlinearly separable problem such as XOR. Try it.
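For readers curious about what runs behind the plot, here is a minimal sketch of online back-propagation for a 2-input, 1-output MLP with one hidden layer of sigmoid units, recording the sum-squared error after each epoch. This is an illustrative implementation under plain assumptions (sigmoid activations, weights initialized uniformly in [-1, 1]), not the demo's actual source; the function name `train_mlp` and its parameters are hypothetical.

```python
import math
import random

def sigmoid(x):
    return 1.0 / (1.0 + math.exp(-x))

def train_mlp(patterns, n_hidden=2, lr=0.5, epochs=5000, seed=0):
    """Online back-propagation on a 2-input, 1-output MLP.

    patterns: list of ((x1, x2), target) pairs with binary values.
    Returns the sum-squared error per epoch plus the learned weights.
    (Hypothetical sketch -- not the demo's actual code.)
    """
    rnd = random.Random(seed)
    # Hidden layer: n_hidden units, each with 2 input weights + 1 bias.
    w_h = [[rnd.uniform(-1, 1) for _ in range(3)] for _ in range(n_hidden)]
    # Output unit: n_hidden weights + 1 bias (last entry).
    w_o = [rnd.uniform(-1, 1) for _ in range(n_hidden + 1)]

    sse_history = []
    for _ in range(epochs):
        sse = 0.0
        for (x1, x2), t in patterns:
            # Forward pass.
            h = [sigmoid(w[0] * x1 + w[1] * x2 + w[2]) for w in w_h]
            y = sigmoid(sum(wi * hi for wi, hi in zip(w_o, h)) + w_o[-1])
            e = t - y
            sse += e * e
            # Backward pass: deltas use the sigmoid derivative y*(1-y),
            # computed before any weights are updated.
            d_o = e * y * (1 - y)
            d_h = [d_o * w_o[j] * h[j] * (1 - h[j]) for j in range(n_hidden)]
            # Online update: weights change after every single pattern,
            # which is what distinguishes this from batch back-propagation.
            for j in range(n_hidden):
                w_o[j] += lr * d_o * h[j]
                w_h[j][0] += lr * d_h[j] * x1
                w_h[j][1] += lr * d_h[j] * x2
                w_h[j][2] += lr * d_h[j]
            w_o[-1] += lr * d_o
        sse_history.append(sse)
    return sse_history, w_h, w_o
```

Plotting `sse_history` reproduces the kind of error curve the demo animates; with `n_hidden=1` the curve for XOR typically plateaus, while two or more hidden units allow it to fall toward zero.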