Neural Networks
Source Code

This page compiles a collection of neural network source code for hobbyists and researchers to tweak and experiment with.

The source code for networks 1-8 is from Karsten Kutza. More source code is available in this directory.

Each entry below lists the network, its application, and a short description.

1. Adaline Network

Pattern Recognition
Classification of Digits 0-9

The Adaline is essentially a single-layer backpropagation network. It is trained on a pattern-recognition task: classifying bitmap representations of the digits 0-9 into the corresponding ten classes. Due to the limited capabilities of the Adaline, the network only recognizes the exact training patterns. When the application is ported to a multi-layer backpropagation network, a remarkable degree of fault tolerance can be achieved.
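
As an illustration, here is a minimal C sketch of the delta-rule (LMS) update a single Adaline unit performs; the full application would train one such unit per digit class. The 5x7 bitmap size, learning rate, and toy input are assumptions for this example, not values taken from the original program.

#include <stdio.h>

#define N_INPUTS 35   /* assumed 5x7 bitmap resolution */
#define ETA      0.1  /* assumed learning rate */

/* One delta-rule step: w <- w + ETA * (target - output) * x. */
void adaline_step(double w[N_INPUTS + 1], const double x[N_INPUTS],
                  double target)
{
    double net, err;
    int i;

    net = w[N_INPUTS];                     /* bias weight */
    for (i = 0; i < N_INPUTS; i++)
        net += w[i] * x[i];

    err = target - net;                    /* error on the linear output */
    for (i = 0; i < N_INPUTS; i++)
        w[i] += ETA * err * x[i];
    w[N_INPUTS] += ETA * err;
}

int main(void)
{
    double w[N_INPUTS + 1] = { 0.0 };
    double x[N_INPUTS] = { 0.0 };
    int epoch;

    x[0] = x[1] = x[5] = 1.0;              /* toy stand-in for a digit bitmap */
    for (epoch = 0; epoch < 20; epoch++)
        adaline_step(w, x, 1.0);           /* train toward target 1 */
    printf("bias weight after training: %f\n", w[N_INPUTS]);
    return 0;
}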

2. Backpropagation Network

Time-Series Forecasting
Prediction of the Annual Number of Sunspots

This program implements the now-classic multi-layer backpropagation network with bias terms and momentum. It is used to detect structure in a time series, which is presented to the network using a simple tapped delay-line memory. The program learns to predict future sunspot activity from historical data collected over the past three centuries. To avoid overfitting, the termination of the learning procedure is controlled by the so-called stopped-training method.
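
The tapped delay line simply converts the series into (window, target) training pairs for the network. The sketch below shows only that preprocessing step, on a dummy series; the window length of 10 and the series values are illustrative assumptions.

#include <stdio.h>

#define LEN 30  /* length of the toy series */
#define M   10  /* assumed number of taps (input window length) */

int main(void)
{
    double series[LEN];
    int t, i;

    for (t = 0; t < LEN; t++)              /* dummy stand-in for sunspot data */
        series[t] = (double)(t % 11);

    /* Each training pair: the last M values as input,
       the next value as the prediction target. */
    for (t = M; t < LEN; t++) {
        printf("input:");
        for (i = t - M; i < t; i++)
            printf(" %2.0f", series[i]);
        printf("  -> target: %2.0f\n", series[t]);
    }
    return 0;
}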

3. Hopfield Model

Autoassociative Memory
Associative Recall of Images

The Hopfield model is used as an autoassociative memory to store and recall a set of bitmap images. Images are stored by computing a corresponding weight matrix. Thereafter, starting from an arbitrary configuration, the memory settles on the stored image that is nearest to the starting configuration in terms of Hamming distance. Thus, given an incomplete or corrupted version of a stored image, the network is able to recall the corresponding original image.
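
A minimal sketch of both steps, assuming bipolar (+1/-1) patterns: the weight matrix is the sum of outer products of the stored patterns (with no self-connections), and recall repeatedly thresholds each unit until the state stops changing. The two 16-pixel "images" are toy assumptions.

#include <stdio.h>

#define N 16  /* assumed number of units (pixels) */
#define P 2   /* assumed number of stored patterns */

int w[N][N];

/* Store patterns as a sum of outer products (Hebb rule). */
void store(int pat[P][N])
{
    int i, j, p;
    for (i = 0; i < N; i++)
        for (j = 0; j < N; j++) {
            w[i][j] = 0;
            if (i == j)
                continue;                  /* no self-connections */
            for (p = 0; p < P; p++)
                w[i][j] += pat[p][i] * pat[p][j];
        }
}

/* Update units until the state reaches a fixed point. */
void recall(int s[N])
{
    int i, j, net, out, changed = 1;
    while (changed) {
        changed = 0;
        for (i = 0; i < N; i++) {
            net = 0;
            for (j = 0; j < N; j++)
                net += w[i][j] * s[j];
            out = (net >= 0) ? 1 : -1;
            if (out != s[i]) { s[i] = out; changed = 1; }
        }
    }
}

int main(void)
{
    int pat[P][N] = {
        { 1, 1, 1, 1, -1, -1, -1, -1, 1, 1, 1, 1, -1, -1, -1, -1 },
        { 1, -1, 1, -1, 1, -1, 1, -1, 1, -1, 1, -1, 1, -1, 1, -1 }
    };
    int probe[N], i;

    store(pat);
    for (i = 0; i < N; i++)
        probe[i] = pat[0][i];
    probe[0] = -probe[0];                  /* corrupt two pixels */
    probe[5] = -probe[5];

    recall(probe);                         /* settles on pattern 0 */
    for (i = 0; i < N; i++)
        printf("%2d ", probe[i]);
    printf("\n");
    return 0;
}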

4. Bidirectional Associative Memory

Heteroassociative Memory
Association of Names and Phone Numbers

The bidirectional associative memory can be viewed as a generalization of the Hopfield model that implements a heteroassociative memory. In this case, the association is between names and corresponding phone numbers. After the set of exemplars has been encoded, the network, when presented with a name, is able to recall the corresponding phone number, and vice versa. The memory even shows a limited degree of fault tolerance in the case of corrupted input patterns.
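
A sketch under toy assumptions: the pattern pairs are stored as a sum of outer products, and recall bounces activity between the two layers until both are stable. The bipolar codes standing in for names and phone numbers, and the layer sizes, are illustrative only.

#include <stdio.h>

#define NA 8  /* assumed size of layer A (coded name)   */
#define NB 4  /* assumed size of layer B (coded number) */
#define P  2  /* assumed number of stored pairs         */

int w[NA][NB];

/* Weight matrix: sum of outer products of the pattern pairs. */
void store(int a[P][NA], int b[P][NB])
{
    int i, j, p;
    for (i = 0; i < NA; i++)
        for (j = 0; j < NB; j++) {
            w[i][j] = 0;
            for (p = 0; p < P; p++)
                w[i][j] += a[p][i] * b[p][j];
        }
}

/* One pass in each direction; returns 1 if anything changed. */
int bounce(int a[NA], int b[NB])
{
    int i, j, net, out, changed = 0;
    for (j = 0; j < NB; j++) {             /* A -> B */
        net = 0;
        for (i = 0; i < NA; i++)
            net += a[i] * w[i][j];
        out = net > 0 ? 1 : (net < 0 ? -1 : b[j]);   /* hold state on tie */
        if (out != b[j]) { b[j] = out; changed = 1; }
    }
    for (i = 0; i < NA; i++) {             /* B -> A */
        net = 0;
        for (j = 0; j < NB; j++)
            net += w[i][j] * b[j];
        out = net > 0 ? 1 : (net < 0 ? -1 : a[i]);
        if (out != a[i]) { a[i] = out; changed = 1; }
    }
    return changed;
}

int main(void)
{
    int a[P][NA] = {
        { 1, 1, 1, 1, -1, -1, -1, -1 },    /* coded "name" 0 */
        { 1, -1, 1, -1, 1, -1, 1, -1 }     /* coded "name" 1 */
    };
    int b[P][NB] = {
        { 1, -1, 1, -1 },                  /* coded "number" 0 */
        { 1, 1, -1, -1 }                   /* coded "number" 1 */
    };
    int qa[NA], qb[NB] = { 1, 1, 1, 1 }, i;

    store(a, b);
    for (i = 0; i < NA; i++)               /* present name 0 ... */
        qa[i] = a[0][i];
    qa[0] = -qa[0];                        /* ... with one bit corrupted */

    while (bounce(qa, qb))                 /* settle both layers */
        ;
    for (i = 0; i < NB; i++)
        printf("%2d ", qb[i]);             /* recalls number 0 */
    printf("\n");
    return 0;
}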

5. Boltzmann Machine

Optimization
Traveling Salesman Problem

The Boltzmann machine is a stochastic version of the Hopfield model whose network dynamics incorporate a random component governed by a given finite temperature. Starting with a high temperature and gradually cooling down, while allowing the network to reach equilibrium at each step, chances are good that the network will settle into a global minimum of the corresponding energy function. This process is called simulated annealing. The network is then used to solve a well-known optimization problem: the weight matrix is chosen such that the global minimum of the energy function corresponds to a solution of a particular instance of the traveling salesman problem.
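
The sketch below isolates the two stochastic ingredients rather than the traveling-salesman energy function itself: a unit that switches on with probability 1/(1 + exp(-net/T)), and a geometric cooling schedule. The net input, starting temperature, and cooling rate are illustrative assumptions.

#include <stdio.h>
#include <stdlib.h>
#include <math.h>

/* A stochastic unit: on with probability 1 / (1 + exp(-net / T)). */
int stochastic_state(double net, double T)
{
    double p = 1.0 / (1.0 + exp(-net / T));
    return ((double)rand() / RAND_MAX) < p;
}

int main(void)
{
    double T, net = 0.5;                   /* fixed net input, for illustration */
    int on, trial;

    /* Geometric cooling: near-random behaviour at high T,
       nearly deterministic behaviour at low T. */
    for (T = 10.0; T > 0.01; T *= 0.5) {
        on = 0;
        for (trial = 0; trial < 1000; trial++)
            on += stochastic_state(net, T);
        printf("T = %7.3f   P(on) ~ %.2f\n", T, on / 1000.0);
    }
    return 0;
}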

6. Counterpropagation Network

Vision
Determination of the Angle of Rotation

The counterpropagation network is a competitive network designed to function as a self-programming lookup table, with the additional ability to interpolate between entries. The application is to determine the angle of rotation of a rocket-shaped object whose images are presented to the network as bitmap patterns. The performance of the network is somewhat limited by the low resolution of the bitmaps.
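
The lookup-table behaviour reduces to a winner-take-all step: the hidden (Kohonen) unit whose stored prototype best matches the input wins, and the output (Grossberg) layer emits the value stored for that winner. In the sketch below the prototypes, angles, and query are toy assumptions, and the interpolation between the best matches is omitted.

#include <stdio.h>

#define N_IN     4  /* assumed input dimension          */
#define N_HIDDEN 3  /* assumed number of stored entries */

int main(void)
{
    /* Kohonen-layer prototypes and the output value (an angle of
       rotation, in degrees) that each one maps to; all illustrative. */
    double proto[N_HIDDEN][N_IN] = {
        { 1, 0, 0, 0 },
        { 0, 1, 0, 0 },
        { 0, 0, 1, 0 }
    };
    double out[N_HIDDEN] = { 0.0, 90.0, 180.0 };
    double x[N_IN] = { 0.2, 0.9, 0.1, 0.0 };   /* query input */
    double best = -1e9, net;
    int h, i, winner = 0;

    /* Winner-take-all: largest dot product with the input wins. */
    for (h = 0; h < N_HIDDEN; h++) {
        net = 0.0;
        for (i = 0; i < N_IN; i++)
            net += proto[h][i] * x[i];
        if (net > best) { best = net; winner = h; }
    }
    printf("winner = %d, output = %.1f degrees\n", winner, out[winner]);
    return 0;
}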

7. Self-Organizing Map

Control
Pole Balancing Problem

The self-organizing map (SOM) is a competitive network with the ability to form topology-preserving mappings between its input and output spaces. In this program the network learns to balance a pole by applying forces at its base. The behavior of the pole is simulated by numerically integrating its equations of motion using Euler's method. The task of the network is to establish a mapping between the state variables of the pole and the optimal force to keep it balanced. This is done using a reinforcement learning approach: for any given state of the pole, the network tries a slight variation of the mapped force. If the new force results in better control, the map is modified, using the pole's current state variables and the new force as a training vector.
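
For context, here is what one Euler step of such a simulation can look like for a simplified inverted pendulum. The dynamics, the constants, and the hand-written control rule standing in for the learned map are all assumptions for this example, not the equations used in the original program.

#include <stdio.h>
#include <math.h>

#define G  9.81  /* gravity (m/s^2)                    */
#define L  1.0   /* assumed pole length (m)            */
#define DT 0.01  /* assumed Euler integration step (s) */

/* One Euler step of theta'' = (G/L) sin(theta) + force. */
void euler_step(double *theta, double *omega, double force)
{
    double alpha = (G / L) * sin(*theta) + force;   /* angular acceleration */
    *omega += DT * alpha;
    *theta += DT * *omega;
}

int main(void)
{
    double theta = 0.1, omega = 0.0;       /* start slightly off balance */
    double force;
    int t;

    for (t = 0; t < 5; t++) {
        /* Crude stand-in for the learned state-to-force mapping:
           push against the lean, proportional to angle and speed. */
        force = -15.0 * theta - 2.0 * omega;
        euler_step(&theta, &omega, force);
        printf("t=%2d  theta=%7.4f  omega=%7.4f\n", t, theta, omega);
    }
    return 0;
}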

8. Adaptive Resonance Theory

Brain Modeling
Stability-Plasticity Demonstration

This program is mainly a demonstration of the basic features of the adaptive resonance theory (ART) network, namely the ability to adapt plastically when presented with new input patterns while remaining stable on previously learned input patterns.
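
The stability-plasticity balance hinges on a vigilance test: an input may only update a stored category if their overlap is large enough; otherwise the network resets and recruits a fresh category. Below is a minimal ART1-style sketch of that test alone, with an assumed input size and vigilance value.

#include <stdio.h>

#define N   8     /* assumed input size (binary) */
#define RHO 0.75  /* assumed vigilance           */

/* Resonance if |input AND category| / |input| >= vigilance. */
int vigilance_ok(const int in[N], const int cat[N])
{
    int i, both = 0, ones = 0;
    for (i = 0; i < N; i++) {
        ones += in[i];
        both += in[i] & cat[i];
    }
    return ones > 0 && (double)both / ones >= RHO;
}

int main(void)
{
    int stored[N] = { 1, 1, 1, 1, 0, 0, 0, 0 };
    int close[N]  = { 1, 1, 1, 0, 0, 0, 0, 0 };   /* similar input    */
    int far[N]    = { 1, 0, 0, 0, 1, 1, 1, 0 };   /* dissimilar input */

    printf("close: %s\n", vigilance_ok(close, stored)
           ? "resonates (category adapts)" : "reset (new category)");
    printf("far:   %s\n", vigilance_ok(far, stored)
           ? "resonates (category adapts)" : "reset (new category)");
    return 0;
}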

Zip archive of the source code for networks 1-8



Created on 31 Jul 1998. Last revised on 31 Jul 1998.
Tralvex Yeap