A DYNAMIC NEURAL NETWORK ARCHITECTURE BY SEQUENTIAL PARTITIONING OF THE INPUT SPACE
R. S. Shadafan and M. Niranjan
A sequential approach to training a multilayer perceptron for pattern classification applications is presented. The network sees each item of data only once, and its architecture is dynamically adjusted during training. At the arrival of each example, a decision is made, based on three heuristic criteria, whether to increase the complexity of the network or simply to train the existing nodes. These criteria measure the position of the new item of data in the input space with respect to the information currently stored in the network.
During the training process, each layer is treated as an independent entity with its own input space. By adding a node to a layer, the algorithm effectively adds a hyperplane, and hence a new partition, to that layer's input space. When the existing nodes are sufficient to accommodate an incoming example, the relevant hidden nodes are simply trained on it.
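The grow-or-train decision above can be sketched as a control loop. The abstract does not specify the three heuristic criteria, so the single `covered` test below is a hypothetical placeholder (an example is "accommodated" if some existing hyperplane already classifies it with a margin); the paper's actual criteria and node training are not reproduced.

```python
import numpy as np

def grow_layer(stream, margin=0.5):
    """Sequentially process (x, y) examples once each, y in {-1, +1}.

    Hypothetical sketch: add a hyperplane (node) only when no existing
    node accommodates the example; otherwise existing nodes would be
    trained (training step omitted here).
    """
    nodes = []  # each node is a hyperplane (weight vector w, bias b)
    for x, y in stream:
        # placeholder for the paper's three heuristic criteria:
        covered = any(y * (w @ x + b) > margin for w, b in nodes)
        if covered:
            continue  # existing nodes suffice; train them here
        # add a hyperplane oriented so it classifies x with margin
        n = x / (np.linalg.norm(x) + 1e-12)
        nodes.append((y * n, y * (margin + 1) - (y * n) @ x))
    return nodes
```

With this placeholder criterion, a repeated example adds no new node, while an example the current nodes cannot accommodate grows the layer by one hyperplane.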
Each hidden unit in the network is trained in closed form by means of a Recursive Least Squares (RLS) algorithm. A local covariance matrix of the data is maintained at each node, and the closed-form solution is recursively updated. The three criteria are computed from these covariance matrices at minimal computational cost.
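A minimal sketch of such closed-form sequential training for one linear node, assuming a standard exponentially-weighted RLS update; the per-node covariance bookkeeping used by the paper's three growth criteria is not reproduced here.

```python
import numpy as np

class RLSNode:
    """One linear node trained sequentially by Recursive Least Squares."""

    def __init__(self, dim, lam=1.0, delta=1e6):
        self.w = np.zeros(dim)        # node weight vector
        self.P = delta * np.eye(dim)  # inverse-covariance estimate
        self.lam = lam                # forgetting factor (1.0 = none)

    def update(self, x, d):
        """Incorporate one (input x, target d) pair in closed form."""
        Px = self.P @ x
        k = Px / (self.lam + x @ Px)           # RLS gain vector
        self.w = self.w + k * (d - self.w @ x) # prediction-error correction
        self.P = (self.P - np.outer(k, Px)) / self.lam
        return self.w
```

Each update costs O(dim^2), with no iteration over past data, which is what makes the single-pass training regime described above feasible.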
The performance of the algorithm is illustrated on two problems. The first is the two-dimensional Peterson & Barney vowel data; the second is a 32-dimensional dataset used for wheat classification. The sequential nature of the algorithm lends itself to efficient hardware implementation in the form of systolic arrays, and the incremental training idea has better biological plausibility than iterative methods.