Multi-layer perceptron (MLP) networks are highly flexible models for analyzing problems with an input-output structure. These techniques are well known in artificial intelligence and provide models for non-linear statistical regression and classification with efficient learning algorithms. The author of this thesis develops extensions to MLP networks suited to the analysis of ordinal data occurring both as inputs and as outputs. Reviewing the learning procedure, he introduces a new learning paradigm that combines the advantages of batch learning and incremental estimation, i.e. statistically better results and algorithmic efficiency respectively. This allows efficient online adaptation of the model without depending on either a learning-rate parameter or the ordering of the data set.
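To make the contrast concrete, the following sketch illustrates the two standard learning modes the blurb refers to, using plain gradient descent on a least-squares problem. This is a generic illustration, not the author's paradigm: the function names, learning rates, and synthetic data are assumptions chosen for the example. Note how the incremental variant depends on both a learning-rate parameter and the order in which examples are presented, the two dependencies the thesis aims to remove.

```python
import numpy as np

# Synthetic regression data (illustrative only).
rng = np.random.default_rng(0)
X = rng.normal(size=(200, 3))
true_w = np.array([1.0, -2.0, 0.5])
y = X @ true_w + 0.1 * rng.normal(size=200)

def batch_fit(X, y, lr=0.1, epochs=200):
    """Batch learning: each update uses the gradient over the full data set,
    so the result does not depend on the ordering of the examples."""
    w = np.zeros(X.shape[1])
    for _ in range(epochs):
        grad = X.T @ (X @ w - y) / len(y)
        w -= lr * grad
    return w

def online_fit(X, y, lr=0.05):
    """Incremental (online) learning: one update per example, in presentation
    order; the result depends on the learning rate and the data ordering."""
    w = np.zeros(X.shape[1])
    for x_i, y_i in zip(X, y):
        w -= lr * (x_i @ w - y_i) * x_i
    return w

w_batch = batch_fit(X, y)
w_online = online_fit(X, y)
```

Both variants recover weights close to `true_w` here, but only the batch version is invariant to shuffling the rows of `X`; a single-pass online fit trades that statistical stability for the ability to adapt as data arrives.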
This book addresses researchers, lecturers and students of mathematics, informatics and artificial intelligence. It may also interest practitioners who deal with data analysis in their daily work.