One of the essential properties of supervised learning in neural networks is generalization capability: the ability to give accurate results for data not seen during the learning process. One supervised learning method that theoretically guarantees optimal generalization capability is incremental projection learning. This paper describes an experimental evaluation of the generalization capability of incremental projection learning in neural networks, called projection generalizing neural networks, for solving function approximation problems, and compares it with two other commonly used neural networks, i.e. back propagation networks and radial basis function networks. Based on our experiments, projection generalizing neural networks do not always give better generalization capability than the other two neural networks. They give better generalization capability when the number of learning data is small enough or the noise variance of the learning data is large enough; otherwise, they do not. Moreover, when the number of learning data is large enough and the noise variance of the learning data is small enough, projection generalizing neural networks give worse generalization capability than back propagation networks.
|Journal||Jurnal Teknik Elektro|
|Publication status||Published - 2001|