Adjusting robust neural networks for solving the classification problem
Authors: Sivak M. A., Timofeev V. S.
Annotation: The paper addresses the problem of building and adjusting robust neural networks that apply different loss functions to solve the classification problem. The considered functions are the Cauchy, Meshalkin, Geman-McClure, Charbonnier and Tukey's biweight losses. The classification accuracy is examined for different values of the outlier fraction, for several numbers of learning epochs and for datasets of various sizes. For all obtained networks, the parameter values that maximize the accuracy are determined. Recommendations for choosing the parameter values depending on the epoch count are also given for all the loss functions. An ordinary neural network (with the quadratic loss) and a robust neural network with the Huber loss are considered as well. The analysis of the results shows that the robust approach can significantly increase the learning rate and the classification accuracy; however, choosing an incorrect parameter value can decrease the classification accuracy.
Keywords: artificial neural network, error back-propagation algorithm, outliers, computational experiment, robust technique, loss function, machine learning, classification problem
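For reference, the sketch below illustrates the kind of robust loss functions named in the annotation. It is not the authors' code: the parameterizations follow common textbook forms, and the default tuning constants (e.g. 1.345 for Huber, 4.685 for Tukey's biweight) are standard choices rather than the values selected in the paper; the Meshalkin loss is omitted here.

```python
# Minimal NumPy sketch of several robust loss functions as functions of the
# residual r and a tuning constant c. Exact forms and constants used in the
# paper may differ; this is for illustration only.
import numpy as np

def quadratic(r):
    """Ordinary squared-error loss (non-robust baseline)."""
    return 0.5 * r**2

def huber(r, c=1.345):
    """Huber loss: quadratic near zero, linear in the tails."""
    a = np.abs(r)
    return np.where(a <= c, 0.5 * r**2, c * (a - 0.5 * c))

def cauchy(r, c=2.385):
    """Cauchy loss: logarithmic growth strongly down-weights outliers."""
    return 0.5 * c**2 * np.log1p((r / c) ** 2)

def geman_mcclure(r, c=1.0):
    """Geman-McClure loss: bounded, saturates for large residuals."""
    return 0.5 * r**2 / (c**2 + r**2)

def charbonnier(r, c=1.0):
    """Charbonnier (pseudo-Huber) loss: smooth approximation of the absolute loss."""
    return c**2 * (np.sqrt(1.0 + (r / c) ** 2) - 1.0)

def tukey_biweight(r, c=4.685):
    """Tukey's biweight loss: constant beyond c, so large outliers get zero gradient."""
    a = np.minimum(np.abs(r) / c, 1.0)
    return (c**2 / 6.0) * (1.0 - (1.0 - a**2) ** 3)
```

In a robust network such a function replaces the quadratic term in the training criterion, and the constant c controls how aggressively large residuals (outliers) are down-weighted, which is the parameter whose choice the annotation warns about.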