Adjusting robust neural networks for solving the classification problem

Authors: Sivak M. A., Timofeev V. S.

Annotation: The paper addresses the problem of building and tuning robust neural networks that use different loss functions for solving the classification problem. The loss functions considered are the Cauchy, Meshalkin, Geman-McClure, Charbonnier, and Tukey biweight losses. Classification accuracy is examined for different fractions of outliers, several numbers of training epochs, and datasets of various sizes. For every network obtained, the parameter values that maximize accuracy are determined, and recommendations for choosing these values depending on the epoch count are given for all the loss functions. An ordinary neural network (with quadratic loss) and a robust neural network using the Huber loss are also considered for comparison. The analysis of the results shows that the robust approach can significantly increase the learning rate and the classification accuracy; however, an incorrectly chosen parameter value can decrease the classification accuracy.

Keywords: classification problem, machine learning, loss function, robust technique, computational experiment, outliers, error back-propagation algorithm, artificial neural network
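
For readers who want to experiment, the sketch below gives common textbook parameterizations of the loss functions named in the annotation, written in Python with NumPy. It is an illustration based on standard robust-estimation formulas, not code from the paper; the exact definitions and tuning-parameter values used by the authors may differ, and the Meshalkin loss is omitted because its parameterization varies across sources. Here r denotes the residual between the network output and the target, and c is the tuning parameter whose choice the paper studies.

```python
import numpy as np

# Common parameterizations of the robust losses discussed in the annotation.
# r is the residual (prediction minus target), c is the tuning parameter.
# Default c values are standard robust-statistics constants, given only as
# an illustration; they are not taken from the paper.

def quadratic(r, c=1.0):
    # Ordinary (non-robust) squared loss used by the baseline network.
    return 0.5 * r ** 2

def huber(r, c=1.345):
    # Quadratic near zero, linear in the tails.
    a = np.abs(r)
    return np.where(a <= c, 0.5 * r ** 2, c * (a - 0.5 * c))

def cauchy(r, c=2.385):
    # Grows only logarithmically for large residuals.
    return 0.5 * c ** 2 * np.log1p((r / c) ** 2)

def geman_mcclure(r, c=1.0):
    # Bounded loss: saturates for large residuals.
    return 0.5 * r ** 2 / (1.0 + (r / c) ** 2)

def charbonnier(r, c=1.0):
    # Smooth approximation of the absolute-value (pseudo-Huber) loss.
    return c ** 2 * (np.sqrt(1.0 + (r / c) ** 2) - 1.0)

def tukey_biweight(r, c=4.685):
    # Redescending loss: residuals beyond c all receive the same penalty.
    a = np.minimum(np.abs(r) / c, 1.0)
    return (c ** 2 / 6.0) * (1.0 - (1.0 - a ** 2) ** 3)

if __name__ == "__main__":
    # Compare how each loss penalizes small and large residuals.
    r = np.linspace(-5.0, 5.0, 11)
    for fn in (quadratic, huber, cauchy, geman_mcclure,
               charbonnier, tukey_biweight):
        print(f"{fn.__name__:15s}", np.round(fn(r), 3))
```

Plugging such a loss into the error back-propagation algorithm only changes the output-layer error term, since the gradient of the loss with respect to the residual replaces the usual linear term of the quadratic loss; this is the sense in which the bounded losses above down-weight outlying observations during training.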
