Method for evaluating the robustness of industrial systems with built-in artificial intelligence to adversarial attacks


Authors: Vorobieva A. A.

Annotation: The paper presents a method for evaluating the robustness of industrial systems with built-in artificial intelligence (AI) to adversarial attacks. The influence of adversarial attacks on system performance has been studied. A scheme and scenarios for implementing attacks on industrial systems with built-in AI are presented. A comprehensive set of metrics for studying the robustness of ML models is proposed, including test data set quality metrics (MDQ), ML model quality metrics (MMQ), and metrics of model robustness to adversarial attacks (MSQ). The method is based on this metrics set and includes the following steps: generating a test data set containing clean samples; assessing the quality of the test data set using MDQ metrics; identifying relevant adversarial attack methods; generating adversarial examples and a test data set containing the adversarial samples to evaluate the robustness of the ML model; assessing the quality of the generated adversarial test data set using MDQ indicators; evaluating the quality of the ML model using MMQ indicators; evaluating model robustness using MSQ scores.
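The following is a minimal sketch of the final evaluation steps described in the annotation: scoring a trained model on a clean test set and on an adversarial test set, then deriving a robustness score from the degradation. All function names and metric choices here (mmq_scores, msq_robustness, accuracy and macro-F1) are illustrative assumptions, not the paper's actual definitions of the MDQ, MMQ, or MSQ metric sets.

```python
# Sketch only: the metric names and formulas below are assumptions for
# illustration, not the MDQ/MMQ/MSQ definitions from the paper.
from sklearn.metrics import accuracy_score, f1_score


def mmq_scores(model, X, y):
    """Model quality (MMQ-style) metrics on a given test set."""
    y_pred = model.predict(X)
    return {
        "accuracy": accuracy_score(y, y_pred),
        "f1_macro": f1_score(y, y_pred, average="macro"),
    }


def msq_robustness(clean_scores, adv_scores):
    """Robustness (MSQ-style) score: relative accuracy degradation
    between the clean and the adversarial test sets (1.0 = no degradation)."""
    clean_acc = clean_scores["accuracy"]
    adv_acc = adv_scores["accuracy"]
    return 1.0 - (clean_acc - adv_acc) / max(clean_acc, 1e-12)


# Assumed usage: model, X_clean, y are prepared beforehand, and X_adv is an
# adversarial test set produced by a relevant attack (e.g. FGSM) on X_clean.
# clean = mmq_scores(model, X_clean, y)
# adv = mmq_scores(model, X_adv, y)
# print("MSQ robustness:", msq_robustness(clean, adv))
```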

Keywords: cybersecurity, artificial intelligence methods, intelligent production systems, adversarial attacks
