Methodology for designing interpretable fuzzy classifiers for explainable artificial intelligence systems
Author: Sarin K. S.
Annotation: The use of artificial intelligence systems based on machine learning methods in critical problem areas carries high risks and requires that the obtained result be explained to a person. Predictive models with this property are called interpretable. The absence of such an ability reduces trust in the result and may slow the public acceptance and adoption of such systems. Artificial intelligence systems based on fuzzy systems can explain the results of their decisions: thanks to a base of production rules, they express knowledge in a human-oriented form, using natural language terms. The paper proposes a technique for constructing fuzzy classifiers aimed at improving interpretability, taking into account the shortcomings of known construction methods. The technique combines mixed multiobjective optimization algorithms, discrete optimization, gradient descent, and a data separation method. An experiment was conducted on 38 publicly available data sets from various problem areas to evaluate the effectiveness of classifiers built with the proposed technique. A statistical comparison with known interpretable classifiers, the genetic fuzzy systems FARC-HD and the CART decision trees, was carried out. With comparable accuracy, the proposed technique achieved a statistically significant increase in interpretability: fewer rules, fewer features, and fewer fuzzy terms than the genetic FARC-HD systems, and fewer rules and fewer conditions per rule than classifiers based on CART decision trees. The achieved results indicate a high level of interpretability of classifiers constructed using the proposed technique.
Keywords: optimization, metaheuristic algorithms, learning algorithms, interpretability, fuzzy systems, classification, machine learning, explainable artificial intelligence
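To illustrate the idea mentioned in the annotation, that a fuzzy classifier's base of production rules expresses a decision in human-oriented, natural-language terms, the sketch below evaluates a toy rule base with triangular membership functions. The feature, linguistic terms, and rules are hypothetical examples chosen for illustration; they are not the rule bases produced by the proposed technique, and the sketch omits the optimization stages described in the paper.

```python
# Minimal illustrative sketch of a fuzzy rule-based classifier.
# All terms and rules below are hypothetical, not taken from the paper.

def tri(x, a, b, c):
    """Triangular membership function with support [a, c] and peak at b."""
    if x <= a or x >= c:
        return 0.0
    return (x - a) / (b - a) if x <= b else (c - x) / (c - b)

# Hypothetical linguistic terms for one feature, petal length in cm.
TERMS = {
    "short": lambda x: tri(x, 0.0, 1.5, 3.0),
    "long":  lambda x: tri(x, 2.5, 5.0, 7.5),
}

# Rule base: IF petal_length IS <term> THEN class = <label>.
RULES = [
    ("short", "setosa"),
    ("long",  "virginica"),
]

def classify(petal_length):
    """Return the winning class and a natural-language explanation."""
    strengths = [(TERMS[term](petal_length), term, label) for term, label in RULES]
    degree, term, label = max(strengths)
    explanation = f"petal length IS {term} (degree {degree:.2f}) -> {label}"
    return label, explanation

label, why = classify(4.8)
print(label)  # virginica
print(why)    # petal length IS long (degree 0.92) -> virginica
```

The prediction is the class of the rule with the highest firing strength, and the same rule doubles as the explanation, which is the interpretability property the annotation attributes to fuzzy systems; the interpretability measures used in the comparison (number of rules, number of features, total number of fuzzy terms, conditions per rule) can be read directly off such a rule base.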