[2D3] Condition monitoring machine learning explanations by design: a case study explaining the predicted degradation of a roto-dynamic pump

O Amin¹, B Brown¹, B Stephen¹, S McArthur¹ and V Livina²
¹University of Strathclyde, UK
²National Physical Laboratory, UK 

The field of explainable artificial intelligence (AI) has attracted growing attention in recent years, driven by the potential of making accurate data-based predictions of asset health. One current research aim in AI is to address the challenges associated with adopting machine learning (ML) (ie data-driven) approaches: that is, understanding how and why ML predictions are made. Although ML models provide accurate predictions in many applications, such as condition monitoring, concerns remain about the transparency of the prediction-making process. Ensuring that the models used are explainable to human users is therefore essential to building trust in the approaches proposed. Consequently, AI and ML practitioners need to be able to evaluate an explainable AI (XAI) tool's suitability for its intended domain and end-users, while remaining aware of the tool's limitations. This paper provides insight into various existing XAI approaches and the limitations that practitioners in condition monitoring applications should consider when designing an ML-based prediction system. The aim is to assist practitioners in engineering applications in building interpretable and explainable models for end-users who wish to improve a system's reliability, and to help users make better-informed decisions based on the output of a predictive ML algorithm. The paper also emphasises the importance of explainability in AI. It applies some of these tools to an explainability use case in which real condition monitoring data are used to predict the degradation of a roto-dynamic pump. Finally, potential avenues are explored for enhancing the credibility of the explanations generated by XAI tools in condition monitoring applications, with the aim of offering more reliable explanations to domain experts.