Dynamic Inspection and Maintenance Optimization Strategy for Multi-State Systems Subject to Time-Varying Demand  
Author Yiming Chen
Co-Author(s) Yu Liu
Abstract The multi-state characteristic of engineered systems has been extensively investigated in the literature. Most existing works on multi-state systems (MSSs) assume that a system must satisfy a fixed, pre-specified user demand that does not change over time. In practice, however, user demand may be time-varying due to uncertain operating profiles, changing market conditions, seasonal weather, or unexpected emergencies. In this study, a dynamic inspection and maintenance strategy is developed for MSSs with time-varying demand to minimize the costs incurred by unsupplied demand, inspections, and maintenance. The state of each component is observed through non-periodic inspections, and maintenance actions are conducted dynamically based on the inspection observations. The resulting sequential decision-making problem is formulated as a Markov decision process (MDP) with a discrete action space and a continuous state space. Within the framework of deep reinforcement learning (DRL), a customized proximal policy optimization (PPO) algorithm is proposed to overcome the “curse of dimensionality”. An inspection indicator is constructed to identify the time instant of the next inspection, and extended input features for the neural networks are formulated to improve the performance of the proposed algorithm. The effectiveness of the proposed method is demonstrated by an illustrative example of a flow transmission system.
Keywords Multi-state systems, maintenance, time-varying demand, deep reinforcement learning, proximal policy optimization
Article #: RQD26-34
Proceedings of 26th ISSAT International Conference on Reliability & Quality in Design
Virtual Event

August 5-7, 2021