Original article can be found here (source): Deep Learning on Medium
The Prognostics and Health Management (PHM) discipline provides a view of the overall health state of machines or complex systems and assists in making correct decisions on machine maintenance.
There are three main issues to be considered when building a robust PHM system:
- estimation of the current health state,
- prediction of the future state along with the time to failure, and
- determination of a failure’s impact on the performance of the system.
The difference between prognosis and diagnosis is that a diagnosis identifies the current problem and gives it a name, while a prognosis is an informed prediction of how that problem will evolve and what its outcome will be.
PHM MAIN TASKS
The main tasks are data acquisition, data preprocessing, detection, prognostics, and decision making.
Data acquisition is the initial and essential step of PHM: the process of collecting and storing data from the physical component/system under investigation for further diagnostics and prognostics.
Data preprocessing involves cleaning the sensor data and extracting features from the time series that reflect the health state of the monitored system.
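As a sketch of the feature-extraction step (the article does not specify its features, so rolling mean and standard deviation over a fixed window are assumed here):

```python
import numpy as np

def window_features(series, window=30):
    """Rolling mean and standard deviation over a sliding window
    of a 1-D sensor series; window=30 cycles is an assumed value."""
    feats = []
    for start in range(len(series) - window + 1):
        seg = series[start:start + window]
        feats.append([seg.mean(), seg.std()])
    return np.array(feats)
```

Each row of the result summarizes one window of the raw signal, smoothing out sensor noise before any model sees the data.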
Health state detection is the process of detecting and recognizing incipient failures and/or anomalies in condition monitoring data.
Prognostics is defined as the process of predicting the time at which a component will no longer perform a particular function, i.e. its Remaining Useful Life (RUL).
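Under this definition, a run-to-failure history can be labelled with a RUL target for every cycle (a simple linear labelling sketch; the article does not state its exact labelling scheme):

```python
def rul_labels(num_cycles):
    """RUL at cycle t (1-based) is the number of cycles remaining
    until failure, for an engine that fails at cycle num_cycles."""
    return [num_cycles - t for t in range(1, num_cycles + 1)]
```

For example, `rul_labels(5)` returns `[4, 3, 2, 1, 0]`.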
Decision making is the process of selecting the logical and/or right maintenance action among several alternatives.
Maintenance methods have transitioned from conventional approaches like “fix it when it breaks” to Condition-Based Maintenance (CBM).
Techniques for prognostics
Prognostics refers to assessing and predicting the working condition of a machine’s components, based on their current and previous states, with the main goal being accurate prognosis of the Remaining Useful Life (RUL) of a machine or its parts.
Model based approach
In Model-based prognostics approaches, the behavior of a system/component degradation process leading to failure is described by mathematical models and/or equations derived from physical systems.
Data Driven approach
Data-driven approaches attempt to build degradation models from condition monitoring data collected via installed sensors, and to predict the future health state without building physical models.
Convolutional Neural Networks (CNNs) tend to outperform other models at extracting position-invariant and salient features from raw data, while Recurrent Neural Networks (RNNs) have clear advantages for modelling units in a sequence.
Here we integrate a CNN with a Long Short-Term Memory network (LSTM, a special type of RNN) to accurately predict the Remaining Useful Life (RUL) of a degrading system.
The dataset used for training contains data from identical engines. It consists of natural time-series data from different sensors, recorded from the time when the system is healthy up to the point when a fault occurs. Different sensors, including acceleration and vibration, are used to monitor the system.
In preprocessing, the raw data (input feature vectors) is normalized and then fed to the CNN for feature extraction in batches of size m x n, where m is the number of rows/life cycles of the engine and n is the number of columns/sensor readings corresponding to different operational settings.
This preprocessing can be applied to any CSV file before it is given to the LSTM network.
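A minimal sketch of the normalization step, assuming min-max scaling per sensor column of an m x n array (the article does not state which normalization it uses):

```python
import numpy as np

def normalize(data):
    """Scale each sensor column of an m x n array to [0, 1].
    A small epsilon guards against constant (zero-range) sensors."""
    mins = data.min(axis=0)
    maxs = data.max(axis=0)
    return (data - mins) / (maxs - mins + 1e-8)
```

Per-column scaling keeps sensors with large raw magnitudes from dominating the features the CNN learns.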
The neural network used to estimate RUL combines convolutional layers with an LSTM.
Using a CNN, local patterns are identified more efficiently because the main assumption of the CNN model is that local patterns are relevant everywhere.
A pooling layer is then applied to extract the most vital, fixed-length features from each feature map. The output of the CNN is then fed to the LSTM layer.
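The pipeline described above (convolution for local feature extraction, pooling, then an LSTM feeding a regression output) could be sketched in Keras as follows; the layer sizes and kernel width are assumptions, not the article’s exact configuration:

```python
from keras.models import Sequential
from keras.layers import Input, Conv1D, MaxPooling1D, LSTM, Dense

def build_model(window, n_sensors):
    """Sketch of a CNN-LSTM RUL regressor for input windows of
    shape (window, n_sensors); layer sizes are illustrative."""
    model = Sequential([
        Input(shape=(window, n_sensors)),
        # Convolution extracts local, position-invariant patterns
        Conv1D(32, kernel_size=5, activation='relu'),
        # Pooling keeps the most salient features and halves the sequence length
        MaxPooling1D(pool_size=2),
        # LSTM captures temporal dependence across the compressed sequence
        LSTM(64),
        # Single regression output: the predicted RUL
        Dense(1),
    ])
    model.compile(optimizer='adam', loss='mae')
    return model
```

Note how pooling shortens the sequence before it reaches the LSTM, which is exactly the compression effect discussed below.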
The Mean Absolute Error of the estimated RUL on this dataset is 125.79.
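For reference, Mean Absolute Error is the average absolute difference between predicted and true RUL values, which can be computed as:

```python
import numpy as np

def mae(y_true, y_pred):
    """Mean Absolute Error between true and predicted RUL values."""
    y_true, y_pred = np.asarray(y_true), np.asarray(y_pred)
    return float(np.mean(np.abs(y_true - y_pred)))
```

For example, `mae([10, 20], [12, 18])` returns `2.0`.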
To view the intermediate output of any layer in the network, the following code can be used (here “layer_index” and “x_batch” are placeholders for the index of the layer to inspect and an input batch):
import keras.backend as K
# Build a function mapping the model input to the chosen layer's output
lstm_out = K.function([model.input], [model.layers[layer_index].output])
intermediate_output = lstm_out([x_batch])[0]
A CNN is good at encoding information. In addition, a CNN can compress the length of the sequence, which increases the capability of the subsequent LSTM to capture temporal information.
A common hybrid method is to validate the physical model offline, then use data-driven techniques for prediction, updating the model parameters to improve prediction accuracy.