Collecting the right data, at acceptable precision and the correct frequency, is critical to the success of a prognosis solution. A false alarm could lead to unnecessary additional maintenance costs.
Accurate domain knowledge of each machine and a strong dataset of previous failures are critical to developing an effective machine learning model and solution. Expected failures, i.e. the kinds of failure that have already occurred and been recorded for a given machine, can be modelled with this approach.
First-principles or physics-based equipment specifications, whenever available, can add significant value to the predicted outcome. A simulation engine or virtual model could be built to collect the required data under various stress conditions. Unexpected failures, which might occur due to a complex interworking of a machine's parts, could be predicted with this approach.
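As a minimal sketch of such a virtual model, the toy simulation below generates bearing-temperature readings under varying load and wear levels, including stress conditions the real machine may not yet have reached. The physics and every coefficient here are hypothetical placeholders for real first-principles equations.

```python
import random

def simulate_bearing_temperature(load_factor, ambient=25.0, wear=0.0, noise=0.5):
    """Toy first-principles model: steady-state bearing temperature rises
    linearly with applied load and accumulated wear (coefficients hypothetical)."""
    rise = 30.0 * load_factor + 40.0 * wear
    return ambient + rise + random.gauss(0.0, noise)

# Sweep stress conditions to build a synthetic training set,
# covering wear levels not yet observed on the physical machine.
random.seed(0)
dataset = [
    {"load": load, "wear": wear,
     "temp_c": simulate_bearing_temperature(load, wear=wear)}
    for load in (0.2, 0.5, 0.8, 1.0)
    for wear in (0.0, 0.25, 0.5, 0.75)
]
```

Samples generated this way can supplement the recorded-failure data, at the cost of only being as faithful as the underlying physical model.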
Predictive Maintenance helps identify impending problems before they occur, so that downtime and unplanned maintenance can be reduced.
A clear definition of the business problem is imperative. For example, the cost of the machine or equipment, the maintenance cost, and the cost of downtime or unplanned maintenance, including direct costs such as lost production time and indirect costs such as reduced brand value and safety risks, should all be considered to arrive at a value for undertaking predictive maintenance. Based on this, an expected outcome should be derived, for example, to save $Z over the next 3 years.
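To make the $Z concrete, a back-of-the-envelope calculation like the one below can frame the business case. Every figure here is a hypothetical placeholder to be replaced with real costs for the equipment in question.

```python
# Hypothetical figures for one production line; replace with real costs.
downtime_cost_per_hour = 5_000      # direct cost: lost production
unplanned_events_per_year = 4
hours_per_unplanned_event = 12
indirect_cost_per_event = 10_000    # e.g. expedited parts, brand/safety exposure
planned_maintenance_cost = 2_000    # per scheduled intervention

cost_per_unplanned_event = (downtime_cost_per_hour * hours_per_unplanned_event
                            + indirect_cost_per_event)
current_annual_cost = unplanned_events_per_year * cost_per_unplanned_event

# Assume predictive maintenance converts 3 of the 4 unplanned events per
# year into planned interventions (an assumption to validate, not a given).
remaining_unplanned = 1
planned_interventions = 3
predicted_annual_cost = (remaining_unplanned * cost_per_unplanned_event
                         + planned_interventions * planned_maintenance_cost)

annual_saving = current_annual_cost - predicted_annual_cost
print(annual_saving)  # 204000 per year under these assumptions
```

Multiplying the annual saving by the planning horizon (e.g. 3 years) gives the expected outcome $Z against which the project can be justified.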
Machine Learning is dependent on data, so if many samples are available of the conditions under which a piece of equipment failed, modelling becomes easier. However, if a machine fails, say, only once a year, then collecting a good dataset becomes difficult and the accuracy of the prognosis suffers. Collecting, storing and analyzing data over an observable period of time is therefore critical.
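One practical consequence of rare failures is heavy class imbalance in the training data: failure samples may be outnumbered a hundred to one. A common mitigation is to weight the minority (failure) class more heavily during training. The sketch below computes the standard "balanced" class weights, assuming a hypothetical label distribution of 5 failure windows among 500 total.

```python
from collections import Counter

# Hypothetical labels: 5 failure windows (1) among 495 healthy windows (0).
labels = [1] * 5 + [0] * 495

counts = Counter(labels)
n = len(labels)
# "Balanced" weighting: weight_c = n / (n_classes * count_c), so the rare
# failure class contributes as much total loss as the abundant healthy class.
weights = {c: n / (len(counts) * cnt) for c, cnt in counts.items()}
```

Most ML libraries accept such weights directly (e.g. via a class-weight parameter), so the rare failure examples are not drowned out by healthy samples.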
Understanding the specifications of the sensors, including their accuracy, frequency of sample collection, precision and drift, if any, is important to designing a prognosis solution. For example, a temperature sensor which drifts by 3 degrees over a period of, say, 90 days would go unnoticed if the observation period is only 30 days. However, the drift would still have an impact on the accuracy of the ML model.
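If the drift is approximately linear and its rate is known from the sensor specification, it can be compensated before the data reaches the model. A minimal sketch, assuming the 3-degrees-over-90-days drift from the example above:

```python
def correct_linear_drift(readings, drift_per_day, days_elapsed):
    """Remove an assumed linear sensor drift from raw readings taken at
    the given elapsed days since the sensor was last calibrated."""
    return [r - drift_per_day * d for r, d in zip(readings, days_elapsed)]

# 3 degrees over 90 days = 1 degree per 30 days: invisible inside a
# 30-day window, but material for a model trained on months of data.
raw = [70.0, 70.5, 71.0, 73.0]          # hypothetical readings
days = [0, 30, 60, 90]                  # days since calibration
corrected = correct_linear_drift(raw, drift_per_day=3.0 / 90.0,
                                 days_elapsed=days)
```

The corrected series shows that what looked like a 3-degree rise in the raw data is largely sensor drift, not equipment degradation.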
This is an iterative process in which multiple algorithms can be tried and tested on a given dataset. Often the process involves selecting the most predictive parameters from the dataset and observing the impact on the accuracy of the prediction. Various automated and semi-automated approaches are available to pick the right algorithm for an ML training process and to avoid over-fitting.
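The select-and-evaluate loop can be illustrated without any ML library. The sketch below compares several candidate "algorithms" (here, simple threshold rules on a hypothetical vibration feature) on held-out data, the same mechanism that guards real cross-validated model selection against over-fitting.

```python
import random

random.seed(1)
# Hypothetical dataset: (vibration_mm_s, failed_within_30d). High vibration
# leads to failure 90% of the time in this synthetic generator.
vibrations = [random.uniform(0.0, 10.0) for _ in range(200)]
data = [(v, 1 if v > 6.0 and random.random() < 0.9 else 0) for v in vibrations]

random.shuffle(data)
train, held_out = data[:150], data[150:]   # evaluate on unseen samples only

def accuracy(predict, rows):
    return sum(predict(v) == y for v, y in rows) / len(rows)

# Candidate "algorithms": fixed-threshold rules at different cut-offs.
candidates = {f"threshold>{t}": (lambda v, t=t: 1 if v > t else 0)
              for t in (4.0, 5.0, 6.0, 7.0)}

# Score each candidate on held-out data and keep the best.
scores = {name: accuracy(fn, held_out) for name, fn in candidates.items()}
best = max(scores, key=scores.get)
```

Real pipelines replace the threshold rules with trained models and the single split with k-fold cross-validation, but the selection logic is the same.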
As new incidents are observed and new parameters with a significant impact on the prediction are discovered, it becomes useful to re-train and re-deploy the model. For example, in a first pass only vibration and temperature were thought to be indicators of degradation, but recent incidents suggest that humidity also plays a part. Getting the best prediction, and saving the most cost, is thus a continuous process.
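One way to keep re-training cheap is to version the feature set, so that adding a newly discovered signal is a configuration change rather than a code rewrite. A minimal sketch, with hypothetical feature names matching the vibration/temperature/humidity example:

```python
# Feature sets are versioned; adding a newly discovered signal
# (humidity, in this example) is a config change, not a rewrite.
FEATURES_V1 = ["vibration_rms", "temperature_c"]
FEATURES_V2 = FEATURES_V1 + ["humidity_pct"]   # added after recent incidents

def make_training_matrix(records, features):
    """Project raw incident records onto the chosen feature set."""
    return [[rec[f] for f in features] for rec in records]

# Hypothetical incident records with all candidate signals captured.
records = [
    {"vibration_rms": 2.1, "temperature_c": 71.0, "humidity_pct": 40.0},
    {"vibration_rms": 4.8, "temperature_c": 85.5, "humidity_pct": 78.0},
]
X_v2 = make_training_matrix(records, FEATURES_V2)
```

Capturing candidate signals from the start, even before they are used by the model, means historical data is already available when a new feature proves relevant.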