What is Predictive Maintenance?
Predictive maintenance is the use of new and historical machine data to understand and anticipate performance problems before they happen. Using sophisticated machine learning and AI techniques to analyze the data generated in the modern factory, predictive analytics can decrease downtime, optimize asset performance, and increase the lifespan of machines.
The promises made on behalf of predictive maintenance (PdM) are big. Double-digit increases in asset utilization. Smart machines that flag performance issues before they happen. Huge bumps in OEE, TEEP, and OPE.
For most manufacturers, that world is still a long way off. And more importantly, it’s not something that the right predictive algorithms can solve alone.
This article will put predictive maintenance in context. I’ll explain why PdM isn’t just an AI problem, and outline clear steps you can take to get the most out of your machine monitoring program.
Predictive Maintenance in Context
Most current maintenance programs in manufacturing are preventative. Preventative maintenance (PM) occurs at regularly scheduled intervals, or when machines exceed prescribed production thresholds.
Preventative maintenance is important for ensuring asset health, but it’s a blunt instrument. PM doesn’t take into account the conditions under which an individual machine operates, the differential wear-and-tear of different machine parts, or other factors that might predict failure. It often results in maintenance schedules that are more or less frequent than necessary. (The classic example is changing your car’s oil every 3,000 miles regardless of performance.)
Predictive maintenance, in contrast, uses the data generated by a particular machine to create a more granular picture of part and asset life cycles. PdM, theoretically, takes the guesswork out of scheduling maintenance. By providing visibility into how a given machine will degrade, PdM enables manufacturers to perform maintenance only when necessary.
The success of any predictive maintenance effort depends on the quality and quantity of the data available in a training set.
That is, you need 1.) sufficient data to create a representative sample of machine performance over time, and 2.) data that accurately reflects machine performance and usage in local conditions.
To clarify why having enough data and good data are both important, I’ll dig into each. Quantity first.
Predictive Maintenance Needs the Right Quantity of Data
It’s a myth that you need petabytes upon petabytes of machine data to successfully train predictive algorithms. It’s also a myth that more data is always better. I’m sure many of you have heard the phrase “garbage in, garbage out” to describe how a bad training set will lead to suboptimal results.
What you do need for PdM is enough data to furnish a representative sample of machine performance to account for its usage in a particular operation.
According to one professor of industrial engineering, creating a representative sample is no easy task. “When there are thousands of variables, you typically need data for hundreds of thousands, or millions of parts in order to find meaningful statistical associations between problems and root causes.”
This is especially true when accounting for the qualifying phrase “usage in a particular operation.”
Here’s why: Machine life cycles unfold over the course of years, if not decades. Collecting a representative data set thus requires observing a machine as it runs across an extended time frame. As one big data group has noted of PdM, “The life span of machines is usually in the order of years, which means that data has to be collected for an extended period of time in order to observe the system throughout its degradation process.”
This problem of quantity is compounded by the fact that many manufacturers don’t have adequate historical data. There may be information about up-time and down-time, parts produced, and maintenance logs. But it’s a big assumption that this information will be accurate, and it likely isn’t fine-grained enough to yield truly predictive insights.
Many manufacturers have tried to overcome this lack of data by training their predictive algorithms on publicly available data sets. While most private companies fiercely guard their production data, there’s a lively exchange of scientific and public domain sources, and a quick Google search will turn up many on GitHub.
But even these aren’t enough to get manufacturers from PM to PdM, because they don’t capture on-the-ground manufacturing reality. No matter how large these data sets are, they lack ecological validity.
One engineer captured this data dilemma well when he wrote, “Most of the time it’s difficult (if not impossible) [sic] to have failure records from machines because they are not allowed to run to failure in real conditions. On top of that, we have to work with a lot of noise from regular maintenance activities and imprecise maintenance WO imputations … real life is tough.”
So this brings me to the next point. Not only do you need enough data, but you need the right kind of data.
Predictive Maintenance Needs the Right Quality Data
Perhaps another way of describing data quality in the context of PdM is this: quality data is data sufficient to infer causality.
That is, quality data is data that moves manufacturers past the murk of correlation toward the root cause of machine failures.
This is easier said than done, as a host of production factors influence how quickly a part or machine will reach a window of failure. Spindle speed, hours running, temperature, vibration, humidity, usage: these are just a few of the parameters that interact in unique ways and, in the aggregate, have a variable impact on machine life.
As one writer has pointedly stated, “The health of a complex piece of equipment cannot be reliably judged based on the analysis of each measurement on its own. We must rather consider a combination of the various measurements to get a true indication of the situation.”
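To make the idea of combining measurements concrete, here is a minimal sketch of a multivariate health index. It is illustrative only: the channel names, baseline numbers, and the root-mean-square-of-z-scores formula are assumptions for demonstration, not a method the article prescribes. The point it shows is that no single channel has to look alarming for the combination to flag a problem.

```python
import math

def health_index(reading, baseline_mean, baseline_std):
    """Combine several sensor channels into one health score.

    `reading`, `baseline_mean`, and `baseline_std` are dicts keyed by
    channel name. The score is the root-mean-square of per-channel
    z-scores relative to a healthy baseline: higher means further
    from normal operation across the channels taken together.
    """
    zs = [
        (reading[ch] - baseline_mean[ch]) / baseline_std[ch]
        for ch in baseline_mean
    ]
    return math.sqrt(sum(z * z for z in zs) / len(zs))

# Baseline statistics from a healthy run (illustrative numbers).
mean = {"temp_c": 60.0, "vibration_mm_s": 2.0, "spindle_rpm": 8000.0}
std = {"temp_c": 5.0, "vibration_mm_s": 0.5, "spindle_rpm": 200.0}

# Each channel is only mildly elevated, but together they score high.
drifting = {"temp_c": 68.0, "vibration_mm_s": 2.9, "spindle_rpm": 8350.0}
print(round(health_index(drifting, mean, std), 2))  # → 1.72
```

In practice the baseline statistics would be estimated from historical data for that specific machine, which is exactly why local, representative data matters so much.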
The good news is that developments in sensor technology and edge computing have made it possible to track a wider variety of performance metrics than ever. The bad news is that even the best-connected machines aren’t always accounting for the most significant causes of machine degradation.
In terms of predictive maintenance, this means that understanding how machines are used is at least as important as understanding how machines run. In order for PdM to work as effectively as possible, you need a record of how machines are used on a day-to-day basis, whether they’re set properly, whether changeovers are done correctly, and whether or not maintenance is performed correctly.
In other words, you need a human-centered approach to machine monitoring.
Getting Started With Predictive Maintenance
Even if you can’t get a full PdM program going, taking steps toward a human-centered machine monitoring program can start creating value almost immediately. Small steps now can yield big gains later.
Here are concrete things you can do to get started.
1.) Bring your factory online as soon as possible. As I’ve explained here, robust, local data is the cornerstone of PdM. The sooner you begin collecting machine data with IoT, the sooner you can leverage this data to your competitive advantage.
Contrary to popular belief, getting started with IoT need not be expensive, and it need not involve your entire operation. There are easy ways to bring legacy machines online, and drops in sensor pricing make it possible to start monitoring ambient conditions without huge outlays.
2.) Consider the cloud. The quantities of data required to train and run predictive algorithms can strain servers and computing resources. The cloud is an increasingly affordable, secure, scalable option for handling the storage and compute demands of predictive analytics without needing to invest in or maintain on-prem infrastructure.
3.) Understand what to expect from ML algorithms. Knowing what machine learning algorithms can predict is useful for prioritizing which departments, machines, or processes to bring online first.
Some of the more common predictive areas include: calculating machine lifetime before failure; identifying a window in which a failure is likely to occur; identifying the most common types of failures; and detecting anomalous machine behavior.
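Of those areas, anomaly detection is often the simplest place to start, because it needs no failure labels, only a stream of readings. Below is a minimal sketch under assumed conditions: a rolling-window z-score detector on a single sensor channel. The window size, threshold, and signal are illustrative choices, not a recommendation for any particular machine.

```python
from collections import deque

def detect_anomalies(readings, window=20, threshold=3.0):
    """Flag readings that deviate sharply from the recent past.

    Keeps a rolling window of the last `window` values and flags a
    reading when it falls more than `threshold` standard deviations
    from the window mean. Returns the indices of flagged readings.
    """
    history = deque(maxlen=window)
    flagged = []
    for i, x in enumerate(readings):
        if len(history) == window:
            mean = sum(history) / window
            var = sum((v - mean) ** 2 for v in history) / window
            std = var ** 0.5
            if std > 0 and abs(x - mean) > threshold * std:
                flagged.append(i)
        history.append(x)
    return flagged

# A steady vibration signal with one spike injected at index 30.
signal = [2.0 + 0.01 * (i % 5) for i in range(60)]
signal[30] = 9.5
print(detect_anomalies(signal))  # → [30]
```

Simple detectors like this are a far cry from predicting remaining useful life, but they illustrate why granular, continuous data collection is the prerequisite for everything further up the PdM ladder.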
Knowing what machine learning can uncover is the key to setting priorities for digital transformation.
4.) Keep track of machine usage. Machine monitoring works best when machine data is supplemented with information regarding machine usage. The best way to do this is by connecting people and machines through operations apps. This provides a holistic picture and helps overcome common confounds.
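As a rough illustration of what “supplementing machine data with usage data” can look like, here is a sketch that joins per-shift sensor summaries with operator-entered usage logs. All field names and values are hypothetical, invented for this example rather than drawn from any real schema.

```python
# Per-shift sensor summaries (illustrative records, assumed fields).
machine_data = [
    {"shift": "2024-05-01-A", "avg_vibration": 2.1, "downtime_min": 4},
    {"shift": "2024-05-01-B", "avg_vibration": 3.4, "downtime_min": 22},
]

# Operator-entered usage logs for the same shifts (also illustrative).
usage_logs = [
    {"shift": "2024-05-01-A", "setup_checked": True, "changeover": "standard"},
    {"shift": "2024-05-01-B", "setup_checked": False, "changeover": "rushed"},
]

# Join the two sources on shift so each sensor summary carries its
# human context: the confound a sensors-only view would miss.
usage_by_shift = {log["shift"]: log for log in usage_logs}
combined = [{**m, **usage_by_shift[m["shift"]]} for m in machine_data]

for row in combined:
    print(row["shift"], row["avg_vibration"], row["changeover"])
```

The high-vibration, high-downtime shift here coincides with a rushed changeover and an unchecked setup, which is precisely the kind of usage-driven explanation that machine data alone cannot supply.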
Curious how to connect your factory to generate truly predictive insights? Tulip’s frontline operations platform captures the human and machine data you need for full process visibility. Get in touch to see how we can help bring your operations online.