Time Series Analysis
Key Takeaways
- Time series analysis treats sensor readings as ordered sequences rather than isolated measurements, revealing degradation trends invisible to threshold-based alerts.
- The four core components are trend, seasonality, cyclicality, and irregular noise. Separating them is the foundation of all forecasting methods.
- Common industrial methods include moving averages, ARIMA, exponential smoothing, Fast Fourier Transform (FFT) for vibration spectra, and machine learning anomaly detection models.
- A 90-day vibration RMS trend is typically sufficient to identify a bearing fault signature 30 to 60 days before functional failure occurs.
- Time series analysis powers predictive maintenance by translating raw sensor streams into remaining useful life estimates that maintenance planners can act on.
What Is Time Series Analysis?
Time series analysis is the discipline of extracting meaning from measurements that are indexed in time order. Unlike cross-sectional statistics, which compare values across subjects at a single moment, time series methods treat the sequence itself as informative: how fast a value is changing, whether it oscillates at a regular frequency, and whether it is drifting upward or downward over weeks or months.
In industrial settings, almost every sensor produces a time series: vibration accelerometers sample thousands of times per second, temperature sensors log every few minutes, and oil analysis laboratories report particle counts every 250 operating hours. Time series analysis turns these raw streams into actionable maintenance intelligence by separating normal variation from genuine deterioration signals.
The goal is not merely to describe the past but to forecast the future state of an asset with enough lead time for planned intervention, which is the core value proposition of modern condition monitoring programs.
The Four Components of a Time Series
Every industrial time series can be decomposed into four components. Understanding these components is essential for choosing the right analytical method and interpreting results correctly.
Trend
The trend is the long-run direction of the series: the slow rise in bearing vibration as raceway pitting develops, or the gradual increase in motor winding temperature as insulation ages. Trend is the degradation signal that maintenance teams ultimately want to track and extrapolate forward.
Seasonality
Seasonality describes regular, calendar-driven fluctuations that repeat at a fixed period. A pump serving an HVAC system may show higher vibration every summer as cooling demand raises the load and higher ambient temperature lowers fluid viscosity, thinning the lubricant film. A compressor in a food processing plant may cycle with daily production schedules. Seasonal effects must be removed before trend estimation becomes meaningful; otherwise the model mistakes a summer heat peak for a degradation event.
Cyclicality
Cyclicality refers to fluctuations that repeat over periods longer than one year and are driven by business or operational cycles rather than the calendar. A plant running at 60% capacity during an economic downturn may show lower vibration amplitudes simply because machines are running fewer hours, which is unrelated to asset health. Distinguishing cyclicality from trend requires multi-year data and domain knowledge.
Irregular (Noise)
The irregular component is the residual after trend, seasonality, and cyclicality are removed. It represents random measurement error, transient operating disturbances, and genuine one-time events such as a process upset or a hard start. Anomaly detection algorithms look for irregular values that are too large to be explained by normal noise, flagging them as potential fault events.
Core Methods Used in Industrial Time Series Analysis
Moving Averages
A moving average smooths a sensor signal by replacing each data point with the mean of the surrounding window of observations. A 7-day moving average of daily peak vibration removes hour-to-hour noise while preserving the week-over-week trend. The simple moving average (SMA) weights all points equally; the exponentially weighted moving average (EWMA) gives more weight to recent observations, making it more responsive to sudden changes. Moving averages are computationally cheap and interpretable, making them the standard first-pass method for trending sensor data in SCADA and CMMS dashboards.
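The SMA/EWMA distinction can be sketched in a few lines of pandas (the readings and the injected spike are hypothetical):

```python
import numpy as np
import pandas as pd

# Hypothetical daily peak-vibration readings with a slow drift and one transient spike.
rng = np.random.default_rng(0)
vib = pd.Series(2.0 + 0.01 * np.arange(60) + rng.normal(0, 0.1, 60))
vib.iloc[30] += 1.5  # one-off process disturbance

sma = vib.rolling(window=7).mean()           # simple moving average: equal weights
ewma = vib.ewm(span=7, adjust=False).mean()  # exponential weights favor recent points

# The EWMA reacts to the spike more strongly than the SMA (weight 0.25 vs 1/7
# on the newest point) and also discounts it faster afterward.
print(f"SMA at spike: {sma.iloc[30]:.2f}, EWMA at spike: {ewma.iloc[30]:.2f}")
```

The `span` parameter plays the role of the SMA window: a span of 7 gives the newest point a weight of 2/(7+1) = 0.25.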
ARIMA Models
ARIMA (AutoRegressive Integrated Moving Average) models capture three dynamics: the relationship between a current value and its own past values (autoregression), non-stationarity addressed by differencing the series (integration), and the relationship between current residuals and past residuals (moving average error term). ARIMA is well-suited to sensor streams that have a predictable autocorrelation structure, such as slow-moving temperature trends or bearing clearance growth. Its output is a point forecast with a confidence interval, which can be used directly to estimate the time horizon before a threshold is breached.
Exponential Smoothing
Exponential smoothing methods (Holt-Winters and its variants) decompose a series into level, trend, and seasonal components, each updated using a smoothing constant that controls how quickly the model adapts to new data. They are fast, require little historical data compared to ARIMA, and handle seasonality natively. In maintenance applications, they are commonly used to forecast consumable usage rates and to smooth bearing envelope signals before threshold comparison.
Fast Fourier Transform (FFT)
FFT transforms a time-domain vibration signal into its frequency-domain representation, revealing the amplitude of each component frequency. Bearing faults produce characteristic frequencies based on geometry: ball pass frequency outer race (BPFO), ball pass frequency inner race (BPFI), ball spin frequency (BSF), and fundamental train frequency (FTF). An FFT spectrum showing a growing peak at BPFO is a direct early indicator of outer race pitting, detectable weeks before the fault produces audible noise. FFT is the backbone of vibration analysis and is implemented in virtually all industrial vibration monitoring hardware.
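A bare-bones FFT sketch with NumPy: the signal is synthetic, with a 60 Hz running-speed tone and a smaller 162 Hz tone standing in for a BPFO fault frequency (both frequencies are hypothetical, not taken from a real bearing geometry).

```python
import numpy as np

# 25.6 kHz sampling, 1-second window (so the bin resolution is 1 Hz).
fs = 25600
t = np.arange(fs) / fs
signal = 1.0 * np.sin(2 * np.pi * 60 * t)    # running-speed component
signal += 0.2 * np.sin(2 * np.pi * 162 * t)  # stand-in BPFO fault tone
signal += np.random.default_rng(3).normal(0, 0.05, fs)

# Real FFT, scaled so a unit-amplitude sine shows up as amplitude 1.0.
spectrum = np.abs(np.fft.rfft(signal)) * 2 / len(signal)
freqs = np.fft.rfftfreq(len(signal), d=1 / fs)

# The bin nearest the fault frequency stands well above the noise floor.
bpfo_bin = np.argmin(np.abs(freqs - 162))
print(f"amplitude near 162 Hz: {spectrum[bpfo_bin]:.2f}")
```

Trending that single bin's amplitude over successive measurements is exactly the "growing peak at BPFO" described above, reduced to a scalar time series.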
Anomaly Detection Models
Machine learning anomaly detection models, including isolation forests, autoencoders, and one-class SVMs, learn the multivariate normal behavior of an asset across multiple sensor channels simultaneously. When a new reading falls outside the learned normal envelope, the model flags it as an anomaly without requiring a predefined threshold. This is particularly valuable for complex assets where fault signatures are not fully characterized in advance. For more on this approach, see the glossary entry on anomaly detection.
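An isolation-forest sketch with scikit-learn (named in the tools table below); the two-channel readings and the contamination setting are hypothetical:

```python
import numpy as np
from sklearn.ensemble import IsolationForest

# Hypothetical healthy-period readings: vibration RMS (mm/s) and bearing temp (C).
rng = np.random.default_rng(4)
normal = np.column_stack([rng.normal(2.2, 0.2, 500), rng.normal(65, 2, 500)])

# Learn the multivariate normal envelope from healthy data only.
model = IsolationForest(contamination=0.01, random_state=0).fit(normal)

# predict() returns +1 inside the learned envelope, -1 outside it.
healthy = np.array([[2.3, 66.0]])
faulty = np.array([[6.4, 78.0]])
print(model.predict(healthy), model.predict(faulty))
```

Note that no vibration or temperature threshold was specified anywhere: the envelope is learned, which is the practical advantage for assets whose fault signatures are not characterized in advance.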
Worked Example: Bearing Fault Detection Over 90 Days
The following example shows how vibration RMS (root mean square) data collected over 90 days can reveal a developing bearing fault on a 75 kW centrifugal pump.
| Day | Vibration RMS (mm/s) | Observation |
|---|---|---|
| 1 | 2.1 | Baseline: normal operating level |
| 15 | 2.3 | Minor upward drift; within normal variation |
| 30 | 2.8 | BPFO sideband appears in FFT spectrum |
| 45 | 3.6 | BPFO peak growing; anomaly detection flag triggered |
| 60 | 5.1 | Trend model projects threshold breach in 20 to 30 days |
| 72 | 6.4 | Bearing replacement scheduled for next planned outage |
| 80 | 8.2 | Bearing replaced; inspection confirms outer race pitting |
At day 30, the FFT spectrum reveals a growing peak at BPFO. The RMS trend is still below common alert thresholds (typically 4.5 mm/s for this pump class), but the time series model fitted to days 1 to 30 projects that the rate of increase will carry the signal past 7.1 mm/s (the shutdown threshold) within 50 days. This 50-day forecast window allows the maintenance team to order a replacement bearing, assign a technician, and schedule the job during a planned production break at day 80, avoiding a catastrophic failure that would have caused 18 to 36 hours of unplanned downtime.
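The article does not specify which degradation model is fitted; as one common simple choice, the sketch below fits an exponential (log-linear) growth curve to the table's readings through day 60 and solves for the day the 7.1 mm/s shutdown threshold is crossed:

```python
import numpy as np

# RMS readings from the table above, days 1 through 60.
days = np.array([1, 15, 30, 45, 60])
rms = np.array([2.1, 2.3, 2.8, 3.6, 5.1])

# Bearing degradation often accelerates, so fit a line to log(RMS):
# log(rms) ~ slope * day + intercept, i.e. exponential growth.
slope, intercept = np.polyfit(days, np.log(rms), 1)

# Solve slope * day + intercept = log(7.1) for the projected breach day.
breach_day = (np.log(7.1) - intercept) / slope
print(f"projected shutdown-threshold breach around day {breach_day:.0f}")
```

The projection lands in the low-to-mid 80s, consistent with the table, where the signal reaches 8.2 mm/s by day 80. A production model would report a distribution around this date rather than a single number.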
This scenario illustrates how time series analysis bridges the gap between raw sensor data and a remaining useful life estimate that drives a maintenance work order.
Time Series Analysis vs. Statistical Process Control
Both methods use sensor data to flag abnormal conditions, but they address different questions and operate on different assumptions.
| Dimension | Time Series Analysis | Statistical Process Control |
|---|---|---|
| Primary question | Where is this asset heading over the next N days? | Is this reading outside acceptable limits right now? |
| Data model | Sequential dependency between observations | Observations assumed independent within a stable process |
| Alert mechanism | Forecast crosses a projected threshold; anomaly score exceeds limit | Data point falls outside control limits (UCL/LCL); run rules violated |
| Output | Point forecast with confidence interval; RUL estimate | In-control / out-of-control signal; process capability index |
| Time horizon | Days to weeks ahead (prognostic) | Real-time or near-real-time (diagnostic) |
| Best application | Bearing wear, insulation degradation, corrosion growth | Dimensional tolerances, fill weights, coating thickness |
| Handles trending data | Yes, by design | No: a trend violates the stationarity assumption and requires control chart recalibration |
In practice, statistical process control and time series analysis are complementary. SPC provides the real-time alert layer that catches sudden faults; time series analysis provides the prognostic layer that identifies slow degradation and forecasts intervention timing.
How Time Series Analysis Powers Predictive Maintenance
Predictive maintenance programs depend on two capabilities: detecting faults early and estimating how long the asset can continue operating safely. Time series analysis delivers both.
Early fault detection works because developing faults produce characteristic changes in the time series before they cause functional failure. A rolling element bearing developing a raceway spall initially increases its BPFO spectral amplitude by a few percent. An ARIMA model or anomaly detection algorithm fitted to the baseline vibration stream detects this shift as a statistically significant departure from the expected trajectory, typically 30 to 60 days before ISO 10816 vibration severity thresholds are breached.
Remaining useful life estimation works by fitting a degradation model to the historical trend and projecting it forward to the failure threshold. The projection produces a distribution of probable failure dates rather than a single point, which allows maintenance planners to schedule intervention conservatively without being unnecessarily early. This is the analytical core of predictive analytics platforms used in industrial asset management.
Mean time between failures (MTBF) calculations also benefit from time series data. When failure timestamps are available alongside continuous sensor histories, statistical models can identify which sensor trajectories reliably precede failure, allowing MTBF estimates to be conditioned on current asset health rather than calculated from raw population averages.
Data Requirements for Reliable Time Series Analysis
Sampling Rate
The Nyquist theorem states that the sampling rate must be at least twice the highest frequency of interest. For bearing fault analysis with a frequency range of interest up to 10 kHz, industrial accelerometers typically sample at 25.6 kHz, applying the industry-standard 2.56x factor that leaves headroom for the anti-aliasing filter. For process variables such as temperature or pressure, sampling every 1 to 5 minutes captures all relevant dynamics without excessive data volume.
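As a quick sanity check, the relationship between analysis bandwidth and sampling rate reduces to one multiplication (the helper function is hypothetical, written here only for illustration):

```python
def min_sampling_rate(f_max_hz: float, factor: float = 2.56) -> float:
    """Minimum sampling rate for a target analysis bandwidth.

    Nyquist requires at least a factor of 2; vibration hardware commonly
    uses 2.56 to leave headroom for the anti-aliasing filter roll-off.
    """
    return factor * f_max_hz

# A 10 kHz analysis bandwidth leads to the common 25.6 kHz rate.
print(min_sampling_rate(10_000))
```

Undersampling cannot be repaired after the fact: frequencies above half the sampling rate alias down into the analysis band and masquerade as real fault frequencies.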
Window Size
The analysis window must be long enough to contain at least two full cycles of the slowest oscillation of interest. For assets with weekly operational cycles, a 30-day window is the practical minimum. For assets with annual seasonality, 24 to 36 months of data is needed before seasonal decomposition becomes stable. Short windows cause models to misinterpret seasonal dips as degradation trends.
Stationarity
Most classical time series models assume stationarity: the mean, variance, and autocorrelation structure of the series do not change over time. Raw vibration trends from degrading bearings are non-stationary by definition. The standard remedies are differencing (subtracting the previous value from each value) and detrending (removing the fitted trend component before modeling residuals). Failure to address non-stationarity produces forecasts that extrapolate seasonal patterns incorrectly and generate false confidence in the projected failure date.
Data Continuity
Gaps in sensor data caused by network outages, sensor removal for maintenance, or equipment shutdowns create missing value problems. Short gaps (less than 5% of the window) can be interpolated using linear or spline methods. Longer gaps require either excluding the gap period from model training or using state-space models that handle irregular observation intervals natively. Systems used for vibration monitoring should log the reason for every data gap to prevent models from treating a planned shutdown as an anomaly.
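The short-gap interpolation rule can be sketched with pandas; the hourly series and the 3-point outage are hypothetical:

```python
import numpy as np
import pandas as pd

# Hypothetical hourly temperature log with a short sensor outage.
idx = pd.date_range("2024-01-01", periods=48, freq="h")
temps = pd.Series(65 + 0.1 * np.arange(48), index=idx)
temps.iloc[10:13] = np.nan  # 3-hour gap from a network outage

# Linear interpolation, but only across gaps of up to 4 consecutive points;
# longer gaps stay NaN so they can be excluded from model training instead.
filled = temps.interpolate(method="linear", limit=4)

print(filled.iloc[10:13].round(1).tolist())
```

The `limit` parameter encodes the "short gap" policy directly, so a multi-day shutdown never gets silently painted over with interpolated values.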
Common Tools and Software
Industrial time series analysis is implemented across a spectrum of tools, from programming libraries to purpose-built maintenance platforms.
| Tool / Platform | Primary Use | Typical User |
|---|---|---|
| Python (statsmodels, Prophet, scikit-learn) | Custom ARIMA, anomaly detection, ML models | Data engineers, reliability engineers |
| R (forecast, tseries) | Statistical modeling, academic-grade diagnostics | Statisticians, R&D teams |
| MATLAB Signal Processing Toolbox | FFT, order tracking, envelope analysis | Vibration analysts, OEM engineering teams |
| InfluxDB + Grafana | Time series storage, real-time dashboards | Maintenance operations teams |
| Tractian Condition Monitoring Platform | Automated trend analysis, FFT, fault alerts, RUL forecasting | Maintenance and reliability teams |
| OSIsoft PI (AVEVA) | Enterprise historian, process data analytics | Process industries (oil and gas, chemicals) |
The choice of tool depends on whether the team needs flexibility (programming libraries), visualization speed (Grafana), or a fully integrated sensor-to-insight workflow (purpose-built platforms). For condition-based maintenance programs that need to scale across dozens of machines without a dedicated data science team, integrated platforms that automate model training and alerting typically deliver faster time to value.
The Bottom Line
Time series analysis transforms sensor data from a historical record into a forward-looking maintenance tool. By decomposing signals into trend, seasonality, cyclicality, and noise, and then modeling how those components evolve, maintenance teams can detect bearing faults 30 to 60 days before functional failure, estimate remaining useful life with quantified uncertainty, and schedule interventions during planned windows rather than reacting to breakdowns.
The practical benefit is measurable: the worked example above shows a fault detected at day 30, a replacement scheduled at day 72, and a repair completed at day 80 during a planned break, avoiding 18 to 36 hours of unplanned downtime. At the typical cost of industrial downtime (USD 50,000 to 500,000 per hour in continuous process industries), the analytical investment pays back on the first avoided failure.
Successful implementation requires matching the sampling rate to the fault frequency of interest, gathering sufficient history for seasonal decomposition, and addressing non-stationarity before model fitting. Teams that combine time series analysis with a real-time alerting layer gain both the prognostic foresight and the operational response speed needed for a mature predictive maintenance program.
See Time Series Analysis in Action on Your Assets
Tractian's condition monitoring platform continuously analyzes vibration, temperature, and current time series from every connected asset, surfacing degradation trends and fault signatures before they cause unplanned downtime.
Frequently Asked Questions
What sampling rate do I need for vibration time series analysis?
The Nyquist theorem requires a sampling rate at least twice the highest frequency of interest. Rolling element bearing fault fundamentals fall in the tens to hundreds of hertz, but the structural resonance bands excited by bearing impacts extend into the kilohertz range, so a sampling rate of 25.6 kHz (a 10 kHz analysis bandwidth with the standard 2.56x factor) is common in industrial accelerometers. For slower phenomena such as temperature drift or oil viscosity trends, a sample every few minutes is sufficient. Match the sampling rate to the fault frequency range of the specific failure mode you are monitoring.
How much historical data is needed before time series models become reliable?
As a practical rule, ARIMA and exponential smoothing models need at least 50 observations to produce stable parameter estimates, and seasonal decomposition requires at least two full seasonal cycles. For industrial assets with slow degradation trends, 90 days of continuous sensor data is a reasonable minimum before drawing conclusions about trajectory. Assets with irregular operating schedules may require 6 to 12 months to capture the full range of normal variation.
What is the difference between time series analysis and statistical process control?
Statistical process control uses control charts to flag when a measurement falls outside predefined control limits, treating each observation largely independently. Time series analysis models the sequential dependency between observations to extract trend, seasonality, and cyclic patterns, and to forecast future values. SPC answers the question "Is this reading out of bounds right now?" while time series analysis answers "Where is this asset heading over the next 30 days?" The two approaches are complementary: SPC provides real-time alerts while time series analysis supports longer-horizon prognostics.
Can time series analysis predict the exact date of equipment failure?
Time series analysis produces a probabilistic forecast with a confidence interval, not a fixed date. It estimates remaining useful life as a range: for example, bearing failure is likely within 15 to 25 days at current degradation rate. Actual failure depends on load variability, lubrication condition, ambient temperature, and other factors that introduce uncertainty. The practical goal is to narrow the intervention window enough to schedule maintenance before failure while avoiding unnecessary early replacement.
Related terms
Industrial Vibration Analysis: Techniques
Industrial vibration analysis measures machine vibration to detect faults in rotating equipment before failure. Learn the key techniques, what faults it finds, and how it powers predictive maintenance programs.
Industrial Maintenance: Types
Industrial maintenance is the set of activities that keep manufacturing and industrial equipment operating reliably. Learn the types, key strategies, performance metrics, and how to build an effective program.
Industrial Automation: Types
Industrial automation uses control systems, robotics, and software to perform manufacturing tasks with minimal human input. Learn the types, key technologies, benefits, and how automation affects maintenance.
In-Process Control: Definition
In-process control (IPC) is the real-time monitoring and testing of production processes to catch defects before they compound. Learn how it works, key methods, and its role in regulated industries.
In-House Maintenance: Definition
In-house maintenance is when a company uses its own employees to handle maintenance tasks. Learn how it compares to outsourcing, its advantages, challenges, and how a CMMS supports internal teams.