Every business with historical data has a forecasting problem. Most solve it with spreadsheets and gut feel. Here is how to do it properly with statistical and ML-based time series models.
A time series has up to four components: trend (long-term direction), seasonality (regular periodic patterns), cyclicality (irregular longer-term patterns), and residual noise. A good forecast model needs to decompose these components, model each one appropriately, and recombine them for the prediction.
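Decomposition is easy to sketch from scratch. The following is a minimal classical additive decomposition (not any particular library's implementation): a centered moving average estimates the trend, per-phase averages of the detrended series estimate the seasonal component, and whatever is left over is the residual. The synthetic series below is an assumption for illustration.

```python
import numpy as np

def decompose(series, period):
    """Classical additive decomposition: series = trend + seasonal + residual.

    A minimal sketch: centered moving average for the trend, per-phase
    means of the detrended series for the seasonal component. Edge
    values of the trend are distorted by zero-padding; real libraries
    handle boundaries more carefully.
    """
    series = np.asarray(series, dtype=float)
    n = len(series)
    # Centered moving average over one full cycle cancels the seasonality,
    # leaving an estimate of the trend.
    kernel = np.ones(period) / period
    trend = np.convolve(series, kernel, mode="same")
    # Average the detrended values at each seasonal phase (0..period-1).
    detrended = series - trend
    seasonal = np.array([detrended[i::period].mean() for i in range(period)])
    seasonal -= seasonal.mean()  # center so the seasonal effects sum to ~0
    seasonal_full = np.tile(seasonal, n // period + 1)[:n]
    residual = series - trend - seasonal_full
    return trend, seasonal_full, residual

# Synthetic example: linear trend + period-7 ("weekly") seasonality + noise.
rng = np.random.default_rng(0)
t = np.arange(140)
y = 0.5 * t + 10 * np.sin(2 * np.pi * t / 7) + rng.normal(0, 1, len(t))
trend, seasonal, resid = decompose(y, period=7)
```

By construction the three components recombine exactly into the original series, which is the property a forecaster exploits: model each piece separately, then add them back together.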
The naive approach — fit a linear trend and extrapolate — fails whenever the series has seasonality (which most business metrics do), structural breaks (COVID-19 was a structural break for every retail and hospitality time series in existence), or nonlinear dynamics.
The other common mistake is evaluating forecast accuracy on the training data. A model that perfectly fits historical data is not necessarily a good forecaster — it may be overfitting. Proper evaluation requires a holdout test set and metrics like MAPE (Mean Absolute Percentage Error) or RMSE computed on data the model never saw.
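Both metrics are a few lines of code. The sketch below uses a chronological split (never a random shuffle, which would leak future information into training) and a naive last-value baseline; the toy data is an assumption for illustration.

```python
import numpy as np

def mape(actual, forecast):
    """Mean Absolute Percentage Error, in percent. Undefined when actual == 0."""
    actual, forecast = np.asarray(actual, float), np.asarray(forecast, float)
    return 100 * np.mean(np.abs((actual - forecast) / actual))

def rmse(actual, forecast):
    """Root Mean Squared Error, in the units of the series."""
    actual, forecast = np.asarray(actual, float), np.asarray(forecast, float)
    return np.sqrt(np.mean((actual - forecast) ** 2))

# Chronological holdout split: train on the past, test on the future.
y = np.array([100, 110, 105, 120, 130, 125, 140, 150, 145, 160])
train, test = y[:8], y[8:]

# Naive baseline: repeat the last training value over the horizon.
# Any real model should beat this; if it doesn't, something is wrong.
forecast = np.full(len(test), train[-1])
print(round(mape(test, forecast), 2))  # → 4.85
print(round(rmse(test, forecast), 2))  # → 7.91
```

Always report holdout metrics alongside a naive baseline; a low MAPE means little until you know what a trivial forecast scores on the same data.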
Four model families dominate practical time series forecasting. Each has strengths and weaknesses that make it appropriate for different data characteristics.
| Model | Best For | Weakness |
|---|---|---|
| ARIMA/SARIMA | Stationary series, short-term forecasts, interpretable models | Requires stationarity, poor with complex seasonality |
| Prophet (Meta) | Business metrics with strong seasonality, holiday effects, missing data | Defaults to additive decomposition, slow for large datasets |
| LSTM | Long sequences with complex nonlinear patterns, multivariate inputs | Needs lots of data, hard to interpret, slow to train |
| ETS (Exponential Smoothing) | Simple trend + seasonality, fast, robust to outliers | Cannot handle complex patterns or external regressors |
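To make the ETS row concrete, here is Holt's linear-trend method (exponential smoothing with an additive trend) written from scratch. This is a pedagogical sketch, not a library implementation; the smoothing parameters and demand data are assumptions chosen for illustration.

```python
import numpy as np

def holt_linear(series, alpha=0.5, beta=0.3, horizon=3):
    """Holt's linear-trend exponential smoothing.

    alpha smooths the level, beta smooths the trend; higher values
    weight recent observations more heavily. Returns a horizon-length
    array of point forecasts.
    """
    series = np.asarray(series, dtype=float)
    level, trend = series[0], series[1] - series[0]
    for y in series[1:]:
        prev_level = level
        # New level: blend the observation with the previous projection.
        level = alpha * y + (1 - alpha) * (level + trend)
        # New trend: blend the observed level change with the old trend.
        trend = beta * (level - prev_level) + (1 - beta) * trend
    # h-step-ahead forecast: last level plus h steps of the last trend.
    return np.array([level + (h + 1) * trend for h in range(horizon)])

demand = [10, 12, 13, 15, 16, 18]
print(holt_linear(demand))  # three increasing point forecasts
```

Note the table's caveat in action: the forecast is a straight line from the last level, so this variant captures trend but not seasonality or external regressors.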
The time series endpoint in the MainState Labs ML API automatically selects the best model for your data using cross-validation, or you can specify a model explicitly. It returns point forecasts, prediction intervals, and a decomposition of the series into trend, seasonal, and residual components.
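A request to an endpoint like this is just a JSON body of observations plus forecast options. The field names below (`series`, `horizon`, `model`, `confidence`) are illustrative assumptions, not the documented schema; consult the API reference for the actual contract.

```python
import json

# Hypothetical request body for a time series forecasting endpoint.
# All field names here are illustrative, not the documented schema.
payload = {
    "series": [
        {"timestamp": "2024-01-01", "value": 1200.0},
        {"timestamp": "2024-01-02", "value": 1350.0},
        {"timestamp": "2024-01-03", "value": 1280.0},
    ],
    "horizon": 30,       # days ahead to forecast
    "model": "auto",     # let the service pick, or name one explicitly
    "confidence": 0.95,  # coverage of the returned prediction intervals
}
body = json.dumps(payload)  # POST this with any HTTP client
```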
E-commerce platforms in Southeast Asia use time series forecasting for inventory management. Overstocking ties up capital; understocking loses sales. A 10% improvement in forecast accuracy can translate directly to 5-8% reduction in inventory carrying costs — material savings for a platform doing $100M in GMV.
SaaS companies use it for revenue forecasting and churn prediction. Monthly recurring revenue is a time series with trend and seasonality (Q4 is often stronger than Q1 for B2B software). Accurate 90-day revenue forecasts improve hiring decisions, cash management, and investor communications.
Energy companies in India and Japan use it for electricity demand forecasting. Grid operators need to predict load 24 to 48 hours ahead to schedule generation capacity. A 1% improvement in forecast accuracy at national scale translates to hundreds of millions in reduced reserve capacity costs.
The time series endpoint accepts a JSON array of {timestamp, value} pairs. Minimum recommended length is 2 full seasonal cycles — for daily data with weekly seasonality, that means at least 2 weeks of data; for monthly data with yearly seasonality, at least 2 years. More data generally produces better forecasts up to a point of diminishing returns.
Missing values are handled automatically via interpolation. Outliers are detected and treated as anomalies rather than signal. The forecast horizon is configurable — you can request 7-day, 30-day, or 90-day ahead forecasts with prediction intervals at any confidence level.
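Even though the service handles these cases, validating client-side catches problems before a request is sent. The sketch below (a hypothetical helper, not part of any SDK) enforces the two-cycle minimum and fills gaps by linear interpolation, mirroring the server-side behavior described above.

```python
import numpy as np

def validate_series(values, period):
    """Client-side pre-flight check for a forecast request.

    Rejects series shorter than two full seasonal cycles and fills
    missing values (None or NaN) by linear interpolation. A sketch;
    the endpoint performs its own handling server-side.
    """
    y = np.array([np.nan if v is None else float(v) for v in values])
    if len(y) < 2 * period:
        raise ValueError(
            f"need >= {2 * period} points for period {period}, got {len(y)}"
        )
    idx = np.arange(len(y))
    mask = np.isnan(y)
    # Interpolate each gap from its nearest observed neighbors.
    y[mask] = np.interp(idx[mask], idx[~mask], y[~mask])
    return y

# Two full weekly cycles of daily data, with one missing observation.
daily = [5, 6, None, 8, 7, 6, 5, 5, 6, 7, 8, 7, 6, 5]
clean = validate_series(daily, period=7)  # the gap becomes 7.0
```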
Add time series forecasting to your application in minutes.
Try the ML API →