A price forecast is not a prediction. It’s a distribution.
When someone says “the stock will be at $250 next week,” they’re either giving you a point estimate stripped of context or they’re making a claim they can’t back up. A proper forecast has two parts: where the model expects the price to land, and how uncertain the model is about that estimate. FinBrain’s Price Forecasts provide both, and this post explains how to read them.
## What ARIMA Actually Produces
ARIMA models the statistical structure of historical price movements — trends, momentum, and the way shocks persist. When calibrated on a ticker’s price history, the model produces three outputs for each future date in the forecast horizon:
- `mid` — The expected price (the point forecast)
- `lower` — The lower bound of the 95% confidence interval
- `upper` — The upper bound of the 95% confidence interval
A single forecast record looks like this:
```json
{
  "date": "2026-04-21",
  "mid": 205.32,
  "lower": 198.14,
  "upper": 212.83
}
```

The correct reading of this record: “Based on this ticker’s price history and the fitted statistical model, we expect the price on 2026-04-21 to be around $205.32. There’s roughly a 95% probability that the actual price falls between $198.14 and $212.83 — assuming the underlying dynamics that generated past returns continue to hold.”
## Why Log-Returns, Not Raw Prices
ARIMA isn’t applied directly to prices. It’s applied to log-returns — the log of consecutive price ratios.
Two reasons this matters:
- Stationarity. Raw prices trend, drift, and wander. Returns are much closer to stationary, which is what time-series models need to work properly.
- Distributional properties. Log-returns are roughly symmetric and have well-behaved statistical properties that let us construct reliable confidence intervals.
This detail has practical implications. The confidence intervals widen over time in a specific way — roughly proportional to the square root of the forecast horizon. A 1-day-ahead interval might be ±2%. A 10-day-ahead interval might be ±6%. A 60-day-ahead interval might be ±15%. This isn’t an artifact. It reflects the genuine accumulation of uncertainty over longer horizons.
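The square-root growth can be sketched numerically. This is an illustrative back-of-the-envelope calculation, not FinBrain’s actual interval math: it assumes a hypothetical 1% one-day return standard deviation and a normal approximation for the 95% interval.

```python
import math

# Illustrative only: for a random-walk-like returns process, the h-step
# return standard deviation scales with sqrt(h). SIGMA_DAILY is a
# hypothetical 1% one-day standard deviation, not a FinBrain parameter.
SIGMA_DAILY = 0.01
Z_95 = 1.96  # normal-approximation multiplier for a 95% interval

def ci_halfwidth(horizon_days, sigma=SIGMA_DAILY, z=Z_95):
    """Approximate 95% CI half-width, as a fraction of price."""
    return z * sigma * math.sqrt(horizon_days)

for h in (1, 10, 60):
    print(f"{h:>2}-day interval: ±{ci_halfwidth(h):.1%}")
```

With these assumed numbers, the half-widths come out near ±2%, ±6%, and ±15%, matching the rough figures above.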
## Why the Bounds Matter More Than the Point Estimate
The single most common mistake in using forecasts is treating `mid` as a prediction and ignoring `lower` and `upper`.
Consider two tickers with the same 10-day point forecast of +2%:
- Ticker A: `mid` = +2%, bounds span ±3% → 95% CI is roughly [-1%, +5%]
- Ticker B: `mid` = +2%, bounds span ±12% → 95% CI is roughly [-10%, +14%]
Both have the same point estimate. But Ticker A is a much tighter forecast. The model is confident in its direction. Ticker B is effectively saying “the price could go almost anywhere; the central tendency is slightly positive.”
A trader acting on point estimates alone would treat these signals identically. A trader using confidence intervals would size the Ticker A position far larger — or skip Ticker B entirely.
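To make the comparison concrete, here is a minimal sketch that computes the relative half-width of each ticker’s interval. The records are hypothetical numbers matching the example above, with prices normalized so 1.00 is the current price.

```python
# Hypothetical 10-day forecast records, normalized so 1.00 = current price.
ticker_a = {"mid": 1.02, "lower": 0.99, "upper": 1.05}  # +2%, roughly ±3%
ticker_b = {"mid": 1.02, "lower": 0.90, "upper": 1.14}  # +2%, roughly ±12%

def relative_halfwidth(forecast):
    """CI half-width as a fraction of the point forecast."""
    return (forecast["upper"] - forecast["lower"]) / 2 / forecast["mid"]

# Same point estimate, very different uncertainty:
print(f"Ticker A: ±{relative_halfwidth(ticker_a):.1%}")  # ±2.9%
print(f"Ticker B: ±{relative_halfwidth(ticker_b):.1%}")  # ±11.8%
```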
## The Expected-Move Fields
Alongside the forecast path, FinBrain provides summary directional signals:
| Field | Daily Forecasts | Monthly Forecasts |
|---|---|---|
| `metadata.expectedShortTerm` | 3-day expected % move | 3-month expected % move |
| `metadata.expectedMidTerm` | 5-day expected % move | 6-month expected % move |
| `metadata.expectedLongTerm` | 10-day expected % move | 12-month expected % move |
These are simple percentage changes between the last known price and the forecast `mid` at specific horizons. They’re useful for ranking and screening — finding tickers where the model expects the largest moves.
But they don’t carry the uncertainty information. If you’re comparing `expectedShortTerm` across tickers, you’re comparing central tendencies without weighting by how confident the model is. For serious use cases, always combine the expected-move fields with the confidence interval width for each ticker.
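One simple way to do that weighting is to divide the expected move by the relative interval width at the matching horizon. This is a sketch under assumptions: the dictionaries mirror the response shape shown in this post, not a verified API contract, and the synthetic records are made-up numbers.

```python
def confidence_weighted_move(data, horizon=3):
    """Divide the expected short-term move by the relative CI width at
    the matching horizon, so wide-interval tickers rank lower.
    `data` mirrors the response shape shown in this post."""
    expected = data["metadata"]["expectedShortTerm"]  # expected % move
    forecast = data["predictions"][horizon - 1]
    rel_width = (forecast["upper"] - forecast["lower"]) / forecast["mid"]
    return expected / max(rel_width, 1e-6)

# Synthetic records (not real API output): same expected move, different CIs.
tight = {"metadata": {"expectedShortTerm": 2.0},
         "predictions": [{}, {}, {"mid": 100.0, "lower": 98.0, "upper": 102.0}]}
wide = {"metadata": {"expectedShortTerm": 2.0},
        "predictions": [{}, {}, {"mid": 100.0, "lower": 90.0, "upper": 110.0}]}

print(confidence_weighted_move(tight))  # ≈ 50
print(confidence_weighted_move(wide))   # ≈ 10
```

The tighter forecast scores five times higher despite the identical expected move, which is exactly the ranking behavior the raw expected-move fields cannot give you.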
## How to Use Confidence Intervals in Practice
Here are four concrete ways to apply the confidence intervals:
### 1. Sizing Positions by Confidence
Rather than equal-weighting positions when a forecast is bullish, scale position size inversely with interval width. A signal with a tight CI deserves more capital than a signal with a wide CI. This is a basic application of Kelly-style sizing — bet more when your edge is clearer.
```python
import numpy as np
from finbrain import FinBrainClient

fb = FinBrainClient(api_key="YOUR_API_KEY")

def signal_strength(symbol):
    """Combine direction and confidence into a single score."""
    data = fb.predictions.ticker(symbol, prediction_type="daily")
    last_forecast = data["predictions"][-1]
    mid = last_forecast["mid"]
    lower = last_forecast["lower"]
    upper = last_forecast["upper"]

    # Expected return
    last_close = data["lastClose"]
    expected_return = (mid - last_close) / last_close

    # Normalized uncertainty (relative half-width of CI)
    ci_halfwidth = (upper - lower) / 2 / mid

    # Signal strength: expected return / uncertainty
    return expected_return / max(ci_halfwidth, 1e-6)
```

### 2. Filtering by Directional Clarity
A signal is directionally clear when the confidence interval doesn’t cross zero return — that is, both `lower` and `upper` imply the same direction versus the current price.
```python
def has_clear_direction(symbol, horizon=10):
    data = fb.predictions.ticker(symbol, prediction_type="daily")
    forecast = data["predictions"][horizon - 1]
    last_close = data["lastClose"]

    if forecast["lower"] > last_close:
        return "bullish"
    elif forecast["upper"] < last_close:
        return "bearish"
    else:
        return "uncertain"
```

Screening a universe for “directionally clear” signals removes the noise of tickers where the forecast is indistinguishable from no forecast.
### 3. Portfolio-Level Risk Budgeting
Aggregate CI widths across your holdings to measure forecast-level uncertainty at the portfolio level. If most of your holdings have wide intervals, your portfolio is exposed to forecast uncertainty; if most are tight, the forecasts are providing real information.
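As one way to build that aggregate, a portfolio-weighted average of each holding’s relative CI half-width gives a single uncertainty number. The holdings structure below is illustrative, not a FinBrain API shape, and the positions are made up.

```python
def portfolio_forecast_uncertainty(holdings):
    """Portfolio-weighted average of each holding's relative CI half-width.
    `holdings` maps symbol -> (portfolio weight, forecast record);
    the structure is illustrative, not an API contract."""
    return sum(
        weight * (f["upper"] - f["lower"]) / 2 / f["mid"]
        for weight, f in holdings.values()
    )

# Hypothetical two-position portfolio (weights sum to 1):
holdings = {
    "AAA": (0.6, {"mid": 100.0, "lower": 97.0, "upper": 103.0}),  # ±3%
    "BBB": (0.4, {"mid": 50.0, "lower": 44.0, "upper": 56.0}),    # ±12%
}
print(f"{portfolio_forecast_uncertainty(holdings):.1%}")  # 6.6%
```

Tracking this number over time shows whether your book is drifting toward high-uncertainty forecasts.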
### 4. Backtest Benchmarking
When backtesting strategies built on price forecasts, compare the realized prices to the forecast intervals rather than to the point estimates alone. A well-calibrated 95% interval should contain the realized price roughly 95% of the time. Systematic deviation from that rate tells you the model is miscalibrated for your specific use case — potentially a signal to retrain or adjust.
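A minimal coverage check might look like this; the field names follow the record format shown earlier, and the toy data is invented for illustration.

```python
def interval_coverage(forecasts, realized_prices):
    """Fraction of realized prices falling inside the forecast interval.
    A well-calibrated 95% interval should score close to 0.95."""
    hits = sum(
        f["lower"] <= price <= f["upper"]
        for f, price in zip(forecasts, realized_prices)
    )
    return hits / len(realized_prices)

# Toy data: four of five realized prices land inside their intervals.
forecasts = [{"lower": 98.0, "upper": 102.0}] * 5
realized = [99.0, 101.0, 100.0, 103.5, 98.0]
print(interval_coverage(forecasts, realized))  # 0.8
```

In a real backtest you would run this over many tickers and horizons; persistent coverage well below the nominal 95% is the miscalibration signal described above.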
## Key Takeaways
- Forecasts are distributions, not predictions. The `mid` tells you the expected value; the `lower` and `upper` tell you the uncertainty.
- Confidence intervals grow with forecast horizon — that’s a feature, not a bug.
- Treating the point estimate as a prediction and ignoring the bounds is the most common misuse of forecasting data.
- Professional use cases — position sizing, filtering, risk budgeting, backtesting — all benefit from using the full forecast distribution.
For a deeper discussion of how we arrived at ARIMA and why we moved away from deep learning, see Why We Chose Statistical Models Over Deep Learning. For the full dataset reference, see Price Forecasts Dataset.