
What is wMAPE? The Accuracy Metric Your Forecasting Tool Should Show You

Understand wMAPE, MAPE, MAE, and Bias — the key forecast accuracy metrics. Learn why weighted metrics matter for portfolio-level accuracy and what good looks like for e-commerce forecasting.

Foresyte Team · February 17, 2026 · 11 min read

If your forecasting tool does not show you wMAPE (weighted Mean Absolute Percentage Error), you are flying blind. wMAPE is the single most informative accuracy metric for e-commerce demand forecasting because it tells you how accurate your forecasts are weighted by the products that matter most to your revenue. Yet most operators have never heard of it, and many forecasting tools do not report it.

  • 35% — best-in-class wMAPE accuracy
  • 50–70% — industry average wMAPE
  • 4 metrics — MAE, MAPE, wMAPE, and Bias

This guide explains wMAPE and the other forecast accuracy metrics you need to know — MAPE, MAE, and Bias — when to use each, and what "good" looks like for an e-commerce portfolio.


Why Forecast Accuracy Metrics Matter

Before diving into formulas, let us establish why this matters. Forecast accuracy directly determines your inventory costs:

  • A forecast that is 10% too high means 10% excess inventory, carrying costs, and potential markdowns.
  • A forecast that is 10% too low means stockouts, lost revenue, and lost customers.
  • A forecast that is 50% too high or too low is catastrophic.

Without a reliable accuracy metric, you cannot answer basic questions: Is my forecasting getting better or worse? Which products have the worst forecasts? Should I trust this forecast enough to place a large purchase order?


The Four Metrics You Need to Know

1. MAE — Mean Absolute Error

Formula: MAE = (1/n) x SUM( |Actual_i - Forecast_i| )

MAE is the simplest metric: the average of the absolute differences between actual and forecast values, measured in the same units as your data (units sold, dollars, etc.).

Example: If your forecast was 100 units and actual was 120, the absolute error is 20 units. Average this across all products and periods.

When to use: MAE is useful when you want to understand error in concrete units. "Our average forecast is off by 15 units" is immediately actionable for inventory planning.

Limitation: MAE does not tell you whether 15 units of error is good or bad. For a product selling 1,000 units/month, 15 units of error is excellent (1.5%). For a product selling 20 units/month, 15 units of error is terrible (75%).
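The formula above is a one-liner in practice. Here is a minimal sketch in plain Python, assuming per-product actuals and forecasts arrive as parallel lists (the function name is illustrative):

```python
def mae(actuals, forecasts):
    """Mean Absolute Error, in the same units as the data (units, dollars, ...)."""
    errors = [abs(a - f) for a, f in zip(actuals, forecasts)]
    return sum(errors) / len(errors)

# Forecast 100 units, actual 120 -> absolute error of 20 units.
print(mae([120], [100]))  # 20.0
```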

2. MAPE — Mean Absolute Percentage Error

Formula: MAPE = (1/n) x SUM( |Actual_i - Forecast_i| / Actual_i ) x 100%

MAPE expresses error as a percentage of actual demand, making it comparable across products with different volume levels.

Example: Forecast 100 units, actual 120 units. MAPE for this observation = |120 - 100| / 120 = 16.7%.

When to use: MAPE is good for comparing accuracy across individual products. A product with 15% MAPE is better forecasted than one with 40% MAPE, regardless of their volume levels.

Limitations (and there are serious ones):

  • Explodes when actuals are near zero. If actual demand is 1 unit and your forecast was 5, MAPE = 400%. This single low-volume product can dominate your portfolio-level MAPE, even though the 4-unit error is economically insignificant.
  • Treats all products equally. A product selling 10,000 units/month and a product selling 10 units/month contribute equally to MAPE. This is not how business impact works — a 20% error on the 10,000-unit product is far more costly than a 20% error on the 10-unit product.
  • Asymmetric. MAPE is bounded at 100% for under-forecasts (the worst case is forecasting zero) but unbounded for over-forecasts: a forecast of 300 against an actual of 100 gives a 200% error, while a forecast of 0 against the same actual gives only 100%. Averaged across a portfolio, this asymmetry punishes over-forecasting — especially over-forecasting of low-volume products — far more heavily than under-forecasting.
MAPE Pitfall

A single low-volume product with 400% MAPE can drag your entire portfolio metric from "good" to "terrible" — even if the actual error is just 4 units. This is why wMAPE exists.
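The explosion is easy to demonstrate. A sketch in plain Python (illustrative function name; assumes no actual is exactly zero, where MAPE is undefined):

```python
def mape(actuals, forecasts):
    """Mean Absolute Percentage Error (%). Undefined when any actual is 0."""
    terms = [abs(a - f) / a for a, f in zip(actuals, forecasts)]
    return 100 * sum(terms) / len(terms)

# A healthy product: forecast 100, actual 120 -> 16.7% error.
print(round(mape([120], [100]), 1))         # 16.7
# Add one 1-unit product forecast at 5 (400% error) and the
# portfolio average jumps to 208.3% -- from a 4-unit miss.
print(round(mape([120, 1], [100, 5]), 1))   # 208.3
```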

3. wMAPE — Weighted Mean Absolute Percentage Error

Formula: wMAPE = SUM( |Actual_i - Forecast_i| ) / SUM( Actual_i ) x 100%

wMAPE solves the core problems with MAPE by weighting each product's error by its volume. Instead of averaging percentage errors (where each product counts equally), wMAPE divides total absolute error by total actual demand.

Example:

| Product | Actual | Forecast | Absolute Error | MAPE |
|---|---|---|---|---|
| A (high volume) | 10,000 | 9,500 | 500 | 5% |
| B (medium volume) | 500 | 400 | 100 | 20% |
| C (low volume) | 5 | 15 | 10 | 200% |
  • Simple MAPE = (5% + 20% + 200%) / 3 = 75% — dominated by the low-volume product.
  • wMAPE = (500 + 100 + 10) / (10,000 + 500 + 5) = 610 / 10,505 = 5.8% — reflects the reality that the portfolio is very well forecasted.

Simple MAPE says 75%. wMAPE says 5.8%. Same data, dramatically different story. wMAPE tells the truth.

This example illustrates why wMAPE is the right metric for portfolio-level accuracy. The 200% error on Product C sounds terrible, but it represents only 10 units of actual error. The 5% error on Product A represents 500 units. wMAPE correctly prioritizes the business impact.
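The three-product example above can be reproduced in a few lines. A minimal sketch, assuming parallel lists of actuals and forecasts (the function name is illustrative):

```python
def wmape(actuals, forecasts):
    """Weighted MAPE (%): total absolute error over total actual demand."""
    total_error = sum(abs(a - f) for a, f in zip(actuals, forecasts))
    return 100 * total_error / sum(actuals)

actuals   = [10_000, 500, 5]   # Products A, B, C
forecasts = [9_500, 400, 15]
print(round(wmape(actuals, forecasts), 1))  # 5.8 -- vs. simple MAPE of 75%
```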

When to use: wMAPE is the best single metric for portfolio-level forecast accuracy. It answers the question: "For every dollar of demand in my portfolio, how many cents of error do I have?" This directly connects to inventory planning and financial outcomes.

Limitation: wMAPE can mask poor accuracy on low-volume products. If you have long-tail products where stockouts are still problematic (for example, products with contractual service-level obligations), you should track both wMAPE (portfolio level) and MAPE (product level) for those specific items.

Key Takeaway

wMAPE is the best single metric for portfolio-level accuracy because it weights errors by volume. Use wMAPE as your primary metric, Bias as your secondary, and drill into per-product MAPE only where individual product accuracy matters.

4. Bias — Forecast Bias

Formula: Bias = SUM( Forecast_i - Actual_i ) / SUM( Actual_i ) x 100%

Bias tells you whether your forecasts are systematically too high (positive bias) or too low (negative bias). MAE and MAPE tell you how much error you have; Bias tells you which direction.

Example: If your total forecasts across all products were 105,000 units and total actuals were 100,000 units, your Bias = +5%. You are systematically over-forecasting by 5%.

Why it matters:

  • Positive bias (over-forecasting) leads to excess inventory, carrying costs, and markdowns.
  • Negative bias (under-forecasting) leads to stockouts and lost revenue.
  • A good forecasting system should have bias near zero — errors should be roughly symmetric.

Red flag: If your wMAPE is 35% but your Bias is +20%, your forecast is not just inaccurate — it is consistently wrong in the same direction. This is worse than unbiased error of the same magnitude because your safety stock calculations assume errors are centered around the forecast. Persistent bias means your safety stock is systematically miscalibrated.
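Because Bias keeps the sign of each error, it is the one metric here where positives and negatives cancel. A minimal sketch (illustrative function name, same parallel-list convention as above):

```python
def bias(actuals, forecasts):
    """Signed forecast bias (%): positive = over-forecasting."""
    signed_error = sum(f - a for a, f in zip(actuals, forecasts))
    return 100 * signed_error / sum(actuals)

# Total forecast 105,000 vs. total actual 100,000 -> +5% bias.
print(round(bias([100_000], [105_000]), 1))  # 5.0
# Equal-magnitude errors in opposite directions cancel to zero bias.
print(round(bias([100, 100], [110, 90]), 1))  # 0.0
```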

Pro Tip

Always check Bias alongside wMAPE. A tool with 35% wMAPE and 2% Bias is far more trustworthy than one with 35% wMAPE and 15% Bias — unbiased errors partially cancel out over time, while biased errors compound into one-sided inventory mistakes.

Compare your current accuracy against Foresyte's 35% wMAPE
Start 14-Day Free Trial

Metric Comparison Summary

| Metric | Best For | Scale-Independent? | Handles Low Volume? | Shows Direction? |
|---|---|---|---|---|
| MAE | Concrete unit-level error | No | Yes | No |
| MAPE | Per-product comparison | Yes | No (explodes near zero) | No |
| wMAPE | Portfolio-level accuracy | Yes | Yes (volume-weighted) | No |
| Bias | Systematic error direction | Yes | Yes | Yes |

Recommendation: Report wMAPE as your primary accuracy metric, Bias as your secondary metric, and use per-product MAPE or MAE for drilling into individual product performance.
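All four metrics share the same inputs, so it is natural to compute them together. A hedged sketch of a combined report, run here on the three-product example from the wMAPE section (function name is illustrative):

```python
def accuracy_report(actuals, forecasts):
    """Compute MAE, MAPE, wMAPE, and Bias (%) for one set of forecasts."""
    n = len(actuals)
    abs_err = [abs(a - f) for a, f in zip(actuals, forecasts)]
    return {
        "MAE":   sum(abs_err) / n,
        "MAPE":  100 * sum(e / a for e, a in zip(abs_err, actuals)) / n,
        "wMAPE": 100 * sum(abs_err) / sum(actuals),
        "Bias":  100 * sum(f - a for a, f in zip(actuals, forecasts)) / sum(actuals),
    }

report = accuracy_report([10_000, 500, 5], [9_500, 400, 15])
print({k: round(v, 1) for k, v in report.items()})
# MAPE says 75.0 while wMAPE says 5.8 -- the gap itself is diagnostic:
# it signals that low-volume products are dragging the simple average.
```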


What "Good" Looks Like

Accuracy benchmarks depend on your product category, catalog complexity, and data history. But here are general guidelines for e-commerce demand forecasting:

| wMAPE Range | Assessment | Typical Context |
|---|---|---|
| Under 25% | Excellent | Stable, high-volume staples with long history |
| 25%–40% | Good | Diverse portfolio, mix of stable and seasonal products |
| 40%–55% | Average | Industry standard for mid-market e-commerce |
| 55%–70% | Below average | Sparse data, many new products, or poor model fit |
| Above 70% | Poor | Essentially guessing — explore better methods |

Important context: these benchmarks are for backtested accuracy — meaning the metric is calculated on held-out historical data that the model did not see during training. If your tool only reports accuracy on training data ("in-sample accuracy"), the numbers will look much better than reality. Always ask for out-of-sample, backtested metrics.

Key Takeaway

Best-in-class e-commerce forecasting achieves 25–40% wMAPE. The industry average is 50–70%. Always insist on backtested (out-of-sample) accuracy — in-sample metrics are misleadingly optimistic.


How to Validate Your Forecasting Tool's Accuracy

Here is a step-by-step process to honestly evaluate any forecasting tool's accuracy claims:

1
Understand the Backtesting Methodology
Ask the tool: "How do you calculate accuracy?" You want rolling-origin backtesting — the model is trained on data up to a cutoff date, generates forecasts for the period after the cutoff, and then accuracy is measured against actuals. The cutoff should be rolled forward multiple times to get a robust estimate.
2
Check the Metric
Is the tool reporting MAPE or wMAPE? As we discussed, MAPE can be misleading for portfolios with low-volume products. wMAPE is the better metric for overall portfolio assessment.
3
Look at Bias
A tool with 35% wMAPE and 2% Bias is far more trustworthy than a tool with 35% wMAPE and 15% Bias. The first tool's errors are random and cancel out. The second tool is systematically over-predicting, which means your inventory plans will be systematically wrong.
4
Examine Per-Product Accuracy
Portfolio-level wMAPE can hide pockets of very poor accuracy. Look at the distribution of per-product accuracy. A tool that is at 35% wMAPE overall but has 100 products with greater than 80% MAPE may be achieving its good portfolio score by being excellent on high-volume products and terrible on everything else.
5
Test on Your Data
The only accuracy number that matters is the one on your data. Vendor benchmarks on demo datasets are not representative. Demand the ability to run the tool on your actual historical sales data and see backtested accuracy before committing.
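Step 1's rolling-origin procedure can be sketched in a few lines. This is a toy illustration, not a real model: a naive "repeat the last period" forecaster stands in for whatever the tool actually runs, and the fold layout (fixed horizon, evenly spaced cutoffs) is one of several reasonable conventions:

```python
def naive_forecast(history, horizon):
    """Placeholder model: repeat the last observed value."""
    return [history[-1]] * horizon

def rolling_origin_wmape(series, horizon=3, n_folds=4):
    """Average wMAPE over several rolled-forward cutoffs."""
    scores = []
    for fold in range(n_folds):
        cutoff = len(series) - horizon * (n_folds - fold)
        train, test = series[:cutoff], series[cutoff:cutoff + horizon]
        preds = naive_forecast(train, horizon)       # trained on pre-cutoff data only
        err = sum(abs(a - f) for a, f in zip(test, preds))
        scores.append(100 * err / sum(test))         # out-of-sample wMAPE for this fold
    return sum(scores) / len(scores)

sales = [100, 110, 120, 115, 130, 125, 140, 150, 145, 160, 170, 165,
         180, 175, 190]
print(round(rolling_origin_wmape(sales), 1))
```

The key property to verify in any vendor's methodology is the one this sketch enforces: each fold's forecasts are scored only against data the model never saw.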
Key Takeaway

Do not trust vendor benchmarks on demo data. The only accuracy that matters is backtested performance on your actual sales history. Demand rolling-origin backtesting, check for bias, and examine per-product accuracy distributions before committing.


How Foresyte Reports Accuracy

Foresyte provides all four metrics — wMAPE, MAPE, MAE, and Bias — computed through rolling-origin backtesting on your actual sales data. The platform achieves a 35% wMAPE across diverse e-commerce portfolios, well below the 50–70% industry average.

Every product gets an individual confidence score based on its data quality, history length, and model fit. Products with low confidence are flagged for human review, so you know exactly where to trust the automation and where to apply judgment. The accuracy dashboard lets you drill from portfolio-level wMAPE down to individual product performance, giving you full transparency into how well the system is working.


Accuracy is not a feature to take on faith — it needs to be proven on your data, with your products, using honest backtesting methodology. Start a 14-day free trial with Foresyte to see your portfolio's wMAPE, Bias, and per-product accuracy computed on your own historical sales. Connect your marketplace data and get backtested results in under 15 minutes.
