User's Guide to the Probabilistic Long-Range Forecasts
The probabilistic forecasts provide users with additional information that is not contained in the deterministic forecasts. This product gives estimates of the probability that the seasonal mean will be above, near or below normal. For each category, the probability is obtained by counting the number of ensemble members that predict a seasonal mean in that category, and then dividing by the total number of ensemble members (see "How are the probabilistic forecasts produced?" below for details on the computation).
What do the probabilistic forecast maps represent?
The forecast maps are composed of 3 panels, one for each of the categories: above normal, near normal and below normal. On the maps for temperature, the colors range from yellow to red for above normal, grey to purple for near normal, and light to dark blue for below normal. On the maps for precipitation, the colors range from green to blue for above normal, grey to purple for near normal, and yellow to dark brown for below normal. The months for which the forecast is valid are indicated at the bottom of each panel. The date of issuance is shown in the top right corner. The color scale on the right side of the maps indicates the forecast probability, in 10% intervals, as the percentage of ensemble members that predict that category.
How are the deterministic and probabilistic forecasts related?
The deterministic and probabilistic forecasts are two different ways of presenting the forecast information. The deterministic forecasts show the predicted forecast category (above, near or below normal), obtained from the average of the 20 model runs. The historical percent correct maps attached to the deterministic maps give an indication of the skill of the prediction system, based on verification of the forecasts over a number of years (typically 30). This is useful, but the skill maps do not provide information on the confidence that can be attributed to the specific current forecast.
This is where the probabilistic forecasts can add important information to the deterministic forecasts by giving an indication of the specificity of the forecast. For example, a deterministic forecast of above normal conditions that is accompanied by probabilities of 45%, 30% and 25% for the above normal, normal and below normal categories would be less clear than a forecast with probabilities of 60%, 25% and 15% respectively. In the latter case, the forecast is that there are better than even odds of above average conditions, and that this is accompanied by relatively small odds of below average conditions. Similarly, a forecast of near normal conditions might be accompanied by probabilities of 30%, 40% and 30%, in which case the forecast is not very specific, or probabilities of 15%, 70% and 15%, in which case the forecast of near-normal conditions is much more specific.
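The comparison above can be sketched in a few lines of code. This is an illustrative example only, not part of the forecast system: it summarizes the specificity of a forecast by its highest category probability, using the numbers from the text.

```python
# Illustrative sketch (hypothetical helper, not an operational tool):
# a forecast is more "specific" when one category clearly dominates.

def most_probable(probs):
    """Return (category, probability) for the most likely category."""
    category = max(probs, key=probs.get)
    return category, probs[category]

# The two "above normal" forecasts discussed in the text:
specific = {"above": 0.60, "near": 0.25, "below": 0.15}
unclear = {"above": 0.45, "near": 0.30, "below": 0.25}

for name, forecast in [("specific", specific), ("unclear", unclear)]:
    cat, p = most_probable(forecast)
    print(f"{name}: most probable category is '{cat}' at {p:.0%}")
```

Both forecasts point to the same category, but the first leaves much less probability to the other two outcomes.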
It must be noted that while the deterministic forecast skill maps are based on the verification of the 3 categories, many studies of seasonal forecast systems developed around the world show that the near normal category is consistently less well predicted than the above and below categories (Van den Dool and Toth, 1991; Gagnon et al., 2000; Gagnon and Verret, 2000, 2001; Kharin and Zwiers, 2003). The main reason is that the above and below categories are open ended: they are constrained on only one side (by the near normal category), so a forecast of above normal is correct whether the observed conditions are slightly or much above normal, and the same applies to the below normal category. The near normal category, in contrast, is constrained on both sides, so only a comparatively small range of observed values verifies it. Therefore, less confidence should be placed in near normal forecasts than in above or below normal forecasts, irrespective of what the probabilistic forecasts show.
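The asymmetry between the open-ended and bounded categories can be made concrete with a small sketch. The tercile bounds of ±0.43 (the terciles of a standard normal anomaly) are an illustrative assumption, not the operational values:

```python
# Sketch of why "near normal" is the hardest category to verify: it is
# bounded on both sides, while "above" and "below" are open ended.
# The bounds below are assumed for illustration only.

LOWER, UPPER = -0.43, 0.43  # climatological tercile bounds (assumed)

def category(anomaly):
    """Assign a standardized seasonal anomaly to one of three categories."""
    if anomaly > UPPER:
        return "above normal"   # correct for ANY value above UPPER
    if anomaly < LOWER:
        return "below normal"   # correct for ANY value below LOWER
    return "near normal"        # correct only inside a narrow window

# An "above normal" forecast verifies for a slight anomaly and an
# extreme one alike; "near normal" verifies only within [-0.43, 0.43].
print(category(0.5), category(3.0), category(0.1))
```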
It should also be mentioned that the probabilistic forecasts are not calibrated. Please see the calibration section.
How are the probabilistic forecasts produced?
The current seasonal forecast results from an ensemble of 20 coupled climate model runs, with 10 runs of each of the Canadian Centre for Climate Modelling and Analysis (CCCma) models CanCM3 and CanCM4.
The forecast probabilities are calculated by counting the number of individual members in each of the three categories at every location and then dividing by the ensemble size. With 20 members, the raw probabilities come in steps of 5%. For example, if at one location 13 members predict above normal, 6 members near normal and 1 member below normal, the forecast probabilities will be respectively 65% for above normal, 30% for near normal and 5% for below normal. On the seasonal forecast map, probabilities are not indicated with such detail; they are instead grouped into 10% bin intervals. A 5% probability thus ends up in the 0-9% interval, while a 65% probability falls in the 60-69% interval.
The table below shows the relation between number of members, probabilities and bin intervals.
| Number of members | Probability | Bin |
|-------------------|-------------|-----|
| 0-1 | 0%, 5% | 0-9% |
| 2-3 | 10%, 15% | 10-19% |
| 4-5 | 20%, 25% | 20-29% |
| 6-7 | 30%, 35% | 30-39% |
| 8-9 | 40%, 45% | 40-49% |
| 10-11 | 50%, 55% | 50-59% |
| 12-13 | 60%, 65% | 60-69% |
| 14-15 | 70%, 75% | 70-79% |
| 16-17 | 80%, 85% | 80-89% |
| 18-19 | 90%, 95% | 90-99% |
| 20 | 100% | 100% |
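The counting and binning described above can be sketched as follows. This is a minimal illustration of the described computation, not the operational code; the 13/6/1 split reproduces the worked example in the text.

```python
# Count ensemble members per category, divide by the ensemble size,
# then group the result into the 10% bins shown on the maps.

def forecast_probabilities(members):
    """members: list of category labels, one per ensemble member."""
    n = len(members)
    return {cat: 100 * members.count(cat) // n  # percent of members
            for cat in ("above", "near", "below")}

def display_bin(percent):
    """Group an exact probability into the 10% interval shown on the map."""
    if percent >= 100:
        return "100%"
    low = (percent // 10) * 10
    return f"{low}-{low + 9}%"

ensemble = ["above"] * 13 + ["near"] * 6 + ["below"] * 1
probs = forecast_probabilities(ensemble)
print(probs)                                    # {'above': 65, 'near': 30, 'below': 5}
print({c: display_bin(p) for c, p in probs.items()})
```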
Definition of the categories
The probabilistic forecasts are categorized as below normal, near normal and above normal. The definition of these 3 categories is the same as for the deterministic forecasts.
How to use the maps?
- Look at the deterministic forecast and skill map for the temperature and precipitation anomalies to determine the forecast category (above, near or below normal). This is what you would use if you had to summarize the forecast in one word.
- Look at the probabilistic forecast maps for each of the 3 categories in the area of interest.
- Compare the color on the probabilistic maps with the scale on the right side. The number that you obtain is an estimate of the probability of occurrence for each category. As a rough guide, a higher probability implies higher confidence in the forecast (see examples below). It is recommended to read the calibration section for additional information on how to calibrate the probabilities.
It has to be noted that the surface air temperature forecast is a prediction of the anomaly of the mean daily temperature at 2 meters (i.e. at standard observation Stevenson screen height). It is not a forecast of the maximum or of the minimum daily temperature. For more information on what is predicted by Environment Canada seasonal forecasts please read this frequently asked questions page.
Examples
- Assume that the deterministic forecast is above normal and the probabilistic maps show 80 to 89% for above normal, 10 to 19% for near normal and 0 to 9% for below normal. Based on this, one would conclude that the probability that temperatures will be above normal is high (80 to 89%) and that the forecast is very specific. One might also say that the probability of below normal temperatures is low.
- Now, suppose that the deterministic forecast is above normal, but the probabilistic maps show 50 to 59% for above normal, 20 to 29% for near normal and 20 to 29% for below normal. This means that only about half of the model runs (10 or 11 of the 20 members) were actually above normal. Based on this, one would conclude that even if the deterministic forecast is for above normal temperatures, there is also a good chance that conditions could be near normal or below normal. The odds of occurrence of above normal conditions would certainly be lower than in the previous example.
- Suppose that the deterministic forecast is near normal and that the probabilistic maps show 20 to 29% for above normal, 40 to 49% for near normal and 40 to 49% for below normal. This is the least specific of the three examples. One would conclude that there is a good chance of near normal as well as below normal conditions, but that above normal temperatures are unlikely.
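The reasoning in the three examples above can be condensed into a small sketch. The helper and its confidence thresholds are hypothetical, chosen only to mirror the examples, not an official interpretation rule:

```python
# Hypothetical helper (not part of the product): turn the binned
# probabilities read off the three panels into a one-line reading.
# The 70% and 50% thresholds are illustrative assumptions.

def read_forecast(above, near, below):
    """Each argument is the lower edge of the 10% bin read off a map."""
    probs = {"above normal": above, "near normal": near, "below normal": below}
    best = max(probs, key=probs.get)
    p = probs[best]
    if p >= 70:
        confidence = "very specific"
    elif p >= 50:
        confidence = "moderately specific"
    else:
        confidence = "not very specific"
    return f"{best} most likely ({p}-{p + 9}%), forecast is {confidence}"

print(read_forecast(80, 10, 0))   # first example
print(read_forecast(50, 20, 20))  # second example
print(read_forecast(20, 40, 40))  # third example
```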
References
- Gagnon, N. and R. Verret, 2001: Probabilistic Approach to Seasonal Forecasting. Proceedings of the Long-Range Weather and Crop Forecasting Work Group Meeting IV, Regina, Saskatchewan, March 5-6, 2001, 13-18.
- Gagnon, N. and R. Verret, 2000: Probabilistic Approach to Seasonal Forecasting at the Canadian Meteorological Centre. Proceedings of the Twenty-Fifth Annual Climate Diagnostics and Prediction Workshop, Palisades, New York, October 23-27, 2000, 169-172.
- Gagnon, N., R. Verret, A. Plante, L. Lefaivre and G. Richard, 2000: Long-Range Forecasts Verification. Preprints, 15th Conference on Probability and Statistics in the Atmospheric Sciences, AMS, Asheville, North Carolina, May 2000, 65-68.
- Kharin, V. V. and F. W. Zwiers, 2003: Improved seasonal probability forecasts. Journal of Climate, 16, 1684-1701.
- Van den Dool, H. M. and Z. Toth, 1991: Why Do Forecasts for "Near Normal" Often Fail? Weather and Forecasting, 6, 76-85.