
[Note: the word “project” is used generically to refer to a MAJOR road/transit investment or policy proposed for local implementation in a metropolitan area. The word “prediction” refers to the raw outputs from travel models, while “forecast” refers to the information from models and other sources that is analyzed and subsequently presented to decision makers.]

A common misconception is that once a travel model has “passed” a series of validation tests, all predictions based on that model can be considered reasonable, i.e., plausible predictions of what is likely to happen. There are many reasons why this is faulty logic. Here are examples that may surface when a model-based prediction is scrutinized:

  1. Either very little work was actually done on validation tests, or what was reported includes only those tests that appear to show good results;
  2. The regional model validation results might look generally “okay,” yet not look so good for the specific corridors of greatest interest to a project-specific study;
  3. The calibrated base year model “passes” various validation tests, but only after the model has been severely “over-fitted” to match the real-world data, thereby making the model an unreliable prediction tool;
  4. Errors in the model scripts went undetected;
  5. The modeling procedures are inappropriate or poorly implemented;
  6. Model parameters are unsupportable;
  7. Traffic assignment does not converge;
  8. Valid congested auto and transit travel times are not used;
  9. The network and zonal demographic inputs to the calibrated base year model had major errors;
  10. The future-year inputs to the travel model are single-point forecasts that either have errors or are unrealistic;
  11. The data available for model testing, as well as model development, may turn out to be an inaccurate picture of real-world travel; and
  12. Predictions were made for situations that are very different from anything encountered during model calibration (e.g., travel predictions for tolled facilities, premium transit services, or transit oriented developments in regions where these have not yet been implemented anywhere in the region).
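Some of these pitfalls can be checked quantitatively. For example, assignment convergence (item 7) is commonly judged by the relative gap between total system travel time under current flows and the travel time that would result if all demand used the current shortest paths. A minimal sketch on a toy two-link, one-O-D network; every value below is hypothetical:

```python
# Sketch: the relative gap, a common convergence measure for traffic
# assignment. All network values below are hypothetical.

def relative_gap(link_flows, link_times, shortest_path_times, od_demand):
    """(TSTT - SPTT) / SPTT, where TSTT is total system travel time at
    current flows and SPTT is the travel time if all demand used the
    current shortest paths."""
    tstt = sum(f * t for f, t in zip(link_flows, link_times))
    sptt = sum(d * t for d, t in zip(od_demand, shortest_path_times))
    return (tstt - sptt) / sptt

# Toy two-link network serving a single O-D pair:
flows = [1200.0, 800.0]   # assigned vehicles per link
times = [10.5, 7.2]       # congested minutes per link
od_demand = [2000.0]      # trips for the O-D pair
sp_times = [8.0]          # current shortest-path minutes

gap = relative_gap(flows, times, sp_times, od_demand)
# A loosely converged assignment (gap well above roughly 1e-4) can yield
# unstable link volumes between Build and No-Build runs.
print(f"relative gap = {gap:.4f}")
```

In practice the assignment software reports this gap itself; the point of the check is simply to confirm it was driven low enough before differences between scenarios are interpreted.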

While the specific work activities depend on the circumstances under which a travel forecast is being reviewed, individuals responsible for determining the plausibility of a forecast for use in decision-making may find the following series of questions helpful. The information below is written from the perspective of an “independent review” of a forecast prepared by others, but individuals responsible for preparing forecasts may be able to tailor it to their specific needs.

Why is the project under consideration?

Before “digging in” to any model outputs, it is useful to first gain a general understanding of what the proposed project is expected to accomplish:

  • What are the expected mobility, economic development, and other benefits if this project is implemented? Not detailed numbers, but a plain-English description.
  • Is the project intended to solve a current, observable problem, or is it focused on anticipated problems caused by future demographic growth?
  • What are the concerns raised by any opponents to the project? While information from such sources may be biased, it takes only a few minutes to perform a search of information readily available on the internet. It is good to keep in mind that information from any source could be biased and/or incomplete.
  • Is the project similar to other implemented projects in the region, or elsewhere in the country? What information is available for these comparable projects? Are there opportunities to compare the project forecasts with the observable outcomes of these other projects?

What forecasts are available?

Common practice is to review a travel forecast, particularly one for a long-range horizon (e.g., 2035), as a “stand-alone” forecast. However, to properly assess the plausibility of such a forecast, a more prudent approach is to look for a “coherent story” that includes an examination of other model-based predictions:

  • The model calibration/validation “base” year.
  • Current year (or “near year”) predictions with and without the project (often referred to as No-Build and Build).
  • Horizon year No-Build and Build predictions.
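One simple way to build that coherent story is to compare the growth implied between each pair of runs. A minimal sketch, with entirely hypothetical daily VMT totals and analysis periods:

```python
# Sketch: implied annual growth between forecast years, a quick coherence
# check across base, current, and horizon-year model runs.
# All values below are hypothetical.

def annual_growth_rate(vmt_start, vmt_end, years):
    """Compound annual growth rate implied by two model runs."""
    return (vmt_end / vmt_start) ** (1.0 / years) - 1.0

base_vmt, current_vmt, horizon_vmt = 24.0e6, 26.5e6, 38.0e6  # daily VMT
g1 = annual_growth_rate(base_vmt, current_vmt, 8)      # base -> current
g2 = annual_growth_rate(current_vmt, horizon_vmt, 15)  # current -> horizon
# A horizon-period growth rate far above the recent observed rate is a
# signal that the demographic inputs deserve a closer look.
print(f"base->current: {g1 * 100:.2f}%/yr, current->horizon: {g2 * 100:.2f}%/yr")
```

The comparison itself, not the arithmetic, is the point: each pair of runs should tell a consistent story about how fast travel is growing and why.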

What assessments can be done?

The specific analyses to be performed, as well as the sequence of work, will depend on the circumstances. Here are examples:

  • Re-examine the base year model validation results, particularly for the geographic areas and traveler markets expected to be most impacted by project implementation: for which areas/markets are the predictions higher and/or lower than observed? What are the possible reasons for these errors? Could these errors be due to the flawed collection of real-world data? What impact are these errors likely to have on the accuracy of the project predictions?
  • Compare the current year No-Build predictions to the base year predictions: do the predicted changes in travel make sense? How do the current year predictions compare to recent real-world data?
  • Compare the horizon year No-Build predictions to the current year No-Build predictions: do the predicted changes in travel make sense? Significant reductions in predicted auto speeds between the two years, or significant changes in travel patterns, should be closely inspected to determine whether the underlying cause is too little (local) roadway capacity to meet the (local) demand, to a degree so severe that it is never likely to happen in the real world.
  • Compare the horizon year Build predictions to the current year Build and horizon year No-Build predictions: do the predicted changes in travel make sense?
  • For projects focused on ridership or traffic volumes, how do the current year and horizon year predictions compare to similar projects that were implemented elsewhere, either in the region or in other regions?
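The first assessment above, re-examining base year validation by area or market, can begin with a simple screening of prediction errors. A sketch assuming hypothetical corridor names and volumes; the 10 percent flag threshold is illustrative, not a standard:

```python
# Sketch: screening base-year validation errors by corridor.
# Corridor names, counts, and the flag threshold are all hypothetical.

observed = {"I-95 corridor": 145_000, "Downtown transit": 38_000,
            "Beltway west": 92_000}
predicted = {"I-95 corridor": 151_000, "Downtown transit": 29_500,
             "Beltway west": 95_500}

def percent_error(pred, obs):
    """Signed percent error of the prediction relative to the observation."""
    return 100.0 * (pred - obs) / obs

for corridor in observed:
    err = percent_error(predicted[corridor], observed[corridor])
    flag = "  <-- inspect" if abs(err) > 10.0 else ""
    print(f"{corridor:18s} {err:+6.1f}%{flag}")
```

The flagged corridors are where the reviewer should ask the follow-up questions in the bullet above: why is the error there, could the count data themselves be flawed, and how would the error propagate into the project prediction?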

The central issue for any critical assessment of a forecast is to look at the big-picture information from the forecasts, as well as the details, and determine whether what the model outputs are trying to tell us makes sense; in other words, whether the forecasts are telling a plausible story about what is likely to happen if the proposed project is implemented.