Frequently Asked Questions (FAQ)
Answers to the most commonly asked questions about the early-warning system. Missing a question? Reach out at info@viewsforecasting.org for personal assistance.
Conflict Definitions
The current model generates forecasts for state-based armed conflict[1] per the UCDP definition thereof, i.e. for armed conflict between two or more actors – of which at least one is the government of a state[2] – over a contested incompatibility[3] that concerns government and/or territory.
The model is set to expand to also cover non-state conflict and one-sided violence against civilians, per the UCDP definitions thereof. The models in use between 2018 and 2021 also covered these two types of violence.
Non-state (ns) armed conflict refers to the use of armed force between two or more organized armed groups[4], neither of which is the government of a state.
One-sided (os) violence concerns the deliberate use of armed force by the government of a state, or by a formally organized armed group against civilians.
Prediction targets
The VIEWS system currently generates two sets of forecasts for state-based armed conflict, for each level of analysis:
(1) Predicted number of battle-related deaths (BRDs), disaggregated by unit and month.
(2) Predicted probability of armed conflict, disaggregated by unit and month. At the country level, the forecasts present the predicted probability of observing at least 25 BRDs per country-month; at the sub-national level, they present the probability of observing at least 1 BRD per grid cell and month.
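As a rough illustration of how these two prediction targets relate, the sketch below draws hypothetical samples from a predictive distribution of fatalities and derives both an expected BRD count and a threshold probability. The negative-binomial distribution and its parameters are invented for illustration and are in no way the VIEWS models themselves.

```python
import numpy as np

rng = np.random.default_rng(seed=0)

# Hypothetical posterior predictive samples of battle-related deaths (BRDs)
# for a single country-month; purely illustrative, not the VIEWS models.
brd_samples = rng.negative_binomial(n=2, p=0.1, size=10_000)

# Target (1): predicted number of battle-related deaths.
predicted_brds = brd_samples.mean()

# Target (2): predicted probability of meeting the 25-BRD country-month threshold.
p_conflict = (brd_samples >= 25).mean()

print(f"Predicted BRDs: {predicted_brds:.1f}")
print(f"P(at least 25 BRDs): {p_conflict:.3f}")
```

The same thresholding logic would use a cutoff of 1 BRD at the grid-cell level.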
Please note that in order for a given conflict event to be included in the final, annual UCDP GED dataset against which the VIEWS forecasts are evaluated, the conflict dyad at hand must have resulted in at least 25 battle-related deaths over the course of the calendar year in question. This criterion is not applied in the UCDP Candidate data, which informs the monthly VIEWS forecasts in addition to the GED data; given the monthly release of our forecasts, it has therefore been excluded from the outcome definitions above. When the forecasts are evaluated against GED data, it is nevertheless implicitly applied. More about the UCDP Candidate dataset, and how it differs from the GED data, can be found in the 2021 article presenting the UCDP Candidate dataset.
Over the course of the Societies at Risk and ANTICIPATE projects, the VIEWS system will be expanded with predictions for humanitarian impacts of conflict.
Units and Levels of Analysis
The VIEWS forecasts are presented at two levels of analysis: the country level and the granular grid level.
The country-level forecasts have global scope. They are based on the Gleditsch & Ward (1999) list of independent states, combined with country codes (IDs) and geographic delimitations from the GIS dataset CShapes (Weidmann, Kuse & Gleditsch, 2010).
The grid-level forecasts currently span Africa and the Middle East at a 0.5° resolution, with each cell measuring approximately 55 × 55 km at the Equator. The units are drawn from PRIO-GRID 2.0 (Tollefsen, Strand & Buhaug, 2012), a standardized spatial grid structure consisting of quadratic grid cells that jointly cover all terrestrial areas of the world.
Please note that the choices of country set and delimitations thereof were made on methodological grounds and do not reflect the views or opinions of the VIEWS team.
The model uses calendar months as the temporal unit.
The lead time, or forecasting horizon, is 1-36 months. Each dataset offers conflict forecasts for each month in a rolling three-year window, counted from the last month of data informing the model.
For example, the forecasts produced in January 2025 were informed by data up to December 2024 (a one-month lag due to the update cycle of our main input data). Our forecast dataset therefore contained monthly forecasts from January 2025 – December 2027.
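The rolling-window arithmetic above can be sketched with a small helper. This is illustrative only; `forecast_window` is not part of any VIEWS tooling.

```python
from datetime import date


def forecast_window(last_input: date, horizon: int = 36) -> tuple[date, date]:
    """Return the (first, last) forecast month given the last month of input data.

    The window starts the month after the last input month and spans
    `horizon` months (36 by default, matching the VIEWS forecasting horizon).
    """

    def add_months(d: date, n: int) -> date:
        y, m = divmod(d.year * 12 + (d.month - 1) + n, 12)
        return date(y, m + 1, 1)

    return add_months(last_input, 1), add_months(last_input, horizon)


# Data up to December 2024 yields forecasts for January 2025 - December 2027.
start, end = forecast_window(date(2024, 12, 1))
print(start, end)  # 2025-01-01 2027-12-01
```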
Data Partitioning
In order to train and calibrate the forecasting models used in VIEWS, all available training data are split into two sets of data partitions. One is used for true forecasting, and the other for evaluating historical forecasts.
See the two questions below for more information.
We use separate data partitions for true/actual forecasting and for evaluation of past predictions.
For true forecasting, the data are split into four partitions: a training, calibration, “predictor updating”, and forecasting period.
The training period runs from the first month of available UCDP GED data, i.e. January 1990, up until the month before the start of the calibration period. The length of the training period is increased by one year following the annual release of the UCDP GED dataset as we then re-train our models.
The calibration period is 48 months long. Its start and end dates shift by one year following the annual release of the UCDP GED data and our subsequent re-training of the VIEWS models. If the most recent UCDP GED release covers the year of 2021 (and the VIEWS models have been retrained accordingly), the calibration period ends on 31 December 2021; if the last release covers the year of 2022, it runs up until 31 December 2022; and so forth.
The predictor updating period runs from the month following the last month of available UCDP GED data up until and including the last month of available UCDP Candidate data. During the predictor updating period, the VIEWS system is informed by the latter as a monthly substitute for the UCDP GED data, in addition to updates from other predictors that follow a regular update schedule. The predictor updating period is thus extended by one month each time we generate and release a new set of monthly VIEWS forecasts.
The forecasting period is the rolling 36-month period for which we release true forecasts each month. The forecasting period starts immediately after the last month of available UCDP Candidate data, i.e. after the last month of input data informing the VIEWS models. The name of each data release in the VIEWS API reflects the last month of input data and can thus be used to deduce when the predictor updating period ends and the forecasting period begins. The fatalities002_2023_08_t01 release of VIEWS data, for example, was informed by data up to and including August 2023, and thus contains forecasts for September 2023 – August 2026.
For the fatalities002_2023_08_t01 release of VIEWS data, for example, the partitions looked as follows:
- Training period: Jan 1990 – Dec 2017
- Calibration period: Jan 2018 – Dec 2021 (awaiting retraining of the models with the UCDP GED data for 2022)
- Predictor updating period: Jan 2022 – Aug 2023
- Forecasting period: September 2023 – August 2026
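Under the naming convention described above, the forecast window can be deduced from a release name with a small parser. This is a sketch under the stated assumptions: `parse_release` and the exact regular expression are illustrative, not part of the VIEWS API.

```python
import re
from datetime import date


def parse_release(name: str) -> dict:
    """Deduce the forecast window encoded in a VIEWS release name.

    Assumes names of the form <model>_<year>_<month>_t<nn>, where year and
    month identify the last month of input data informing the release.
    """
    m = re.match(r"(?P<model>\w+?)_(?P<year>\d{4})_(?P<month>\d{2})_t\d+", name)
    if m is None:
        raise ValueError(f"unrecognised release name: {name}")
    year, month = int(m["year"]), int(m["month"])

    def add_months(y: int, mo: int, n: int) -> date:
        yy, mm = divmod(y * 12 + (mo - 1) + n, 12)
        return date(yy, mm + 1, 1)

    return {
        "last_input_month": date(year, month, 1),
        "forecast_start": add_months(year, month, 1),   # month after last input
        "forecast_end": add_months(year, month, 36),    # 36-month horizon
    }


info = parse_release("fatalities002_2023_08_t01")
print(info["forecast_start"], info["forecast_end"])  # 2023-09-01 2026-08-01
```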
The evaluation periodization is used to evaluate the VIEWS models. In this periodization, the available data are split into three partitions: a training, calibration, and testing period.
The periodization as a whole runs from the first month of available UCDP GED data (January 1990) up until and including the last month of data in the most recent UCDP GED release used for the given evaluation. If the last release used covers the year of 2021 (and the VIEWS models have been retrained accordingly), the evaluation periodization ends in December 2021; if the last release covers 2022, it runs up until and including December 2022; and so forth.
The calibration and testing periods each span 48 months, while the training period covers the remaining time from January 1990 onward.
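The evaluation split can be sketched as follows, assuming the 48-month calibration and testing windows described above. This is an illustrative helper, not VIEWS code.

```python
from datetime import date


def evaluation_partitions(last_ged: date) -> dict:
    """Split January 1990 .. last_ged into training/calibration/testing periods.

    Testing covers the final 48 months, calibration the 48 months before
    that, and training the remainder back to January 1990.
    """

    def sub_months(d: date, n: int) -> date:
        y, m = divmod(d.year * 12 + (d.month - 1) - n, 12)
        return date(y, m + 1, 1)

    test_start = sub_months(last_ged, 47)    # 48 months incl. last_ged
    calib_start = sub_months(last_ged, 95)   # 48 months before testing
    return {
        "training": (date(1990, 1, 1), sub_months(calib_start, 1)),
        "calibration": (calib_start, sub_months(test_start, 1)),
        "testing": (test_start, last_ged),
    }


# With GED data through December 2021, testing spans Jan 2018 - Dec 2021.
parts = evaluation_partitions(date(2021, 12, 1))
print(parts["testing"])  # (datetime.date(2018, 1, 1), datetime.date(2021, 12, 1))
```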