Over the past three days, the VIEWS team has convened in Uppsala to finalize its next-generation conflict forecasting platform. This cutting-edge tool, set to launch this winter, will power the latest iteration of VIEWS conflict prediction models, incorporating state-of-the-art machine learning and data science practices to strengthen the reliability and usability of early warning systems (EWS) for early action.

The new platform represents a leap forward in flexibility and scalability, supporting a wide variety of modeling approaches, including spatiotemporal neural networks, large language models (LLMs), hidden Markov models, and the advanced time series techniques used to date. With robust, maintainable, and well-defined automated workflows, combined with built-in quality assurance, input drift detection, and real-time monitoring, the system is designed to adapt to the evolving needs of conflict forecasting.
Pioneering MLOps in Social Sciences
The new pipeline leverages state-of-the-art practices in MLOps, DevOps, and CI/CD that are critical to modern infrastructures, yet commonly underutilized in the social sciences. Key features include:
- Advanced Logging and Monitoring: Real-time logging to support quality assurance and rapid issue resolution.
- Branching Strategy for Stability: A clear separation of production and development workflows, featuring robust versioning, hotfix capabilities, and a staging environment for pre-deployment validation of new models.
- Version Control and Traceability: Semantic versioning of all pipeline and model releases to facilitate reproducibility and rollback.
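To make the idea of input drift detection mentioned above concrete, the sketch below computes a Population Stability Index (PSI) between a feature's training-time distribution and newly arriving data. The function name, bin count, thresholds, and toy data are illustrative assumptions, not the actual VIEWS implementation.

```python
import math

def psi(expected, actual, bins=10):
    """Population Stability Index between two samples of one feature."""
    lo = min(min(expected), min(actual))
    hi = max(max(expected), max(actual))
    width = (hi - lo) / bins or 1.0

    def frac(sample):
        counts = [0] * bins
        for x in sample:
            i = min(int((x - lo) / width), bins - 1)
            counts[i] += 1
        # Small floor avoids log(0) for empty bins.
        return [max(c / len(sample), 1e-6) for c in counts]

    e, a = frac(expected), frac(actual)
    return sum((ai - ei) * math.log(ai / ei) for ei, ai in zip(e, a))

# Common rule of thumb: PSI < 0.1 stable, 0.1-0.25 moderate, > 0.25 major drift.
baseline = [0.1 * i for i in range(100)]        # training-time feature values
incoming = [0.1 * i + 3.0 for i in range(100)]  # shifted monthly input
if psi(baseline, incoming) > 0.25:
    print("drift alert: feature distribution shifted")
```

A check like this can run inside the automated workflow each month, flagging inputs whose distribution has moved away from what the models were trained on.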
By integrating these practices, the pipeline gives end users a robust and transparent foundation for informed decision-making in the face of complex challenges.
Transparency and Trustworthiness at the Core
One of the central goals of the project is to increase the transparency and reliability of the VIEWS models. By documenting architectural decisions, offering comprehensive code commenting, and evaluating outputs systematically, the team aims to dismantle part of the “black box” perception often associated with machine learning in the social sciences.
Dual Evaluation Framework
In addition to the features above, the system will implement both online and offline evaluation mechanisms:
- Online Evaluation: Monthly assessment of model predictions against the most recent conflict data, providing a live measure of forecast accuracy against UCDP’s Candidate Events Dataset.
- Offline Evaluation: Regular out-of-sample evaluation on partitioned training and test datasets against the thoroughly vetted annual releases of UCDP GED data, providing insights into model performance for research and model development.
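The offline side of this framework can be sketched as a simple out-of-sample evaluation on a time-partitioned split. The toy series, split point, baseline forecast, and metric choice below are illustrative assumptions rather than VIEWS specifics.

```python
def mse(y_true, y_pred):
    """Mean squared error between observed and forecast values."""
    return sum((t - p) ** 2 for t, p in zip(y_true, y_pred)) / len(y_true)

# Toy monthly fatality counts; real runs would use vetted UCDP GED releases.
series = [2, 3, 5, 4, 6, 8, 7, 9, 11, 10, 12, 14]
split = 8                        # train on the first 8 months only
train, test = series[:split], series[split:]

# Naive baseline forecast: repeat the last observed training value.
forecast = [train[-1]] * len(test)
print(f"test MSE: {mse(test, forecast):.2f}")
```

Partitioning by time rather than at random matters here: forecasts must be scored only on data that postdates everything the model was trained on, mirroring how the models are actually used.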
This dual framework supports continuous improvement and provides end users with timely, intuitive, and up-to-date evaluation metrics, while upholding high standards of accuracy and reliability.
Towards a New Era of Conflict Forecasting
The forthcoming platform consolidates established and novel models, including the current advanced time series models and the new neural network model HydraNet, into a unified infrastructure. With a focus on robust predictions, transparency, and operational stability, the platform aims to redefine the role of machine learning in conflict forecasting and early warning applications.
“The upcoming launch of this platform marks a major step forward for responsible use of machine learning and open data for social good, strengthening the foundations for data-driven decision-making in conflict mitigation and response, while reaffirming the team’s commitment to innovation and collaboration in conflict research,” says Dr. Simon Polichinel von der Maase, Senior Researcher and Head of Model Development and Deployment for VIEWS.