Geneva, Switzerland – VIEWS’ Simon Polichinel von der Maase and Alexa Timlick took center stage at the UNIDIR Global Conference on AI, Security and Ethics 2025, presenting “A.C.T.S. Now: Why MLOps Must Govern AI in Critical Systems and High-Stakes Domains.” Their talk emphasized the urgent need for robust AI governance in security-sensitive applications.

Watch the recording of VIEWS’ session at the Global Conference on AI, Security and Ethics 2025, starting at 7:36:06.

Polichinel von der Maase and Timlick outlined real-world case studies of AI failures in healthcare, financial trading, and autonomous transportation, highlighting the risks of silent failures and automation bias. In response, they introduced A.C.T.S. (Auditable, Controllable, Transparent, Secure), an AI governance framework designed by VIEWS to ensure accountability, oversight, and resilience in AI-driven systems.

With AI adoption accelerating in high-risk domains, Polichinel von der Maase and Timlick urged policymakers, researchers, and industry leaders to implement structured governance mechanisms. They advocated for mandatory MLOps (Machine Learning Operations) protocols, investment in interoperable AI infrastructure, and widespread adoption of A.C.T.S. as a governance standard to prevent catastrophic failures.

The Global Conference on AI, Security and Ethics 2025, hosted by UNIDIR, convened experts, policymakers, and industry stakeholders to explore responsible AI development. VIEWS’ contribution underscored the intersection of AI governance and conflict forecasting, reinforcing the need for resilient AI frameworks in global security operations.

Learn more about the A.C.T.S. framework and its application in the soon-to-be-launched VIEWS MLOps platform.