Statistical Change Detection: Diagnosis, Prognosis and Event Detection in Real-life Conditions


Mogens Blanke,
Technical University of Denmark, DTU


Abstract: Industrial cases of diagnosis, prognosis and event detection rarely meet the standard assumptions of statistical change detection: Gaussian-distributed signals and independent and identically distributed (IID) samples. This presentation expands the basic elements of detection theory to cases often encountered in engineering reality: non-Gaussian residuals and signals with heavily correlated samples. The paper explains where basic detection theory falls short in cases found in industry, and introduces a generic procedure that enables design for event detection with known probabilities of true, false and missed detection under non-Gaussian conditions. The methodology is exemplified with cases of event detection in drilling, aerospace and marine applications. The presentation is kept at a tutorial level, introducing traps and tricks to overcome some of the challenges met when developing applications for timely diagnosis, prognosis and detection before events develop into hazards.
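As a point of reference for the textbook setting the talk departs from, the sketch below implements a one-sided CUSUM test under exactly the standard assumptions the abstract questions: Gaussian IID residuals with known parameters. The distribution, its parameters and the threshold h are illustrative choices, not taken from the talk.

```python
def cusum(residuals, mu0, mu1, sigma, h=5.0):
    """One-sided CUSUM for a mean shift mu0 -> mu1 in Gaussian IID
    residuals with known standard deviation sigma.  Returns the
    running decision function and the first alarm index (or None)."""
    g, scores, alarm = 0.0, [], None
    for k, r in enumerate(residuals):
        # log-likelihood ratio increment, N(mu1, sigma) vs N(mu0, sigma)
        s = (mu1 - mu0) / sigma**2 * (r - (mu0 + mu1) / 2.0)
        g = max(0.0, g + s)          # reset to zero keeps the test one-sided
        scores.append(g)
        if alarm is None and g > h:
            alarm = k
    return scores, alarm

# Residuals with a mean shift after the fourth sample.
data = [0.1, -0.2, 0.05, 0.0, 1.1, 0.9, 1.2, 1.0]
scores, alarm = cusum(data, mu0=0.0, mu1=1.0, sigma=0.3)
# alarm fires at index 4, the first post-change sample
```

When the residuals are non-Gaussian or correlated, the Gaussian likelihood ratio above is the wrong increment and the threshold no longer gives the designed false-alarm probability; handling that gap is the subject of the presentation.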

Bio: Mogens Blanke is Professor in Automation and Control at the Technical University of Denmark, DTU. He received the MScEE degree in 1974 and the PhD in 1982 from DTU, was Systems Analyst at the European Space Agency 1975-76, held tenure-track positions at DTU 1977-84, was Head of Division at Lyngsø Marine 1985-89 and Professor at Aalborg University 1990-99. He took his present position at DTU in 2000 and served, in addition, as Adjunct Professor at NTNU in Trondheim 2005-2017. His areas of special focus are diagnosis, prognosis, fault-tolerant control and autonomous systems. Prof. Blanke served as Technical Editor for IEEE Transactions on Aerospace and Electronic Systems (2006-2016), and is currently Associate Editor for Control Engineering Practice and Deputy Editor for Ocean Engineering. He has received various international recognitions, most recently the ASME DSCD 2018 Rudolf Kalman Best Paper Award and the IFAC 2020 Control Engineering Practice Paper Prize. His research leadership has included Principal Investigator (PI) for the control system on the Ørsted satellite (1992-99), coordinator of the COSY network on fault-tolerant control (1996-99) and PI for the national Danish research effort on surface ship autonomy (2017-2022).


Towards verifying AI systems based on deep neural networks


Alessio Lomuscio,
Verification of Autonomous Systems Lab,
Imperial College London


Abstract: A key difficulty in the deployment of ML-based solutions in safety-critical systems remains their inherent fragility and the difficulty of certifying them. Formal verification has long been employed in the analysis and debugging of traditional computer systems, including hardware, but its deployment in the context of AI systems remains largely unexplored. In this talk I will summarise some of the contributions on the verification of neural systems from the Verification of Autonomous Systems Lab at Imperial College London. I will focus on the issues of specification and verification for deep neural classifiers. After a discussion of specifications (robustness to noise, contrast, luminosity and beyond), I will introduce recent exact and approximate methods, including MILP-based approaches, linear relaxations, and symbolic interval propagation. I will introduce the resulting toolkits, namely VeriNet and Venus, and exemplify their use on Boeing use cases in the context of the DARPA Assured Autonomy effort. This will enable us to observe that the verification of neural models with hundreds of thousands of nodes (corresponding to millions of tunable parameters) is now feasible, with further advances likely in the near future. I will conclude by presenting closely related ongoing work, including the verification of neural-symbolic systems, i.e., closed-loop systems combining neural and symbolic components, against specifications based on temporal logic.
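To give a flavour of the approximate methods mentioned, the sketch below shows the simplest relative of symbolic interval propagation: plain interval bound propagation through one affine-plus-ReLU layer. The weights and input box are invented for illustration; tools such as VeriNet employ much tighter symbolic relaxations than this.

```python
import numpy as np

def affine_relu_bounds(W, b, lo, hi):
    """Propagate an input box [lo, hi] through y = ReLU(W x + b)
    using interval arithmetic: sound but potentially loose bounds."""
    W_pos, W_neg = np.maximum(W, 0.0), np.minimum(W, 0.0)
    y_lo = W_pos @ lo + W_neg @ hi + b   # worst case for the lower bound
    y_hi = W_pos @ hi + W_neg @ lo + b   # worst case for the upper bound
    return np.maximum(y_lo, 0.0), np.maximum(y_hi, 0.0)

# Hypothetical 2x2 layer and the input box [0,1]^2.
W = np.array([[1.0, -1.0], [2.0, 1.0]])
b = np.array([0.0, -1.0])
lo, hi = affine_relu_bounds(W, b, np.zeros(2), np.ones(2))
# every input in [0,1]^2 is guaranteed to map into [lo, hi]
```

A robustness query (e.g. "does the classifier's top label change anywhere in a noise ball?") reduces to chaining such bounds layer by layer and checking the output box against the specification.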

Bio: Alessio Lomuscio (http://www.doc.ic.ac.uk/~alessio) is Professor of Safe Artificial Intelligence in the Department of Computing at Imperial College London (UK), where he leads the Verification of Autonomous Systems Lab (http://vas.doc.ic.ac.uk/). He serves as Deputy Director of the UKRI Doctoral Training Centre in Safe and Trusted Artificial Intelligence. He is an ACM Distinguished Member, a Fellow of the European Association for Artificial Intelligence and currently holds a Royal Academy of Engineering Chair in Emerging Technologies. Alessio’s research interests concern the development of formal verification methods for artificial intelligence. Since 2000 he has worked on the development of formal methods for the verification of autonomous systems and multi-agent systems. To this end, he has put forward several methods based on model checking and various forms of abstraction to verify AI systems, including robotic swarms, against AI-based specifications. He is presently focusing on the development of methods to ascertain the correctness of AI systems based on deep neural networks. He has published approximately 200 papers in AI conferences (including IJCAI, KR, AAAI, AAMAS and ECAI), verification and formal methods conferences (CAV, SEFM, ATVA), and international journals (AIJ, JAIR, ACM ToCL, JAAMAS, Information and Computation). He is an editorial board member for AIJ, JAIR, and JAAMAS and recently served as general co-chair for AAMAS 2021.


Secure networked control systems: Closing the loop over malicious networks

Henrik Sandberg,
Division of Decision and Control Systems,
KTH Royal Institute of Technology


Abstract: Reports of cyber-attacks, such as Stuxnet, have shown their devastating consequences for digitally controlled systems supporting modern societies, and shed light on their modus operandi: first learn sensitive information about the system, then tamper with the visible information so the attack goes undetected, and meanwhile exert significant impact on the physical system. Securing control systems against such complex attacks requires a systematic and thorough approach. In the first part of the talk, we provide an overview of recent work on secure networked control systems centered on a risk management framework and its main stages: scenario characterization, risk analysis, and risk mitigation. In particular, we shall consider malicious attacks on key security properties such as confidentiality, integrity, and availability. In the second part of the talk, we focus on specific sensor attack scenarios and mitigation strategies currently being investigated in our research group. We show that an attacker with access to the sensor channel can estimate a linear controller’s state without error, and thus violate the operator’s privacy, if and only if the controller has no unstable poles. An advanced attacker may exploit such a breach of confidentiality to design stealthy false data injection attacks (violations of sensor data integrity) resulting in large physical impact on the controlled plant. We illustrate some of the results using lab experiments, and discuss efficient mitigation strategies.
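The confidentiality result can be illustrated with a toy scalar controller; the numbers below are invented for illustration and the sketch is only an analogy for the stable direction of the result, not the talk's construction. An eavesdropper on the sensor channel simply runs a copy of the controller from an arbitrary initial state, and the estimation error decays to zero precisely because the controller dynamics are stable.

```python
import math

# Toy scalar controller x+ = a*x + b*y driven by the sensor signal y.
# The eavesdropper's copy xhat sees the same y, so the error
# e = x - xhat evolves autonomously as e+ = a*e and vanishes iff
# |a| < 1, i.e. iff the controller has no unstable poles.
a, b = 0.8, 1.0
x, xhat = 3.0, 0.0            # true vs. attacker-estimated state
for k in range(100):
    y = math.sin(0.1 * k)     # intercepted sensor measurement
    x = a * x + b * y
    xhat = a * xhat + b * y
error = abs(x - xhat)         # ~ |a|**100 * |x0 - xhat0|, essentially zero
```

With an unstable pole (|a| > 1) the same copy-based estimate diverges, which is the intuition behind the "if and only if" condition in the abstract.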

Bio: Henrik Sandberg is Professor at the Division of Decision and Control Systems, KTH Royal Institute of Technology, Stockholm, Sweden. He received the M.Sc. degree in engineering physics and the Ph.D. degree in automatic control from Lund University, Lund, Sweden, in 1999 and 2004, respectively. From 2005 to 2007, he was a Post-Doctoral Scholar at the California Institute of Technology, Pasadena, USA. In 2013, he was a Visiting Scholar at the Laboratory for Information and Decision Systems (LIDS) at MIT, Cambridge, USA. He has also held visiting appointments at the Australian National University and the University of Melbourne, Australia. His current research interests include security of cyber-physical systems, power systems, model reduction, and fundamental limitations in control. Dr. Sandberg was a recipient of the Best Student Paper Award from the IEEE Conference on Decision and Control in 2004, an Ingvar Carlsson Award from the Swedish Foundation for Strategic Research in 2007, and a Consolidator Grant from the Swedish Research Council in 2016. He has served on the editorial boards of IEEE Transactions on Automatic Control and the IFAC Journal Automatica.