
VNN: Verification-Friendly Neural Networks with Hard Robustness Guarantees

  • Additional Information
    • Contributors:
      • Lund University, Faculty of Engineering (LTH), LTH Profile Area: Engineering Health (Originator)
      • Lund University, Faculty of Engineering (LTH), Department of Electrical and Information Technology, Secure and Networked Systems (Originator)
      • Lund University, Faculty of Engineering (LTH), LTH Profile Area: Water (Originator)
      • Lund University, Lund University Profile Areas, LU Profile Area: Natural and Artificial Cognition (Originator)
      • Lund University, Faculty of Engineering (LTH), LTH Profile Area: AI and Digitalization (Originator)
    • Abstract:
      Machine learning techniques often lack formal correctness guarantees, as evidenced by the widespread adversarial examples that plague most deep-learning applications. This lack of formal guarantees has motivated several research efforts aimed at verifying Deep Neural Networks (DNNs), with a particular focus on safety-critical applications. However, formal verification techniques still face major scalability and precision challenges. The over-approximation introduced during the formal verification process to tackle the scalability challenge often results in an inconclusive analysis. To address this challenge, we propose a novel framework to generate Verification-Friendly Neural Networks (VNNs). We present a post-training optimization framework that balances preserving prediction performance with verification-friendliness. The resulting VNNs are comparable to the original DNNs in terms of prediction performance, while remaining amenable to formal verification techniques. This essentially enables us to establish robustness for more VNNs than their DNN counterparts, in a time-efficient manner.
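
      The sketch below is not the authors' implementation; it is a minimal illustration, under assumed choices (toy architecture, synthetic data, an L1 sparsity penalty, and a hypothetical trade-off weight `lam`), of the general idea the abstract describes: post-training optimization that makes a network easier to verify (here, by pushing weights toward sparsity, which tends to tighten the over-approximated bounds used by verifiers) while checking that prediction performance is preserved.

      import torch
      import torch.nn as nn

      torch.manual_seed(0)

      # Toy stand-in for an already-trained DNN and its data (illustrative only).
      model = nn.Sequential(nn.Linear(16, 32), nn.ReLU(), nn.Linear(32, 2))
      x = torch.randn(512, 16)
      y = (x.sum(dim=1) > 0).long()

      def accuracy(m):
          with torch.no_grad():
              return (m(x).argmax(dim=1) == y).float().mean().item()

      baseline_acc = accuracy(model)
      lam = 1e-3  # assumed trade-off weight between task loss and verification-friendliness
      opt = torch.optim.Adam(model.parameters(), lr=1e-3)
      loss_fn = nn.CrossEntropyLoss()

      for epoch in range(50):
          opt.zero_grad()
          task_loss = loss_fn(model(x), y)
          # L1 penalty drives many weights toward zero; sparser, smaller weights
          # generally yield tighter interval/abstract bounds during verification.
          l1 = sum(p.abs().sum() for p in model.parameters())
          (task_loss + lam * l1).backward()
          opt.step()

      # Keep the "verification-friendly" model only if prediction performance is
      # preserved relative to the baseline (the tolerance here is an assumption).
      print(f"baseline acc: {baseline_acc:.3f}, post-optimization acc: {accuracy(model):.3f}")

      In practice, the acceptable accuracy drop and the strength of the sparsity term would be tuned per application; the point of the sketch is only the two-objective structure (prediction performance vs. verification-friendliness) described in the abstract.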