Thu 18 Jul 2024 14:00 - 14:18 at Mandacaru - SE4AI 1 Chair(s): Qinghua Lu

Deep Neural Networks (DNNs) have emerged as an effective approach to tackling real-world problems. However, like human-written software, DNNs are susceptible to bugs and attacks. This has generated significant interest in developing effective and scalable DNN verification techniques and tools.

Recent developments in DNN verification have highlighted the potential of constraint-solving approaches that combine abstraction techniques with SAT solving. Abstraction approaches precisely encode neuron behavior when it is linear, but they lead to overapproximation and combinatorial scaling when behavior is non-linear. SAT approaches to DNN verification have incorporated standard DPLL techniques but have overlooked important optimizations found in modern SAT solvers that help them scale on industrial benchmarks.

In this paper, we present VeriStable, a novel extension of a recently proposed DPLL-based constraint-solving approach to DNN verification. VeriStable leverages the insight that while neuron behavior may be non-linear across the entire DNN input space, at intermediate states computed during verification many neurons may be constrained to linear behavior; such neurons are stable. Efficiently detecting stable neurons reduces combinatorial complexity without compromising the precision of abstractions. Moreover, the structure of clauses arising in DNN verification problems shares important characteristics with industrial SAT benchmarks. We adapt and incorporate multi-threading and restart optimizations targeting those characteristics to further optimize DPLL-based DNN verification.
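To illustrate the stability idea for ReLU networks (a minimal sketch, not VeriStable's actual implementation): if interval bounds on a neuron's pre-activation value show it is always non-negative or always non-positive, the ReLU behaves linearly on that region and needs no case split. The function name and bound representation below are illustrative assumptions.

```python
# Sketch: classify ReLU neurons as stable or unstable from pre-activation
# interval bounds. Stable neurons behave linearly, so a DPLL-style
# verifier need not branch on them; only unstable neurons require splits.

def classify_neurons(lower, upper):
    """Label each neuron given per-neuron bounds [lower[i], upper[i]]:
      'active'   if lower[i] >= 0 (ReLU acts as identity: linear, stable)
      'inactive' if upper[i] <= 0 (ReLU outputs 0: linear, stable)
      'unstable' otherwise        (both branches possible: non-linear)
    """
    labels = []
    for lo, hi in zip(lower, upper):
        if lo >= 0:
            labels.append("active")
        elif hi <= 0:
            labels.append("inactive")
        else:
            labels.append("unstable")
    return labels

# Example: only the last neuron would require a case split.
print(classify_neurons([0.5, -3.0, -1.0], [2.0, -0.2, 1.5]))
# → ['active', 'inactive', 'unstable']
```

Tighter intermediate bounds stabilize more neurons, which is why detecting stability efficiently pays off during search.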

We evaluate the effectiveness of VeriStable across a range of challenging benchmarks, including fully-connected feedforward networks (FNNs), convolutional neural networks (CNNs), and residual networks (ResNets) applied to the standard MNIST and CIFAR datasets. Preliminary results show that VeriStable outperforms state-of-the-art DNN verification tools, including $\alpha$-$\beta$-CROWN and MN-BaB, the first- and second-best performers in VNN-COMP, respectively.

Thu 18 Jul

Displayed time zone: Brasilia, Distrito Federal, Brazil

14:00 - 15:30
14:00
18m
Talk
Harnessing Neuron Stability to Improve DNN Verification
Research Papers
Hai Duong George Mason University, Dong Xu University of Virginia, ThanhVu Nguyen George Mason University, Matthew B Dwyer University of Virginia
14:18
18m
Talk
MirrorFair: Fixing Fairness Bugs in Machine Learning Software via Counterfactual Predictions
Research Papers
Ying Xiao King's College London / Southern University of Science and Technology, Jie M. Zhang King's College London, Yepang Liu Southern University of Science and Technology, Mohammad Reza Mousavi King's College London, Sicen Liu Southern University of Science and Technology, Dingyuan Xue Southern University of Science and Technology
14:36
9m
Talk
Using Run-time Information to Enhance Static Analysis of Machine Learning Code in Notebooks
Ideas, Visions and Reflections
Yiran Wang Linköping University, José Antonio Hernández López Linkoping University, Ulf Nilsson Linköping University, Daniel Varro Linköping University / McGill University
14:45
9m
Talk
Human-Imperceptible Retrieval Poisoning Attacks in LLM-Powered Applications
Ideas, Visions and Reflections
Quan Zhang Tsinghua University, Binqi Zeng Central South University, Chijin Zhou Tsinghua University, Gwihwan Go Tsinghua University, Heyuan Shi Central South University, Yu Jiang Tsinghua University
14:54
18m
Talk
DeepGD: A Multi-Objective Black-Box Test Selection Approach for Deep Neural Networks
Journal First
Zohreh Aghababaeyan University of Ottawa, Canada, Manel Abdellatif Software and Information Technology Engineering Department, École de Technologie Supérieure, Mahboubeh Dadkhah The School of EECS, University of Ottawa, Lionel Briand University of Ottawa, Canada; Lero centre, University of Limerick, Ireland
15:12
9m
Talk
Testing Learning-Enabled Cyber-Physical Systems with Large-Language Models: A Formal Approach
Ideas, Visions and Reflections
Xi Zheng Macquarie University, Aloysius K. Mok University of Texas at Austin, Ruzica Piskac Yale University, Yong Jae Lee University of Wisconsin Madison, Bhaskar Krishnamachari University of Southern California, Dakai Zhu The University of Texas at San Antonio, Oleg Sokolsky University of Pennsylvania, USA, Insup Lee University of Pennsylvania
15:21
9m
Talk
GAISSALabel: A tool for energy labeling of ML models
Demonstrations
Pau Duran Universitat Politècnica de Catalunya (UPC), Joel Castaño Fernández Universitat Politècnica de Catalunya (UPC), Cristina Gómez Universitat Politècnica de Catalunya, Silverio Martínez-Fernández UPC-BarcelonaTech