Machine learning and artificial intelligence have been used successfully in many application areas. Still, their use in high-assurance systems is limited by the lack of formal verification techniques that satisfy the requirements of industrial certification standards such as EN 50128 (safety) or Common Criteria (security), with which high-assurance systems often need to comply.

The formal specification and verification techniques that such standards require usually rely on the existence of an implementation (e.g., source code) whose compliance with a specification can be verified. For systems based on neural networks, an implementation that precisely describes the system’s behavior in a human-readable form does not exist. The only available artifact is a neural network trained on a set of training data, which is expected to behave correctly for all possible inputs.

To address the gap between testing, i.e., making a statement about the behavior for a limited set of inputs, and the needs of high-assurance systems, i.e., ensuring safety and security for all possible inputs, we are working on developing formal verification methods for AI/ML-based systems.
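As a minimal illustration of this gap, the following Python sketch contrasts testing a tiny feedforward ReLU network on sampled inputs with a simple interval-bound-propagation argument that covers every input in a region. The network weights, the input region, and the property (output at most 2.5) are hypothetical examples chosen for illustration; interval bound propagation is just one simple verification technique and is not claimed to be the method used in our published work.

    # Illustrative sketch only: testing vs. a sound bound for ALL inputs in a region.
    # Weights, region, and property are hypothetical examples.
    import numpy as np

    # A fixed 2-2-1 feedforward network with ReLU hidden activations.
    W1 = np.array([[1.0, -1.0], [0.5, 0.5]])
    b1 = np.array([0.0, -0.25])
    W2 = np.array([[1.0, 1.0]])
    b2 = np.array([0.1])

    def forward(x):
        h = np.maximum(W1 @ x + b1, 0.0)   # hidden layer with ReLU
        return (W2 @ h + b2)[0]            # scalar output

    # Testing: check the property y <= 2.5 on finitely many sampled inputs
    # from the region [0,1] x [0,1]; this says nothing about unseen inputs.
    rng = np.random.default_rng(0)
    samples_ok = all(forward(rng.uniform(0.0, 1.0, size=2)) <= 2.5
                     for _ in range(1000))

    # Verification idea: propagate the whole input region through the network
    # with interval arithmetic, yielding a sound (if conservative) output bound
    # that holds for every input in [0,1] x [0,1].
    def interval_affine(lo, hi, W, b):
        Wp, Wn = np.maximum(W, 0.0), np.minimum(W, 0.0)
        return Wp @ lo + Wn @ hi + b, Wp @ hi + Wn @ lo + b

    lo, hi = np.zeros(2), np.ones(2)
    lo, hi = interval_affine(lo, hi, W1, b1)
    lo, hi = np.maximum(lo, 0.0), np.maximum(hi, 0.0)  # ReLU is monotone
    lo, hi = interval_affine(lo, hi, W2, b2)

    print("all sampled inputs satisfy y <= 2.5:", samples_ok)
    print("sound upper bound on y over the whole region:", hi[0])
    print("property holds for ALL inputs in the region:", hi[0] <= 2.5)

The sampling loop can only report the absence of counterexamples among the inputs it happened to try, whereas the interval computation returns a bound that is valid for the entire input region, which is the kind of statement certification standards ask for.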

Important Publications

[1]
A. D. Brucker and A. Stell, “Verifying feedforward neural networks for classification in Isabelle/HOL,” in Formal Methods (FM 2023), M. Chechik, J.-P. Katoen, and M. Leucker, Eds. Heidelberg: Springer-Verlag, 2023.