﻿<?xml version="1.0" encoding="UTF-8"?>
<b:Sources SelectedStyle="" xmlns:b="http://schemas.openxmlformats.org/officeDocument/2006/bibliography"  xmlns="http://schemas.openxmlformats.org/officeDocument/2006/bibliography" >
<b:Source>
<b:Tag>brucker.ea:afp-neural_networks:2025</b:Tag>
<b:SourceType>ArticleInAPeriodical</b:SourceType>
<b:Year>2025</b:Year>
<b:Month>November</b:Month>
<b:PeriodicalTitle>Archive of Formal Proofs</b:PeriodicalTitle>
<b:Url>https://isa-afp.org/entries/Neural_Networks.html</b:Url>
<b:Comments>Formal proof development</b:Comments>
<b:Author>
<b:Author><b:NameList>
<b:Person><b:Last>Brucker</b:Last><b:First>Achim</b:First><b:Middle>D.</b:Middle></b:Person>
<b:Person><b:Last>Stell</b:Last><b:First>Amy</b:First></b:Person>
</b:NameList></b:Author>
</b:Author>
<b:Title>Formalizing Neural Networks</b:Title>
<b:Comments>Deep learning, i.e., machine learning using neural networks, is used successfully in many application areas. Still, its use in safety-critical or security-critical applications is limited, due to the lack of testing and verification techniques. We address this problem by formalizing an important class of neural networks, feed-forward neural networks, in Isabelle/HOL. We present two different approaches to formalizing feed-forward networks, show their equivalence, and demonstrate their use in verifying certain safety and correctness properties of various examples. Moreover, we not only provide a formal model that allows reasoning over feed-forward neural networks; we also provide a datatype package for Isabelle/HOL that supports importing models from TensorFlow.js.</b:Comments>
</b:Source>
</b:Sources>
