
Tobias Ladner: Formal Verification of Neural Networks in Safety-Critical Environments

Neural networks have achieved significant progress in various applications, including safety-critical tasks such as autonomous driving.
However, their vulnerability to small perturbations of the input, also known as adversarial examples, limits the applicability of neural networks in safety-critical environments. Given that uncertainties are unavoidable in real-world situations, the formal verification of neural networks has gained importance in recent years, both in terms of open-loop neural network verification and the verification of neural-network-controlled systems. We will discuss the fundamentals of formal neural network verification and how it can prove safety in real-world situations.

MunichDataGeeks

January 07, 2024

Transcript

  1. Formal Verification of Neural Networks in Safety-Critical Environments. Tobias Ladner, Technical University of Munich, November 30th, 2023.
  2. About Me. Tobias Ladner, PhD @ TUM, Cyber-Physical Systems Group (Prof. Althoff). Research: set-based formal verification of neural networks.
  3. Motivation: Cyber-Physical Systems Group (Prof. Althoff). Figure: YouTube, TUM Cyber-Physical Systems, https://www.youtube.com/watch?v=IUAeZGau28E
  4. Motivation. (Reference: Ian Goodfellow, Jonathon Shlens, and Christian Szegedy. "Explaining and harnessing adversarial examples". International Conference on Learning Representations, 2015.)
  5. Motivation: Adversarial attacks (Goodfellow et al., ICLR 2015) limit the applicability of neural networks in cyber-physical systems!
  6. Motivation: Let us demonstrate the formal verification of neural networks by an example. [Figure: prediction scores over the labels 0-9 for a single sample; is it verified?]
  7. Motivation: The same example with many perturbed samples. [Figure: prediction scores of the samples over the labels 0-9; are they verified?]
  8. Motivation: The samples are enclosed by a verification output set Y. [Figure: prediction scores of the samples together with the output set Y.]
  9. Motivation: The conservative output set Y should not intersect with the unsafe set S. [Figure: output set Y, unsafe set S, and samples over the labels 0-9.]
  10. Set-based Layer Propagation: Neural Network. Feed-forward neural network with κ layers, input x and output y:
      h_0 = x,  h_k = L_k(h_{k-1}), k = 1…κ,  y = h_κ,   (1)
      with L_k(h_{k-1}) = W_k h_{k-1} + b_k if layer k is linear, and σ_k(h_{k-1}) otherwise.   (2)
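To make equations (1)-(2) concrete, here is a minimal point-wise forward pass in NumPy. This is an illustrative sketch, not the speaker's code; the layer weights and the choice of sigmoid activations are arbitrary.

```python
import numpy as np

def forward(x, layers):
    # Point-wise forward pass h_k = L_k(h_{k-1}), eq. (1)-(2).
    # `layers` is a list of ("linear", W, b) or ("sigmoid",) tuples.
    h = x
    for layer in layers:
        if layer[0] == "linear":
            _, W, b = layer
            h = W @ h + b                     # L_k(h) = W_k h + b_k
        else:
            h = 1.0 / (1.0 + np.exp(-h))      # sigma_k: element-wise sigmoid
    return h

# Tiny example network with kappa = 4 layers (linear, sigmoid, linear, sigmoid).
layers = [
    ("linear", np.array([[1.0, -0.5], [0.3, 0.8]]), np.array([0.1, -0.2])),
    ("sigmoid",),
    ("linear", np.array([[0.7, 1.2]]), np.array([0.0])),
    ("sigmoid",),
]
print(forward(np.array([0.5, -1.0]), layers))
```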
  11. Set-based Layer Propagation: Neural Network. Set-based formulation with input set X and output set Y:
      H_0 = X,  H_k ⊇ L_k(H_{k-1}), k = 1…κ,  Y = H_κ,   (3)
      with L_k(h_{k-1}) = W_k h_{k-1} + b_k if layer k is linear, and σ_k(h_{k-1}) otherwise,   (4)
  12. Set-based Layer Propagation: Neural Network. As above, where the set-valued layer evaluation is
      L_k(H_{k-1}) = { L_k(h_{k-1}) | h_{k-1} ∈ H_{k-1} }.   (5)
  13. Set-based Layer Propagation: Sets. Discrete set: X = {x, x_1, x_2, …}.   (6)
  14. Set-based Layer Propagation: Sets. We might miss outliers using discrete sets → continuous sets.
  15. Set-based Layer Propagation: Sets. Interval: X = [x − ϵ, x + ϵ].   (7)
  16. Set-based Layer Propagation: Sets. We usually use (polynomial) zonotopes for neural network verification (Kochdumper et al., "Open- and closed-loop neural network verification using polynomial zonotopes", NASA Formal Methods Symposium, 2023): X = ⟨x, ϵ I_n⟩_Z.   (8)
  17.-19. Set-based Layer Propagation. [Figure, built up over three slides: an input set propagated neuron-by-neuron through the 1st linear layer, 1st nonlinear layer, 2nd linear layer, and 2nd nonlinear layer to the output; interval and zonotope enclosures are compared against samples.]
  20. Set-based Layer Propagation: Zonotope.
      Z = ⟨c, G⟩_Z = { c + Σ_{i=1}^{p} β_i G_(:,i) | β_i ∈ [−1, 1] }   (9)
  21. Set-based Layer Propagation: Zonotope. Example:
      Z = ⟨ [1; 1], [1 1 1; 1 −1 0] ⟩_Z   (10)
  22. Set-based Layer Propagation: Zonotope. [Figure: the example zonotope built up generator by generator, showing the sets spanned by G_(:,1), G_(:,1:2), and G_(:,1:3) in the x_(1)/x_(2) plane.]
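For intuition, a zonotope as in (9) can be stored as just a center vector and a generator matrix. The sketch below (illustrative NumPy, not CORA code) samples points from the example zonotope (10) and computes its interval hull:

```python
import numpy as np

c = np.array([1.0, 1.0])                      # center of eq. (10)
G = np.array([[1.0,  1.0, 1.0],
              [1.0, -1.0, 0.0]])              # generators G(:,1), G(:,2), G(:,3)

# Sample points c + G @ beta with beta in [-1, 1]^p, eq. (9).
rng = np.random.default_rng(0)
beta = rng.uniform(-1.0, 1.0, size=(G.shape[1], 1000))
samples = c[:, None] + G @ beta

# Interval hull of the zonotope: center +/- sum of absolute generator entries.
radius = np.abs(G).sum(axis=1)
print("lower:", c - radius, "upper:", c + radius)
```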
  23. Set-based Layer Propagation: Set-based Computation. Layers of our neural network: L_k(h_{k-1}) = W_k h_{k-1} + b_k if layer k is linear, σ_k(h_{k-1}) otherwise.   (11)
  24. Set-based Layer Propagation: Set-based Computation. Linear layers: W Z + b = W ⟨c, G⟩_Z + b = ⟨W c + b, W G⟩_Z.   (12)
  25. Set-based Layer Propagation: Set-based Computation. Unfortunately, nonlinear layers are harder to compute → image enclosure.
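Equation (12) translates directly into code: a linear layer maps a zonotope by transforming its center and every generator. Again a small illustrative sketch rather than CORA's implementation:

```python
import numpy as np

def linear_layer_zonotope(W, b, c, G):
    # W * <c, G>_Z + b = <W c + b, W G>_Z, eq. (12).
    return W @ c + b, W @ G

W = np.array([[1.0, -0.5],
              [0.3,  0.8]])
b = np.array([0.1, -0.2])
c = np.array([1.0, 1.0])
G = np.array([[1.0,  1.0, 1.0],
              [1.0, -1.0, 0.0]])

c_out, G_out = linear_layer_zonotope(W, b, c, G)
print(c_out)
print(G_out)
```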
  26. Set-based Layer Propagation: Image Enclosure. [Figure: overview of the image enclosure in six steps (Steps 1-6), illustrated on the input/output graph of a nonlinear activation function.]
  27. Set-based Layer Propagation, Step 1: Element-wise Evaluation. Many nonlinear layers are applied element-wise, e.g. ReLU, sigmoid, ...
  28. Set-based Layer Propagation, Step 1: Element-wise Evaluation. Project the set onto each neuron: H_{k-1(i)} = project(H_{k-1}, i), i = 1…v_k.   (13)
  29. Set-based Layer Propagation, Step 2: Domain Bounds. As our set H_{k-1(i)} is bounded, we do not need to enclose the nonlinear function over the entire domain.
  30. Set-based Layer Propagation, Step 2: Domain Bounds. Only enclose the nonlinear function within the bounds of the input set: [l_(i), u_(i)] = interval(H_{k-1(i)}).   (14)
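For zonotopes, the projection in (13) and the bounds in (14) are cheap: projecting onto neuron i keeps the i-th entry of the center and the i-th row of the generator matrix, and the interval is the center plus/minus the sum of absolute generator entries. A minimal sketch, assuming the set is a plain zonotope:

```python
import numpy as np

def neuron_bounds(c, G, i):
    # Steps 1-2: project <c, G>_Z onto neuron i and return [l_(i), u_(i)].
    ci, Gi = c[i], G[i, :]            # projection, eq. (13)
    r = np.abs(Gi).sum()              # radius of the resulting 1-D zonotope
    return ci - r, ci + r             # interval bounds, eq. (14)

c = np.array([0.3, -0.1])
G = np.array([[0.5,  0.2, 0.0],
              [0.1, -0.4, 0.3]])
print(neuron_bounds(c, G, 0))
print(neuron_bounds(c, G, 1))
```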
  31. Set-based Layer Propagation, Step 3: Polynomial Approximation. Next, we compute a polynomial p_i(x) that approximates our nonlinear function f(x) within the domain: p_i(x) = polyApprox([l_(i), u_(i)], order) ≈ f(x), x ∈ [l_(i), u_(i)].   (15)
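The slide leaves polyApprox abstract. One simple way to obtain such a polynomial (not necessarily the rule used in CORA) is a least-squares fit on a grid over the bounded domain:

```python
import numpy as np

def poly_approx(f, l, u, order, n_grid=200):
    # Step 3: fit a polynomial p_i(x) ~ f(x) on [l, u] via least squares.
    xs = np.linspace(l, u, n_grid)
    coeffs = np.polyfit(xs, f(xs), deg=order)   # highest-degree coefficient first
    return np.poly1d(coeffs)

sigmoid = lambda x: 1.0 / (1.0 + np.exp(-x))
p = poly_approx(sigmoid, l=-1.0, u=2.0, order=1)
print(p)     # linear approximation of the sigmoid on [-1, 2]
```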
  32. Set-based Layer Propagation, Step 4: Approximation Error. After finding an approximation polynomial p_i(x), we need to find the approximation error: d_(i) = max_{x ∈ [l_(i), u_(i)]} |f(x) − p_i(x)|.   (16)
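Continuing the sketch above, the approximation error (16) can be estimated by dense sampling. Note that this only yields an estimate; a formally sound verifier needs a rigorous bound on |f(x) − p_i(x)|, e.g. via an analysis of the derivatives, which is omitted here:

```python
import numpy as np

def approx_error_estimate(f, p, l, u, n_grid=10_000):
    # Step 4: estimate d_(i) = max_{x in [l, u]} |f(x) - p(x)|, eq. (16).
    xs = np.linspace(l, u, n_grid)
    return np.max(np.abs(f(xs) - p(xs)))

sigmoid = lambda x: 1.0 / (1.0 + np.exp(-x))
xs_fit = np.linspace(-1.0, 2.0, 200)
p = np.poly1d(np.polyfit(xs_fit, sigmoid(xs_fit), deg=1))
print(approx_error_estimate(sigmoid, p, -1.0, 2.0))
```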
  33. Set-based Layer Propagation, Step 5: Set-based Evaluation. Finally, we evaluate our polynomial p_i(x) over our set H_{k-1(i)}: H_{k(i)} = p_i(H_{k-1(i)}).   (17)
  34. Set-based Layer Propagation, Step 5: Set-based Evaluation. For a linear polynomial p_i(x) = c_0 + c_1 x, this is computed by H_{k(i)} = c_0 + c_1 · H_{k-1(i)}.   (18)
  35. Set-based Layer Propagation, Step 5: Set-based Evaluation. Higher-order polynomials require non-convex set representations (e.g. polynomial zonotopes) for a tight enclosure.
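For the linear case (18), the set-based evaluation is again just an affine map of the 1-D neuron set. A minimal sketch with illustrative coefficients:

```python
import numpy as np

def eval_linear_poly(c0, c1, ci, Gi):
    # Step 5, eq. (18): H_k(i) = c0 + c1 * H_{k-1}(i) for the 1-D zonotope <ci, Gi>_Z.
    return c0 + c1 * ci, c1 * Gi

ci, Gi = 0.3, np.array([0.5, 0.2, 0.0])        # neuron set from the earlier sketch
print(eval_linear_poly(0.47, 0.23, ci, Gi))    # p(x) = 0.47 + 0.23 x (illustrative)
```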
  36. Set-based Layer Propagation, Step 6: Image Enclosure. Finally, we stack the computed sets back together using the Cartesian product: H_k = H_{k(1)} × … × H_{k(v_k)},   (19)
  37. Set-based Layer Propagation, Step 6: Image Enclosure. ... and add the approximation error: H_k = H_k ⊕ [−d, d].   (20)
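A sketch of Step 6 for zonotopes: when every neuron set from Step 5 is written over the same generator factors, stacking amounts to reassembling the rows, and the Minkowski sum with the error box [−d, d] adds one fresh, independent generator per neuron. This simplified view follows the formulation on the slide; the numbers are illustrative:

```python
import numpy as np

def stack_and_add_error(centers, gen_rows, d):
    # Step 6, eq. (19)-(20): stack the per-neuron sets and add the approximation error.
    c = np.asarray(centers)            # (v_k,)
    G = np.vstack(gen_rows)            # (v_k, p), shared generator factors
    G_err = np.diag(np.asarray(d))     # Minkowski sum with [-d, d]: one new generator per neuron
    return c, np.hstack([G, G_err])

centers  = [0.539, 0.48]
gen_rows = [0.23 * np.array([0.5,  0.2, 0.0]),
            0.20 * np.array([0.1, -0.4, 0.3])]
c_k, G_k = stack_and_add_error(centers, gen_rows, d=[0.05, 0.08])
print(c_k)
print(G_k)
```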
  38. Higher-Order Image Enclosure. Note that we can also use higher-order polynomials to enclose the nonlinear layers. [Figure: the six enclosure steps repeated with a higher-order approximation polynomial.]
  39.-42. Higher-Order Image Enclosure: Automatic Abstraction Refinement (Ladner and Althoff, HSCC 2023). [Figure, built up over four slides: output set in the y_(1)/y_(2) plane together with samples; the enclosure tightens as the per-neuron polynomial orders are refined step by step, e.g. from orders [1 1 1 1 1] [1 1 1 1 1] [1 1] up to [2 4 5 1 5] [2 2 2 2 2] [2 1].] Reference: Tobias Ladner and Matthias Althoff. "Automatic abstraction refinement in neural network verification using sensitivity analysis". Proceedings of the 26th ACM International Conference on Hybrid Systems: Computation and Control, 2023, pp. 1-13.
  43. Open-Loop System. Going back to our image example: what happens if we add more noise to our images? [Figure: prediction scores over the labels 0-9 with unsafe set S, output set Y, and samples.]
  44. Open-Loop System. [Figure: output set Y obtained with the linear abstraction, plotted against the unsafe set S and the samples.]
  45. Open-Loop System. [Figure: output set Y obtained with the refined abstraction.] → We can verify a larger noise radius using higher-order polynomials!
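How the check "Y must not intersect S" looks in code depends on how the unsafe set is encoded; the talk does not spell this out. One common encoding for classification is "unsafe if some wrong logit can reach the correct one", which a zonotope output set allows to check with simple bounds. A hedged sketch:

```python
import numpy as np

def verified_classification(c, G, target):
    # For each wrong label j, bound y_j - y_target over the output zonotope <c, G>_Z:
    # max = (c_j - c_target) + sum |G_j - G_target|. If all maxima are < 0,
    # the output set Y cannot intersect the unsafe set S.
    diff_c = c - c[target]
    diff_G = G - G[target, :]
    upper = diff_c + np.abs(diff_G).sum(axis=1)
    upper[target] = -np.inf                    # ignore the target logit itself
    return bool(np.all(upper < 0.0)), upper

# Illustrative 3-class output set.
c = np.array([-2.0, 5.0, -1.0])
G = np.array([[0.3, 0.1],
              [0.2, 0.4],
              [0.5, 0.2]])
print(verified_classification(c, G, target=1))
```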
  46. Closed-Loop System. We can now compute the output set of a neural network.
  47. Closed-Loop System. Can we use this knowledge to verify a neural network as part of a dynamic system? [Diagram: open-loop system ẋ = f(x, u), a sampler with period Δt, and a neural network controller u = Φ(x), connected in a feedback loop via x and u.]
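The block diagram can be mimicked by a plain point-wise simulation loop, which shows how the sampled controller interacts with the continuous dynamics. This is only a simulation, not a reachability analysis, and the one-dimensional system and controller below are illustrative stand-ins, not the quadrotor benchmark:

```python
import numpy as np

def simulate(x0, controller, f, dt_ctrl=0.1, dt_sim=0.01, t_end=5.0):
    # Sampled closed loop: u = Phi(x) is held constant for dt_ctrl,
    # while x' = f(x, u) is integrated with a simple Euler step.
    x, u = np.asarray(x0, dtype=float), 0.0
    steps_per_sample = int(round(dt_ctrl / dt_sim))
    traj = []
    for step in range(int(round(t_end / dt_sim))):
        if step % steps_per_sample == 0:
            u = controller(x)                  # sample the controller every dt_ctrl
        x = x + dt_sim * f(x, u)               # Euler step of x' = f(x, u)
        traj.append((step * dt_sim, x.copy()))
    return traj

# Toy 1-D system x' = -x + u; the lambda stands in for a trained network Phi.
f = lambda x, u: -x + u
controller = lambda x: 1.0 + 0.5 * (1.0 - x[0])
print(simulate([0.0], controller, f)[-1])
```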
  48. Closed-Loop System: Neural Network Controller. Controllers are usually sampled every Δt; the output of the neural network controller is then used as input to the dynamic system. [Figure: quadrotor altitude over time with goal set and simulations.]
  49. Closed-Loop System: Neural Network Controller. [Figure: the same plot with the sampling times marked.]
  50. Closed-Loop System: Neural Network Controller. We can verify a given system by evaluating everything set-based. [Figure: quadrotor altitude over time with goal set and simulations.]
  51. Closed-Loop System: Neural Network Controller. [Figure: quadrotor (linear abstraction) with initial set, reachable set, goal set, and simulations.]
  52. Closed-Loop System: Neural Network Controller. [Figure: quadrotor (refined abstraction) with initial set, reachable set, goal set, and simulations.] → Refinement can also help in closed-loop systems!
  53. Closed-Loop System: Examples. [Figure: simplified car model in the x_(1)/x_(2) plane with initial set, reachable set, goal set, and simulations.]
  54. Closed-Loop System: Examples. [Second figure: lane following; distance over time with the safe distance and simulations.]
  55. CORA. If you want to verify some networks yourself: https://cora.in.tum.de. Don't hesitate to contact us!
  56. Cyber-Physical Systems Group (Prof. Althoff). Figure: YouTube, TUM Cyber-Physical Systems, https://www.youtube.com/watch?v=IUAeZGau28E
  57. References.
      Goodfellow, Ian, Jonathon Shlens, and Christian Szegedy. "Explaining and harnessing adversarial examples". In: International Conference on Learning Representations. 2015.
      Kochdumper, Niklas, et al. "Open- and closed-loop neural network verification using polynomial zonotopes". In: NASA Formal Methods Symposium. Springer, 2023, pp. 16-36.
      Ladner, Tobias, and Matthias Althoff. "Automatic abstraction refinement in neural network verification using sensitivity analysis". In: Proceedings of the 26th ACM International Conference on Hybrid Systems: Computation and Control. 2023, pp. 1-13.
  58. Appendix: Formal Neural Network Reduction. Larger networks are usually harder to verify than smaller networks. Is it possible to construct a smaller network Φ̄ such that the verification of the reduced network implies the verification of the original network Φ: Φ̄(X) ∩ S = ∅ ⟹ Φ(X) ∩ S = ∅,   (21) where X is the input set and S is the unsafe set?
  59. Appendix: Formal Neural Network Reduction. [Figure: original network with a group of neurons marked by the set B_{k,y,δ}.]
  60. Appendix: Formal Neural Network Reduction. [Figure: the marked neurons are replaced by a single neuron y, yielding the reduced network.]
  61. Appendix: Set-based Training. Standard training only uses a single point to update the weights. [Diagram: point-based forward pass x → (linear W_1, b_1) → activation µ_2 → (linear W_3, b_3) → activation µ_4 → ŷ, and the corresponding backward pass of the gradient ∂E(y, ŷ)/∂ŷ through the layers.]
  62. Appendix: Set-based Training. [Diagram: the same forward and backward pass evaluated set-based, with sets H_0, …, H_4, image enclosures for the activation layers, an output set Y, and a set-based gradient ∂E(y, Y)/∂Y.] → Including uncertainty in the training process makes the resulting networks more robust.