Neural networks have achieved significant progress in various applications, including safety-critical tasks such as autonomous driving. However, their vulnerability to adversarial examples, i.e., small perturbations of the input that change the network's output, limits the applicability of neural networks in safety-critical environments. Given that uncertainties are unavoidable in real-world situations, the formal verification of neural networks has gained importance in recent years, both for open-loop neural network verification and for the verification of neural-network-controlled systems. We will discuss the fundamentals of formal neural network verification and how it can prove safety in such settings.