More and more developers are looking to integrate Generative AI features into their applications. Until now, this has almost always meant going to the cloud, but it doesn't have to be that way! Google and Microsoft are currently implementing built-in AI interfaces in Chrome and Edge that provide access to a locally installed large language model (LLM). The advantages are obvious: user data never leaves the device, and everything works even with a weak internet connection, or none at all. The two companies are also specifying the WebMCP protocol, which allows a web application to expose its tools to an LLM and become part of an agentic workflow. In this session, Christian Liebel will demonstrate the use cases these APIs cover and show you how to make your web application smarter.