You’re a software engineer, and your project has to integrate AI. But not just any AI: you need solutions that are private, efficient, and production-ready, not just a checkbox for the latest trend. Join me in this session, where we’ll apply The WHY Factor to cut through the hype and focus on what actually works.
In this hands-on session, we’ll explore how to build robust LLM applications using Java, open-source tools, and European machine learning models, ensuring compliance, security, and developer-friendly workflows. You’ll learn how to:
- Leverage open-source and European LLMs for inference, reducing dependency risks and keeping your data under your control.
- Design modular RAG architectures with Docling, enabling private, resource-efficient document processing that grounds responses and reduces hallucinations.
- Integrate agentic workflows securely, connecting your enterprise applications and APIs without compromising privacy.
- Monitor, test, and refine your Generative AI applications locally, using observability and evaluation strategies for real-world reliability.
We’ll build a live demo to show how Java and Spring AI can help you integrate AI responsibly: local-first, open-source, and hype-free. By the end of this session, we’ll have a working application and a bigger question: does this AI solution actually address a need, or is it just another trend we’re chasing?
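As a taste of the local-first setup, here is a minimal configuration sketch for pointing a Spring AI application at a locally running Ollama server. It assumes the Spring AI Ollama starter is on the classpath; the specific model name is an illustrative assumption, not part of the session material:

```yaml
# application.yaml — a minimal sketch, assuming the Spring AI Ollama starter
spring:
  ai:
    ollama:
      base-url: http://localhost:11434   # local Ollama server; prompts and documents never leave your machine
      chat:
        options:
          model: mistral                 # hypothetical choice: any locally pulled, openly licensed model
```

On the Java side, Spring AI’s `ChatClient` (built from an injected `ChatClient.Builder`) then talks to this local endpoint, which is what makes the dependency-risk and data-control points above concrete.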