
Agentic development using Symfony AI

Mario Blazek

October 22, 2025

Transcript

  1. About Four of Them • Technology provider for ihreapotheken.de • 90+ experts • Remote first • Office based in Zaprešić
  2. Company organization • Flutter team • iOS and Android team • QA team • WordPress team • Frontend team • Backend team • Project management team • Design team • Marketing campaigns team
  3. Tech stack • Azure, Kubernetes, GitLab • PHP and Symfony • Node (TypeScript), Nuxt.js, Vue.js • Flutter (Dart), native iOS (Swift) and Android (Kotlin) • WordPress (PHP)
  4. About me • Mario Blažek • CTO @4ofthem • Married with children • Believes in Open Source • Zagreb PHP Meetup organizer (ZgPHP) • Chief Fire Officer
  5. Setup • VirtualBox virtual machine • Visual Studio Code • Terminal • git • ~/Desktop/foi2025-symfony-ai
  6. About this workshop • Learn AI skills that make you stand out in the PHP world • Build a real FOI assistant that helps students today • Master the same AI tools used by ChatGPT and Claude
  7. Artificial Intelligence • 1950 - Alan Turing publishes "Computing Machinery and Intelligence" • 1980 - AI boom ("expert systems") • 1990 - AI agents • 2000 - ANI - Artificial Narrow Intelligence • 2040 - AGI - Artificial General Intelligence (projected) • 2060 - ASI - Artificial Superintelligence (projected)
  8. Artificial Intelligence • Writing documentation • Testing • Boilerplate code • Low-level positions -> high transformation • High-level positions -> low transformation • Embrace the change
  9. Symfony • PHP framework • 50+ standalone components • Powers major platforms (Drupal, Laravel, phpBB) • 600,000+ developers worldwide • Open source
  10. Why Symfony? • Best-in-class documentation • Active community support • Long-term support (LTS) versions • Predictable release cycles • Backward compatibility promise
  11. Large Language Models (LLMs) • The brain behind AI • Neural networks trained on massive amounts of text data to understand and generate human-like text • Billions of parameters (GPT-4: reportedly ~1.7 trillion) • Trained on human text
  12. How LLMs work • Tokenization • "FOI is great" → ["FOI", "is", "great"] → [12453, 374, 2294] • Prediction - LLMs predict the next most likely token • "FOI offers programs in" → [likely next: "computer", "information", "data"] • Transformer Architecture • Input → Attention Mechanism → Neural Network → Output • LLMs don't "understand" like humans - they're incredibly sophisticated pattern matchers (toy sketch below)
  13. Types of LLMs - by size • Small <7B - Phi-2, TinyLlama - Edge devices, simple tasks • Medium 7B-70B - Llama-3, Mistral - Local deployment, most tasks • Large 70B-200B - Llama-70B, Mixtral - Complex reasoning • Massive >200B - GPT-4, Claude-3 - Most capable, API only
  14. Types of LLMs - by access • Closed Source: GPT-4, Claude, Gemini • Open Source: Llama, Mistral, Falcon • Local Models: Ollama, llama.cpp
  15. Types of LLMs - by specialization • General: ChatGPT, Claude • Code: Codex, CodeLlama, StarCoder • Domain: BioBERT, FinBERT, MedPaLM
  16. Understanding tokens • Basic units of text that LLMs process • Not exactly words, not exactly characters • 1 token ≈ 4 characters in English • 1 token ≈ ¾ of a word
  17. Understanding tokens • "Hello" → 1 token • "FOI" → 1 token • "University" → 2 tokens • "Informatics" → 3 tokens • "Hello, FOI students!" → 5 tokens (rough estimate sketch below)
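
A minimal sketch of the "1 token ≈ 4 characters" rule of thumb, in plain PHP. Real tokenizers (byte-pair encoding) produce different counts per word, so treat this only as a ballpark for budgeting prompts.

```php
<?php
// Rough token estimate using the "1 token ≈ 4 characters" heuristic.
// Only an approximation – actual counts come from the model's tokenizer.

function estimateTokens(string $text): int
{
    return (int) ceil(mb_strlen($text) / 4);
}

echo estimateTokens('Hello, FOI students!'); // 5 (matches the slide)
```
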
  18. Why tokens matter - cost • GPT-4o-mini pricing: • Input: $0.15 per 1M tokens • Output: $0.60 per 1M tokens • Example conversation: • Input: 500 tokens × $0.00000015 = $0.000075 • Output: 1000 tokens × $0.0000006 = $0.0006 • Total: ~$0.0007 per conversation (calculation sketch below)
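
The per-conversation total can be reproduced with a few lines of PHP; the prices are the GPT-4o-mini figures quoted on the slide.

```php
<?php
// Per-conversation cost at GPT-4o-mini pricing ($0.15 / $0.60 per 1M tokens).

const INPUT_PRICE_PER_TOKEN  = 0.15 / 1_000_000;
const OUTPUT_PRICE_PER_TOKEN = 0.60 / 1_000_000;

$inputTokens  = 500;
$outputTokens = 1000;

$cost = $inputTokens * INPUT_PRICE_PER_TOKEN
      + $outputTokens * OUTPUT_PRICE_PER_TOKEN;

printf("Cost per conversation: $%.6f\n", $cost); // ≈ $0.000675
```
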
  19. Why tokens matter - context limits • GPT-4: 128K tokens • Claude: 200K tokens • Local models: 2K-32K tokens
  20. Prompting fundamentals • System prompt - personality/role • "You are an FOI assistant, knowledgeable about programs, facilities, and student services." • User prompt - the question • "What programming languages are taught at FOI?" • Context - additional information • "Based on the 2024 curriculum..." (code sketch below)
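
A minimal sketch of how those three parts map onto Symfony AI's message API. The class and method names (MessageBag, Message::forSystem(), Message::ofUser()) are taken from the Platform component as published in symfony/ai; verify them against the current documentation, since the components are still evolving.

```php
<?php

use Symfony\AI\Platform\Message\Message;
use Symfony\AI\Platform\Message\MessageBag;

$messages = new MessageBag(
    // System prompt – personality/role
    Message::forSystem('You are an FOI assistant, knowledgeable about programs, facilities, and student services.'),
    // Context + user prompt – the question, grounded in additional information
    Message::ofUser("Based on the 2024 curriculum...\nWhat programming languages are taught at FOI?"),
);
```
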
  21. Prompt injection vulnerabilities • Role reversal attempts • Context switching • Instruction injection • Social engineering • Encoding/Obfuscation • Hypothetical framing
  22. Context windows • Maximum amount of text an LLM can "remember" in one conversation • GPT-3.5: 4K tokens (~3,000 words) • GPT-4: 128K tokens (~96,000 words) • Claude 3: 200K tokens (~150,000 words) • Gemini 1.5: 1M tokens (~750,000 words) • Local (Llama): 2-32K tokens (~1,500-24,000 words)
  23. LLM limitations • Knowledge cutoff • Hallucinations • No true understanding • Context confusion • Biases
  24. Local vs Cloud LLMs • Cloud LLMs: Best quality, no setup, pay per use, needs internet • Local LLMs: Full privacy, no API costs, works offline, needs GPU • Cloud costs ~$0.001 per query, Local costs $5000+ upfront • Cloud for quality and ease, Local for privacy and control • Best practice: Use both - Local for sensitive data, Cloud for complex tasks
  25. Practical LLM costs - assumptions • 2800 students at FOI • 10% daily active users (280 students) • 5 queries per user per day = 1,400 queries • Average: 500 input + 1000 output tokens per query (worked estimate below)
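
Under these assumptions and the GPT-4o-mini prices from slide 18, the daily and monthly bill works out as follows (a back-of-the-envelope sketch, not a quote):

```php
<?php
// Daily/monthly estimate for the FOI assistant under the slide's assumptions.

$queriesPerDay = 2800 * 0.10 * 5;          // 1,400 queries per day
$costPerQuery  = 500 * 0.15 / 1_000_000    // input tokens
               + 1000 * 0.60 / 1_000_000;  // output tokens ≈ $0.000675

printf("Daily:   $%.2f\n", $queriesPerDay * $costPerQuery);      // ≈ $0.95
printf("Monthly: $%.2f\n", $queriesPerDay * $costPerQuery * 30); // ≈ $28.35
```
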
  26. What are AI agents? • Agents are AI assistants that can take actions, not just chat • They use tools to interact with the real world (databases, APIs, files) • Agents remember context and learn from conversations • They can plan, reason, and complete multi-step tasks autonomously
  27. Agent capabilities - tools • Tools are functions that agents can call to get things done • They connect agents to external systems (databases, emails, calendars) • Tools turn agents from advisors into actors that complete real tasks • You can create custom tools for any API or service you need (example sketch below) • Agents decide which tools to use and when to use them
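
A sketch of what a custom tool for the FOI assistant could look like. The #[AsTool] attribute and its namespace are assumptions based on the Agent component's toolbox in symfony/ai; confirm the exact names in the current docs. The course data is a hard-coded placeholder standing in for a real database or API call.

```php
<?php

use Symfony\AI\Agent\Toolbox\Attribute\AsTool;

// Hypothetical tool: the agent can call it to answer course questions.
#[AsTool('foi_course_lookup', 'Returns the programming courses taught at FOI for a given study year.')]
final class CourseLookup
{
    public function __invoke(int $year): string
    {
        // Placeholder data – a real tool would query a database or an API.
        $courses = [
            1 => 'Programming 1 (C), Programming 2 (Java)',
            2 => 'Web design and programming (PHP, JavaScript)',
        ];

        return $courses[$year] ?? 'No course data for that study year.';
    }
}
```
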
  28. Single agent patterns • Q&A Pattern: Agent simply answers user questions • Task Executor: Agent receives task → uses tools → returns result • Memory Pattern: Agent remembers conversation history for context • Goal-Oriented: Agent works toward achieving a specific objective • Reactive: Agent responds to events and triggers automatically
  29. Multi-agent workflows • Multiple specialized agents work together on complex tasks • Parallel processing: Agents work simultaneously for faster results • Supervisor pattern: One agent coordinates and manages others • Pipeline pattern: Agents pass work sequentially like an assembly line • Each agent is an expert in one domain for better quality
  30. RAG (Retrieval-Augmented Generation) • A technique that combines information retrieval with text generation to make AI responses more accurate and grounded in specific knowledge • Addresses: knowledge cutoff, hallucinations • Traditional LLM: Question -> LLM brain -> Answer • RAG: Question -> Search documents -> Include facts -> LLM brain -> Answer (flow sketch below)
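
A framework-free sketch of that flow; $search and $ask are placeholders for a document/vector-store lookup and an LLM call (in Symfony AI terms, the Store and Platform/Agent components respectively).

```php
<?php
// RAG flow: question -> search documents -> include facts -> LLM -> answer.

function answerWithRag(string $question, callable $search, callable $ask): string
{
    // 1. Retrieve documents relevant to the question.
    $facts = implode("\n", $search($question));

    // 2. Ground the prompt in the retrieved facts.
    $prompt = "Answer using only the facts below.\n\nFacts:\n{$facts}\n\nQuestion: {$question}";

    // 3. Generate the answer from the grounded prompt.
    return $ask($prompt);
}
```
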
  31. Model Context Protocol (MCP) • The USB standard for AI • An open protocol that standardizes how AI assistants connect to external data sources and tools • Created by Anthropic • https://modelcontextprotocol.io/
  32. Symfony AI • https://symfony.com/blog/kicking-off-the-symfony-ai-initiative • Goal: provide a comprehensive set of components and bundles designed to bring powerful AI capabilities directly into PHP applications • https://github.com/symfony/ai • Not just wrappers, but a complete AI development framework
  33. Symfony AI - architecture • Platform component - A unified interface to major AI providers like OpenAI, Anthropic, Azure, Google, Mistral, and more. Write your code once and switch between AI platforms seamlessly • Agent component - A framework for building AI agents that can interact with users, call tools, and perform complex multi-step tasks. Perfect for creating sophisticated chatbots and automated workflows • Store component - Data storage abstraction with indexing and retrieval capabilities for AI applications. Ideal for implementing RAG (Retrieval-Augmented Generation) patterns and semantic search • AI Bundle - Seamlessly integrates the Platform, Store, and Agent components into Symfony applications with configuration, dependency injection, and debugging tools • MCP SDK - An implementation of the Model Context Protocol, enabling your applications to communicate with AI systems using the emerging industry standard • MCP Bundle - Allows your Symfony applications to act as MCP servers or clients, opening up new possibilities for AI integration and tool creation
  34. Symfony AI - install • composer config minimum-stability dev • composer require symfony/ai-platform symfony/ai-bundle (minimal usage sketch below)
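
A minimal end-to-end sketch of what the installed packages enable. The bridge, agent, and message class names are assumptions drawn from the symfony/ai repository layout at the time of writing, and the components are still pre-1.0, so check the current documentation before copying.

```php
<?php

use Symfony\AI\Agent\Agent;
use Symfony\AI\Platform\Bridge\OpenAi\PlatformFactory;
use Symfony\AI\Platform\Message\Message;
use Symfony\AI\Platform\Message\MessageBag;

// Assumed wiring: an OpenAI-backed platform plus a simple agent on top of it.
$platform = PlatformFactory::create($_ENV['OPENAI_API_KEY']);
$agent    = new Agent($platform, 'gpt-4o-mini');

$result = $agent->call(new MessageBag(
    Message::forSystem('You are an FOI assistant.'),
    Message::ofUser('What programming languages are taught at FOI?'),
));

echo $result->getContent();
```
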