AI Development in .NET - Microsoft.Extensions.AI

Devnot - Dotnet Konferansı 2025
24.05.2025
Sheraton Grand Ataşehir, İstanbul

Mert Metin

May 20, 2025
Transcript

  1. MERT METİN. Senior Software Engineer, blogger, speaker. Interests: Microsoft / .NET, software architecture, secure software, clean code, infrastructure, system design. QR to reach me.
  2. Brief Recap of the GenAI Concepts: Generative AI, LLM, Prompt, Roles, RAG, Agent, Token, Embedding, Tools.
  3. Generative AI. A subset of AI that creates new content, such as text, code, audio, and images, based on user prompts and learned data.
  4. Large Language Model. LLMs are optimized for complex tasks, trained on huge datasets, and capable of understanding and interpreting human language.
  5. Prompt. An input used to communicate with the model to generate the desired output. Prompt Engineering is the practice of designing effective prompts. Prompts must be clear, specific, and goal-oriented, and should include contextual information.
  6. Roles. System: specifies how the model will behave. User: the prompt and query from the user. Assistant: the model's response to the user's prompt. Roles are used to keep the model focused on a specific task or behavior pattern.
  7. Agents. They are capable of reasoning, planning, and interacting with their environment, using generative AI to perform tasks autonomously.
  8. Tools. Functions provided to LLMs; each tool must have a clear objective. Tools enable agents to perform actions such as web search, retrieving information from external sources, and API calls.
  9. Embedding - Vector Database. Embedding is an operation that converts text and images into numerical data (vectors) that machines can understand. A vector database stores numerical data as vectors, enabling efficient similarity search.
  10. Retrieval Augmented Generation (RAG). Enhances LLMs with external data sources (databases, documents, APIs). Improves response relevance and accuracy.
  11. Why Microsoft.Extensions.AI? Support for switching between different LLM providers. Manageable, simple, and rapid development. Provides implementations for dependency injection, caching, telemetry, and other cross-cutting concerns. Lets you implement chat and embedding features using any LLM provider.
  12. Working With Model Providers. Microsoft.Extensions.AI.Ollama: an LLM provider for Ollama, useful for working on a local machine. Note: this package is deprecated and the OllamaSharp package is recommended instead.
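A minimal sketch of the recommended setup: OllamaSharp's OllamaApiClient implements the Microsoft.Extensions.AI IChatClient abstraction, so the rest of the code can stay provider-agnostic. The endpoint URL and model name here are assumptions for a default local Ollama install.

```csharp
using Microsoft.Extensions.AI;
using OllamaSharp;

// Assumes Ollama is running locally on its default port
// and the "llama3" model has already been pulled.
IChatClient client = new OllamaApiClient(new Uri("http://localhost:11434"), "llama3");
```

Because the variable is typed as IChatClient, swapping Ollama for another provider later only changes this construction line.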
  13. GetResponseAsync. Sends a user chat text message and returns the response messages. The return type is ChatResponse, which represents the response to a chat request.
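A short sketch of the call described above, assuming `client` is any configured IChatClient (the prompt text is illustrative):

```csharp
using Microsoft.Extensions.AI;

// Send a single user message and await the full response.
ChatResponse response = await client.GetResponseAsync("Explain embeddings in one sentence.");

// ChatResponse.Text exposes the assistant's reply as plain text.
Console.WriteLine(response.Text);
```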
  14. GetStreamingResponseAsync. Sends chat messages and streams the response. The return type is IAsyncEnumerable&lt;ChatResponseUpdate&gt;, which provides a stream of chat response updates.
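The streaming variant can be sketched like this, again assuming `client` is a configured IChatClient; each update carries a fragment of the reply, so the output appears token by token:

```csharp
using Microsoft.Extensions.AI;

// Print each partial update as it arrives instead of
// waiting for the complete response.
await foreach (ChatResponseUpdate update in
    client.GetStreamingResponseAsync("Tell me a short story."))
{
    Console.Write(update.Text);
}
```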
  15. Conversation. The application needs to keep conversation context, so send the conversation history to the model using roles. The Assistant role is set by AddMessages. Case: a git commit message generator.
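A sketch of the history-plus-roles pattern for the git commit case mentioned above; `client` is an assumed IChatClient and the message texts are illustrative:

```csharp
using Microsoft.Extensions.AI;

// Seed the history with a System message that fixes the model's behavior.
List<ChatMessage> history =
[
    new(ChatRole.System, "You generate concise git commit messages."),
];

// Add the user's request and send the whole history to the model.
history.Add(new(ChatRole.User, "Diff summary: fixed null check in OrderService"));
ChatResponse response = await client.GetResponseAsync(history);

// AddMessages appends the assistant's response message(s) to the history,
// so the next request carries the full conversation context.
history.AddMessages(response);
```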
  16. History and Roles. A ChatMessage list often represents the history of all chat messages that are part of the conversation, and the chat result is appended to it.
  17. Embeddings. GenerateEmbeddingAsync returns more detailed information about the embedding, with the vector included. GenerateEmbeddingVectorAsync returns only the vector.
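The two embedding calls can be sketched as follows, assuming `generator` is a configured IEmbeddingGenerator&lt;string, Embedding&lt;float&gt;&gt; (for example, one backed by a local embedding model):

```csharp
using Microsoft.Extensions.AI;

// Full embedding object: vector plus metadata.
Embedding<float> embedding = await generator.GenerateEmbeddingAsync("hello world");

// Just the vector, as a ReadOnlyMemory<float>.
ReadOnlyMemory<float> vector = await generator.GenerateEmbeddingVectorAsync("hello world");

// The vector's length (dimensionality) depends on the model in use.
Console.WriteLine(vector.Length);
```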