
AI Development in .NET: Microsoft.Extensions.AI

Mert Metin
February 06, 2026
AgentCon Istanbul - 07.02.2026


Transcript

  1. Brief Recap of the GenAI Concepts: Generative AI, LLM, Prompt, Roles, RAG, Agent, Token, Embedding, Tools
  2. Generative AI: A subset of AI that creates new content such as text, code, audio, and images, based on user prompts and learned data.
  3. Large Language Model: LLMs are optimized for complex tasks, trained on huge datasets, and capable of understanding and interpreting human language.
  4. Prompt: An input used to communicate with the model to generate the desired output. Prompt Engineering is the practice of designing effective prompts. Prompts must be clear, specific, and goal-oriented, and should include contextual information.
  5. Agents: They are capable of reasoning, planning, and interacting with their environment, using generative AI to perform tasks autonomously.
  6. Roles: System specifies “How will the model behave?” and is used to keep the model focused on a specific task or behavior pattern. User carries the prompt and query from the user. Assistant is the model’s response to the user’s prompt.
  7. Tools: Functions provided to LLMs; they must have clear objectives. Tools enable agents to perform actions such as web search, retrieving information from external sources, and API calls.
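
As a rough illustration of the tool concept in the .NET stack this deck covers, the sketch below exposes a plain method to the model with Microsoft.Extensions.AI's AIFunctionFactory and automatic function invocation. The GetWeather method, the local Ollama endpoint, and the model name are assumptions, not taken from the deck.

```csharp
using Microsoft.Extensions.AI;
using OllamaSharp;

// Hypothetical tool: a plain .NET method with a clear objective and a description.
[System.ComponentModel.Description("Gets the current weather for a city.")]
static string GetWeather(string city) =>
    city == "Istanbul" ? "Sunny, 22 degrees" : "Unknown";

// Let the chat client automatically invoke tools the model asks for.
IChatClient client = new OllamaApiClient(new Uri("http://localhost:11434"), "llama3.2")
    .AsBuilder()
    .UseFunctionInvocation()
    .Build();

var options = new ChatOptions { Tools = [AIFunctionFactory.Create(GetWeather)] };
var response = await client.GetResponseAsync("What's the weather in Istanbul?", options);
Console.WriteLine(response.Text);
```
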
  8. Embedding - Vector Database: Embedding is an operation that converts text and images into numerical data (vectors) that machines can understand. A vector database stores these vectors and enables efficient similarity search.
  9. Retrieval Augmented Generation (RAG): Enhances LLMs with external data sources (databases, documents, APIs). Improves response relevance and accuracy.
  10. Why Microsoft.Extensions.AI: Support for switching between different LLM providers. Manageable, simple, and rapid development. Provides implementations for dependency injection, caching, telemetry, and other cross-cutting concerns. Implements chat and embedding features using any LLM provider.
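
A minimal sketch of what those cross-cutting concerns look like in practice: the provider client is registered once, and caching, telemetry, and logging are layered on through the ChatClientBuilder pipeline. OllamaSharp, the endpoint, and the model name are assumptions here; any other IChatClient provider could sit at the bottom of the pipeline.

```csharp
using Microsoft.Extensions.AI;
using Microsoft.Extensions.DependencyInjection;
using OllamaSharp;

var services = new ServiceCollection();
services.AddLogging();
services.AddDistributedMemoryCache();

// Register the provider once; caching, telemetry, and logging wrap around it.
// Swapping the inner client (Ollama, OpenAI, Azure, ...) leaves the pipeline untouched.
services.AddChatClient(new OllamaApiClient(new Uri("http://localhost:11434"), "llama3.2"))
    .UseDistributedCache()
    .UseOpenTelemetry()
    .UseLogging();

using var provider = services.BuildServiceProvider();
IChatClient chatClient = provider.GetRequiredService<IChatClient>();
```
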
  11. Use Cases: Product descriptions for the e-commerce domain, content summarization of long articles, easy categorization for news feeds, customer review analysis, call center assistants, internal frequently-asked-question assistants.
  12. Working With Model Providers: OllamaSharp is the easiest way to use Ollama in .NET and is useful for working on a local machine. Note: the Microsoft.Extensions.AI.Ollama package is deprecated.
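
A minimal sketch of the OllamaSharp route, assuming a local Ollama instance on the default port and an already-pulled model; OllamaApiClient plugs in as the IChatClient used in the rest of the deck.

```csharp
using Microsoft.Extensions.AI;
using OllamaSharp;

// OllamaApiClient implements Microsoft.Extensions.AI.IChatClient,
// so it is used through the same abstraction as any other provider.
IChatClient client = new OllamaApiClient(new Uri("http://localhost:11434"), "llama3.2");
```
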
  13. GetResponseAsync: Sends a user chat text message and returns the response messages. The return type is ChatResponse, which represents the response to a chat request.
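
A sketch of a single-turn call, reusing the client from the previous snippet; the prompt text is just an example.

```csharp
using Microsoft.Extensions.AI;

// Send a user chat text message and get the full ChatResponse back.
ChatResponse response = await client.GetResponseAsync(
    "Summarize Microsoft.Extensions.AI in one sentence.");

Console.WriteLine(response.Text); // the assistant's reply text
```
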
  14. GetStreamingResponseAsync: Sends chat messages and streams the response. The return type is IAsyncEnumerable<ChatResponseUpdate>, which provides a stream of chat response updates.
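
The streaming counterpart, again reusing the same client; each ChatResponseUpdate carries a chunk of the reply as it is generated.

```csharp
using Microsoft.Extensions.AI;

// Stream the reply instead of waiting for the complete ChatResponse.
await foreach (ChatResponseUpdate update in
    client.GetStreamingResponseAsync("Write a haiku about .NET."))
{
    Console.Write(update.Text);
}
```
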
  15. Conversation Case - git commit generator: ChatRole.System specifies “How will the model behave?” Initialize the List<ChatMessage> with a prompt of this role.
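
A sketch of how the git commit generator conversation could start; the exact system prompt wording is an assumption.

```csharp
using Microsoft.Extensions.AI;

// The system message pins the model to the commit-generation task.
List<ChatMessage> history =
[
    new(ChatRole.System,
        "You are a git commit message generator. " +
        "Given a description of changes, reply with a single conventional commit message.")
];
```
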
  16. Conversation Case - git commit generator: Each user’s prompt must be set to ChatRole.User. The ChatRole.Assistant role is set by AddMessages.
  17. History and Roles: List<ChatMessage> often represents the history of the whole chat, i.e. the messages that are part of the conversation, including the chat result.
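
Continuing the same sketch: a user turn is appended with ChatRole.User, the call is made with the whole history, and AddMessages appends the assistant's reply so the next turn sees the full conversation. The change description and the reply shown in the comment are illustrative only.

```csharp
using Microsoft.Extensions.AI;

// Add the user's turn, send the whole history, then fold the reply back in.
history.Add(new ChatMessage(ChatRole.User,
    "Renamed OrderService.Process to ProcessAsync and made it awaitable."));

ChatResponse response = await client.GetResponseAsync(history);
history.AddMessages(response); // appends the ChatRole.Assistant message(s)

Console.WriteLine(response.Text); // e.g. "refactor(order): make Process asynchronous"
```
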
  18. Embeddings: GenerateEmbeddingAsync returns more detailed information about the embedding, with the vector collection also included. GenerateEmbeddingVectorAsync returns only the vector collection.
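
A sketch contrasting the two methods, assuming OllamaSharp's OllamaApiClient as the IEmbeddingGenerator and an embedding model such as all-minilm; the method names follow the package version used in the deck and may differ in newer releases.

```csharp
using Microsoft.Extensions.AI;
using OllamaSharp;

IEmbeddingGenerator<string, Embedding<float>> generator =
    new OllamaApiClient(new Uri("http://localhost:11434"), "all-minilm");

// Full embedding object: the vector plus metadata such as the model id.
Embedding<float> embedding = await generator.GenerateEmbeddingAsync("Hello, embeddings!");
Console.WriteLine(embedding.Vector.Length);

// Shortcut when only the raw vector is needed.
ReadOnlyMemory<float> vector = await generator.GenerateEmbeddingVectorAsync("Hello, embeddings!");
Console.WriteLine(vector.Length);
```
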
  19. Evaluate the response: Results may be affected by different LLMs. Chat messages and the model’s responses are an essential part of EvaluateAsync.
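
A rough sketch of evaluating a response with the Microsoft.Extensions.AI.Evaluation packages, assuming the same chat client is also used as the evaluation LLM; the choice of CoherenceEvaluator and the exact metric types are assumptions and vary by package version.

```csharp
using Microsoft.Extensions.AI;
using Microsoft.Extensions.AI.Evaluation;
using Microsoft.Extensions.AI.Evaluation.Quality;

// The evaluator itself calls an LLM, so scores can differ between models.
IEvaluator evaluator = new CoherenceEvaluator();
var chatConfiguration = new ChatConfiguration(client);

var messages = new List<ChatMessage> { new(ChatRole.User, "Explain RAG in one sentence.") };
ChatResponse modelResponse = await client.GetResponseAsync(messages);

// The chat messages and the model's response are the core inputs to EvaluateAsync.
EvaluationResult result = await evaluator.EvaluateAsync(messages, modelResponse, chatConfiguration);

foreach (NumericMetric metric in result.Metrics.Values.OfType<NumericMetric>())
{
    Console.WriteLine($"{metric.Name}: {metric.Value}");
}
```
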