the prompt and few shots to get answers in a certain tone/format.
• RAG (Retrieval Augmented Generation): Combines the prompt with domain data from a knowledge base to get grounded answers.

https://github.com/f/awesome-chatgpt-prompts
https://github.com/Azure-Samples/azure-search-openai-demo/
aka.ms/ragchat
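At its core, the RAG pattern just concatenates retrieved domain data into the prompt. A minimal sketch in Python, assuming the openai package and a list of already-retrieved chunks (in the Azure sample, retrieval would come from Azure AI Search); the prompt wording and model name are illustrative, not the sample app's actual prompt:

```python
from openai import OpenAI

client = OpenAI()

def rag_answer(question: str, retrieved_chunks: list[str]) -> str:
    # Grounding context: the retrieved knowledge-base chunks, joined into the prompt.
    sources = "\n\n".join(retrieved_chunks)
    system_prompt = (
        "Answer ONLY using the sources below. "
        "Cite the source filename for every fact you state. "
        "If the answer is not in the sources, say you don't know.\n\n"
        f"Sources:\n{sources}"
    )
    response = client.chat.completions.create(
        model="gpt-3.5-turbo",  # illustrative model name
        messages=[
            {"role": "system", "content": system_prompt},
            {"role": "user", "content": question},
        ],
        temperature=0.3,
    )
    return response.choices[0].message.content
```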
Example RAG flow:
• User question: "Do my company perks cover underwater activities?"
• Document search result (PerksPlus): "…lessons covered under PerksPlus include: skiing and snowboarding lessons, scuba diving lessons, surfing lessons, horseback riding lessons. These lessons provide employees with the opportunity to try new things, challenge themselves, and improve their physical skills…"
• Large Language Model answer: "Yes, your company perks cover underwater activities such as scuba diving lessons [1]"
• Are they grounded (correct according to the knowledge base)?
• Are they clear and understandable?
• Are they formatted in the desired manner?

Example question: "Do the perks cover underwater activities?"

Three answers to compare:
1. "Yes, underwater activities are included as part of the PerksPlus program. Some of the underwater activities covered under PerksPlus include scuba diving lessons [PerksPlus.pdf#page=3]."
2. "Yes, according to the information provided in the PerksPlus.pdf document, underwater activities such as scuba diving are covered under the program."
3. "Yes, the perks provided by the PerksPlus Health and Wellness Reimbursement Program cover a wide range of fitness activities, including underwater activities such as scuba diving. The program aims to support employees' physical health and overall well-being, so it includes various lessons and experiences that promote health and wellness. Scuba diving lessons are specifically mentioned as one of the activities covered under PerksPlus. Therefore, if an employee wishes to pursue scuba diving as a fitness-related activity, they can expense it through the PerksPlus program."
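Checks like these can be automated with an LLM-as-judge prompt that scores each answer against the retrieved sources. A sketch under assumptions: the rubric wording, the 1-5 scale, and the model name are illustrative, not the exact prompts behind the azure-ai-generative GPT metrics:

```python
from openai import OpenAI

client = OpenAI()

JUDGE_PROMPT = """You are grading an answer from a RAG chat app.
Sources:
{sources}

Question: {question}
Answer: {answer}

Rate the answer from 1 (worst) to 5 (best) on each criterion:
- groundedness: every claim is supported by the sources
- clarity: the answer is clear and understandable
- format: the answer is concise and cites sources like [filename]

Reply with only three integers separated by spaces."""

def judge_answer(question: str, answer: str, sources: str) -> dict:
    response = client.chat.completions.create(
        model="gpt-4",
        messages=[{"role": "user", "content": JUDGE_PROMPT.format(
            sources=sources, question=question, answer=answer)}],
        temperature=0,
    )
    # Assumes the model complies with the requested output format.
    groundedness, clarity, fmt = map(int, response.choices[0].message.content.split())
    return {"groundedness": groundedness, "clarity": clarity, "format": fmt}
```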
Document Search (Azure AI Search):
• Search query cleaning
• Search options (hybrid, vector, reranker)
• Additional search options
• Data chunk size and overlap
• Number of results returned

Large Language Model:
• System prompt
• Language
• Message history
• Model (e.g. GPT-3.5)
• Temperature (0-1)
• Max tokens

Flow: Question → Document Search → Large Language Model
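In practice these knobs end up in a single configuration object so each evaluation run can pin one exact combination. A hedged sketch; the key names are illustrative placeholders, not the override names used by the azure-search-openai-demo app:

```python
# One parameter set = one evaluation run. Keys are illustrative placeholders.
rag_parameters = {
    # Document search (Azure AI Search) side
    "search": {
        "query_cleaning": True,        # strip chat filler before searching
        "retrieval_mode": "hybrid",    # "text", "vector", or "hybrid"
        "use_semantic_reranker": True,
        "chunk_size": 1024,            # characters per chunk at indexing time
        "chunk_overlap": 128,
        "top": 3,                      # number of results returned
    },
    # Large language model side
    "llm": {
        "model": "gpt-3.5-turbo",
        "system_prompt": "Answer only from the provided sources...",
        "include_message_history": True,
        "temperature": 0.3,            # 0-1: lower = more deterministic
        "max_tokens": 1024,
    },
}
```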
Development flow (diagram):
1. Ideating/exploring: connect to your data, customize the prompt for the domain, run the app against sample questions, and try different parameters; if not satisfied, change the defaults and repeat.
2. Building/augmenting: run the flow against a larger dataset and evaluate the answers; if not satisfied, improve the prompt and orchestration and repeat.
3. Operationalizing: deploy the app to users, add monitoring and alerts, and evaluate user feedback.
for automating the evaluation of RAG answer quality:
• Generate ground truth data
• Evaluate with different parameters
• Compare the metrics and answers across evaluations

Based on the azure-ai-generative SDK: https://pypi.org/project/azure-ai-generative/
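The comparison step can be as simple as loading each run's summary metrics and printing them side by side. A hypothetical sketch; the results directory layout and summary.json filename are assumptions, not the tool's actual output format:

```python
import json
from pathlib import Path

def compare_runs(results_dir: str) -> None:
    """Print a side-by-side table of mean metrics for every evaluation run."""
    rows = []
    for run_dir in sorted(Path(results_dir).iterdir()):
        summary_file = run_dir / "summary.json"   # assumed filename
        if not summary_file.exists():
            continue
        rows.append((run_dir.name, json.loads(summary_file.read_text())))

    metrics = ["gpt_groundedness", "gpt_relevance", "gpt_coherence", "has_citation"]
    print(f"{'run':<30}" + "".join(f"{m:>20}" for m in metrics))
    for name, summary in rows:
        print(f"{name:<30}" + "".join(f"{summary.get(m, float('nan')):>20.2f}" for m in metrics))

compare_runs("evaluation_results")
```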
The ground truth data is the ideal answer for a question. Manual curation is recommended!

Generate Q/A pairs from a search index (flow): documents from Azure AI Search → prompt + docs via the azure-ai-generative SDK → Azure OpenAI → Q/A pairs
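A sketch of that generation flow in Python. The azure-search-documents SearchClient calls are standard; the QADataGenerator import path, model_config keys, and return shape are written from memory of the azure-ai-generative SDK and should be treated as assumptions to verify against the SDK docs:

```python
import os
from azure.core.credentials import AzureKeyCredential
from azure.search.documents import SearchClient
from azure.ai.generative.synthetic.qa import QADataGenerator, QAType  # assumed module path

# Pull document chunks out of the existing search index.
search_client = SearchClient(
    endpoint=os.environ["AZURE_SEARCH_ENDPOINT"],
    index_name="gptkbindex",                       # placeholder index name
    credential=AzureKeyCredential(os.environ["AZURE_SEARCH_KEY"]),
)
docs = [doc["content"] for doc in search_client.search(search_text="*", top=50)]  # "content" is a placeholder field

# Ask Azure OpenAI (via the SDK's generator) to write Q/A pairs for each chunk.
qa_generator = QADataGenerator(model_config={    # keys are assumptions; see SDK docs
    "api_base": os.environ["AZURE_OPENAI_ENDPOINT"],
    "api_key": os.environ["AZURE_OPENAI_KEY"],
    "deployment": "gpt-4",                       # placeholder deployment name
    "model": "gpt-4",
})
qa_pairs = []
for text in docs:
    result = qa_generator.generate(text=text, qa_type=QAType.LONG_ANSWER, num_questions=2)
    qa_pairs.extend(result["question_answers"])  # assumed: list of (question, answer) tuples

# Manual curation is still recommended: review and fix the pairs before using them.
```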
custom metrics for every question in ground truth. Evaluate based on the configuration (flow): question → local endpoint → response + ground truth → azure-ai-generative SDK → prompt → Azure OpenAI → metrics

Metrics: gpt_coherence, gpt_groundedness, gpt_relevance, length, has_citation
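The non-GPT metrics are plain functions over the generated answer. A sketch of what length and has_citation could look like; the exact definitions used by the evaluator may differ:

```python
import re

def length(answer: str) -> int:
    """Length of the generated answer in characters."""
    return len(answer)

def has_citation(answer: str) -> bool:
    """True if the answer cites a source like [PerksPlus.pdf#page=3]."""
    return bool(re.search(r"\[[^\]]+\.(pdf|md|txt|html?)(#page=\d+)?\]", answer))

print(has_citation("Scuba diving is covered [PerksPlus.pdf#page=3]."))  # True
print(has_citation("Scuba diving is covered."))                         # False
```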
Start by evaluating the baseline: the default parameters.
• For each set of parameters, evaluate at least 3x, since GPT-graded metrics vary between runs.
• And/or use a seed in the app itself to reduce variation.
• Track evaluation results in a repo, tied to RAG code changes.
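Both ideas target the same problem: GPT-graded metrics are noisy from run to run. A sketch of averaging several runs; run_evaluation is a stand-in stub, and the seed parameter shown in the comment is the OpenAI/Azure OpenAI chat completions reproducibility option:

```python
import random
from statistics import mean

def run_evaluation(parameters: dict) -> dict:
    """Stand-in for one evaluation run; replace with the real evaluator call.
    Returns mean metric values for this run (dummy numbers here)."""
    return {"gpt_groundedness": random.uniform(4, 5), "gpt_relevance": random.uniform(4, 5)}

def evaluate_parameters(parameters: dict, num_runs: int = 3) -> dict:
    """Average each metric over several runs to smooth out LLM variance."""
    runs = [run_evaluation(parameters) for _ in range(num_runs)]
    return {metric: mean(run[metric] for run in runs) for metric in runs[0]}

print(evaluate_parameters({"temperature": 0.3}))

# Alternatively (or additionally), pin a seed in the app's own completion call so
# repeated requests with identical inputs are more likely to return identical output:
#   client.chat.completions.create(model=..., messages=..., seed=42, temperature=0)
```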
feedback dialog to your live app. Then you can:
• Manually debug the answers that got rated
• Add questions to the ground truth data

https://github.com/microsoft/sample-app-aoai-chatGPT/pull/396
aka.ms/rag/thumbs
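The wiring behind such a dialog can be one small endpoint that stores each rating next to the question and answer, so rated conversations can later be debugged or promoted into the ground truth set. A hypothetical sketch using Flask; the route, payload fields, and JSONL log are assumptions, not the schema from the linked PR:

```python
import json
from datetime import datetime, timezone
from flask import Flask, request, jsonify

app = Flask(__name__)
FEEDBACK_LOG = "feedback.jsonl"

@app.post("/feedback")
def feedback():
    """Store a thumbs up/down rating alongside the question and answer."""
    payload = request.get_json()
    record = {
        "timestamp": datetime.now(timezone.utc).isoformat(),
        "question": payload["question"],
        "answer": payload["answer"],
        "rating": payload["rating"],        # e.g. "thumbs_up" or "thumbs_down"
        "comment": payload.get("comment"),
    }
    with open(FEEDBACK_LOG, "a") as f:
        f.write(json.dumps(record) + "\n")
    return jsonify({"status": "recorded"})
```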