
AI Tool Calling And What We Can Do With Them


Tamar Twena-Stern

May 15, 2025

Transcript

  1. I am Asking My LLM A Question - I am

    a huge judo fan - Did Inbar Lanir win an Olympic medal?
  2. Large language models (LLMs) are machine learning models that can

    comprehend and generate human language text. They work by analyzing massive data sets of language.
  3. To Understand How LLM Works In High Level - We

    Need To Understand Its Layers
  4. Layer 2 - Deep Learning - Neural Networks - Models

    With A Large Number Of Parameters And Complicated Relations (example classification output: Animal → Probability: Cat 0.1, Dog 0.9)
  5. To Solve Text Problems - Embedding Words - Turn A

    Sentence To A Sequence Of Numeric Inputs
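The embedding step above can be sketched with a toy lookup table. This is only an illustration: the table and vector values are made up, and real models use learned, high-dimensional embeddings rather than a hand-written dictionary.

```javascript
// Toy sketch: a lookup table standing in for a learned embedding model.
// Real LLMs use high-dimensional learned vectors, not a hand-written table.
const embeddings = {
  the:   [0.1, 0.3],
  child: [0.8, 0.2],
  plays: [0.7, 0.9],
};

// Turn a sentence into a sequence of numeric inputs, one vector per word
function embedSentence(sentence) {
  return sentence
    .toLowerCase()
    .split(/\s+/)
    .map((word) => embeddings[word] ?? [0, 0]); // unknown words get a zero vector
}

console.log(embedSentence("The child plays"));
// three 2-dimensional vectors, one per word
```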
  6. Layer 3 - LLM Classification Problem To Predict Next Word

    The child likes to play with the ___ (Word → Probability: zebra 0.1, ball 0.7, shoe 0.05)
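The "predict the next word" framing above is a classification over the vocabulary: scores are turned into probabilities with a softmax, and greedy decoding picks the highest one. The score values below are invented for illustration; a real model computes them from the context.

```javascript
// Sketch: next-word prediction as classification over the vocabulary.
// The scores (logits) are made up; a real model computes them from context.
const logits = { zebra: 0.5, ball: 2.4, shoe: -0.3 };

// Softmax: exponentiate each score, then normalize so they sum to 1
function softmax(scores) {
  const exps = Object.fromEntries(
    Object.entries(scores).map(([w, s]) => [w, Math.exp(s)])
  );
  const sum = Object.values(exps).reduce((a, b) => a + b, 0);
  return Object.fromEntries(
    Object.entries(exps).map(([w, e]) => [w, e / sum])
  );
}

const probs = softmax(logits);
// Greedy decoding: pick the highest-probability word
const next = Object.keys(probs).reduce((a, b) => (probs[a] > probs[b] ? a : b));
console.log(next); // "ball"
```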
  7. LLM Tokens - Node.js is an open source, cross-platform

    JavaScript runtime environment that allows developers to run code outside the web browser.
  8. Temperature - LLM Setting To Sample Words With Smaller Probability

    The child likes to play with the ___ (Word → Probability: zebra 0.1, ball 0.7, shoe 0.05)
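Mechanically, temperature divides the scores before the softmax: higher values flatten the distribution so low-probability words like "zebra" are sampled more often, lower values sharpen it toward "ball". A sketch with invented scores:

```javascript
// Sketch: temperature rescales the scores before softmax.
// Higher temperature flattens the distribution, giving low-probability
// words a better chance of being sampled. Logits are made up.
const logits = { zebra: 0.5, ball: 2.4, shoe: -0.3 };

function softmaxWithTemperature(scores, temperature) {
  const scaled = Object.entries(scores).map(([w, s]) => [w, Math.exp(s / temperature)]);
  const sum = scaled.reduce((acc, [, e]) => acc + e, 0);
  return Object.fromEntries(scaled.map(([w, e]) => [w, e / sum]));
}

const cold = softmaxWithTemperature(logits, 0.5); // sharper: "ball" dominates
const hot = softmaxWithTemperature(logits, 2.0);  // flatter: "zebra" gains probability
console.log(cold.zebra < hot.zebra); // true
```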
  9. When Using An LLM , You Need To Give It

    A Prompt. prompt - “You are a helpful assistant that helps developers find bugs in their code”
  10. The Prompt Can Be Used To Avoid Hallucinations prompt -

    “You are a helpful assistant that helps developers find bugs in their code. If you don’t know the answer, say you don’t know. Never make up an answer”
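In chat-style APIs, instructions like these typically go in a system message that is sent alongside every user question. The sketch below only shows the common role/content message shape; the actual model call is omitted.

```javascript
// Sketch: the anti-hallucination instructions live in the system message,
// and each user question is sent alongside it. No model is called here;
// only the conventional { role, content } message shape is shown.
const systemPrompt =
  "You are a helpful assistant that helps developers find bugs in their code. " +
  "If you don't know the answer, say you don't know. Never make up an answer.";

function buildMessages(userQuestion) {
  return [
    { role: "system", content: systemPrompt },
    { role: "user", content: userQuestion },
  ];
}

const messages = buildMessages("Why does my loop never terminate?");
console.log(messages[0].role); // "system"
```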
  11. Function Calling enables your LLM to interface with code or

    external data • Fetch data • Submit a form, call APIs, modify application state
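A key point of function calling is that the LLM never runs code itself: it emits a structured tool call (a name plus arguments), and the application executes the matching function. A minimal dispatcher sketch, with hypothetical tool names and a hard-coded tool call standing in for a real model response:

```javascript
// Sketch of function calling on the application side. The tool names and
// the incoming tool call are invented; in practice the { name, arguments }
// object comes back from the LLM.
const tools = {
  getWeather: ({ city }) => `Sunny in ${city}`,
  submitForm: ({ fields }) => `Submitted ${Object.keys(fields).length} fields`,
};

function executeToolCall(toolCall) {
  const fn = tools[toolCall.name];
  if (!fn) throw new Error(`Unknown tool: ${toolCall.name}`);
  return fn(toolCall.arguments);
}

// Pretend the model asked for a weather lookup:
const result = executeToolCall({ name: "getWeather", arguments: { city: "Tel Aviv" } });
console.log(result); // "Sunny in Tel Aviv"
```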
  12. AI Agent An AI agent is a system that can

    perceive its environment, make decisions, and take actions to achieve specific goals — often autonomously and continuously over time.
  13. AI Agent - Main Usage For Tool Calling - Tool call

    execution results are injected into the LLM context window
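"Injected into the context window" means the tool's output is appended to the message history, so the next model call sees it. A sketch using the common chat-message convention (no real model or search tool is called; the history contents are illustrative):

```javascript
// Sketch: after a tool runs, its output is appended to the message history,
// so the next LLM call sees the result in its context window. Roles follow
// the common chat convention; the history below is made up.
const history = [
  { role: "user", content: "Did Inbar Lanir win an Olympic medal?" },
  { role: "assistant", tool_call: { name: "webSearch", arguments: { query: "Inbar Lanir Olympic medal" } } },
];

function injectToolResult(messages, toolName, result) {
  // Return a new array rather than mutating the original history
  return [...messages, { role: "tool", name: toolName, content: result }];
}

const updated = injectToolResult(history, "webSearch", "top search result snippets");
console.log(updated.length); // 3
```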
  14. LLM Providers That You Can Use • GPT-3.5, GPT-4,

    GPT-4 Turbo, GPT-4o • Gemini, PaLM • Llama
  15. We will develop our bot with an OpenAI model and LangChain

    and LangGraph - frameworks for developing applications powered by large language models (LLMs).
  16. Initialize OpenAI Model

    import { ChatOpenAI } from "@langchain/openai";
    // initialize the LLM
    const llm = new ChatOpenAI({ model: "gpt-4o" });
  17. ReAct AI Agent • Receives a user query • Thinks

    step-by-step (reasoning): "What do I know? What should I do next?" • Acts: Calls a tool (like a web search, calculator, database, etc.) • Observes the tool result • Thinks again, and loops if needed • Eventually outputs a final answer
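The think / act / observe cycle above can be sketched as a plain loop. Everything here is mocked: the "model" is a stub that asks for one search and then answers, and the tool just echoes its query. A real agent (such as LangGraph's createReactAgent shown on the next slide) manages this loop for you.

```javascript
// Minimal ReAct-style loop with a mocked model and one mocked tool.
// This only illustrates the reason -> act -> observe cycle; it is not
// how a production agent decides what to do.
const searchTool = (query) => `Results for "${query}"`;

// Mock "model": asks for a search on the first turn, answers on the second.
function mockModel(messages) {
  const hasObservation = messages.some((m) => m.role === "tool");
  return hasObservation
    ? { type: "final", content: "Answer based on the search results." }
    : { type: "tool_call", name: "search", arguments: messages[0].content };
}

function runReactLoop(question, maxSteps = 5) {
  const messages = [{ role: "user", content: question }];
  for (let step = 0; step < maxSteps; step++) {
    const decision = mockModel(messages);                 // Think
    if (decision.type === "final") return decision.content; // Final answer
    const observation = searchTool(decision.arguments);   // Act
    messages.push({ role: "tool", content: observation }); // Observe, then loop
  }
  throw new Error("Agent did not finish in time");
}

console.log(runReactLoop("Did Inbar Lanir win an Olympic medal?"));
// "Answer based on the search results."
```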
  18. Create ReAct Agent

    import { MemorySaver } from "@langchain/langgraph";
    import { createReactAgent } from "@langchain/langgraph/prebuilt";

    // Initialize memory to persist state between graph runs
    const agentCheckpointer = new MemorySaver();
    const agent = createReactAgent({
      llm: agentModel,
      tools: agentTools,
      checkpointSaver: agentCheckpointer,
    });
  19. Agent Invoke And Tool Calling

    import { HumanMessage } from "@langchain/core/messages";

    app.post('/api/question', async (req, res) => {
      const userQuestion = req.body.userQuestion;
      if (!userQuestion) {
        return res.status(400).json({ error: 'User question required' });
      }
      // Now it's time to use the agent
      const agentFinalState = await agent.invoke(
        { messages: [new HumanMessage(userQuestion)] },
        { configurable: { thread_id: "42" } },
      );
      const response =
        agentFinalState.messages[agentFinalState.messages.length - 1].content;
      res.status(200).json({ answer: response });
    });
  20. Initialize OpenAI Model

    import { ChatOpenAI } from "@langchain/openai";
    // initialize the LLM
    const openAiModel = new ChatOpenAI({
      openAIApiKey: process.env.OPENAI_API_KEY,
      temperature: 0.7,
    });
  21. Initialize The Search Tool

    import { TavilySearchResults } from "@langchain/community/tools/tavily_search";
    // initialize the search tool
    const searchTool = new TavilySearchResults({
      apiKey: process.env.TAVILY_API_KEY,
    });
  22. The Graph - Creation And Nodes

    import { Graph } from "@langchain/langgraph";

    // Create graph
    const graph = new Graph();

    // Start node
    graph.addEdge("__start__", "UserInput");

    // Node: UserInput
    graph.addNode("UserInput", async (query) => {
      log("UserInput node:", query);
      return { query };
    });
  23. The Web Search Node

    // Node: WebSearch
    graph.addNode("WebSearch", async (inputs) => {
      const queryStr = inputs.query?.UserInput || inputs.query; // Handle structure
      const searchResults = await searchTool.invoke(queryStr);
      log("WebSearch results:", searchResults);
      return { ...inputs, searchResults };
    });
  24. The Summary Node

    // Node: Summarize
    graph.addNode("Summarize", async (inputs) => {
      const summary = await openAiModel.invoke([
        {
          role: "system",
          content: "Summarize the search results in a structured format for decision-making.",
        },
        {
          role: "user",
          content: JSON.stringify(inputs.searchResults),
        },
      ]);
      log("Summarize result:", summary.text);
      return { ...inputs, summary: summary.text };
    });