Overview
Tool calling enables LLMs to interact with external systems, APIs, and functions beyond text generation. The LLM identifies when a tool is needed, selects the appropriate one, invokes it with the correct parameters, and incorporates the results into its response. This bridges the gap between conversational AI and actionable automation. The Platform supports tool calling in two primary contexts: Agentic Apps and Workflow Tools (AI Nodes).

The Tool Calling Process
How the LLM Selects Tools
The LLM uses tool descriptions to decide which tool to invoke.

Tool Definition
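As a minimal sketch, a tool definition in the widely used JSON Schema function-calling style might look like the following; the `get_weather` name and its parameters mirror the use case later in this section and are illustrative, not a Platform-specific schema.

```python
# Illustrative tool definition in the common JSON Schema function-calling style.
get_weather_tool = {
    "type": "function",
    "function": {
        "name": "get_weather",
        "description": (
            "Get the current weather for a city. Use when the user asks "
            "about weather conditions, temperature, or forecasts."
        ),
        "parameters": {
            "type": "object",
            "properties": {
                "location": {
                    "type": "string",
                    "description": "City name, e.g. Paris",
                },
                "unit": {"type": "string", "enum": ["Celsius", "Fahrenheit"]},
            },
            "required": ["location"],
        },
    },
}
```

The `description` fields are what the LLM reads when deciding whether this tool matches the user's query, so they carry most of the selection signal.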
Selection Criteria
The LLM considers:

- Query intent — What is the user trying to accomplish?
- Tool descriptions — Which tool’s description matches?
- Required information — What data does the tool provide?
- Parameter availability — Can required parameters be extracted?
Writing Effective Descriptions
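A hedged, illustrative contrast between a weak and an effective description for the hypothetical `get_weather` tool; the exact wording is an assumption, but the principle is that the description should state what the tool does, when to use it, and what inputs it needs.

```python
# Too vague: gives the LLM almost no signal for tool selection.
bad_description = "Weather tool."

# Effective: states purpose, when to invoke, and required inputs.
good_description = (
    "Get the current weather and short-term forecast for a city. "
    "Use when the user asks about temperature, rain, or conditions. "
    "Requires a city name; the unit defaults to Celsius."
)
```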
Multiple Tool Calling
Agents can invoke multiple tools in a single turn.

Sequential Execution
Tools run one after another when outputs are dependent.

Parallel Execution
Independent tools run simultaneously for faster responses.

Use Case: Sequential Multi-Tool Call
User query: “What’s the weather in Paris, and is it a good time to visit?” The LLM identifies two tools: `get_weather` and `get_travel_advice`.

- Calls `get_weather(location="Paris", unit="Celsius")` → 16°C, partly cloudy, light rain later
- Passes results to `get_travel_advice(location="Paris", weather_condition="Partly cloudy with light rain")` → Okay to visit; bring an umbrella
- Combines outputs: “The weather in Paris is 16°C and partly cloudy, with light rain expected. It’s a fine time to visit — just bring an umbrella.”
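The sequential flow above can be sketched as a simple dispatch loop. The tool implementations here are stand-ins (no real weather or travel API is called), and the dispatch shape is an assumption rather than the Platform's internal mechanism.

```python
# Minimal sketch of sequential tool execution: the second call depends
# on the first call's output. Tool bodies are illustrative stand-ins.

def get_weather(location, unit="Celsius"):
    # Stand-in for a real weather API call.
    return {"temp_c": 16, "condition": "Partly cloudy with light rain"}

def get_travel_advice(location, weather_condition):
    # Stand-in for a travel-advice tool.
    return {"advice": "Okay to visit; bring an umbrella"}

TOOLS = {"get_weather": get_weather, "get_travel_advice": get_travel_advice}

def run_tool_call(name, arguments):
    """Dispatch one tool call requested by the LLM."""
    if name not in TOOLS:
        return {"error": f"Unknown tool: {name}"}
    return TOOLS[name](**arguments)

# Step 1: fetch the weather.
weather = run_tool_call("get_weather", {"location": "Paris"})
# Step 2: feed the weather result into the second tool.
advice = run_tool_call(
    "get_travel_advice",
    {"location": "Paris", "weather_condition": weather["condition"]},
)
```

In a real agent loop, the LLM (not application code) would decide to chain these calls, and the results would be appended to the conversation before the final response is generated.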
Tool Calling in Different Contexts
Agentic Apps
Agents dynamically select tools based on reasoning:

- Workflow Tools — Visual, no-code tools for designing multi-step workflows with synchronous and asynchronous execution.
- Code Tools — Custom JavaScript or Python for advanced logic, data transformation, and complex integrations.
- MCP Tools — Tools exposed via the Model Context Protocol for connecting to remote functions on external servers.
Workflow Tools (AI Nodes)
AI nodes in workflows can be configured with tool access:

- Name — A meaningful identifier that helps the LLM recognize when to call the tool.
- Description — A detailed explanation of the tool’s purpose and capabilities.
- Parameters — The inputs the tool requires, which the LLM collects from the user.
- Actions — The nodes executed when the LLM requests a tool call (such as Service, Script, or Search AI nodes).
Tool Choice Modes
Control how the LLM interacts with tools:

| Mode | Behavior |
|---|---|
| `auto` | LLM decides whether to use tools |
| `required` | LLM must use at least one tool |
| `none` | Tools are disabled for this request |
| `specific` | LLM must use a specified tool |
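As one concrete mapping, these modes correspond to the `tool_choice` request parameter in OpenAI-style APIs; the payloads below are illustrative request fragments, not Platform configuration, and no API call is made.

```python
# Mapping the modes above onto an OpenAI-style `tool_choice` parameter.
auto_choice = "auto"          # LLM decides whether to use tools
required_choice = "required"  # LLM must call at least one tool
none_choice = "none"          # tools are disabled for this request
specific_choice = {           # LLM must call the named tool
    "type": "function",
    "function": {"name": "get_weather"},
}
```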
Error Handling
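Tool invocations can fail for many reasons: malformed arguments from the LLM, network errors, or bugs in the tool itself. A common pattern is to catch failures and return them as structured results so the LLM can recover or explain the problem to the user. The sketch below is a generic illustration under that assumption, not the Platform's built-in error handling.

```python
# Minimal sketch of defensive tool execution: failures become structured
# error payloads instead of crashing the agent loop.
import json

def safe_tool_call(fn, arguments_json):
    """Run a tool, converting bad arguments or runtime errors
    into an error payload the LLM can reason about."""
    try:
        arguments = json.loads(arguments_json)
    except json.JSONDecodeError as exc:
        return {"ok": False, "error": f"Invalid arguments: {exc}"}
    try:
        return {"ok": True, "result": fn(**arguments)}
    except Exception as exc:  # report any tool failure to the caller
        return {"ok": False, "error": str(exc)}

def get_weather(location, unit="Celsius"):
    # Stand-in tool used only to demonstrate the wrapper.
    return {"temp_c": 16}

good = safe_tool_call(get_weather, '{"location": "Paris"}')
bad = safe_tool_call(get_weather, "not json")
```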
Observability
Track tool usage for debugging and optimization.

Execution Trace
Metrics
- Invocation count — How often each tool is used
- Success rate — Tool reliability
- Latency — Execution time
- Token impact — Tokens used for tool calls
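The first three metrics above can be collected with a thin wrapper around each tool invocation. This is a generic sketch (the names and structure are assumptions, not a Platform API); token impact would additionally require usage data from the LLM response.

```python
# Minimal sketch of per-tool metrics: invocation count, success rate,
# and cumulative latency, keyed by tool name.
import time
from collections import defaultdict

metrics = defaultdict(lambda: {"calls": 0, "successes": 0, "total_seconds": 0.0})

def timed_tool_call(name, fn, **kwargs):
    """Invoke a tool while recording the metrics listed above."""
    start = time.perf_counter()
    metrics[name]["calls"] += 1
    try:
        result = fn(**kwargs)
        metrics[name]["successes"] += 1
        return result
    finally:
        metrics[name]["total_seconds"] += time.perf_counter() - start

# Usage: wrap any tool invocation to accumulate its stats.
timed_tool_call("get_weather", lambda location: {"temp_c": 16}, location="Paris")
```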
Why Tool Calling Matters
- Extended capabilities — Enables the LLM to call external tools, making it more versatile (for example, calling a text-to-speech tool alongside text generation).
- Increased efficiency — Specialized tools complete tasks faster than the model alone.
- Real-time updates — Fetches live data like weather or stock prices through APIs.
- More autonomy — The model automatically decides when to use tools, reducing manual input.
- Better user experience — Dynamic, accurate responses improve user satisfaction.
Supported Models
Tool calling requires models with function calling support. Platform-hosted and open-source models (including Hugging Face models) don’t support tool calling.

| Provider | Models |
|---|---|
| OpenAI | gpt-4, gpt-4o, gpt-4-0613, gpt-4-0125-preview, gpt-4-turbo-preview, gpt-4-1106-preview, gpt-3.5-turbo, gpt-3.5-turbo-1106 |
| Azure OpenAI | gpt-4, gpt-3.5-turbo |
| Anthropic | Claude 3 Opus, Sonnet, Haiku; Claude 3.5 Sonnet |
| Google | Gemini 1.5 Pro, Gemini 1.5 Flash |