How agents identify, invoke, and process tools during execution.

Overview

Tool calling enables LLMs to interact with external systems, APIs, and functions beyond text generation. The LLM identifies when a tool is needed, selects the appropriate one, invokes it with the correct parameters, and incorporates the results into its response. This bridges the gap between conversational AI and actionable automation. The Platform supports tool calling in two primary contexts: Agentic Apps and Workflow Tools (AI Nodes).

The Tool Calling Process

tool_calling_process:
  - step: 1
    name: Input Processing
    description: User query is received by the agent
    example: 'User: "What''s the weather in Tokyo?"'

  - step: 2
    name: Tool Identification
    description: LLM analyzes the query and identifies the need for weather data
    selected_tool: get_weather

  - step: 3
    name: Parameter Preparation
    description: Extract and map entities to tool parameters
    parameters:
      location: Tokyo
      units: celsius

  - step: 4
    name: Tool Invocation
    description: Platform triggers the tool with the prepared parameters

  - step: 5
    name: Execution
    description: Tool performs its operation (API call, computation, etc.)

  - step: 6
    name: Result Processing
    description: Validate and format the tool output
    result:
      temp: 22
      condition: Sunny
      humidity: 45

  - step: 7
    name: Response Generation
    description: Agent incorporates results into natural language
    example: "It's currently 22°C and sunny in Tokyo."
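The seven steps above can be condensed into a single agent turn. This is a minimal sketch, not Platform code: `call_llm` and `run_tool` are hypothetical stand-ins for the model and the tool runtime.

```python
import json

def agent_turn(user_query, tools, call_llm, run_tool):
    """One tool-calling turn: identify a tool, invoke it, incorporate the result.

    call_llm and run_tool are placeholders for the model and the tool
    runtime; they are not real Platform APIs.
    """
    # Steps 1-3: the model sees the query plus the tool definitions and
    # either answers directly or returns a tool name with extracted parameters.
    decision = call_llm(query=user_query, tools=tools)
    if decision["type"] == "answer":
        return decision["text"]

    # Steps 4-5: the platform invokes the selected tool with those parameters.
    result = run_tool(decision["tool"], decision["parameters"])

    # Steps 6-7: the tool output is fed back so the model can phrase a response.
    return call_llm(query=user_query, tool_result=json.dumps(result))
```

In a full agent loop this would repeat until the model stops requesting tools; the sketch shows one iteration for clarity.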

How the LLM Selects Tools

The LLM uses tool descriptions to decide which tool to invoke:

Tool Definition

name: get_weather
description: |
  Retrieves current weather conditions for a specified location.
  Returns temperature, conditions (sunny, cloudy, rainy), humidity,
  and wind speed. Use when users ask about weather or need to plan
  outdoor activities.

parameters:
  location:
    type: string
    description: City name or coordinates (for example, "Tokyo" or "35.6762,139.6503")
    required: true

  units:
    type: string
    enum: [celsius, fahrenheit]
    description: Temperature unit preference
    default: celsius
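When a definition like this reaches a function-calling model, it is typically serialized as a JSON Schema object. The shape below follows the widely used OpenAI-style convention as an illustration; field names and nesting vary by provider and are not the Platform's exact wire format.

```python
# Hypothetical serialized form of the get_weather definition above,
# in the common JSON Schema / function-calling shape.
get_weather_schema = {
    "name": "get_weather",
    "description": (
        "Retrieves current weather conditions for a specified location. "
        "Returns temperature, conditions (sunny, cloudy, rainy), humidity, "
        "and wind speed."
    ),
    "parameters": {
        "type": "object",
        "properties": {
            "location": {
                "type": "string",
                "description": 'City name or coordinates, e.g. "Tokyo"',
            },
            "units": {
                "type": "string",
                "enum": ["celsius", "fahrenheit"],
                "default": "celsius",
            },
        },
        # Only location is mandatory; units falls back to its default.
        "required": ["location"],
    },
}
```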

Selection Criteria

The LLM considers:
  1. Query intent — What is the user trying to accomplish?
  2. Tool descriptions — Which tool’s description matches?
  3. Required information — What data does the tool provide?
  4. Parameter availability — Can required parameters be extracted?

Writing Effective Descriptions

# Good - Specific and actionable
description: |
  Retrieves the current status of a customer order including
  shipping information and tracking number. Use when customers
  ask about order status, delivery dates, or tracking.

# Bad - Too vague
description: Gets order information.

Multiple Tool Calling

Agents can invoke multiple tools in a single turn.

Sequential Execution

Tools run one after another when outputs are dependent:
User: "What's my order status and when will it arrive?"

1. get_order_status(order_id) → status, tracking_number
2. get_shipping_estimate(tracking_number) → delivery_date

Response: "Your order shipped yesterday. Based on tracking,
          it should arrive by Friday."
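The dependency that forces sequential execution is visible when the chain is written as code: the second call consumes the first call's output. A minimal sketch, using hypothetical stand-ins for the two tools in the example:

```python
def get_order_status(order_id):
    # Hypothetical tool: would normally query the order service.
    return {"status": "shipped", "tracking_number": "1Z999"}

def get_shipping_estimate(tracking_number):
    # Hypothetical tool: would normally query the carrier API.
    return {"delivery_date": "Friday"}

def answer_order_query(order_id):
    # The second tool cannot start until the first returns, because the
    # tracking number it needs is produced by get_order_status.
    order = get_order_status(order_id)
    estimate = get_shipping_estimate(order["tracking_number"])
    return (f"Your order is {order['status']} and should arrive "
            f"by {estimate['delivery_date']}.")
```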

Parallel Execution

Independent tools run simultaneously for faster responses:
User: "Compare the weather in Tokyo and New York"

┌─ get_weather(location="Tokyo") ────┐
│                                    ├─→ Compare and respond
└─ get_weather(location="New York") ─┘

Response: "Tokyo is 22°C and sunny, while New York is 15°C
          and cloudy. Tokyo is warmer by 7 degrees."
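Because neither call depends on the other, the two lookups can be dispatched concurrently so total latency is roughly that of a single call. A sketch using a thread pool and a stubbed `get_weather` (real code would call a weather API):

```python
from concurrent.futures import ThreadPoolExecutor

def get_weather(location):
    # Hypothetical tool: canned data stands in for a real API call.
    data = {"Tokyo": (22, "sunny"), "New York": (15, "cloudy")}
    temp, condition = data[location]
    return {"location": location, "temp": temp, "condition": condition}

def compare_weather(cities):
    # Independent calls run in parallel; pool.map preserves input order.
    with ThreadPoolExecutor() as pool:
        a, b = list(pool.map(get_weather, cities))
    return (f"{a['location']} is {a['temp']}°C and {a['condition']}, "
            f"while {b['location']} is {b['temp']}°C and {b['condition']}.")
```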

Use Case: Sequential Multi-Tool Call

User query: “What’s the weather in Paris, and is it a good time to visit?” The LLM identifies two tools: get_weather and get_travel_advice.
  1. Calls get_weather(location="Paris", units="celsius") → 16°C, partly cloudy, light rain later
  2. Passes the result to get_travel_advice(location="Paris", weather_condition="Partly cloudy with light rain") → Okay to visit; bring an umbrella
  3. Combines outputs: “The weather in Paris is 16°C and partly cloudy, with light rain expected. It’s a fine time to visit — just bring an umbrella.”

Tool Calling in Different Contexts

Agentic Apps

Agents dynamically select tools based on reasoning:
Agent receives: "I need to book a flight and hotel"

Agent reasoning:
├── Task requires travel booking
├── Available tools: search_flights, book_flight, search_hotels, book_hotel
├── User hasn't specified details yet
└── Action: Ask for dates and destination, then use tools

Agent: "I can help with that. Where are you traveling to,
       and what are your dates?"

Agentic Apps support three tool types:
  • Workflow Tools — Visual, no-code tools for designing multi-step workflows with synchronous and asynchronous execution.
  • Code Tools — Custom JavaScript or Python for advanced logic, data transformation, and complex integrations.
  • MCP Tools — Tools exposed via the Model Context Protocol for connecting to remote functions on external servers.

Workflow Tools (AI Nodes)

AI nodes in workflows can be configured with tool access:
ai_node:
  prompt: "Process this customer request"
  tools:
    - check_inventory
    - create_order
    - send_confirmation
  tool_choice: auto  # auto, required, none

Each tool configured in an AI Node includes:
  • Name — A meaningful identifier that helps the LLM recognize when to call the tool.
  • Description — A detailed explanation of the tool’s purpose and capabilities.
  • Parameters — The inputs the tool requires, which the LLM collects from the user.
  • Actions — The nodes executed when the LLM requests a tool call (such as Service, Script, or Search AI nodes).

Tool Choice Modes

Control how the LLM interacts with tools:
Mode       Behavior
auto       LLM decides whether to use tools
required   LLM must use at least one tool
none       Tools are disabled for this request
specific   LLM must use a specified tool

# Force tool usage
tool_choice: required

# Disable tools
tool_choice: none

# Use specific tool
tool_choice:
  type: specific
  tool: get_order_status

Error Handling

error_handling:
  on_failure: retry  # retry, fallback, fail
  retry_attempts: 2
  fallback_message: |
    I'm having trouble retrieving that information right now.
    Please try again in a moment.
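The retry-then-fallback policy in that config can be sketched as a small wrapper. `invoke_with_retry` is a hypothetical helper mirroring `on_failure: retry` with `retry_attempts: 2`, not a Platform API:

```python
import time

FALLBACK_MESSAGE = (
    "I'm having trouble retrieving that information right now. "
    "Please try again in a moment."
)

def invoke_with_retry(tool, params, retry_attempts=2, delay_s=0.5):
    """Run a tool, retrying on failure and falling back to a friendly message."""
    # One initial attempt plus retry_attempts retries.
    for attempt in range(retry_attempts + 1):
        try:
            return {"status": "success", "result": tool(**params)}
        except Exception:
            if attempt < retry_attempts:
                time.sleep(delay_s)  # brief pause before retrying
    # All attempts failed: surface the fallback message instead of an error.
    return {"status": "failed", "result": FALLBACK_MESSAGE}
```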

Observability

Track tool usage for debugging and optimization.

Execution Trace

{
  "trace_id": "trace_abc123",
  "tool_calls": [
    {
      "tool": "get_order_status",
      "parameters": { "order_id": "ORD-12345" },
      "started_at": "2024-01-15T14:30:00.123Z",
      "completed_at": "2024-01-15T14:30:00.456Z",
      "duration_ms": 333,
      "status": "success",
      "result": { "status": "shipped", "tracking": "1Z999..." }
    }
  ]
}
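Trace entries like this can be produced by wrapping each tool call with timing and status capture. A minimal sketch (`traced_call` is illustrative, not a Platform API; timestamp formatting is approximate):

```python
import time
import uuid
from datetime import datetime, timezone

def traced_call(tool_name, tool_fn, parameters, trace):
    """Run a tool and append an execution-trace entry shaped like the example above."""
    started = datetime.now(timezone.utc)
    t0 = time.monotonic()
    try:
        result = tool_fn(**parameters)
        status = "success"
    except Exception as exc:
        result, status = {"error": str(exc)}, "error"
    trace["tool_calls"].append({
        "tool": tool_name,
        "parameters": parameters,
        "started_at": started.isoformat(),
        "completed_at": datetime.now(timezone.utc).isoformat(),
        "duration_ms": round((time.monotonic() - t0) * 1000),
        "status": status,
        "result": result,
    })
    return result

# A fresh trace for one request.
trace = {"trace_id": f"trace_{uuid.uuid4().hex[:6]}", "tool_calls": []}
```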

Metrics

  • Invocation count — How often each tool is used
  • Success rate — Tool reliability
  • Latency — Execution time
  • Token impact — Tokens used for tool calls
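Most of these metrics fall out of the trace directly. A sketch that aggregates per-tool invocation count, success rate, and average latency, assuming entries carry the `tool`, `status`, and `duration_ms` fields shown in the trace example:

```python
from collections import defaultdict
from statistics import mean

def summarize_tool_metrics(tool_calls):
    """Aggregate trace entries into per-tool usage metrics."""
    by_tool = defaultdict(list)
    for call in tool_calls:
        by_tool[call["tool"]].append(call)
    summary = {}
    for name, calls in by_tool.items():
        successes = sum(1 for c in calls if c["status"] == "success")
        summary[name] = {
            "invocations": len(calls),
            "success_rate": successes / len(calls),
            "avg_latency_ms": mean(c["duration_ms"] for c in calls),
        }
    return summary
```

Token impact would come from the model-call side of the trace rather than the tool entries, so it is omitted here.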

Why Tool Calling Matters

  1. Extended capabilities — Enables the LLM to call external tools, making it more versatile (for example, calling a text-to-speech tool alongside text generation).
  2. Increased efficiency — Specialized tools complete tasks faster than the model alone.
  3. Real-time updates — Fetches live data like weather or stock prices through APIs.
  4. More autonomy — The model automatically decides when to use tools, reducing manual input.
  5. Better user experience — Dynamic, accurate responses improve user satisfaction.

Supported Models

Tool calling requires a model with native function-calling support. Platform-hosted and open-source models (including Hugging Face models) do not support tool calling.
Provider       Models
OpenAI         gpt-4, gpt-4o, gpt-4-0613, gpt-4-0125-preview, gpt-4-turbo-preview, gpt-4-1106-preview, gpt-3.5-turbo, gpt-3.5-turbo-1106
Azure OpenAI   gpt-4, gpt-3.5-turbo
Anthropic      Claude 3 Opus, Sonnet, Haiku; Claude 3.5 Sonnet
Google         Gemini 1.5 Pro, Gemini 1.5 Flash

Best Practices

Clear Tool Descriptions

Write descriptions that help the LLM make good decisions:
# Include: What it does, when to use it, what it returns
description: |
  Searches the product catalog by keyword, category, or filters.
  Returns matching products with names, prices, and availability.
  Use when users ask about products, want recommendations, or
  are looking for specific items.

Appropriate Granularity

# Too broad - hard for LLM to know when to use
name: do_everything
description: Handles all customer operations

# Too narrow - creates tool sprawl
name: get_order_status_for_shipped_orders
name: get_order_status_for_processing_orders

# Just right - clear scope
name: get_order_status
description: Gets status for any order by order ID

Handle Missing Parameters

Don’t fail silently—return helpful errors:
if (!params.order_id) {
  return {
    error: "order_id is required",
    message: "Please provide an order number"
  };
}