Build tool flows with specialized nodes for different operations.

Overview

Workflow tools are built by connecting nodes on a visual canvas. Each node type serves a specific purpose—processing data, making decisions, integrating with external systems, or pausing for human review.
Start → Validate → API Call → Transform → Condition → End

(Failure paths route to an Error Handler node.)

Node Types


Node           Category      Purpose
Start          Control       Entry point; defines input parameters
End            Control       Exit point; returns output or error message
API            Integration   Make REST/SOAP API calls to external services
Integration    Integration   Connect to third-party services without code
Function       Logic         Execute custom JavaScript or Python code
Condition      Logic         Branch based on IF/ELSE IF/ELSE logic
Loop           Logic         Iterate over arrays to process multiple items
Text to Text   AI            Transform text using LLMs
Text to Image  AI            Generate images from text prompts
Audio to Text  AI            Transcribe audio to text (ASR)
Image to Text  AI            Extract text or insights from images (OCR)
DocSearch      Data          RAG-powered retrieval from a Search AI app
Human          Control       Pause for human review or approval

Start Node

Every flow begins with a Start node that defines input parameters and initiates execution.

Configuration

inputs:
  order_id:
    type: string
    description: The order identifier
    required: true

  include_details:
    type: boolean
    description: Whether to include full details
    default: false

outputs:
  - name: order_status
    type: object

Accessing Inputs in Downstream Nodes

Language     Syntax
JavaScript   {{context.steps.Start.variable-name}}
Python       {{context["steps"]["Start"]["variable-name"]}}
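Conceptually, the context object behaves like a nested dictionary keyed by step name. A minimal Python sketch of the access pattern above (the `context` dict here is illustrative, not the platform object):

```python
# Illustrative shape of the execution context; the field values are assumptions
# matching the Start node configuration shown above.
context = {
    "steps": {
        "Start": {
            "order_id": "ORD-1001",
            "include_details": False,
        }
    }
}

# Python-style access, mirroring {{context["steps"]["Start"]["order_id"]}}
order_id = context["steps"]["Start"]["order_id"]
include_details = context["steps"]["Start"].get("include_details", False)
```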

End Node

Terminates the flow and returns output to the caller, or displays an error message on failure.

Configuration

  1. Custom Name: Enter a descriptive name for the node.
  2. Name (key): Select a key from the Manage Output section.
  3. Value: Map to a node output: type {{context. and select the variable from the suggestions.
// Example value mapping
{{context.steps.summarization.output}}
At least one output variable is required for every End node.

API Node

Make HTTP requests to external services using REST or SOAP protocols.

Key Capabilities

  • Protocols: REST and SOAP
  • Methods: GET, POST, PUT, DELETE, PATCH
  • Auth: Pre-authorized tokens or per-user runtime authorization
  • Modes: Synchronous or Asynchronous
  • Body formats: JSON, XML, Form URL Encoded, Custom
  • Testing: Preview API responses before finalizing setup

Common Use Cases

  • Data enrichment (fetch user, order, or product details)
  • Webhook triggers based on workflow decisions
  • Identity verification, fraud checks, or compliance validation
  • Sending alerts or updating external dashboards

Configuration

Field                Description
Node Name            Name for the node
Type                 REST or SOAP
Integration Type     Synchronous: waits for the response (timeout 5–180s, default 60s) · Asynchronous: continues without waiting (timeout 30–300s, default 60s; a No timeout option allows an indefinite wait)
Request Definition   API endpoint URL or cURL, auth profile, headers, and body

Auth options:
  • Pre-authorize the integration: Use system-level credentials shared across all users.
  • Allow users to authorize: Each user authenticates at runtime (for example, Google Drive).
On Success / On Failure: Configure downstream nodes for each path.

Accessing Output

{{context.steps.APINodeName.output.data}}
{{context.steps.APINodeName.output.status}}
// Or via Start node reference:
{{context.steps.Start.APINodeName}}

Function Node

Execute custom JavaScript or Python code for data transformation and business logic.

Key Capabilities

  • Write Code: Author inline scripts in the built-in editor
  • Custom Function: Invoke a function from a deployed script library
  • Memory Access: Read/write Agent Memory stores for stateful logic
  • Supported languages: JavaScript (async with await), Python (synchronous)

Common Use Cases

  • Data transformation and format conversion
  • Custom validation and business rule logic
  • Mathematical calculations and statistical analysis
  • String manipulation and regex operations

Option 1 — Write Code

  1. Select Write Code and open the script editor.
  2. Choose JavaScript or Python.
  3. Use context variables for dynamic inputs (see syntax below).
  4. Click Run to test the script.
Script editor tabs:

Tab              Description
Context Input    Dynamic inputs fetched from the Start node or static values
Context Output   Output generated by the script
Log              Execution log with output or errors

Context variable syntax:

Language     Syntax
JavaScript   {{context.steps.Start.variable-name}}
Python       {{context["steps"]["Start"]["variable-name"]}}

Example (JavaScript):
const order = context.steps.FetchOrder.output;
const transformed = {
  id: order.order_id,
  total: order.items.reduce((sum, item) => sum + item.price, 0),
  itemCount: order.items.length,
  formattedDate: new Date(order.created_at).toLocaleDateString()
};
return transformed;
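The same transform in Python (synchronous, per the supported-language note above); the sample order dict stands in for the upstream node's output and is illustrative:

```python
from datetime import datetime

# Illustrative upstream output; field names mirror the JavaScript example.
order = {
    "order_id": "ORD-1001",
    "created_at": "2025-05-15T10:00:00Z",
    "items": [{"price": 40.0}, {"price": 60.0}],
}

transformed = {
    "id": order["order_id"],
    "total": sum(item["price"] for item in order["items"]),
    "itemCount": len(order["items"]),
    # fromisoformat() needs an explicit offset rather than the "Z" suffix
    "formattedDate": datetime.fromisoformat(
        order["created_at"].replace("Z", "+00:00")
    ).strftime("%m/%d/%Y"),
}
```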

Option 2 — Custom Function

Invoke a function from an imported and deployed script.
  1. Select Script: Choose a deployed script from the Script name list. Deploy scripts via Settings > Manage custom scripts.
  2. Select Function: Choose a function from the Function name list. Only one function per node; only deployed scripts are listed.
  3. Map Input Arguments: Assign static or dynamic values to each argument. Select the correct data type (String, Number, JSON, Boolean). Type {{ to trigger context variable suggestions.
  4. Test: Click Test, enter values in the Input panel, then click Execute.
Output (result key) is saved to {{context.steps.functionnodename.output}}. Errors (stderr) are saved to {{context.steps.functionnodename.error}}.
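The result-key convention above can be sketched with a hypothetical deployed-script function (the function name and business rule are invented for illustration; the platform is assumed to place the returned value under the node's output):

```python
# Hypothetical script-library function; not part of the platform itself.
def calculate_discount(order_total, tier):
    """Business-rule example: tiered percentage discount."""
    rates = {"premium": 0.15, "standard": 0.05}
    rate = rates.get(tier, 0.0)
    return {"discount": round(order_total * rate, 2)}

out = calculate_discount(200.0, "premium")
```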

Agent Memory Access

Use memory stores to retain and share data across steps or sessions. Data is stored as JSON; always check the memory store schema for field names and types.
Operation   Syntax
Get         memory.get_content(memory_store_name=STORE_NAME, projections={"field": 1})
Set         memory.set_content(memory_store_name=STORE_NAME, content={...})
Delete      memory.delete_content(memory_store_name=STORE_NAME)
Example:
# Read from memory
retrieved = memory.get_content(memory_store_name="my-notes", projections={"note": 1})

# Write to memory
memory.set_content(memory_store_name="my-notes", content={"note": "Updated note.", "timestamp": "2025-05-15T10:00:00Z"})

# Delete
memory.delete_content(memory_store_name="my-notes")
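The get/set/delete semantics can be sketched with a minimal in-memory stand-in. This mock is an assumption for local reasoning, not the platform's memory implementation; the projection behavior mimics the `{"field": 1}` style shown above:

```python
class MockMemory:
    """Minimal stand-in for the Agent Memory operations sketched above."""

    def __init__(self):
        self._stores = {}

    def set_content(self, memory_store_name, content):
        self._stores[memory_store_name] = dict(content)

    def get_content(self, memory_store_name, projections=None):
        content = self._stores.get(memory_store_name, {})
        if projections:
            # Keep only the fields flagged with 1, projection-style.
            return {k: v for k, v in content.items() if projections.get(k) == 1}
        return dict(content)

    def delete_content(self, memory_store_name):
        self._stores.pop(memory_store_name, None)

memory = MockMemory()
memory.set_content("my-notes", {"note": "Updated note.", "timestamp": "2025-05-15T10:00:00Z"})
retrieved = memory.get_content("my-notes", projections={"note": 1})
```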

Accessing Output

{{context.steps.FunctionNodeName.output}}
{{context.steps.FunctionNodeName.error}}

Integration Node

Connect to pre-configured third-party services without writing code.

Key Capabilities

  • No-code: Embed prebuilt third-party actions without custom code
  • Secure connections: Tested and authenticated service connections
  • Auto-generated JSON: Prebuilt payloads from action parameters
  • Visual configuration: Configure directly on the canvas

Common Use Cases

  • CRM and marketing automation (for example, trigger campaigns from captured leads)
  • Workflow automation across apps (for example, create a task when a ticket is raised)
  • Payment gateway processing
  • SaaS tool integrations (CRM, email, e-commerce)

Prerequisites

Add at least one service provider connection via Settings > Integrations before configuring this node. Test the connection in Settings to confirm it works.

Configuration

  1. Add Node: Click Integration > + New Integration in the Assets panel or drag onto the canvas.
  2. Select Service: Search or browse the integrations list.
  3. Node Name: Name the node (letters and numbers only).
  4. Connection Name: Select an active, configured connection.
  5. Add Action: Click Add Action and select one action (only one action per node).
  6. Action Parameters: Fill in parameters for the selected action.
  7. Connections: Set On Success and On Failure paths.

Managing Actions

Action      How
Edit        Click the Edit icon and modify parameters
Change      Click Change Action to swap to a different action (existing config is lost)
Delete      Not supported; add a new node with a different action instead
View JSON   Enable the JSON switch in the action config window to view and copy the action code

Accessing Output

{{context.steps.IntegrationNodeName.output}}

Condition Node

Branch workflow execution based on logical conditions.

Key Capabilities

  • Condition types: IF, ELSE IF, and ELSE.
  • Operators: ==, !=, >, <, >=, <=, contains, startsWith, and endsWith.
  • Logic combinators: AND or OR for multi-criteria conditions.
  • Dynamic references: Context variables and previous node outputs.

Common Use Cases

  • Route based on classification, type, or priority
  • Fallback logic when no match is found
  • Validate data before proceeding
  • Multi-step filtering with combined conditions

Structure

           ┌─────────────┐
           │  Condition  │
           │ amount > 100│
           └──────┬──────┘
                  │
     ┌────────────┼────────────┐
     ▼            ▼            ▼
┌────────┐   ┌────────┐   ┌────────┐
│   IF   │   │ELSE IF │   │  ELSE  │
└────────┘   └────────┘   └────────┘

Configuration

  1. Add the node to the canvas.
  2. Node Name: Enter a descriptive name.
  3. IF Condition: Enter a context variable (for example, {{context.ambiguous_sub_categories}}), choose an operator, and enter a value or another variable (for example, {{context.steps.NodeName.output}}). Combine multiple criteria with AND/OR.
  4. Routing: Set Go To (IF met) and ELSE (IF not met) nodes.
Operators reference:

Operator     Description
==           Equals
!=           Not equals
>            Greater than
<            Less than
>=           Greater than or equal
<=           Less than or equal
contains     String contains
startsWith   String starts with
endsWith     String ends with
Complex conditions:
// AND
context.steps.Order.amount > 100 && context.steps.Order.status === "pending"

// OR
context.steps.User.tier === "premium" || context.steps.Order.amount > 500
  • A Condition node can be called a maximum of 10 times in a tool flow.
  • Standard error: If a condition path has no connected node, the error “Path not defined. Please check the flow.” is displayed.
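The operator set above maps naturally onto a small evaluator. This sketch is illustrative of how such conditions resolve, not the platform's implementation:

```python
import operator

# Operator names mirror the reference table above.
OPERATORS = {
    "==": operator.eq,
    "!=": operator.ne,
    ">": operator.gt,
    "<": operator.lt,
    ">=": operator.ge,
    "<=": operator.le,
    "contains": lambda a, b: b in a,
    "startsWith": lambda a, b: str(a).startswith(b),
    "endsWith": lambda a, b: str(a).endswith(b),
}

def evaluate(left, op, right):
    return OPERATORS[op](left, right)

# amount > 100 AND status == "pending" -> take the IF branch
amount_ok = evaluate(150, ">", 100)
status_ok = evaluate("pending", "==", "pending")
branch = "IF" if (amount_ok and status_ok) else "ELSE"
```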

Loop Node

Iterate over arrays to process multiple items, one at a time.

Key Capabilities

  • Array iteration: Execute child nodes once per item in an input array
  • Flexible child nodes: Add Function, API, AI, Condition nodes inside the loop
  • Output aggregation: Collect per-iteration results into an output array
  • Three error handling strategies: Continue, Terminate, or Remove Failed
  • Debug support: Per-iteration inspection in the Debug panel

Common Use Cases

  • Batch processing (invoices, documents, records)
  • API calls on multiple inputs (fetch data per customer ID)
  • Bulk notifications (personalized messages to a list)
  • Report generation per item in a dataset

Configuration

Setting             Description
Node Name           Descriptive name for the loop
Loop Input Source   Array to iterate over: a context variable (for example, context.invoices) or a previous node’s output
Output Field        Variable to store aggregated results (for example, context.result)

Error Handling Options

Strategy                      Behavior
Continue on error (default)   Processes all items; output includes both successes and errors; follows the success path
Terminate execution           Stops on the first failure; follows the failure path with failed-iteration details
Remove failed results         Like Continue, but filters failures from the final output; only successes are returned
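The three strategies can be sketched as post-processing of per-iteration results. This is an illustrative model, not the platform's loop engine; `run_child` stands in for the nodes inside the loop:

```python
def run_loop(items, run_child, strategy="continue"):
    """Illustrative aggregation for the three error-handling strategies."""
    results = []
    for item in items:
        try:
            results.append({"ok": True, "value": run_child(item)})
        except Exception as exc:
            if strategy == "terminate":
                # Follow the failure path with the failed iteration's details.
                raise RuntimeError(f"Loop failed on {item!r}: {exc}") from exc
            results.append({"ok": False, "error": str(exc)})
    if strategy == "remove_failed":
        # Keep only successful values in the aggregated output.
        return [r["value"] for r in results if r["ok"]]
    return results

def double_positive(n):
    if n < 0:
        raise ValueError("negative input")
    return n * 2

kept = run_loop([1, -2, 3], double_positive, strategy="remove_failed")
```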

Inside the Loop

// Reference current iteration item
{{currentItem}}

// Current index (via loop context)
{{context.loop.index}}
Only nodes placed inside the loop block execute per iteration. Nodes connected outside run after the loop completes.

Accessing Output

{{context.steps.LoopNodeName.output}}
// Returns: [result1, result2, result3, ...]

Troubleshooting

Issue                            Cause                                     Fix
Loop input missing or empty      Input list is undefined or null           Verify Loop Input Source points to a valid array; check the Debug Log
Child nodes not executing        Nodes placed outside the loop container   Drag the nodes into the loop block on the canvas
Loop stops when one item fails   Error handling set to Terminate           Change the strategy to Continue on error
Output variable conflicts        Field name reused elsewhere in the flow   Use a unique name for the Output Field

AI Nodes

Multimodal nodes that use LLMs for specialized tasks—text, image, audio, and visual processing.
Node            Input   Output            Use Cases
Text to Text    Text    Text              Summarization, translation, content generation
Text to Image   Text    Image (PNG URL)   Marketing visuals, concept art, variant testing
Audio to Text   Audio   Text              Transcription, voice processing, subtitles
Image to Text   Image   Text              OCR, document digitization, image Q&A

Text to Text Node

Transform input text into desired text output using LLMs.

Key Capabilities

  • Prompt options: Write your own prompt, or choose from the Prompt Hub with version selection
  • Model selection: Choose from pre-configured LLM models
  • Hyperparameter tuning: Temperature, Top-p, Top-k, Max Tokens
  • Structured output: Optional JSON schema for parseable responses
  • Tool calling: Enable the model to call up to 3 external tools during execution
  • Timeout: 30–180 seconds (default: 60s)

Common Use Cases

  • Summarization (transcripts, logs, documents)
  • Tone or style adjustment
  • Content rewriting and reformatting
  • Error explanation and log analysis

Configuration

Field                  Description
Node Name              Name for the node
Prompt options         Write your own: enter a System Prompt (model role) and a Human Prompt (task instructions) · Prompt Hub: select a saved prompt and version; optionally customize
Select Model           Choose a configured LLM
Timeout                30–180 seconds (default 60s)
Response JSON schema   Optional; define a structure for predictable output
Model Configurations   Temperature, Top-p, Top-k, Max Tokens
System vs. Human prompts:
  • System Prompt: Sets the model’s role. Example: “You are a helpful assistant.”
  • Human Prompt: The task or question. Example: “Summarize this error log.” Use {{context.variable_name}} for dynamic values.
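Placeholders like {{context.variable_name}} are resolved before the prompt reaches the model. A minimal substitution sketch (the resolver and the flat context dict are assumptions for illustration):

```python
import re

def render_prompt(template, context):
    """Replace {{context.path.to.value}} placeholders from a nested dict."""
    def resolve(match):
        # Drop the leading "context." segment and walk the nested dict.
        path = match.group(1).split(".")[1:]
        value = context
        for key in path:
            value = value[key]
        return str(value)
    return re.sub(r"\{\{\s*(context(?:\.\w+)+)\s*\}\}", resolve, template)

prompt = render_prompt(
    "Summarize this error log: {{context.error_log}}",
    {"error_log": "TimeoutError at step 3"},
)
```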
Tool calling settings:

Setting                     Description
Add Tools                   Select up to 3 tools from your account
Exit node execution after   Number of model calls before exiting to the failure path
Tool choice                 Auto (model decides) or Required (always calls a tool)
Parallel tool calls         True for simultaneous calls; False for sequential

Accessing Output

{{context.steps.AINodeName.output}}

Text to Image Node

Generate images from descriptive text prompts using AI image models.

Key Capabilities

  • Positive Prompt: Define what the image should include (style, elements, setting)
  • Negative Prompt: Specify what to exclude from the image
  • Aspect Ratio: Up to 2048 × 2048 pixels (GPU-dependent)
  • Steps: Refinement iterations—25–30 recommended for quality and performance
  • Batch Count: Up to 5 image variants per run
  • Output: PNG format returned as URLs

Supported Models

Provider           Models
Stable Diffusion   stable-diffusion-xl-base-1.0, stable-diffusion-2-1, stable-diffusion-v1-5
OpenAI             DALL·E 2, DALL·E 3

Common Use Cases

  • Marketing banners, ads, and promotional visuals
  • Content illustration for blogs or newsletters
  • Visual prototyping (UI mockups, storyboards)
  • A/B testing with multiple image variants

Configuration

Field             Description
Node Name         Name for the node
Select Model      Choose a Stable Diffusion or OpenAI variant
Positive Prompt   Keywords and descriptions for what to generate; use {{context.variable_name}} for dynamic input
Negative Prompt   Keywords for elements to exclude
Aspect Ratio      Width × Height in pixels (max 2048 × 2048)
Steps             Refinement passes; 25–30 recommended
Batch Count       Number of image variants to generate sequentially (max 5)

The node uses an input scanner to detect banned words. Banned topics cause an error in the Debug window.

Accessing Output

{{context.steps.TextToImageNodeName.output}}
// Returns PNG image URL(s)

Audio to Text Node

Convert spoken audio into written text using Automatic Speech Recognition (ASR).

Key Capabilities

  • Model: OpenAI Whisper-1
  • Multilingual: Transcribes multiple languages; translates non-English audio to English
  • Input: Audio file (upload) or audio URL; max file size 25 MB
  • Timestamps: Optional; records when each dialog segment was spoken
  • Structured output: Optional JSON schema

Supported Formats

M4A · MP3 · WebM · MP4 · MPGA · WAV · MPEG
  • Files larger than 25 MB must be split at logical points to avoid mid-sentence breaks.
  • Inverse translation (English to other languages) is not supported.
  • Only URLs are supported as input variables—direct file uploads via input variables are not supported.
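A pre-flight check of the constraints above can be sketched as follows. The limits mirror the figures in this section; the helper itself is hypothetical, not a platform API:

```python
SUPPORTED_EXTENSIONS = {"m4a", "mp3", "webm", "mp4", "mpga", "wav", "mpeg"}
MAX_FILE_BYTES = 25 * 1024 * 1024  # 25 MB limit noted above

def validate_audio(url, size_bytes):
    """Check an audio URL against the format and size constraints."""
    ext = url.rsplit(".", 1)[-1].lower()
    if ext not in SUPPORTED_EXTENSIONS:
        return f"Unsupported format: {ext}"
    if size_bytes > MAX_FILE_BYTES:
        return "File exceeds 25 MB; split it at a logical pause point"
    return "ok"

check = validate_audio("https://example.com/call-recording.mp3", 10 * 1024 * 1024)
```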

Common Use Cases

  • Meeting, lecture, or interview transcription
  • Customer support call analysis
  • Subtitle and caption generation
  • Voice command processing

Configuration

Field                  Description
Node Name              Name for the node
Audio File             Input variable with the audio file URL from the Start node
Select Model           Choose from configured models
Translation            Toggle on to translate non-English audio to English
Timestamps             Toggle on to include time markers in the transcript
Prompt                 Transcription style instructions, terminology corrections, speaker labels (max 224 tokens for Whisper)
Response JSON schema   Optional structured output definition

Accessing Output

{{context.steps.AudioToTextNodeName.output}}

Image to Text Node

Extract text or generate insights from images using OCR and LLMs.

Key Capabilities

  • OCR: Extract embedded text from scanned documents, screenshots, and photos
  • Image Understanding: Answer questions or generate descriptions from images using prompts
  • Multi-model: OpenAI and Anthropic models supported
  • Structured output: Optional JSON schema for parseable responses

Supported Models

  • OpenAI: gpt-4o, gpt-4o-mini
  • Anthropic: Claude Sonnet Vision

Supported Image Formats

PNG · JPEG · JPG
  • Only one image URL can be provided at a time.

Common Use Cases

  • Document digitization (receipts, invoices, scanned forms)
  • Image-based content moderation
  • Multilingual OCR (printed or handwritten text)
  • Extracting insights from diagrams, posters, or infographics

Configuration

Field                  Description
Node Name              Name for the node
Select Model           Choose a supported OpenAI or Anthropic model
File URL               Public URL of the image (PNG, JPEG, or JPG)
System Prompt          Define the model’s role (for example, “You are a vehicle insurance assistant”)
Prompt                 Task instructions; use {{context.variable_name}} for dynamic inputs
Response JSON schema   Optional structured output definition

Accessing Output

{{context.steps.ImageToTextNodeName.output}}

DocSearch Node

Retrieve context-aware information from a connected Search AI app using Retrieval-Augmented Generation (RAG).

Key Capabilities

  • RAG-powered: Combines document retrieval with LLM-generated responses
  • Search AI integration: Connects to a configured Search AI app to query indexed content
  • Dynamic queries: Accepts static text or context variables as input
  • Meta filters: Narrow search scope to specific documents or sources (optional)

Common Use Cases

  • Retrieve relevant policies, manuals, or help articles based on user queries
  • Context-aware Q&A grounded in indexed documents
  • Internal knowledge base search (wikis, technical docs, training material)

Setup Prerequisites

Before configuring the node:
  1. Set up a Search AI App: Configure a Search AI application and enable the Answer Generation API scope.
  2. Link Search AI in the Platform: Go to Settings > Integrations > Search AI > Link an App. Enter the app credentials, test the connection, and confirm. Use https://platform.kore.ai for the Search AI URL.

Configuration

Field                  Description
Node Name              Unique name for the node
Query                  Static text or a dynamic input variable (for example, {{context.steps.Start.userQuery}})
Search AI Connection   Select the linked connection configured in Settings
Meta Filters           Optional JSON rules to narrow results to specific files or sources; if omitted, the search applies to all documents

Accessing Output

The output path is dynamic and depends on the Search AI API response:
{{context.steps.DocSearch.response.response.answer}}
// Path may vary—check the sample Search AI response for the correct key
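Because the answer path varies with the Search AI API response, downstream code may want to walk it defensively. An illustrative helper (the sample payload shape is an assumption based on the path above):

```python
def dig(payload, *path, default=None):
    """Walk nested dicts, returning default if any key is missing."""
    current = payload
    for key in path:
        if not isinstance(current, dict) or key not in current:
            return default
        current = current[key]
    return current

# Hypothetical response shape mirroring the dynamic path shown above.
sample = {"response": {"response": {"answer": "Refunds are processed in 5 days."}}}
answer = dig(sample, "response", "response", "answer", default="No answer found")
```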

Human Node

Pause workflow execution to collect human input, approval, or review before proceeding.

Key Capabilities

  • Custom input fields: Define fields for the reviewer (Text, Number, Boolean, Date)
  • Timeout handling: Set a timeout duration or wait indefinitely
  • Sync and Async modes: Determined by the tool’s endpoint configuration
  • Three outcome paths: On Success, On Timeout (Terminate or Skip), On Failure

Common Use Cases

  • Approval workflows (expenses, leave requests, procurement)
  • Quality assurance checkpoints before publishing AI-generated output
  • Compliance review for sensitive or regulated steps
  • Exception handling and escalation for edge cases

How It Works

When the workflow reaches the Human Node, it sends a POST request to the configured endpoint. Execution pauses until the reviewer responds, times out, or a delivery failure occurs.
Mode    Behavior
Sync    Workflow pauses and waits for the reviewer’s response within the endpoint timeout
Async   Workflow sends an immediate acknowledgement and continues; notifies the callback URL when the request is sent, when the response is received, and when the final output is generated

Configuration

1. Request Destination: Select Custom Request in Send & wait for response (currently the only supported option).
2. Request Definition: Click Define Request and provide:

Field              Details
Request Type       POST only
API Endpoint URL   Endpoint URL or cURL command
Auth Profile       Pre-authorize (system credentials) or user-authorize (per-user runtime auth)
Headers            Key-value pairs; CallbackURL and Token are auto-included
Body               Auto-generated at runtime from Input Fields + Reviewer Note
3. Input Fields: Define fields the reviewer must fill in.
  • Supported types: Text, Number, Boolean, Date
  • Set default values and mark required/optional
  • Pre-fill with context variables: {{context.user.name}}
  • Click Payload preview to inspect the full payload
4. Reviewer Note:
Field          Description
Subject line   Email subject or message title
Message body   Context or instructions for the reviewer (resolved at runtime)
Assign to      Reviewer’s email address
5. Timeout Behavior:
  • No timeout: Waits indefinitely.
  • Set timeout: Default 120 seconds (configurable in seconds, minutes, hours, or days).
6. Outcome Paths:
Outcome                  Behavior
On Success               All mandatory fields received a response; the workflow continues along the success path
On Timeout - Terminate   No response within the timeout; the flow ends via the End node
On Timeout - Skip        No response within the timeout; the flow continues to the next node with null output
On Failure               Request delivery error; the flow follows the configured fallback node
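The request body assembled from the Input Fields and Reviewer Note might look like this sketch. All field names here are assumptions for illustration; the platform auto-generates the real payload at runtime:

```python
import json

# Hypothetical payload assembly; not the platform's actual body schema.
input_fields = [
    {"name": "Approval", "type": "Boolean", "required": True, "default": None},
    {"name": "Comments", "type": "Text", "required": False, "default": ""},
]
reviewer_note = {
    "subject": "Expense approval needed",
    "message": "Please review the attached expense report.",
    "assign_to": "reviewer@example.com",
}

payload = {
    "fields": {f["name"]: f["default"] for f in input_fields},
    "note": reviewer_note,
}
body = json.dumps(payload)
```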

Accessing Reviewer Responses

// Full response payload
{{context.steps.NodeName.output}}

// Specific field
{{context.steps.NodeName.output.Approval}}
{{context.steps.NodeName.output.Comments}}
  • Inside Loops: The loop does not advance to the next iteration until the Human node receives a response.
  • In Parallel Branches: The branch merge waits for the Human node to complete before continuing.

Managing Nodes

Adding Nodes

Method         How
Plus icon      Click + on any existing node and select a type
Assets panel   Drag a node type onto the canvas
Bottom tray    Click a node type in the quick-access tray below the canvas

Connecting Nodes

  • Drag from a node’s output connector to another node’s input.
  • Use the Connections tab in the node configuration panel.
  • All nodes must connect to Start (directly or indirectly).

Constraints

  • Maximum 10 outgoing connections per node
  • No duplicate connections from the same parent
  • No backward loops (prevents cycles)

Deleting Nodes

Right-click → Delete. Reconnect any dependent paths afterward.

Auto Arrange

Right-click on the canvas → Auto Arrange for automatic layout.

Debugging

The Debug panel shows:
  • Execution status per node
  • Input/output values at each step
  • Error messages
  • Timing metrics
  • Iteration details (for Loop nodes; click the loop icon to drill into individual runs)