Agent Flows are intelligent conversational workflows that combine Dialog Tasks with Agent Nodes to deliver autonomous, goal-driven customer service experiences. DialogGPT orchestrates intent identification and routing, while Agent Nodes handle conversation execution within individual use cases. This enables AI Agents to autonomously plan, reason, and execute multi-step actions.

Key Components

| Component | Role |
| --- | --- |
| Dialog Tasks | Define the scope and structure of a use case (e.g., “Web Check-In Assistance”, “Account Balance Inquiry”) |
| Agent Nodes | Provide agentic capabilities: natural language understanding, slot filling, confirmation handling, and tool invocation within business rules |
| Deterministic Nodes | Entity, message, and service nodes for regulated or compliance-critical steps |

The hybrid design lets you combine deterministic nodes for strict control with Agent Nodes for flexible, natural conversation handling.

Choosing an Approach

Deterministic

Use when:
  • Regulatory compliance requires exact wording and predictable behavior (legal disclaimers, financial disclosures)
  • Audit trails and full traceability are required
  • Conversation follows a fixed, linear path
  • Consistent responses matter more than natural conversation
Examples: Account verification, payment processing, medical triage, loan applications

Agentic

Use when:
  • Natural, human-like interaction is the priority
  • User inputs vary widely for the same intent
  • Ease of maintenance matters (a single Agent Node handles increasing complexity)
  • You’re building toward more autonomous experiences
Examples: Product recommendations, general inquiries, travel planning, content discovery

Hybrid

Use when:
  • Mixed requirements exist — some steps need strict control, others benefit from flexibility
  • Transitioning gradually from deterministic to agentic
  • Different use cases within the same app have different needs
Examples:
  • Banking: transactions are deterministic, general inquiries are agentic
  • Healthcare: appointment booking is deterministic, health information is agentic

Trade-offs

| Factor | Deterministic | Agentic |
| --- | --- | --- |
| Performance | Faster, predictable latency | Dependent on LLM response times |
| Cost | Lower (no LLM calls for responses) | Higher LLM API costs, lower dev/maintenance costs |
| Scalability | Scales linearly with complexity | Single agent handles growing complexity |
| User experience | Consistent but potentially rigid | Natural and engaging; requires prompt engineering |

Scoping Agent Flows

Avoid use cases that are too granular (dialog bloat, maintenance overhead) or too broad (poor accuracy, weak semantic matching). Find the middle ground with clear semantic boundaries.

When to Split a Use Case

Split into separate flows when:
  • Users have distinct goals (“Book appointment” vs. “Cancel appointment”)
  • Fulfillment uses different backend APIs or workflows
  • Training phrases are semantically distinct
  • Required entities or business rules differ

When to Keep Together

Keep as a single flow when:
  • Variations express the same goal (“What’s my balance?” / “How much do I have?”)
  • The same API handles all variations
  • Training phrases overlap significantly
  • Required entities are identical

Writing Effective Descriptions

Good descriptions are:
  • Semantically rich: Activate more embedding dimensions
  • Action and goal-oriented: Include the primary action and desired outcome
  • Contextual: Explain when users typically need this
Poor scoping (high collision risk) — all retrieved for “where is my order?”:
  • Check_Order_Status, Track_Order, View_Order_Details, Get_Order_Information
Well-scoped examples:
  1. Track_Order_Shipment — Track shipping and delivery status for orders in transit. Users want to know WHERE their package is and WHEN it will arrive. Includes tracking numbers, carrier information, and estimated delivery dates.
  2. View_Order_History — View past completed orders. Users want to see WHAT they ordered, when, and final totals. For historical reference, NOT active in-transit shipments.
  3. Modify_Pending_Order — Make changes to an order that hasn’t shipped. Users want to UPDATE their order — change address, cancel items, or adjust quantities. Only for orders still processing, not yet dispatched.

Dialog Tasks

Dialog Tasks define the scope and structure of each use case. Each task consists of interconnected nodes that retrieve information, perform actions, connect to external services, and send messages to users. Related features:
  • Sub-intent management and Node Grouping — Configure sub-intents using group nodes or configure a task as a sub-intent
  • Component Transitions — Configure if-else conditions between nodes based on custom criteria
  • Voice and IVR Integration — Enable voice interaction (see Voice Call Properties)
  • User and Error Prompt Management — Customize messaging at each node
  • Context Object — Share data across tasks, intents, and FAQs (see Context Object)

Creating Dialog Tasks

Navigate to Automation > Dialogs, then click Create Dialog.
For optimal performance, limit dialog tasks to 50 or fewer. Exceeding this may cause sluggish UI response and increased latency.
Three creation methods are available:

From Scratch

  1. Click Start From Scratch.
  2. Enter an Intent Name (required) and Intent Description (recommended). Add up to 5 secondary descriptions to broaden semantic coverage and improve intent detection accuracy.
  3. Set availability: Customer Use Case, Agent AI Use Case, or both.
  4. Configure Intent Settings: set the task as sub-intent only or hide it from help.
  5. Set the Analytics Containment Type, which classifies abandoned conversations as either Self-Service or Drop Off.
  6. Optionally set Conversation Context, Intent Preconditions, or Context Output.
  7. Click Proceed.

Generate with AI

  1. Click Generate with AI.
  2. Enter an Intent Name and a meaningful Description, then click Generate.
  3. Preview the generated flow. Click Regenerate with a revised description to refine.
  4. Click Proceed when satisfied.
The platform auto-defines entities, prompts, error prompts, service tasks, and other parameters. Customize as needed after generation.
If no description is provided, only an error prompt node is generated. A meaningful description is strongly recommended.

From Marketplace Templates

  1. Click Marketplace and browse categories and integrations. Configured integrations are labeled Installed.
  2. For Dialog Action Templates (API call templates): select an integration, then click Install on the desired template.
  3. For Dialog Templates (pre-created flows): select a template, click Install, configure name and description, set up utterances and channel experience, then click Finish.
Marketplace templates require the integration to be configured in your AI Agent first.

Session Management

Session variables persist data across tasks, dialogs, and users. Use them in JavaScript within dialog nodes.

JavaScript API

"EnterpriseContext" : {
    "get"    : function(key){},
    "put"    : function(key, value, ttl){},  // ttl in minutes
    "delete" : function(key){}
},
"BotContext" : {
    "get"    : function(key){},
    "put"    : function(key, value, ttl){},
    "delete" : function(key){}
},
"UserContext" : {
    "get" : function(key){}  // read-only
},
"UserSession" : {
    "get"    : function(key){},
    "put"    : function(key, value, ttl){},
    "delete" : function(key){}
},
"BotUserSession" : {
    "get"    : function(key){},
    "put"    : function(key, value, ttl){},
    "delete" : function(key){}
}
put(), get(), and delete() support EnterpriseContext, BotContext, UserSession, and BotUserSession. UserContext supports get() only. All methods operate on root-level objects — nested paths are not supported.
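As an illustration of these semantics, the sketch below exercises put/get/delete against a minimal in-file stand-in for BotUserSession. The store, keys, and values here are invented for the example; inside a real script node the session objects are provided by the platform, so no stub is needed.

```javascript
// Minimal stand-in for the platform's BotUserSession API so the sketch
// runs on its own. The TTL argument (minutes) is accepted but ignored here.
const store = new Map();
const BotUserSession = {
  get: (key) => store.get(key),
  put: (key, value, ttl) => { store.set(key, value); },
  delete: (key) => store.delete(key)
};

// Typical script-node usage: remember per-user, per-app travel details
// for the rest of the session (hypothetical keys, TTL of 60 minutes).
BotUserSession.put("source", "JFK", 60);
BotUserSession.put("destination", "LHR", 60);

console.log(BotUserSession.get("source"));      // "JFK"
BotUserSession.delete("destination");
console.log(BotUserSession.get("destination")); // undefined
```

The same pattern applies to EnterpriseContext, BotContext, and UserSession; only the scope of the stored data differs.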

Session Variable Types

| Type | Scope | Description |
| --- | --- | --- |
| EnterpriseContext | All apps, all users, all sessions | Enterprise-wide key-value store. Use carefully to avoid unnecessary data exposure. |
| BotContext | All users of a specific app | App-level shared variables (e.g., default currency based on user location) |
| UserContext | All apps for a user (read-only) | System-provided user data |
| UserSession | All apps for a specific user | User-specific data shared across all apps (e.g., home address for commerce and delivery apps) |
| BotUserSession | Specific app + specific user | Per-user, per-app data (e.g., source and destination for a travel app) |

UserContext Read-Only Keys

| Key | Value |
| --- | --- |
| _id | Kore.ai user ID |
| emailId | Email address |
| firstName / lastName | Name |
| profImage | Avatar filename |
| profColor | Account color |
| activationStatus | active, inactive, suspended, or locked |
| jTitle | Job title |
| orgId | Organization ID |
| customData | Custom data passed via web SDK |
| identities | Alternate user IDs (val, type) |

Standard Keys

| Key | Purpose |
| --- | --- |
| _labels_ | Returns a friendly label for a GUID (e.g., project name instead of numeric ID) |
| _tenant_ | Returns the tenant name for enterprise apps (e.g., JIRA subdomain in a URL) |
| _fields_ | Stores end-user action task inputs not included in the payload response |
| _last_run | UTC timestamp of the last web service poll in ISO 8601 format |

Method Limitations

  • delete(): Removes root-level objects only. To delete nested keys, use delete context.session.BotUserSession.{path}. You cannot delete a root-level object using this syntax.
  • put(): Inserts at root-level only. BotUserSession.put("Company.Address", val) is not supported.
  • get(): Retrieves root-level objects only. BotUserSession.get("Company.name") is not supported.
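The root-level-only behavior can be sketched as follows; the stubbed BotUserSession and the Company object are assumptions made purely for illustration:

```javascript
// Stand-in illustrating root-level-only semantics: keys are stored
// verbatim, with no nested-path interpretation. In a real script node
// BotUserSession is provided by the platform.
const session = {};
const BotUserSession = {
  get: (key) => session[key],
  put: (key, value) => { session[key] = value; },
  delete: (key) => { delete session[key]; }
};

// Not supported as a nested path: this creates a literal key
// "Company.Address" rather than an Address field under Company.
BotUserSession.put("Company.Address", "1 Main St");
console.log(BotUserSession.get("Company.name")); // undefined — no nested lookup

// Supported pattern: store the whole object at root level, then read
// or mutate it as a unit.
BotUserSession.put("Company", { name: "Acme", Address: "1 Main St" });
console.log(BotUserSession.get("Company").name); // "Acme"
```

For deleting a nested key, the document's own workaround applies: reference the path directly, e.g. delete context.session.BotUserSession.Company.Address.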

Context Object

The Context object persists data throughout dialog execution and across all intents (dialog tasks, action tasks, alert tasks, FAQs). The NLP engine populates intent, entities, and history automatically.
The context object has a size limit of 1024 KB. The platform notifies designers when this limit is approached. In future releases, conversations that exceed the limit may be discarded.
Usage: Reference context keys in URLs (https://example.com/{{context.entities.topic}}/rss), script nodes, entity nodes, and SDK payloads. Update context key values in script nodes to influence dialog execution.

Context Object Keys

| Key | Scope | Description | Syntax |
| --- | --- | --- | --- |
| intent | Dialog | Recognized intent | context.intent.<intent name> |
| entities | Dialog | Key-value pairs of user-provided entity values | context.entities.<entity name> |
| traits | Dialog | Traits set for the given context | |
| currentLanguage | Global | Current conversation language | |
| suggestedLanguages | Global | Languages detected from the user’s first utterance, ordered by confidence; reset each conversation | |
| history | Global | Array of node execution records (nodeId, state, type, componentName, timestamp) | |
| onHoldTasks | Dialog | Read-only array of tasks on hold during the current conversation | |
| <service node name>.response | Dialog | HTTP response from a Service node (statusCode, body) | context.<node name>.response.body |
| resultsFound | Dialog | true if results were returned | |
| message_tone | Global | Tone emotions and scores for the current node | |
| dialog_tone | Global | Average tone emotions and scores for the full dialog session | |
| Developer Defined Key | Dialog | Custom key-value pair set by the developer | context.<varName> |
| UserQuery | Dialog | Original and rephrased user query | context.UserQuery.originalUserQuery, context.UserQuery.rephrasedUserQuery |
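As a hedged sketch of reading and writing these keys from a script node: the topic entity, the GetFeed service node, and the feedUrl variable below are hypothetical, and the context stub merely makes the example runnable on its own (in a real script node, context is provided by the platform).

```javascript
// Stub of the Context object with a hypothetical entity and service-node
// response, shaped per the syntax column above.
const context = {
  entities: { topic: "technology" },
  GetFeed: { response: { statusCode: 200, body: { items: [] } } }
};

// Read an entity value and check a Service node's HTTP response.
const topic = context.entities.topic;
const ok = context.GetFeed.response.statusCode === 200;

// Developer-defined key: set a custom value that later nodes or
// transitions can reference as context.feedUrl.
context.feedUrl = "https://example.com/" + topic + "/rss";
console.log(context.feedUrl); // "https://example.com/technology/rss"
```

The same feedUrl-style templating is what the {{context.entities.topic}} placeholder performs when a context key is referenced inside a URL.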

Node States

| State | Description |
| --- | --- |
| processing | Platform begins processing the node |
| processed | Node and connections processed; next node found but not yet moved to |
| waitingForUserInput | User prompted but input not yet received |
| pause | Dialog paused while another task runs |
| resume | Paused dialog continues after the other task completes |
| waitingForServerResponse | Async server response pending |
| error | Error occurred (loop limit reached, server failure, script error) |
| end | Dialog reached the end of the flow |

Tone Levels

Tone level ranges from -3 (definitely suppressed) to +3 (definitely expressed). Tone names: angry, disgust, fear, sad, joy, positive.

Reusing Entity Values Across Dialogs

Set reuseEntityWords: true in a dialog's preconditions to automatically carry entity values from a parent dialog into downstream dialogs without re-prompting the user.

Voice Call Properties

Voice call properties configure AI Agent behavior for voice channels: IVR, Twilio, IVR-AudioCodes, and Kore.ai Voice Gateway. Enable a voice channel first, then configure properties at two levels:
  • App level: Set during channel enablement.
  • Component level: Override per node — applicable to Entity, Message, Confirmation, Agent Node, and Standard Responses.
Access node-level properties in the Dialog Builder by selecting a node and opening the IVR Properties section.

App-Level Channel Settings

| Field | Description | Channels |
| --- | --- | --- |
| IVR Data Extraction Key | Syntax for extracting filled data; overridable at entity/confirmation node level | IVR |
| End of Conversation Behavior | Trigger a task/script/message, or terminate the call at end of conversation | IVR, Twilio, AudioCodes, Voice Gateway |
| Call Termination Handler | Dialog task to run when call ends in error | IVR, Twilio, AudioCodes, Voice Gateway |
| Call Control Parameters | Property name-value pairs for VXML definitions / AudioCodes session parameters | IVR, AudioCodes |
| Threshold Key | Variable where ASR confidence levels are stored (pre-populated; do not change unless necessary) | IVR |
| ASR Confidence Threshold | Range 0–1.0; defines when IVR hands control to the AI Agent | IVR |
| Timeout Prompt | Default prompt when the user doesn’t respond within the timeout period | IVR, Twilio, AudioCodes, Voice Gateway |
| Grammar | VXML grammar for speech/DTMF input (custom text or URL) | IVR |
| No Match Prompt | Default prompt when user input doesn’t match defined grammar | IVR |
| Barge-In | Allow user input while a prompt is playing | IVR, Twilio, AudioCodes, Voice Gateway |
| Timeout | Max wait for user input (1–60 seconds) | IVR, Twilio, AudioCodes, Voice Gateway |
| No. of Retries | Max retry attempts (1–10) | IVR, Twilio, AudioCodes |
| Log | Send chat log to IVR system | IVR |

Node-Level Voice Settings

Applicable to: Entity, Message, Confirmation, Agent Node, and Standard Responses.

| Field | Description | Channels |
| --- | --- | --- |
| Initial Prompts | Prompts played when IVR first executes the node | IVR, Twilio, AudioCodes, Voice Gateway |
| Timeout Prompts | Prompts when the user doesn’t respond in time. Supports Customize Retries Behavior (1–10 retries) and Behavior on Exceeding Retries (call termination handler, initiate dialog, or jump to node). Retries customization applies to IVR only, at Entity, Confirmation, and Message nodes. | IVR, Twilio, AudioCodes, Voice Gateway |
| Timeout | Preset (1–60 sec) or select an environment variable. Non-numeric or >60-second variable values fall back to the channel-level timeout. | IVR, Twilio, AudioCodes, Voice Gateway |
| No Match Prompts | Prompts when input doesn’t match grammar. Supports customizable retries (IVR only). | IVR |
| Error Prompts | Prompts when input is an invalid entity type. Supports customizable retries (IVR only). | IVR, Twilio, AudioCodes, Voice Gateway |
| Grammar | Speech/DTMF grammar (custom text or URL) | IVR, Twilio |
| No. of Retries | Max retries (1–10); overrides app-level setting | IVR, Twilio, AudioCodes, Voice Gateway |
| Behavior on Exceeding Retries | Call termination handler, initiate dialog, or jump to node | IVR, Twilio, AudioCodes, Voice Gateway |
| Barge-In | Allow input during prompt (default: No) | IVR, Twilio, AudioCodes, Voice Gateway |
| Call Control Parameters | Node-level VXML/AudioCodes parameters; overrides app-level values | IVR, AudioCodes, Voice Gateway |
| Log | Send chat log to IVR (default: No) | IVR |
| Recording | Recording state at this node (default: Stop) | IVR |

Additional app-level-only settings:

| Field | Description | Channel |
| --- | --- | --- |
| Locale Definition | Sets the xml:lang attribute in VXML to enhance ASR language recognition | IVR |
| Document Type Definitions | DTD settings (Status, Public ID, System ID) for VXML structure validation | IVR |
| Fallback Redirection | Redirect URL used when the call hangs up (default: disabled) | IVR |
| VXML Error Threshold | Max VXML errors before corrective action (default: 3); customizable to 1, 2, or 3 | IVR |
| Propagate Values to Linked Apps | Propagates Voice Call Properties from a Universal App to linked apps (default: disabled) | IVR |

Multiple prompts can be defined per prompt type. Because prompts play in the defined order across retries, users don't hear the same prompt repeated; drag prompts to reorder them.

Configuring Grammar

At least one Speech Grammar must be defined for IVR. Supported systems:

Nuance

  1. Set Enable Transcription to No.
  2. In Grammar: select Speech or DTMF, enter the VXML path to dlm.zip: https://nuance.kore.ai/downloads/kore_dlm.zip?nlptype=krypton&dlm_weight=0.2&lang=en-US (adjust the path and language code for your setup).
  3. Click Add Grammar and add the path to nle.zip using the same steps.
  4. Save.

Voximal / UniMRCP

  1. Set Enable Transcription to Yes.
  2. Enter the transcription engine source:
    • Voximal: builtin:grammar/text
    • UniMRCP: builtin:grammar/transcribe
  3. Leave the Grammar section blank — the transcription source handles speech vetting.
  4. Save.

Building Multilingual Applications

AI for Service supports 100+ languages. A multilingual application has two key components: input processing and response processing.

Input Processing

| Approach | How it works | Best for |
| --- | --- | --- |
| Native Multilingual | Processes input in the original language using BGEM3 embeddings and multilingual LLMs, with no translation overhead | Contextual understanding across languages, lower latency, cost optimization |
| Translation-Based | Converts input to the app’s default language before processing | Language-specific business logic, legacy integrations, single-language data processing |

Response Processing

| Approach | Level | Best for |
| --- | --- | --- |
| Locale-Specific Responses | Node-level | Compliance-critical content, brand messaging, regulated industries (authored per language) |
| Translation Engines (Google Cloud, Microsoft, custom) | App-level | Broad language coverage (50+ languages), transactional messages, rapid deployment |
| LLM-Based Translation and Rephrasing | App-level or node-level | Conversational tone, cultural adaptation, dynamic context-dependent messaging |

Locale-Specific trade-offs:

| Advantages | Limitations |
| --- | --- |
| Full wording control | High maintenance overhead |
| Culturally appropriate | Hard to scale across many nodes |
| No translation cost or latency | Requires multilingual content creators |

Translation Engine trade-offs:

| Advantages | Limitations |
| --- | --- |
| Fast and cost-effective | Less wording control |
| Supports 100+ languages automatically | May miss cultural nuances; limited context awareness |
| Minimal setup and maintenance | Cannot adjust tone post-translation |

LLM-Based trade-offs:

| Advantages | Limitations |
| --- | --- |
| Context-aware and culturally adaptive | Higher latency and cost per response |
| Combines translation and rephrasing in one call | Requires prompt engineering expertise |
| Flexible prompt customization | Less deterministic output |
| Can personalize based on user context | May require guardrails for regulated industries |

Hybrid Patterns

| Pattern | When to Use |
| --- | --- |
| Translation Engine + LLM Rephrasing | Dynamic responses with broad language coverage; maintain tone consistency at scale. Note: two API calls increase latency. |
| Locale-Specific + LLM Rephrasing | Maximum flexibility: control base content while enabling personalized delivery |
| Agent Node Business Rules | Add a language instruction directly in the Agent Node: “Always respond in the same language as the user input, maintaining consistent terminology and cultural context.” No separate translation configuration needed. |

Decision Guide

| Requirement | Recommended Approach |
| --- | --- |
| Compliance-critical content | Locale-specific responses only |
| 50+ languages, transactional | Translation Engine |
| Conversational tone matters | LLM-based rephrasing |
| Dynamic responses + broad coverage | Translation Engine + LLM rephrasing |
| Specific content + personalization | Locale-specific + LLM rephrasing |

Testing Checklist

  • Verify language detection accuracy.
  • Review translations with native speakers.
  • Test edge cases: mixed-language input, special characters, RTL languages.
  • Measure latency across approaches.
  • Monitor API costs per interaction.

Common Pitfalls

  • Compliance content: Always use locale-specific responses for legal text; never rely on automated translation.
  • Double translation: Don’t enable both a Translation Engine and LLM translation simultaneously — this causes double translation and unpredictable output.
  • Skipping native speaker review: Translations may be technically correct but culturally inappropriate. Always validate with native speakers.