Connect commercial and custom models to Agent Platform.

Overview

External models are AI models hosted outside the platform. Once connected, they can be used across Agent Platform in Agentic Apps, Prompt Studio, Tools, and Evaluation Studio.

Supported Providers (Easy Integration):

| Provider | Authentication |
| --- | --- |
| OpenAI | API Key |
| Anthropic | API Key |
| Google | API Key |
| Cohere | API Key |
| Azure OpenAI | API Key + Endpoint |
| Amazon Bedrock | IAM Role ARN |
| Vertex AI | API Key |
| Microsoft Foundry | API Key / Service Principal |

Custom Models (API Integration): Connect any model via a REST API endpoint. For the complete list of supported models, see Supported Models.

Manage Connected Models

View Models

  • Go to Models > External Models to see all connected models.

Manage Connections

Each model can have multiple connections with different API keys, enabling separate usage tracking and billing.
| Action | Description |
| --- | --- |
| Inference Toggle | Enable/disable model availability across the Platform |
| Edit | Update API key or credentials |
| Delete | Remove the connection |
When adding multiple API keys for the same model, each connection must have a unique name and API key. In Agentic Apps, you can assign specific connections at the Agent or Supervisor level.

Add a Model via Easy Integration

Use Easy Integration for commercial providers with API keys or IAM roles.

Standard Providers (OpenAI, Anthropic, Google, Cohere)

  1. Go to Models > External Models > Add a model.
  2. Select Easy Integration → click Next.
  3. Choose your provider → click Next.
  4. Select a model from the supported list.
  5. Enter a Connection name and your API key.
  6. Click Confirm.
The model is now available across Agent Platform.

Amazon Bedrock

Bedrock uses IAM role-based authentication instead of API keys.

Prerequisites: Create an IAM role in AWS with Bedrock permissions and a trust policy allowing Agent Platform to assume the role. See Configuring Amazon Bedrock for IAM setup.

Steps:
  1. Go to Models > External Models > Add a model.
  2. Select Easy Integration > AWS Bedrock > Next.
  3. Configure credentials and model details:

| Field | Description |
| --- | --- |
| IAM Role ARN | Your IAM role with Bedrock permissions |
| Trusted Principal ARN | Platform’s AWS principal (pre-populated) |
| Model Name | Internal identifier |
| Model ID | Bedrock Model ID or Endpoint ID |
| Region | AWS region of the model |
| Headers | Optional custom headers |

  4. Configure model settings using Default or Existing Provider Structures.
  5. Click Confirm.
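The trust policy mentioned in the prerequisites generally takes this shape. This is an illustrative sketch only: the principal value below is a placeholder, and you should use the pre-populated Trusted Principal ARN shown in the Platform.

```json
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Effect": "Allow",
      "Principal": { "AWS": "<Trusted Principal ARN from the Platform>" },
      "Action": "sts:AssumeRole"
    }
  ]
}
```

Attach this as the trust relationship of the IAM role whose ARN you enter above; the role itself still needs Bedrock invoke permissions in its permissions policy.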

Vertex AI

Vertex AI uses API key authentication to access Gemini models (2.5 and 3.0 families) from your Google Cloud account.

Prerequisites: Create an API key in your Google Cloud account with Vertex AI API access.
  • New users: Use the express mode setup to generate an API key automatically, then manage keys under APIs & Services > Credentials.
  • Existing users: Enable the Vertex AI API, create a service account (vertex-ai-runner) with the Vertex AI Platform Express User role, create an API key linked to that service account under APIs & Services > Credentials, and store the key securely.
Steps:
  1. Go to Models > External Models > Add a model.
  2. Select Easy Integration > Vertex AI > Next.
  3. Choose a configuration method:

Option A: Manual Setup

| Field | Description |
| --- | --- |
| Model | Select a Gemini model from the dropdown |
| Connection name | Internal identifier for this connection |
| API key | Your Google Vertex AI API key |
| Project ID | (Optional) Your Google Cloud project identifier |
| Region | (Optional) Google Cloud region where models are deployed |

Click Confirm to save.

Option B: Import from cURL

Enter a Connection name, paste a cURL command in the text area, click Fetch to extract the configuration, then click Confirm.

The model is now available across Agent Platform.
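For Option B, a cURL command of the following general shape is the kind of input the Fetch importer can parse. This is a hedged example only: it assumes the Vertex AI express-mode generateContent endpoint, and the model ID and key are placeholders you must replace with your own.

```shell
curl "https://aiplatform.googleapis.com/v1/publishers/google/models/gemini-2.5-flash:generateContent?key=YOUR_API_KEY" \
  -H "Content-Type: application/json" \
  -d '{"contents": [{"role": "user", "parts": [{"text": "Hello"}]}]}'
```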

Microsoft Foundry

Microsoft Foundry supports two authentication methods: entering credentials manually or using an Azure Active Directory Service Principal.

Steps:
  1. Go to Models > External Models > Add a model.
  2. Select Easy Integration > Microsoft Foundry > Next.
  3. Choose an authentication method:
Option A: Enter Manually

Directly provide credentials from your model’s Details page in Microsoft Foundry.

| Field | Description |
| --- | --- |
| Connection name | Unique name to identify this connection |
| Target URI | Endpoint URI from your model’s Details page |
| Key | API key from your model’s Details page |
| Deployment name | Deployment name as defined in Microsoft Foundry |
Option B: Use Service Principal

Authenticate through an Azure Active Directory Service Principal. Requires a pre-configured Microsoft Foundry connection.

| Field | Description |
| --- | --- |
| Connection name | Unique name to identify this connection |
If no connection exists, click Configure Service Principal and complete these steps:
  1. In the Azure Portal, go to App registrations > + New registration. Enter a name, select an account type, and click Register.
  2. Copy the Application (Client) ID and Directory (Tenant) ID from the Overview page.
  3. Go to Certificates & secrets > + New client secret. Set an expiry and copy the Value immediately.
  4. In your resource group, go to Access control (IAM) > Add role assignment. Assign a role (e.g., Contributor) and select your registered app.
  5. Click Configure Service Principal in the Platform, enter a Connection name, and fill in Tenant ID, Application (Client) ID, Client Secret, and Subscription ID. Click Test, then Save.
  6. Configure model settings under Model configurations. Enable the features your model supports:

| Feature | Description |
| --- | --- |
| Structured Response | JSON-formatted outputs for Prompts and Tools |
| Tool Calling | Function calling for Agentic Apps and AI nodes |
| Parallel Tool Calling | Multiple tool calls per request |
| Streaming | Real-time token generation |
| Data Generation | Synthetic data generation in Prompt Studio |
| Modalities | Text-to-Text, Text-to-Image, Image-to-Text, Audio-to-Text |
Note: Tool calling must be enabled for the model to work in Agentic Apps.
Under Body, specify the model name and select a provider to set the API reference:
| Template | Use When |
| --- | --- |
| OpenAI (Chat Completions) | Model follows OpenAI chat API format |
| Anthropic (Messages) | Model follows Anthropic messages API format |
  7. Click Save as draft to store without activating, or Confirm to finalize.
The model is now listed in the External Models tab and available in Prompts, Tools, and Agentic Apps.

Add a Model via API Integration

Use API Integration for custom endpoints or self-hosted models.
Note: For Agentic Apps compatibility, custom models must support tool calling and follow OpenAI or Anthropic request/response structures.
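For reference, a minimal OpenAI-style tool-calling exchange of the kind such a custom endpoint must handle might look like the sketch below. The model and function names are made up for illustration; the field names follow the public OpenAI Chat Completions schema, which is one of the two structures the note above mentions.

```python
# Minimal OpenAI-style request a custom endpoint should accept.
request = {
    "model": "my-custom-model",  # hypothetical model name
    "messages": [{"role": "user", "content": "What's the weather in Paris?"}],
    "tools": [{
        "type": "function",
        "function": {
            "name": "get_weather",  # hypothetical tool
            "parameters": {
                "type": "object",
                "properties": {"city": {"type": "string"}},
                "required": ["city"],
            },
        },
    }],
}

# ...and the shape of a tool-calling response it should return.
response = {
    "choices": [{
        "message": {
            "role": "assistant",
            "content": None,
            "tool_calls": [{
                "id": "call_1",
                "type": "function",
                "function": {"name": "get_weather", "arguments": '{"city": "Paris"}'},
            }],
        },
        "finish_reason": "tool_calls",
    }],
    "usage": {"prompt_tokens": 40, "completion_tokens": 12},
}
```

An Anthropic-style endpoint would use the Messages equivalents (`tool_use` content blocks) instead; either structure can be selected under Existing Provider Structures below.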

Steps

  1. Go to Models > External Models > Add a model.
  2. Select Custom Integration → click Next.
  3. Enter basic configuration:

| Field | Description |
| --- | --- |
| Connection Name | Unique identifier |
| Model Endpoint URL | Full API endpoint URL |
| Authorization Profile | Select a configured auth profile or None |
| Headers | Optional key-value pairs for requests |

  4. Configure model settings using Default or Existing Provider Structures.
  5. Click Confirm.

Model Configuration Modes

When using API Integration or advanced Bedrock setup, choose one of these configuration modes:

Default Mode

Manually configure request/response handling for complete control.

1. Define Variables

| Variable Type | Description |
| --- | --- |
| Prompt | Primary input text (required) |
| System Prompt | System instructions (optional) |
| Examples | Few-shot examples (optional) |
| Custom Variables | Additional dynamic inputs with name, display name, and data type |
2. Configure Request Body

Create a JSON payload using {{variable}} placeholders:
{
  "model": "your-model-name",
  "messages": [
    {"role": "system", "content": "{{system.prompt}}"},
    {"role": "user", "content": "{{prompt}}"}
  ],
  "max_tokens": 1000,
  "temperature": 0.7
}
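The {{variable}} placeholders are filled in with your defined variables at request time. As a rough illustration of that substitution (the `render_body` helper below is hypothetical, not part of the Platform):

```python
import json

def render_body(template: str, variables: dict) -> dict:
    """Fill {{name}} placeholders in a JSON template, then parse the result."""
    for name, value in variables.items():
        # json.dumps plus slicing off the outer quotes escapes any quotes or
        # newlines in the value, so the payload stays valid JSON.
        template = template.replace("{{%s}}" % name, json.dumps(value)[1:-1])
    return json.loads(template)

body = render_body(
    '{"messages": [{"role": "user", "content": "{{prompt}}"}], "temperature": 0.7}',
    {"prompt": "Summarize this ticket."},
)
# body["messages"][0]["content"] is now "Summarize this ticket."
```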
3. Map Response JSON Paths

Click Test to send a sample request, then configure extraction paths:

| Field | Description | Example |
| --- | --- | --- |
| Output Path | Location of generated text | choices[0].message.content |
| Input Tokens | Input token count | usage.prompt_tokens |
| Output Tokens | Output token count | usage.completion_tokens |
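The paths use dotted and bracketed segments resolved against the raw JSON response. A minimal sketch of how such a path resolves (the `extract` helper is illustrative, not the Platform's implementation):

```python
import re

def extract(path: str, payload):
    """Walk a path like 'choices[0].message.content' through a decoded JSON payload."""
    for part in re.findall(r"[^.\[\]]+", path):
        # Numeric segments index into lists; others index into objects.
        payload = payload[int(part)] if part.isdigit() else payload[part]
    return payload

sample = {
    "choices": [{"message": {"content": "Hi there"}}],
    "usage": {"prompt_tokens": 12, "completion_tokens": 4},
}

text = extract("choices[0].message.content", sample)  # "Hi there"
tokens = extract("usage.prompt_tokens", sample)       # 12
```

If a mapped path does not exist in the actual response, extraction yields nothing, which is the usual cause of the "Empty response" issue in Troubleshooting.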

Existing Provider Structures Mode

Automatically apply pre-defined schemas from known providers. Recommended when your model follows a standard API format.

1. Select Provider Template

| Template | Use When |
| --- | --- |
| OpenAI (Chat Completions) | Model follows OpenAI chat API format |
| Anthropic (Messages) | Model follows Anthropic messages API format |
| Google (Gemini) | Model follows Gemini API format |
2. Enter Model Name

Specify the model identifier for request bodies.

3. Enable Model Features

Enable only the features your model supports:

| Feature | Description |
| --- | --- |
| Structured Response | JSON-formatted outputs for Prompts and Tools |
| Tool Calling | Function calling for Agentic Apps and AI nodes |
| Parallel Tool Calling | Multiple tool calls per request |
| Streaming | Real-time token generation for Agentic Apps |
| Data Generation | Synthetic data generation in Prompt Studio |
| Modalities | Text-to-Text, Text-to-Image, Image-to-Text, Audio-to-Text |
Warning: Enabling unsupported features may cause unexpected behavior.

Troubleshooting

| Issue | Solution |
| --- | --- |
| Test fails | Verify endpoint URL and authentication |
| Empty response | Check JSON path mapping matches the response structure |
| Model not in dropdowns | Ensure the Inference toggle is ON |
| Tool calling not working | Verify the model supports it and the feature is enabled |
| Bedrock connection fails | Check IAM role ARN and trust policy configuration |
| Vertex AI auth error | Ensure the API key is valid and not an OAuth token; check that the Vertex AI API is enabled for your project |
| Microsoft Foundry connection fails | Verify Target URI, API key, and deployment name; for Service Principal, confirm Tenant ID, Client ID, and Client Secret are correct |
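When a connection test fails, reproducing the call outside the Platform helps separate authentication problems from URL problems. A small standalone sketch using only the Python standard library (the endpoint, key, and model name are placeholders):

```python
import urllib.request

ENDPOINT = "https://example.com/v1/chat/completions"  # placeholder endpoint
API_KEY = "YOUR_API_KEY"                              # placeholder key

req = urllib.request.Request(
    ENDPOINT,
    data=b'{"model": "my-model", "messages": [{"role": "user", "content": "ping"}]}',
    headers={"Authorization": "Bearer " + API_KEY, "Content-Type": "application/json"},
    method="POST",
)
# urllib.request.urlopen(req) sends the request: a 401/403 points at the key,
# a 404 at the URL or deployment name, and a timeout at network or firewall rules.
```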