Overview
External models are AI models hosted outside the platform. Once connected, they can be used across Agent Platform in Agentic Apps, Prompt Studio, Tools, and Evaluation Studio.
Supported Providers (Easy Integration)
| Provider | Authentication | Tool Calling |
|---|---|---|
| OpenAI | API Key | ✓ |
| Anthropic | API Key | ✓ |
| Google | API Key | ✓ |
| Cohere | API Key | ✓ |
| Azure OpenAI | API Key + Endpoint | ✓ |
| Amazon Bedrock | IAM Role ARN | ✓ |
| Vertex AI | API Key | ✓ |
| Microsoft Foundry | API Key / Service Principal | ✓ |
Manage Connected Models
View Models
- Go to Models → External Models to see all connected models.
Manage Connections
Each model can have multiple connections with different API keys, enabling separate usage tracking and billing.
| Action | Description |
|---|---|
| Inference Toggle | Enable/disable model availability across Platform |
| Edit | Update API key or credentials |
| Delete | Remove the connection |
Add a Model via Easy Integration
Use Easy Integration for commercial providers with API keys or IAM roles.
Standard Providers (OpenAI, Anthropic, Google, Cohere)
- Go to Models → External Models → Add a model.
- Select Easy Integration → click Next.
- Choose your provider → click Next.
- Select a model from the supported list.
- Enter a Connection name and your API key.
- Click Confirm.
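Before pasting a key into the connection form, it can help to confirm the key works at all. The sketch below builds (but does not send) the request that lists models for an OpenAI-style key; `YOUR_API_KEY` is a placeholder, and other providers use different endpoints.

```python
import urllib.request

# A quick sanity check you can run before connecting: build the request that
# lists models for an OpenAI-style key. Nothing is sent here -- uncomment the
# urlopen call to actually verify the key ("YOUR_API_KEY" is a placeholder).
req = urllib.request.Request(
    "https://api.openai.com/v1/models",
    headers={"Authorization": "Bearer YOUR_API_KEY"},
)
# with urllib.request.urlopen(req) as resp:
#     print(resp.status)  # 200 indicates the key is accepted
print(req.get_header("Authorization"))
```

A 401 response from the live call means the key is wrong or revoked, which will also cause the platform's connection test to fail.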
Amazon Bedrock
Bedrock uses IAM role-based authentication instead of API keys.
Prerequisites: Create an IAM role in AWS with Bedrock permissions and a trust policy allowing Agent Platform to assume the role. See Configuring Amazon Bedrock for IAM setup.
Steps:
- Go to Models → External Models → Add a model.
- Select Easy Integration → AWS Bedrock → Next.
- Configure credentials and model details:
| Field | Description |
|---|---|
| IAM Role ARN | Your IAM role with Bedrock permissions |
| Trusted Principal ARN | Platform’s AWS principal (pre-populated) |
| Model Name | Internal identifier |
| Model ID | Bedrock Model ID or Endpoint ID |
| Region | AWS region of the model |
| Headers | Optional custom headers |
- Configure model settings using Default or Existing Provider Structures.
- Click Confirm.
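The trust policy mentioned in the prerequisites can be sketched as follows. The principal ARN shown is a placeholder; use the Trusted Principal ARN that the connection form pre-populates.

```python
import json

# Minimal sketch of an IAM trust policy that lets the platform's AWS principal
# assume your Bedrock role. The ARN below is a placeholder -- substitute the
# pre-populated Trusted Principal ARN from the connection form.
trust_policy = {
    "Version": "2012-10-17",
    "Statement": [
        {
            "Effect": "Allow",
            "Principal": {"AWS": "arn:aws:iam::123456789012:role/agent-platform"},
            "Action": "sts:AssumeRole",
        }
    ],
}

print(json.dumps(trust_policy, indent=2))
```

If the connection test fails, a missing or mistyped principal in this policy is the most common cause (see Troubleshooting below).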
Vertex AI
Vertex AI uses API key authentication to access Gemini models (2.5 and 3.0 families) from your Google Cloud account.
Prerequisites: Create an API key in your Google Cloud account with Vertex AI API access.
- New users: Use the express mode setup to generate an API key automatically, then manage keys under APIs & Services > Credentials.
- Existing users: Enable the Vertex AI API, create a service account (vertex-ai-runner) with the Vertex AI Platform Express User role, create an API key linked to that service account under APIs & Services > Credentials, and store the key securely.
Steps:
- Go to Models → External Models → Add a model.
- Select Easy Integration → Vertex AI → Next.
- Enter the connection details:
| Field | Description |
|---|---|
| Model | Select a Gemini model from the dropdown |
| Connection name | Internal identifier for this connection |
| API key | Your Google Vertex AI API key |
| Project ID | (Optional) Your Google Cloud project identifier |
| Region | (Optional) Google Cloud region where models are deployed |
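As an illustration of how an express-mode API key authenticates a request, the sketch below assembles a `generateContent` URL with the key as a query parameter. The host and path shown are an assumption based on Google's express setup flow and may differ for project-scoped deployments; the platform builds the real request for you.

```python
import json
from urllib.parse import urlencode

# Hypothetical values -- substitute your own key and chosen Gemini model.
api_key = "YOUR_VERTEX_API_KEY"
model = "gemini-2.5-flash"

# Express-mode keys are passed as a query parameter (assumption; the exact
# host/path may differ when Project ID and Region are specified).
url = (
    "https://aiplatform.googleapis.com/v1/publishers/google/models/"
    f"{model}:generateContent?" + urlencode({"key": api_key})
)

body = {"contents": [{"role": "user", "parts": [{"text": "Hello"}]}]}
print(url)
print(json.dumps(body))
```

Note the key is an API key, not an OAuth token; passing an OAuth token here is a common cause of the auth error listed under Troubleshooting.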
Microsoft Foundry
Microsoft Foundry supports two authentication methods: entering credentials manually or using an Azure Active Directory Service Principal.
Steps:
- Go to Models → External Models → Add a model.
- Select Easy Integration → Microsoft Foundry → Next.
- Choose an authentication method:
API Key (manual credentials):
| Field | Description |
|---|---|
| Connection name | Unique name to identify this connection |
| Target URI | Endpoint URI from your model’s Details page |
| Key | API key from your model’s Details page |
| Deployment name | Deployment name as defined in Microsoft Foundry |
Service Principal:
| Field | Description |
|---|---|
| Connection name | Unique name to identify this connection |
- In the Azure Portal, go to App registrations → + New registration. Enter a name, select account type, and click Register.
- Copy the Application (Client) ID and Directory (Tenant) ID from the Overview page.
- Go to Certificates & secrets → + New client secret. Set an expiry and copy the Value immediately.
- In your resource group, go to Access control (IAM) → Add role assignment. Assign a role (e.g., Contributor) and select your registered app.
- Click Configure Service Principal in the Platform, enter a Connection name, and fill in Tenant ID, Application (Client) ID, Client Secret, and Subscription ID. Click Test, then Save.
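Behind the scenes, the Service Principal credentials are exchanged for a token via the standard Azure AD client-credentials grant. The sketch below shows the shape of that exchange for reference only; the platform performs it for you when you click Test. All IDs are placeholders, and the scope shown is an assumption.

```python
from urllib.parse import urlencode

# Placeholder IDs -- use the values copied in the steps above.
tenant_id = "00000000-0000-0000-0000-000000000000"
client_id = "11111111-1111-1111-1111-111111111111"
client_secret = "YOUR_CLIENT_SECRET"

# Standard Azure AD client-credentials token request. The scope is an
# assumption; the platform chooses the appropriate resource scope itself.
token_url = f"https://login.microsoftonline.com/{tenant_id}/oauth2/v2.0/token"
form = urlencode({
    "grant_type": "client_credentials",
    "client_id": client_id,
    "client_secret": client_secret,
    "scope": "https://management.azure.com/.default",
})
print(token_url)
```

If Test fails, re-check the client secret Value (not its ID) and that the role assignment from the previous step has propagated.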
- Configure model settings under Model configurations. Enable the features your model supports:
| Feature | Description |
|---|---|
| Structured Response | JSON-formatted outputs for Prompts and Tools |
| Tool Calling | Function calling for Agentic Apps and AI nodes |
| Parallel Tool Calling | Multiple tool calls per request |
| Streaming | Real-time token generation |
| Data Generation | Synthetic data generation in Prompt Studio |
| Modalities | Text-to-Text, Text-to-Image, Image-to-Text, Audio-to-Text |
Note: Tool calling must be enabled for the model to work in Agentic Apps.
Under Body, specify the model name and select a provider to set the API reference:
| Template | Use When |
|---|---|
| OpenAI (Chat Completions) | Model follows OpenAI chat API format |
| Anthropic (Messages) | Model follows Anthropic messages API format |
- Click Save as draft to store without activating, or Confirm to finalize.
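For a Foundry deployment that follows the OpenAI Chat Completions format, the resulting request looks roughly like the sketch below. The URL layout and api-version are assumptions; take the exact Target URI, key, and deployment name from your model's Details page.

```python
import json

# Placeholder values -- copy the real ones from the model's Details page.
target_uri = "https://my-resource.services.ai.azure.com"  # hypothetical
deployment = "gpt-4o-mini"
api_key = "YOUR_FOUNDRY_KEY"

# Azure-style chat-completions request shape (URL layout is an assumption;
# check the Details page for the exact endpoint and api-version).
url = f"{target_uri}/openai/deployments/{deployment}/chat/completions?api-version=2024-10-21"
headers = {"api-key": api_key, "Content-Type": "application/json"}
payload = {"messages": [{"role": "user", "content": "ping"}]}
print(url)
print(json.dumps(payload))
```

A mismatch between the deployment name here and the one defined in Foundry produces the connection failure listed under Troubleshooting.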
Add a Model via API Integration
Use API Integration for custom endpoints or self-hosted models.
Note: For Agentic Apps compatibility, custom models must support tool calling and follow OpenAI or Anthropic request/response structures.
Steps
- Go to Models → External Models → Add a model.
- Select Custom Integration → click Next.
- Enter basic configuration:
| Field | Description |
|---|---|
| Connection Name | Unique identifier |
| Model Endpoint URL | Full API endpoint URL |
| Authorization Profile | Select configured auth profile or None |
| Headers | Optional key-value pairs for requests |
- Configure model settings using Default or Existing Provider Structures.
- Click Confirm.
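Putting the fields above together, a request to a custom endpoint might be assembled as below. The endpoint, header names, and model name are hypothetical; the body uses the OpenAI chat shape required for Agentic Apps compatibility.

```python
import json
import urllib.request

# Hypothetical custom endpoint and headers mirroring the fields above.
endpoint = "https://models.example.com/v1/chat/completions"
headers = {
    "Authorization": "Bearer YOUR_TOKEN",  # supplied by the Authorization Profile
    "X-Org-Id": "team-42",                 # example of an optional custom header
    "Content-Type": "application/json",
}
# Body follows the OpenAI chat structure (Anthropic's is the other option).
body = {"model": "my-custom-model", "messages": [{"role": "user", "content": "hi"}]}
req = urllib.request.Request(endpoint, data=json.dumps(body).encode(), headers=headers)
print(req.full_url)
```

The Headers table entries become extra key-value pairs on every request, as `X-Org-Id` does here.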
Model Configuration Modes
When using API Integration or advanced Bedrock setup, choose one of these configuration modes:
Default Mode
Manually configure request/response handling for complete control.
1. Define Variables
| Variable Type | Description |
|---|---|
| Prompt | Primary input text (required) |
| System Prompt | System instructions (optional) |
| Examples | Few-shot examples (optional) |
| Custom Variables | Additional dynamic inputs with name, display name, and data type |
2. Build the Request Body
Reference the variables defined above in the request body using {{variable}} placeholders.
3. Map Response Fields
| Field | Description | Example |
|---|---|---|
| Output Path | Location of generated text | choices[0].message.content |
| Input Tokens | Input token count | usage.prompt_tokens |
| Output Tokens | Output token count | usage.completion_tokens |
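The two mapping steps can be sketched as follows: substituting a {{variable}} placeholder into a request-body template, then resolving a path like `choices[0].message.content` against a sample response. The resolver below is an illustrative helper, not the platform's implementation.

```python
import re

# Request-body variables are referenced with {{variable}} placeholders:
body_template = '{"messages":[{"role":"user","content":"{{Prompt}}"}]}'
rendered = body_template.replace("{{Prompt}}", "Summarize this article")

def resolve(path: str, payload: dict):
    """Resolve a dotted/indexed path like 'choices[0].message.content'."""
    value = payload
    for key, idx in re.findall(r"(\w+)(?:\[(\d+)\])?", path):
        value = value[key]
        if idx:
            value = value[int(idx)]
    return value

# Sample OpenAI-style response to map against the table above.
response = {
    "choices": [{"message": {"content": "Hello!"}}],
    "usage": {"prompt_tokens": 12, "completion_tokens": 3},
}
print(resolve("choices[0].message.content", response))  # Hello!
print(resolve("usage.prompt_tokens", response))         # 12
```

If the Output Path does not match the model's actual response structure, the platform sees an empty response (see Troubleshooting).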
Existing Provider Structures Mode
Automatically apply pre-defined schemas from known providers. Recommended when your model follows a standard API format.
1. Select Provider Template
| Template | Use When |
|---|---|
| OpenAI (Chat Completions) | Model follows OpenAI chat API format |
| Anthropic (Messages) | Model follows Anthropic messages API format |
| Google (Gemini) | Model follows Gemini API format |
2. Enable Supported Features
| Feature | Description |
|---|---|
| Structured Response | JSON-formatted outputs for Prompts and Tools |
| Tool Calling | Function calling for Agentic Apps and AI nodes |
| Parallel Tool Calling | Multiple tool calls per request |
| Streaming | Real-time token generation for Agentic Apps |
| Data Generation | Synthetic data generation in Prompt Studio |
| Modalities | Text-to-Text, Text-to-Image, Image-to-Text, Audio-to-Text |
Warning: Enabling unsupported features may cause unexpected behavior.
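To help pick between the OpenAI and Anthropic templates, the shapes below show the main structural difference: OpenAI carries the system prompt as a message, while Anthropic's Messages format uses a top-level `system` field and requires `max_tokens`. These are illustrative shapes only; consult each provider's API reference for full details.

```python
# Illustrative request shapes for the two main provider templates.
openai_style = {
    "model": "my-model",
    "messages": [
        {"role": "system", "content": "You are helpful."},  # system is a message
        {"role": "user", "content": "Hi"},
    ],
}
anthropic_style = {
    "model": "my-model",
    "max_tokens": 1024,            # required by the Messages API
    "system": "You are helpful.",  # system prompt is a top-level field
    "messages": [{"role": "user", "content": "Hi"}],
}
```

Pick whichever template matches the structure your model actually emits; mixing them is a common source of empty responses.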
Troubleshooting
| Issue | Solution |
|---|---|
| Test fails | Verify endpoint URL and authentication |
| Empty response | Check JSON path mapping matches response structure |
| Model not in dropdowns | Ensure Inference toggle is ON |
| Tool calling not working | Verify model supports it and feature is enabled |
| Bedrock connection fails | Check IAM role ARN and trust policy configuration |
| Vertex AI auth error | Ensure API key is valid and not an OAuth token; check that Vertex AI API is enabled for your project |
| Microsoft Foundry connection fails | Verify Target URI, API key, and deployment name; for Service Principal, confirm Tenant ID, Client ID, and Client Secret are correct |