Prompts and LLM Configuration
Use `LlmModel` and `Prompt` to control which model your agent uses, how it generates responses, and what instructions it follows.
Prerequisites
- AgenticAI Core SDK installed and configured.
- A valid connection configured for your LLM provider (OpenAI, Anthropic, or Azure OpenAI).
Configure the LLM model
Basic configuration
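The SDK's exact configuration API is not reproduced here. As an illustrative sketch, assuming an `LlmModelConfig`-style object with the parameters described later in this article, a basic setup might look like:

```python
from dataclasses import dataclass

# Hypothetical stand-in for the SDK's LlmModelConfig; the field names
# follow the parameters documented in this article, not a confirmed schema.
@dataclass
class LlmModelConfig:
    provider: str
    model: str
    temperature: float = 0.7
    max_tokens: int = 1000
    top_p: float = 1.0

# A balanced, general-purpose configuration.
config = LlmModelConfig(provider="openai", model="gpt-4o", temperature=0.5)
```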
Builder pattern
Use `LlmModelBuilder` and `LlmModelConfigBuilder` for a fluent configuration style:
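The fluent pattern can be sketched as follows; the method names below are illustrative, not the SDK's actual `LlmModelConfigBuilder` API. Each setter returns `self`, which is what enables chaining:

```python
# Hypothetical sketch of a fluent builder in the spirit of
# LlmModelConfigBuilder; method names are assumptions for illustration.
class LlmModelConfigBuilder:
    def __init__(self):
        self._config = {}

    def provider(self, name):
        self._config["provider"] = name
        return self  # returning self enables method chaining

    def model(self, name):
        self._config["model"] = name
        return self

    def temperature(self, value):
        self._config["temperature"] = value
        return self

    def build(self):
        return dict(self._config)

config = (
    LlmModelConfigBuilder()
    .provider("anthropic")
    .model("claude-sonnet")
    .temperature(0.3)
    .build()
)
```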
Supported providers
OpenAI
Anthropic (Claude)
Azure OpenAI
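Each provider needs its own connection details. The key names below are assumptions for illustration, not the SDK's actual connection schema; Azure OpenAI additionally requires an endpoint and a deployment name, and secrets should come from the environment rather than source code:

```python
import os

# Illustrative per-provider connection settings; key names and the
# deployment name are hypothetical, not a confirmed SDK schema.
connections = {
    "openai": {
        "api_key": os.environ.get("OPENAI_API_KEY", ""),
    },
    "anthropic": {
        "api_key": os.environ.get("ANTHROPIC_API_KEY", ""),
    },
    "azure_openai": {
        "api_key": os.environ.get("AZURE_OPENAI_API_KEY", ""),
        "endpoint": os.environ.get("AZURE_OPENAI_ENDPOINT", ""),
        "deployment": "my-gpt4o-deployment",  # hypothetical deployment name
    },
}
```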
LLM parameters
Temperature (0.0–2.0)
Controls output randomness. Lower values produce more predictable responses; higher values produce more varied ones.

| Range | Behavior | Use for |
|---|---|---|
| 0.0–0.3 | Deterministic, focused | Factual queries, data extraction |
| 0.4–0.7 | Balanced | General-purpose agents |
| 0.8–1.5 | Creative, diverse | Brainstorming, content generation |
| 1.6–2.0 | Highly random | Experimental use cases |
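The ranges above can be captured as presets. The helper below is purely illustrative (the SDK does not define it); the task categories are taken from the table:

```python
# Map task categories from the temperature table to representative values.
TEMPERATURE_PRESETS = {
    "data_extraction": 0.1,   # deterministic, focused (0.0-0.3)
    "general": 0.5,           # balanced (0.4-0.7)
    "brainstorming": 1.0,     # creative, diverse (0.8-1.5)
}

def temperature_for(task: str) -> float:
    """Return a preset temperature, defaulting to the balanced range."""
    return TEMPERATURE_PRESETS.get(task, 0.5)
```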
Max tokens
Sets the maximum number of tokens the model generates per response.

| Response type | Recommended range |
|---|---|
| Short answers | 500–1000 |
| Detailed responses | 1000–2000 |
| Long-form content | 2000–4000 |
Top P (0.0–1.0)
Nucleus sampling parameter that controls the token pool the model samples from.
- 0.1–0.5: Focused, less diverse sampling.
- 0.6–0.9: Balanced diversity.
- 0.95–1.0: Maximum diversity.
Penalties (−2.0 to 2.0)
Reduce repetition in responses:
- `frequency_penalty`: Penalizes tokens that appear frequently in the output.
- `presence_penalty`: Encourages the model to introduce new topics.
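Putting the sampling parameters together, a hedged sketch for a repetition-sensitive, long-form task (the values are examples, not recommendations, and the dict is not a confirmed SDK structure):

```python
# Illustrative parameter set for long-form generation where repetition
# is a concern; names follow the parameters described in this article.
generation_params = {
    "temperature": 0.8,        # creative range from the temperature table
    "max_tokens": 2000,        # long-form content
    "top_p": 0.9,              # balanced sampling diversity
    "frequency_penalty": 0.5,  # discourage frequently repeated tokens
    "presence_penalty": 0.3,   # nudge the model toward new topics
}

# Penalty values must stay within the documented -2.0 to 2.0 range.
assert all(-2.0 <= generation_params[k] <= 2.0
           for k in ("frequency_penalty", "presence_penalty"))
```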
Configure prompts
System prompt
Sets the base role for the agent.
Custom prompt
Provides detailed instructions and context beyond the system role.
Instructions
Pass structured rules as a list. Use instructions for compliance, tone, and handling guidelines, especially in sensitive domains.
Template variables
Prompts support runtime variable substitution using `{{variable}}` syntax:
| Variable | Description |
|---|---|
| `{{app_name}}` | Application name. |
| `{{app_description}}` | Application description. |
| `{{agent_name}}` | Current agent name. |
| `{{memory.store.field}}` | Access memory store data. |
| `{{session_id}}` | Current session identifier. |
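Substitution of this kind can be sketched with a regular expression; the `render` helper below is illustrative, not the SDK's implementation. Unknown placeholders are left intact rather than replaced with empty strings:

```python
import re

def render(template: str, variables: dict) -> str:
    """Replace {{name}} placeholders; unresolved names are kept as-is."""
    def repl(match):
        name = match.group(1).strip()
        return str(variables.get(name, match.group(0)))
    return re.sub(r"\{\{([^}]+)\}\}", repl, template)

prompt = render(
    "You are {{agent_name}}, an assistant for {{app_name}}.",
    {"agent_name": "Support Bot", "app_name": "Acme Desk"},
)
# prompt == "You are Support Bot, an assistant for Acme Desk."
```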
Orchestrator prompts
For supervisor or orchestrator agents, define routing rules in the custom prompt.
Task-specific configurations
Match your `LlmModelConfig` to the nature of the agent's task:
Factual tasks: use a low temperature for consistent, accurate responses.
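As a hedged sketch, a factual-task configuration might look like the following; the names and values are illustrative, not SDK-defined defaults:

```python
# Illustrative low-temperature configuration for factual tasks such as
# data extraction; values are examples, not recommendations.
factual_config = {
    "temperature": 0.2,  # deterministic, focused (0.0-0.3 range)
    "top_p": 0.3,        # narrow the sampling pool
    "max_tokens": 800,   # short, direct answers
}
```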
Optimization tips
Cost
- Use smaller models for simple, repetitive tasks.
- Set `max_tokens` to the minimum needed for the expected response length.
- Set `max_iterations` to limit unnecessary tool calls.
- Configure reasonable timeouts to avoid runaway sessions.
Quality
- Use the latest model versions for your provider.
- Increase `max_tokens` when detailed responses are required.
- Lower the temperature for tasks that require consistency.
- Increase `max_iterations` for complex multi-step workflows.