
Agent Definition

Agents in Distri are defined using Markdown files with TOML frontmatter. This format allows you to specify agent behavior, capabilities, model settings, and tool configurations.

(Figure: Agent Details interface)

Basic Structure

An agent definition file (e.g., agent.md) contains:

  1. TOML frontmatter: Configuration metadata between --- markers
  2. Markdown content: Instructions defining the agent's role and capabilities
---
name = "my_agent"
description = "A helpful assistant"
max_iterations = 3

[tools]
external = ["*"]

[model_settings]
model = "gpt-4.1-mini"
temperature = 0.7
max_tokens = 500
---

# ROLE
You are a helpful assistant that answers questions clearly and concisely.

# CAPABILITIES
- Answer questions about various topics
- Provide explanations and examples
- Help with problem-solving

Core Properties

Required Fields

| Property | Type | Description |
| --- | --- | --- |
| name | string | Unique identifier for the agent. Must be a valid JavaScript identifier (alphanumeric plus underscores; cannot start with a number). |
| description | string | Brief description of the agent's purpose. |
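Since only these two fields are required, the smallest valid frontmatter is just the following (the agent name here is illustrative):

```toml
name = "minimal_agent"
description = "Smallest valid agent definition"
```

All other configuration falls back to its default value.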

Agent Configuration

| Property | Type | Default | Description |
| --- | --- | --- | --- |
| version | string | "0.2.2" | Version of the agent definition. |
| instructions | string | "" | The markdown content below the frontmatter (set automatically). |
| max_iterations | number | None | Maximum number of execution iterations. |
| history_size | number | 5 | Number of previous messages to include in context. |
| icon_url | string | None | URL to an agent icon for A2A discovery. |
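Putting these optional fields together in frontmatter might look like this (all values are illustrative):

```toml
name = "support_agent"
description = "Answers product questions"
version = "1.0.0"
max_iterations = 5   # stop after 5 execution loops
history_size = 10    # keep 10 prior messages in context
icon_url = "https://example.com/icon.png"  # shown during A2A discovery
```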

Model Settings

Configure the LLM used by the agent under [model_settings]:

[model_settings]
model = "gpt-4.1-mini"
temperature = 0.7
max_tokens = 1000
context_size = 20000
top_p = 1.0
frequency_penalty = 0.0
presence_penalty = 0.0

| Property | Type | Default | Description |
| --- | --- | --- | --- |
| model | string | "gpt-4.1-mini" | Model identifier (e.g., gpt-4, claude-3-opus). |
| temperature | float | 0.7 | Sampling temperature (0.0-2.0). Lower = more deterministic. |
| max_tokens | number | 1000 | Maximum tokens in the response. |
| context_size | number | 20000 | Maximum context window size. |
| top_p | float | 1.0 | Nucleus sampling parameter. |
| frequency_penalty | float | 0.0 | Penalty for token frequency (-2.0 to 2.0). |
| presence_penalty | float | 0.0 | Penalty for token presence (-2.0 to 2.0). |

Model Provider

Configure custom model providers under [model_settings.provider]:

[model_settings.provider]
name = "openai"

# Or for custom endpoints:
[model_settings.provider]
name = "openai_compat"
base_url = "https://your-endpoint.com/v1"
api_key = "your-api-key" # Optional if using secrets

Available providers: openai, openai_compat, vllora

Analysis Model Settings

Optionally configure a second, lighter model for lightweight analysis tasks (e.g., summarizing browser observations):

[analysis_model_settings]
model = "gpt-4.1-mini"
temperature = 0.3
max_tokens = 500

Tools Configuration

Configure which tools the agent can use under [tools]:

[tools]
builtin = ["final", "transfer_to_agent"]
external = ["*"]

[[tools.mcp]]
server = "fetch"
include = ["*"]
exclude = ["delete_*"]

[tools.packages]
search = ["search", "search_images"]

| Property | Type | Description |
| --- | --- | --- |
| builtin | string[] | Built-in tools to enable (e.g., final, transfer_to_agent). |
| external | string[] | External tools provided by the client. Use ["*"] for all. |
| packages | object | DAP package tools: { package_name: ["tool1", "tool2"] }. |
| mcp | array | MCP server configurations (see below). |

MCP Tool Configuration

[[tools.mcp]]
server = "fetch" # MCP server name
include = ["fetch_*"] # Glob patterns to include
exclude = ["fetch_secret"] # Glob patterns to exclude

Tool Call Format

Control how the agent formats tool calls:

tool_format = "xml"  # Options: xml, jsonl, code, provider, none

| Format | Description |
| --- | --- |
| xml | Streaming-capable XML format (default). Example: `<search><query>test</query></search>` |
| jsonl | JSONL with tool_calls blocks. |
| code | TypeScript/JavaScript code blocks. |
| provider | Native provider format. |
| none | No tool formatting. |

Sub-Agents

Define agents that this agent can transfer control to:

sub_agents = ["research_agent", "writing_agent", "review_agent"]

When combined with the transfer_to_agent builtin tool, this enables agent handover workflows.
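For example, a coordinator that routes work to those agents might enable both together (agent names are illustrative):

```toml
name = "coordinator_agent"
description = "Routes research and writing tasks"
sub_agents = ["research_agent", "writing_agent", "review_agent"]

[tools]
builtin = ["final", "transfer_to_agent"]  # transfer_to_agent performs the handover
```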


Skills (A2A)

Define A2A-compatible skills for agent discovery:

[[skills]]
id = "search"
name = "Web Search"
description = "Search the web for information"

Advanced Configuration

Filesystem Mode

Control where filesystem operations run:

file_system = "remote"  # Options: remote, local

| Mode | Description |
| --- | --- |
| remote | Run filesystem/artifact tools on the server (default). |
| local | Handle via external tool callbacks (client-side). |

Prompt Configuration

append_default_instructions = true  # Include default system prompts
include_scratchpad = true # Include persistent scratchpad in prompts
context_size = 32000 # Override model context size for this agent

Feature Flags

enable_reflection = false           # Enable reflection subagent
enable_todos = false # Enable TODO management
write_large_tool_responses_to_fs = false # Write large outputs as artifacts

Browser Configuration

Enable browser automation for the agent:

[browser_config]
enabled = true
headless = true
persist_session = false

Custom Partials

Define Handlebars partials for custom prompts:

[partials]
custom_header = "prompts/header.hbs"
custom_footer = "prompts/footer.hbs"

Hooks

Attach named hooks to the agent:

hooks = ["log_events", "validate_output"]

User Message Overrides

Customize how user messages are constructed:

[user_message_overrides]
include_artifacts = true
include_step_count = true

[[user_message_overrides.parts]]
type = "template"
source = "context_prompt"

[[user_message_overrides.parts]]
type = "session_key"
source = "current_observation"

Registering Agents

Push your agent to Distri using the CLI:

# Login to Distri Cloud
distri login

# Push agents from current directory
distri push

# Or run locally
distri serve --port 8080

Agents are automatically discovered from the agents/ directory or any .md files with valid agent frontmatter.
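A project pushed with distri push might therefore be laid out like this (file names are illustrative):

```
agents/
  research_agent.md    # TOML frontmatter + instructions
  writing_agent.md
standalone_agent.md    # also discovered: any .md with valid agent frontmatter
```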


API Endpoints

GET /agents

Response:

[
  {
    "id": "maps_agent",
    "name": "maps_agent",
    "description": "Operate Google Maps tools to execute user instructions"
  }
]

Complete Example

---
name = "research_agent"
description = "Research assistant that finds and summarizes information"
version = "1.0.0"
max_iterations = 5
history_size = 10

[tools]
builtin = ["final"]
external = ["*"]

[[tools.mcp]]
server = "fetch"
include = ["*"]

[model_settings]
model = "gpt-4"
temperature = 0.5
max_tokens = 2000
context_size = 32000

[model_settings.provider]
name = "openai"

sub_agents = ["writing_agent"]
enable_reflection = true
---

# ROLE
You are a thorough research assistant. When given a topic, you search for
relevant information, evaluate sources, and provide comprehensive summaries.

# CAPABILITIES
- Search the web for information on a given topic
- Create concise summaries of long-form content
- Properly attribute information to sources
- Hand off to writing_agent for final document creation

# TASK
Research the following topic and provide a detailed summary with citations:
{{task}}

Best Practices

  1. Clear Role Definition: Be specific about the agent's personality and behavior in the instructions.
  2. Tool Mapping: List capabilities that correspond to available tools so the LLM understands what actions it can take.
  3. Iteration Limits: Set max_iterations based on task complexity to prevent runaway loops.
  4. Model Selection: Use faster models (gpt-4.1-mini) for simple tasks, more capable models (gpt-4, claude-3-opus) for complex reasoning.
  5. Temperature Tuning: Lower (0.3-0.5) for factual tasks, higher (0.7-0.9) for creative tasks.
  6. Context Management: Set appropriate history_size and context_size based on your use case.