Agent Configuration
Agents are the core of Arkenos. Each agent defines a voice persona with its own system prompt, voice, STT provider, functions, and webhooks.

Agent Modes
Arkenos supports two agent modes:

Standard Agents
Dashboard-configured agents that use the shared voice engine. No code is required; configure everything through the UI:

- System prompt and greeting message
- STT provider selection
- Voice selection (Resemble AI)
- Function calling definitions
- Pre/post-call webhooks
Custom Agents (Preview)
Write your own Python agent code for full control over the voice pipeline. Custom agents:

- Have a built-in code editor in the dashboard
- Include an AI coding assistant (powered by Gemini) to help write and edit code
- Are built into Docker containers and deployed automatically
- Track file versions for rollback
- Can use any libraries or custom logic beyond what standard agents offer
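As an illustrative sketch of the kind of control a custom agent gains, the class below intercepts transcripts and applies arbitrary logic before replying. The class and method names are hypothetical; the actual custom-agent SDK interface is not documented in this section.

```python
# Hypothetical sketch: the real Arkenos custom-agent SDK is not shown here.
# Illustrates custom logic in the transcript-to-reply step of the pipeline.

class CustomAgent:
    """Minimal custom agent: receives transcripts, returns replies."""

    def __init__(self, system_prompt: str):
        self.system_prompt = system_prompt
        self.history: list[str] = []  # keep a transcript log for custom use

    def on_transcript(self, text: str) -> str:
        # Full control: any library or business logic can run here.
        self.history.append(text)
        if "hours" in text.lower():
            return "We are open 9am to 5pm, Monday through Friday."
        return "Could you tell me a bit more about that?"


agent = CustomAgent(system_prompt="You are a helpful receptionist.")
print(agent.on_transcript("What are your hours?"))
```

A standard agent could not express the keyword check above without a function-calling round trip; in a custom agent it is plain Python.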
Creating an Agent
Navigate to Dashboard → Agents → Create Agent or use the API.

Configuration Options
| Field | Type | Description |
|---|---|---|
| system_prompt | String | Instructions for the LLM defining agent behavior |
| first_message | String | Greeting the agent speaks when the call starts |
| stt_provider | String | assemblyai, elevenlabs, or deepgram |
| voice_id | String | Resemble AI voice UUID |
| functions | Array | Function calling definitions (see Function Calling) |
| webhooks.pre_call | String | URL called before the voice session starts |
| webhooks.post_call | String | URL called after the session ends |
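Using the fields above, an agent configuration payload might look like the following. The values are placeholders, and the exact endpoint and payload shape should be confirmed against the API reference:

```json
{
  "system_prompt": "You are a friendly receptionist for Acme Dental. Keep answers brief.",
  "first_message": "Hi, thanks for calling Acme Dental. How can I help you today?",
  "stt_provider": "assemblyai",
  "voice_id": "00000000-0000-0000-0000-000000000000",
  "functions": [],
  "webhooks": {
    "pre_call": "https://example.com/hooks/pre-call",
    "post_call": "https://example.com/hooks/post-call"
  }
}
```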
STT Provider Selection
Each agent can use a different speech-to-text provider:

| Provider | Best For |
|---|---|
| AssemblyAI (default) | High accuracy, general purpose |
| ElevenLabs | Low latency streaming |
| Deepgram | Real-time streaming, enterprise |
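Because stt_provider is a plain string in the configuration, a quick client-side check against the three documented values can catch typos before a request is sent. A minimal sketch (the constant and function names are our own):

```python
# The three stt_provider values documented above; names here are illustrative.
VALID_STT_PROVIDERS = {"assemblyai", "elevenlabs", "deepgram"}


def validate_stt_provider(provider: str) -> str:
    """Return the provider string if valid, else raise a clear error."""
    if provider not in VALID_STT_PROVIDERS:
        raise ValueError(
            f"Unknown STT provider {provider!r}; "
            f"expected one of {sorted(VALID_STT_PROVIDERS)}"
        )
    return provider


print(validate_stt_provider("deepgram"))
```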