Agents in Dezifi
An Agent is your unit of intelligence: one LLM, the Tools it can call, the Skills and knowledge it draws on, and the Guardrails that constrain it. Build one in the 9-step builder, test it in chat, and ship it as an API or a Slack bot.
What you'll learn
- What an Agent is in Dezifi and how it differs from a Workflow
- The 9-step builder at a glance
- When to reach for an Agent vs. a Workflow
- How Agents tie into Tools, Skills, RAG, Memory, Voice and Guardrails
Agent vs. Workflow
An Agent is a single intelligent worker — one model, one Tool belt, one Guardrail profile. A Workflow orchestrates one or more Agents with branching, loops and human approval. Start with an Agent. Promote to a Workflow when you need explicit control flow.
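The distinction can be sketched in code. Everything below is illustrative pseudocode, not the Dezifi SDK: the class names, fields, and `run` method are assumptions made to show the shape of the two concepts.

```python
from dataclasses import dataclass, field

@dataclass
class Agent:
    # One model, one Tool belt, one Guardrail profile.
    name: str
    llm: str
    tools: list = field(default_factory=list)
    guardrails: str = "none"

    def run(self, prompt: str) -> str:
        # Stand-in for a single model call constrained by the guardrail profile.
        return f"[{self.llm}] response to: {prompt}"

@dataclass
class Workflow:
    # Orchestrates one or more Agents with explicit control flow.
    agents: list

    def run(self, prompt: str) -> list:
        # Simplest possible control flow: sequential handoff, each Agent's
        # output becoming the next Agent's input. Real Workflows add
        # branching, loops, and human approval on top of this.
        outputs = []
        for agent in self.agents:
            prompt = agent.run(prompt)
            outputs.append(prompt)
        return outputs
```

The point of the sketch: an Agent answers one prompt with one model, while a Workflow only adds coordination. That is why "start with an Agent, promote to a Workflow" works as an upgrade path rather than a rewrite.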
The 9-step builder
Every Agent is configured the same way. Each step is independent — you can revisit any of them.
1. Basic Info: Name, description, and category (DevOps, Security, Quality, Support, Productivity, Analytics, or Custom).
2. LLM Selection: Pick the model from OpenAI, Anthropic, Google, AWS Bedrock, Azure OpenAI, or a locally hosted model.
3. Tool Selection: Choose which integrations the Agent can call at runtime.
4. Skill Selection: Attach reusable prompts and instruction sets from the Skill library.
5. RAG Configuration: Bind one or more Knowledge Bases for retrieval-augmented responses.
6. Memory Configuration: Pick short-term, long-term, or hybrid memory depending on whether the Agent should remember across Sessions.
7. Voice Configuration: Optionally enable speech-in / speech-out with a configurable STT and TTS provider.
8. Guardrails Configuration: Attach a Guardrail profile to enforce safety, PII, and policy checks on every Run.
9. Review and Publish: Confirm the configuration. Publishing creates a versioned Agent that can be tested, exposed as an API, or wired into a Workflow.
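Put together, the nine steps describe one configuration object per Agent. The shape below is a hypothetical sketch of what such a configuration might look like; the key names, provider and model strings, and every value are illustrative, not Dezifi's actual export format.

```python
# Hypothetical Agent configuration mirroring the 9-step builder.
agent_config = {
    "basic_info": {"name": "support-triage", "category": "Support"},  # step 1
    "llm": {"provider": "anthropic", "model": "example-model"},       # step 2
    "tools": ["ticketing", "slack"],          # step 3: integrations callable at runtime
    "skills": ["triage-playbook"],            # step 4: reusable prompts from the Skill library
    "rag": {"knowledge_bases": ["support-docs"]},  # step 5
    "memory": {"mode": "hybrid"},             # step 6: short-term, long-term, or hybrid
    "voice": None,                            # step 7: optional STT/TTS, disabled here
    "guardrails": {"profile": "pii-strict"},  # step 8: checked on every Run
    "version": 1,                             # step 9: each Publish creates a new version
}
```

Only `basic_info`, `llm`, and the publish step are required; the doc notes that Tools, Skills, RAG, Memory, Voice, and Guardrails can all be left empty and added later.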
When to use an Agent
Reach for a single Agent when the job is conversational, open-ended, or fits a single question/response shape — support triage, code review, document Q&A, internal analytics chat. Reach for a Workflow when the job has fixed steps, multi-Agent handoff, scheduled triggers, or human approvals.
Frequently asked questions
- Can one Agent use multiple LLMs?
- Each Agent runs on a single primary LLM, but you can route Tools through their own models and chain Agents in a Workflow that mixes providers.
- Do I have to pick every step in the builder?
- Only Basic Info, LLM Selection, and Review are required. Tools, Skills, RAG, Memory, Voice and Guardrails are all optional — leave them empty and add them later.
- How are Agents versioned?
- Each Publish action creates a new version. Past versions remain available — you can roll back, compare in Eval, or run A/B traffic splits.
- Where do Agents run?
- Inside your Workspace. Each tenant is isolated. On-premise and private-cloud deployments execute Agents inside your own infrastructure.
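Since publishing creates versioned Agents exposed as an API inside a Workspace, a caller would typically address a specific Agent version when running it. The endpoint path, host, payload fields, and headers below are all assumptions for illustration; Dezifi's real API may differ.

```python
import json
import urllib.request

def build_run_request(workspace: str, agent: str, version: int,
                      message: str, api_key: str) -> urllib.request.Request:
    # Hypothetical endpoint: POST /workspaces/{ws}/agents/{agent}/versions/{v}/runs
    # Pinning the version in the path lets you keep traffic on a known-good
    # version while testing or A/B-splitting a newer one.
    url = (f"https://api.example.com/workspaces/{workspace}"
           f"/agents/{agent}/versions/{version}/runs")
    payload = {"input": message}
    return urllib.request.Request(
        url,
        data=json.dumps(payload).encode(),
        headers={"Authorization": f"Bearer {api_key}",
                 "Content-Type": "application/json"},
        method="POST",
    )
```

A caller would pass the returned request to `urllib.request.urlopen` and parse the JSON response; rolling back is then just pointing the `version` path segment at an earlier Publish.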