LLM models
Dezifi is provider-neutral. You configure providers once at the Workspace level, then pick a specific model per Agent. Match model class to the job, not the other way around.
What you'll learn
- Which providers Dezifi supports
- How to add provider credentials in Settings
- How to choose between fast/cheap and powerful/slow models
- When to route specific Tools through a different model
Supported providers
OpenAI, Anthropic, Google (Gemini), AWS Bedrock, Azure OpenAI, and self-hosted models via an OpenAI-compatible endpoint (vLLM, Ollama, LM Studio). Multiple providers can live in the same Workspace.
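"OpenAI-compatible" means the server accepts the same `/v1/chat/completions` request shape as OpenAI. As a rough sketch (the model name, port, and function are illustrative, not part of Dezifi), any self-hosted server that accepts this payload can be registered as a provider:

```python
import json

def chat_request(model: str, user_message: str, base_url: str) -> tuple:
    """Build the URL and JSON body for an OpenAI-compatible
    /v1/chat/completions call. Servers like vLLM, Ollama, and
    LM Studio accept this same request shape."""
    url = f"{base_url.rstrip('/')}/v1/chat/completions"
    body = {
        "model": model,  # the model name as the local server knows it
        "messages": [{"role": "user", "content": user_message}],
    }
    return url, json.dumps(body).encode()

# Hypothetical local endpoint and model name:
url, body = chat_request("llama-3.1-8b-instruct", "Hello", "http://localhost:8000")
```

If a server answers this request, Dezifi can treat it like any hosted provider.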
Configure a provider
1. Open Settings → LLM Providers
Workspace admins see a list of providers and a status pill for each.
2. Add credentials
Paste the API key, or the IAM role ARN for Bedrock. Keys are encrypted at rest and never shown in plain text after saving.
3. Pick which models are exposed
Toggle individual models on or off so builders only see the ones you have approved.
4. Set a default
Mark one model as the Workspace default. New Agents start with this selection in step 2 of the builder.
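The four steps above boil down to a small amount of state per Workspace. A minimal sketch of that state, assuming hypothetical names (`Provider`, `Workspace`, `credential_ref` are illustrations, not Dezifi's actual schema):

```python
from dataclasses import dataclass, field

@dataclass
class Provider:
    name: str
    credential_ref: str                  # reference to the encrypted key, never the key itself
    exposed_models: set = field(default_factory=set)  # only toggled-on models

@dataclass
class Workspace:
    providers: dict = field(default_factory=dict)
    default_model: str = None            # what new Agents start with

    def visible_models(self) -> set:
        # Builders only ever see models an admin has approved
        return {m for p in self.providers.values() for m in p.exposed_models}

ws = Workspace()
ws.providers["openai"] = Provider("openai", "secret://openai-key", {"gpt-4o", "gpt-4o-mini"})
ws.default_model = "gpt-4o-mini"
```

Note that the config holds a reference to the stored credential rather than the key itself, matching the "encrypted at rest, never shown after saving" behavior.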
Choosing a model
A simple decision rubric: pick the option that matches the job.
1. High-volume classification or routing
Use a small, fast model such as GPT-4o-mini, Claude Haiku, or Gemini Flash. Sub-second latency, pennies per Run.
2. Customer-facing chat with grounding
Use a mid-tier model such as GPT-4o, Claude Sonnet, or Gemini Pro. Good reasoning, predictable cost.
3. Multi-step reasoning or code generation
Use a frontier model such as GPT-4.1, Claude Opus, or Gemini Ultra. Slower and more expensive, but materially better at long-horizon tasks.
4. Air-gapped or data-residency requirements
Use a self-hosted model behind your OpenAI-compatible endpoint. Bedrock and Azure OpenAI also satisfy most regional requirements.
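The rubric is mechanical enough to express as code. A sketch, with hypothetical job labels and tier names (these are not Dezifi settings, just the decision table above made explicit):

```python
# Each job type maps to a model tier; examples follow the rubric above.
TIERS = {
    "classification": "small",    # high-volume routing: GPT-4o-mini, Haiku, Flash
    "grounded_chat": "mid",       # customer-facing chat: GPT-4o, Sonnet, Gemini Pro
    "reasoning": "frontier",      # multi-step reasoning/code: GPT-4.1, Opus, Ultra
}

def pick_tier(job: str, air_gapped: bool = False) -> str:
    """Return a model tier for a job type."""
    if air_gapped:
        # Data-residency requirements override the job-based choice
        return "self-hosted"
    # Default to mid-tier when the job doesn't fit a known category
    return TIERS.get(job, "mid")
```

For example, `pick_tier("classification")` yields the small tier, while `pick_tier("reasoning", air_gapped=True)` yields self-hosted regardless of the job.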
Frequently asked questions
- Can different Agents use different providers in the same Workspace?
- Yes. Each Agent picks its own model in step 2 of the builder. You can mix OpenAI, Anthropic, Bedrock, and local models in the same Workspace.
- How is cost tracked per provider?
- Every Run records token usage and a dollar amount based on the provider rate card. Analytics rolls cost up by Agent, Workflow, model, and Workspace.
- Can I fail over to a backup model?
- Yes. The model setting on an Agent accepts a primary plus optional fallbacks. Dezifi retries on the next provider if the primary returns an error or times out.
- Do you support fine-tuned models?
- Yes. Point the Agent at a fine-tuned model ID for OpenAI, Bedrock, Azure, or your self-hosted endpoint. The builder treats it like any other model.
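The primary-plus-fallbacks behavior described above amounts to trying each configured model in order. A minimal sketch, where `call` stands in for the actual provider request (the function and its error handling are illustrative, not Dezifi internals):

```python
def run_with_fallback(prompt, models, call):
    """Try each model in order; `call(model, prompt)` is a stand-in
    for the provider request and raises on error or timeout."""
    last_err = None
    for model in models:
        try:
            return model, call(model, prompt)
        except Exception as err:  # real code would catch provider-specific errors
            last_err = err        # remember the failure and move to the next model
    raise RuntimeError("all configured models failed") from last_err
```

If the primary model errors or times out, the next model in the list handles the Run; the error only surfaces when every configured model has failed.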