LLM configurations

Each workspace configures its own LLM providers. Bring your own keys, test the connection, and choose a default model. Individual agents can override the workspace default.

What you'll learn
  • Which providers Dezifi supports
  • How to add a provider with your API key
  • How to set a workspace default model
  • How to test the connection before going live

Supported providers

OpenAI, Anthropic, Google, AWS Bedrock, Azure OpenAI, and locally hosted models (Ollama, vLLM, any OpenAI-compatible endpoint). Each provider is configured independently so you can run a mix — for example, Claude for reasoning and a local model for classification.
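
If you plan to point Dezifi at a locally hosted model, the base URL you will need is whatever your inference server exposes as its OpenAI-compatible endpoint. A minimal sketch of smoke-testing that endpoint before saving it, assuming Ollama's default address and the openai Python SDK; the model name "llama3.1" is only an example of a locally pulled model:

    from openai import OpenAI

    # Ollama serves an OpenAI-compatible API at /v1 on port 11434 by default.
    client = OpenAI(
        base_url="http://localhost:11434/v1",
        api_key="ollama",  # Ollama ignores the key, but the SDK requires a value
    )

    # Any model your server has pulled will do for a smoke test.
    resp = client.chat.completions.create(
        model="llama3.1",
        messages=[{"role": "user", "content": "ping"}],
        max_tokens=1,
    )
    print(resp.choices[0].message.content)

If this call succeeds, the same base URL (here http://localhost:11434/v1) is what you enter in Dezifi.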

Add a provider

  1. Open LLM configurations
     Settings → LLM configurations → + Add provider. Pick the provider from the list.
  2. Enter credentials
     Paste the API key. For Bedrock and Azure, also enter the region and deployment name. For local models, enter the base URL of your inference server.
  3. Test the connection
     Click Test. Dezifi sends a one-token probe to confirm the credentials work; a sketch of such a probe follows this list. Failures show the exact error returned by the provider.
  4. Pick a default model
     Toggle Default for the model you want new agents to start with. Each agent can override this in the builder.
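
For reference, a one-token probe is just the cheapest possible completion request. A sketch of the idea in Python, assuming the openai SDK and "gpt-4o-mini" as a stand-in for any model the key can access; this illustrates what such a probe looks like, not Dezifi's actual implementation:

    from openai import OpenAI

    client = OpenAI(api_key="sk-...")  # the key being tested

    try:
        # max_tokens=1 caps the response at a single generated token,
        # so a successful test costs effectively nothing.
        client.chat.completions.create(
            model="gpt-4o-mini",
            messages=[{"role": "user", "content": "ping"}],
            max_tokens=1,
        )
        print("credentials OK")
    except Exception as err:
        # The provider's error (bad key, wrong region, missing model)
        # comes back verbatim, which is what Dezifi surfaces on failure.
        print("probe failed:", err)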

Manage existing providers

  1. Update credentials
     Open the provider row and click Edit. Rotate the key by pasting a new value. In-flight runs continue with the old key until they complete; new runs pick up the new one (see the sketch after this list).
  2. Disable a provider
     Toggle the provider off to stop new runs from using it. Useful during a key rotation or when retiring a model family.
  3. Remove a provider
     Delete from the row menu. Agents pinned to a model under this provider must be re-pointed before their next run.
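
The rotation behavior in step 1 implies a snapshot-at-start pattern: each run copies the credentials once when it begins, so a later rotation never changes what an in-flight run uses. A minimal sketch of that pattern; ProviderConfig and Run are hypothetical names for illustration, not Dezifi internals:

    import threading

    class ProviderConfig:
        """Mutable provider record guarded by a lock."""
        def __init__(self, api_key: str):
            self._lock = threading.Lock()
            self._api_key = api_key

        def rotate(self, new_key: str) -> None:
            with self._lock:
                self._api_key = new_key

        def snapshot(self) -> str:
            with self._lock:
                return self._api_key

    class Run:
        """Pins the key it saw at start; rotations mid-run are invisible."""
        def __init__(self, config: ProviderConfig):
            self.api_key = config.snapshot()  # copied once, never re-read

    cfg = ProviderConfig("old-key")
    in_flight = Run(cfg)                    # starts before the rotation
    cfg.rotate("new-key")
    assert in_flight.api_key == "old-key"   # finishes with the old key
    assert Run(cfg).api_key == "new-key"    # new runs get the new key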

Frequently asked questions

Where are LLM API keys stored?
Encrypted at rest and scoped to the workspace. After creation the full key is never shown in the UI; only the last four characters remain visible for identification.
Can I use a different model per agent?
Yes. The workspace default applies to new agents. Each agent has a model picker in the builder that overrides the default.
Does Dezifi proxy LLM calls or route directly?
Direct from Dezifi to your chosen provider, with your key. Dezifi captures the call for tracing and guardrail enforcement but does not insert an intermediate model.
What if my test connection fails?
The provider's exact error is surfaced. Most failures are wrong key, region, or deployment name. Fix and re-test before saving.
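
One quick way to isolate a bad key from a bad configuration is to hit the provider directly with the same key, outside Dezifi. A sketch for OpenAI, assuming the openai Python SDK; other providers have equivalent list or describe calls:

    from openai import OpenAI

    client = OpenAI(api_key="sk-...")  # the same key you pasted into Dezifi

    try:
        models = client.models.list()
        print("key OK; first model:", models.data[0].id)
    except Exception as err:
        # An authentication error here means the key itself is wrong;
        # if this succeeds, check the region or deployment name instead.
        print("provider rejected the request:", err)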