Models
The Models page is where operators manage the platform-side model catalog.
Agirunner does not treat model selection as hidden prompt state. The platform keeps provider connectivity, discovered models, defaults, and role-level assignment policy in one place so runtime task claims can consume an explicit contract.
What Lives Here
- provider connectivity and credentials
- model discovery and refresh
- the shared system default route
- orchestrator and specialist overrides
Currently Supported Providers
The current public stack supports these model-provider integrations:
- Anthropic
- OpenAI
- OpenRouter
- Ollama
- vLLM
That mix covers both hosted APIs and self-hosted or local model gateways. The platform owns the provider records and model catalog; the runtime receives the resolved provider, model, and credential contract only when a task is claimed.
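As an illustration, the resolved contract a runtime receives at claim time could be modeled as a small record. This is a sketch only; the field names (`provider`, `model`, `credential_ref`, `reasoning_effort`) are hypothetical and not Agirunner's actual schema:

```python
from dataclasses import dataclass
from typing import Optional


@dataclass(frozen=True)
class ResolvedModelContract:
    """Hypothetical shape of the contract handed to a runtime at task claim."""
    provider: str                          # e.g. "openai", "anthropic", "ollama"
    model: str                             # concrete model identifier
    credential_ref: str                    # opaque handle to a platform-held credential
    reasoning_effort: Optional[str] = None # e.g. "low" or "medium", where supported


# The runtime never chooses these values itself; the control plane
# resolves them before the task is claimed.
contract = ResolvedModelContract(
    provider="openai",
    model="gpt-5.4",
    credential_ref="cred-123",
    reasoning_effort="medium",
)
```

The point of the frozen dataclass is that the contract is immutable once issued: the runtime consumes it, it does not renegotiate it.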
Recommendation
Agirunner supports multiple providers and model classes, but supported is not the same thing as recommended.
For orchestrator and specialist roles in normal production use, we currently recommend gpt-5.4 with at least low reasoning, and usually medium, as the best default starting point.
OpenAI can be configured here with API keys or a subscription-backed sign-in.
Lower-capability or non-reasoning models can still work for narrow cases, but they can degrade planning, recovery, handoffs, and workflow completion quality. They often look cheaper in isolation while making the overall workflow more brittle.
Why It Matters
This page is where you decide how the control plane should resolve model use before any task reaches the runtime. It is the difference between “the model choice was implied” and “the model choice was explicit, reviewable, and operator-controlled.”
How It Connects To The Rest Of The System
Model routing only matters because other surfaces depend on it:
- Specialists, Skills, And Models determines which specialist or orchestrator role should inherit a model route
- Orchestrator uses these defaults and overrides when it plans or supervises work
- Workflows and Workflow Detail are where operators see the behavioral consequences of those routing choices
- Runtime is where the claimed task finally uses the resolved model contract during execution
This page answers “which models are available and how should they be routed?”, while the specialist and workflow surfaces answer “who gets which route?” and “how did that route perform on real work?”
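The precedence implied above, a role-level override wins, otherwise the shared system default applies, can be sketched as a simple lookup. The names and route values here are illustrative, not a platform API:

```python
# Hypothetical routing tables: the shared default plus per-role overrides.
SYSTEM_DEFAULT = {"provider": "openai", "model": "gpt-5.4"}

ROLE_OVERRIDES = {
    # Only roles with an explicit override appear here.
    "orchestrator": {"provider": "openai", "model": "gpt-5.4-high-reasoning"},
}


def resolve_route(role: str) -> dict:
    """Return the role's override if one exists, else the shared default."""
    return ROLE_OVERRIDES.get(role, SYSTEM_DEFAULT)


# A role without an override inherits the shared default route.
assert resolve_route("research-specialist") == SYSTEM_DEFAULT
```

Keeping the fallback in one function means every surface that asks “who gets which route?” sees the same answer.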
Operator Use
Use this page when you need to:
- connect or repair a provider
- refresh the catalog after provider changes
- inspect discovered models that still need complete provider metadata
- change the shared default route
- give the orchestrator or a specialist role a different model or reasoning posture
One important guardrail lives here: discovery may surface provider models before Agirunner knows their full execution limits. Those rows stay visible so operators can see what the provider exposed, but the platform only allows a model to be enabled once both context_window and max_output_tokens are known.
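That guardrail amounts to a completeness check before enablement. A minimal sketch, assuming a catalog row carries optional context_window and max_output_tokens fields (the `DiscoveredModel` shape is hypothetical):

```python
from dataclasses import dataclass
from typing import Optional


@dataclass
class DiscoveredModel:
    """Hypothetical catalog row for a model surfaced by provider discovery."""
    name: str
    context_window: Optional[int] = None
    max_output_tokens: Optional[int] = None


def can_enable(m: DiscoveredModel) -> bool:
    """A model stays visible but cannot be enabled until both limits are known."""
    return m.context_window is not None and m.max_output_tokens is not None


partial = DiscoveredModel("provider/new-model")                 # limits not yet known
complete = DiscoveredModel("provider/new-model", 128_000, 16_384)
```

The row is never hidden; the check only gates the enable action, which keeps the operator's view of what the provider exposed honest.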