A universal OpenCode plugin for dynamic model discovery across any OpenAI-compatible provider.
Originally inspired by opencode-lmstudio, this project has been fully refactored into a general-purpose model discovery plugin with richer configuration controls for providers, models, naming, caching, and discovery behavior.
- Universal Provider Support: Works with any OpenAI-compatible provider (LM Studio, Ollama, LocalAI, gateways, and more)
- Dynamic Model Discovery: Queries each provider's `/v1/models` endpoint to discover available models
- Auto-Injection: Automatically adds unconfigured models into the OpenCode provider config
- Provider Filtering: Include or exclude specific providers from discovery
- Model Filtering: Use regex rules to precisely control which discovered models are injected
- Configurable Discovery: Control discovery behavior with enable/disable switches and TTL-based caching
- Smart Model Formatting: Optional human-friendly display names for discovered models
- Organization Owner Extraction: Extracts and sets `organizationOwner` from model IDs when available
- Health Check Monitoring: Verifies providers are accessible before attempting discovery
- Model Merging: Intelligently merges discovered models with existing configuration
- Error Handling: Smart error categorization with actionable suggestions
```sh
npm install opencode-models-discovery
# or
bun add opencode-models-discovery
```

Add the plugin to your `opencode.json`:
```json
{
  "$schema": "https://opencode.ai/config.json",
  "plugin": [
    "opencode-models-discovery@latest"
  ],
  "provider": {
    "ollama": {
      "npm": "@ai-sdk/openai-compatible",
      "name": "Ollama (local)",
      "options": {
        "baseURL": "http://127.0.0.1:11434/v1"
      }
    },
    "lmstudio": {
      "npm": "@ai-sdk/openai-compatible",
      "name": "LM Studio (local)",
      "options": {
        "baseURL": "http://127.0.0.1:1234/v1"
      }
    }
  }
}
```

The plugin configuration is placed in the `plugin` array using the tuple format `["plugin-name", { config }]`:
```json
{
  "plugin": [
    ["opencode-models-discovery", {
      "providers": {
        "include": [],
        "exclude": []
      },
      "models": {
        "includeRegex": [],
        "excludeRegex": []
      },
      "discovery": {
        "enabled": true,
        "ttl": 15000
      },
      "smartModelName": false
    }]
  ]
}
```

Set `smartModelName` to `true` if you want discovered models to use human-friendly display names instead of the raw model ID (e.g., "Qwen3 30B A3B" instead of "qwen/qwen3-30b-a3b").
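As a rough illustration of what such "smart" names might look like, here is a hypothetical formatter. This is not the plugin's actual implementation, just a sketch of one way to turn an ID like `qwen/qwen3-30b-a3b` into a display name:

```typescript
// Hypothetical sketch (not the plugin's actual code): derive a
// human-friendly display name from a raw model ID.
function toDisplayName(modelId: string): string {
  const name = modelId.split("/").pop() ?? modelId; // drop the org prefix
  return name
    .split("-")
    .map((token) =>
      /^[a-z]?\d+[a-z]*$/.test(token)
        ? token.toUpperCase() // size/variant tokens: "30b" -> "30B", "a3b" -> "A3B"
        : token.charAt(0).toUpperCase() + token.slice(1) // capitalize plain words
    )
    .join(" ");
}
```

Under this scheme, `toDisplayName("qwen/qwen3-30b-a3b")` yields `"Qwen3 30B A3B"`.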
Control which providers are discovered:
| Option | Type | Description |
|---|---|---|
| `providers.include` | `string[]` | If non-empty, only these providers will be discovered |
| `providers.exclude` | `string[]` | These providers will be skipped (only used when `include` is empty) |
```json
{
  "plugin": [
    ["opencode-models-discovery", {
      "providers": {
        "include": ["ollama"],
        "exclude": ["lmstudio"]
      }
    }]
  ]
}
```

Control which discovered models are auto-injected with regular expressions:
| Option | Type | Description |
|---|---|---|
| `models.includeRegex` | `string[]` | If non-empty, only discovered model IDs matching at least one regex will be added |
| `models.excludeRegex` | `string[]` | Discovered model IDs matching any regex will be skipped (only used when `includeRegex` is empty) |
Regex filtering only applies to auto-discovered models. Models already explicitly configured by the user are preserved.
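The include/exclude precedence described above can be sketched as follows (assumed behavior, not the plugin's exact source):

```typescript
// If includeRegex is non-empty it wins and excludeRegex is ignored;
// otherwise any model matching an exclude pattern is dropped.
function shouldInject(
  modelId: string,
  includeRegex: string[],
  excludeRegex: string[],
): boolean {
  if (includeRegex.length > 0) {
    return includeRegex.some((pattern) => new RegExp(pattern).test(modelId));
  }
  return !excludeRegex.some((pattern) => new RegExp(pattern).test(modelId));
}
```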
```json
{
  "plugin": [
    ["opencode-models-discovery", {
      "models": {
        "includeRegex": ["^qwen/", "gpt-4"],
        "excludeRegex": ["embedding", "test"]
      }
    }]
  ]
}
```

- On OpenCode startup, the plugin's `config` hook is called
- The plugin iterates through all configured providers
- For each provider, it checks whether the baseURL contains `/v1/` (any npm package is supported)
- For each accessible provider, it queries the `/v1/models` endpoint
- Discovered models are automatically merged into the provider's configuration
- The enhanced configuration is used for the current session
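The merge step can be sketched as follows. This is assumed behavior based on the description above (explicitly configured models are preserved, discovered models are injected only when unconfigured), not the plugin's exact source:

```typescript
// Discovered models are injected only when not already configured,
// so user-defined entries always win.
type ModelConfig = Record<string, { name?: string }>;

function mergeModels(existing: ModelConfig, discoveredIds: string[]): ModelConfig {
  const merged: ModelConfig = { ...existing };
  for (const id of discoveredIds) {
    if (!(id in merged)) {
      merged[id] = {}; // inject unconfigured model with default settings
    }
  }
  return merged;
}
```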
The plugin supports any OpenAI-compatible provider. Here are the most common ones:
| Provider | Default Port | Use Case | npm Package |
|---|---|---|---|
| Ollama | 11434 | Local model inference engine | @ai-sdk/openai-compatible |
| LM Studio | 1234 | Local LLM with UI | @ai-sdk/openai-compatible |
| LocalAI | 8080 | Self-hosted AI inference | @ai-sdk/openai-compatible |
| llama.cpp Server | 8080 | Standalone llama.cpp server | @ai-sdk/openai-compatible |
| Text Generation WebUI | 5000 | OpenAI-compatible extension | @ai-sdk/openai-compatible |
| FastChat (Vicuna) | 8001 | Multi-model serving | @ai-sdk/openai-compatible |
| vLLM | 8000 | High-performance inference | @ai-sdk/openai-compatible |
| CLIProxyAPI | 8317 | An LLM proxy server | @ai-sdk/anthropic (with /v1 backend) & @ai-sdk/openai-compatible |
Providers using @ai-sdk/anthropic but backed by OpenAI-compatible servers (like Ollama's Anthropic compatibility mode) are also supported:
```json
{
  "provider": {
    "ollama": {
      "npm": "@ai-sdk/anthropic",
      "name": "Ollama (Anthropic Mode)",
      "options": { "baseURL": "http://127.0.0.1:11434/v1" }
    }
  }
}
```

Cloud services with OpenAI-compatible APIs are also supported:
- Cloudflare Workers AI
- Azure OpenAI Service (with appropriate endpoint configuration)
- Groq (ultra-fast inference)
- Together AI
- Perplexity AI
- Any custom OpenAI-compatible API
The plugin identifies OpenAI-compatible providers using two detection methods:
- Strict Detection: `npm === "@ai-sdk/openai-compatible"`
- URL-based Detection: `baseURL` contains the `/v1/` pattern
A provider is considered discoverable if either condition matches.
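A minimal sketch of this detection logic (illustrative, not the plugin's exact source):

```typescript
// A provider is discoverable if either detection rule matches.
function isDiscoverable(npm?: string, baseURL?: string): boolean {
  if (npm === "@ai-sdk/openai-compatible") return true; // strict detection
  return !!baseURL && /\/v1(\/|$)/.test(baseURL); // URL-based detection
}
```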
```json
{
  "provider": {
    "ollama": {
      "npm": "@ai-sdk/openai-compatible",
      "name": "Ollama",
      "options": { "baseURL": "http://127.0.0.1:11434/v1" }
    }
  }
}
```

```json
{
  "provider": {
    "ollama-anthropic": {
      "npm": "@ai-sdk/anthropic",
      "name": "Ollama (Anthropic Mode)",
      "options": { "baseURL": "http://127.0.0.1:11434/v1" }
    }
  }
}
```

```json
{
  "provider": {
    "lmstudio": {
      "npm": "@ai-sdk/openai-compatible",
      "name": "LM Studio",
      "options": { "baseURL": "http://127.0.0.1:1234/v1" }
    }
  }
}
```

This means providers using `@ai-sdk/anthropic` with OpenAI-compatible backends (like Ollama's Anthropic compatibility mode) are also supported, as long as the baseURL contains `/v1/`.
- OpenCode with plugin support
- At least one OpenAI-compatible provider running locally or remotely
- Provider server API accessible (e.g., `http://127.0.0.1:11434/v1`)
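The health check that precedes discovery can be imagined along these lines. This is a hypothetical sketch; the function names (`modelsEndpoint`, `isHealthy`) and the timeout value are illustrative, not the plugin's API:

```typescript
// Build the /v1/models URL from a provider baseURL, tolerating a trailing slash.
function modelsEndpoint(baseURL: string): string {
  return `${baseURL.replace(/\/+$/, "")}/models`;
}

// Probe the provider before attempting discovery; any network error,
// non-2xx response, or timeout means the provider is skipped.
async function isHealthy(baseURL: string, timeoutMs = 2000): Promise<boolean> {
  try {
    const res = await fetch(modelsEndpoint(baseURL), {
      signal: AbortSignal.timeout(timeoutMs),
    });
    return res.ok;
  } catch {
    return false; // unreachable or timed out
  }
}
```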
When available, the plugin writes logs through OpenCode's structured server log API via `client.app.log(...)`, using the service name `opencode-models-discovery`.
If structured logging is unavailable in the runtime, the plugin falls back to prefixed `console.*` output. Key log categories are emitted through metadata such as `plugin`, `config`, `discovery`, `event`, and `filtering`, making local debugging easier with `opencode --print-logs`.
MIT
Contributions are welcome! Please feel free to submit a Pull Request.
This project is not built by the OpenCode team and is not affiliated with OpenCode in any way.