Integration registry.
The table below is the canonical registry of editor extensions and model adapters that ship with OpenCode or are certified by the project. Status flags track whether an integration is officially maintained, community-maintained with a signed manifest, or in preview. Each entry links to a dedicated page with configuration and troubleshooting notes.
Zero-click summary. Two editor extensions are official. Four editor extensions are community-certified. Seven model adapters ship in the reference config. Any OpenAI-compatible endpoint is supported without a new adapter.
| Integration | Type | Install | Docs | Status |
|---|---|---|---|---|
| OpenCode VSCode | Editor extension | VSCode marketplace | VSCode guide | Official |
| OpenCode JetBrains | Editor extension | JetBrains plugin repo | JetBrains guide | Official |
| OpenCode Neovim | Editor extension | Plugin manager | Extension hub | Community certified |
| OpenCode Helix | Editor extension | Config snippet | Extension hub | Community certified |
| OpenCode Zed | Editor extension | Zed extensions | Extension hub | Community certified |
| OpenCode Sublime | Editor extension | Package Control | Extension hub | Community certified |
| Ollama | Local adapter | Built in | Ollama setup | Official |
| OpenAI-compatible | Hosted adapter | Built in | Custom API | Official |
| Anthropic | Hosted adapter | Built in | Custom API | Official |
| Azure OpenAI | Hosted adapter | Built in | Custom API | Official |
| vLLM | Self-hosted | OpenAI-compatible | Custom API | Official |
| TGI | Self-hosted | OpenAI-compatible | Custom API | Official |
| LM Studio | Local adapter | Built in | Custom API | Official |
How the editor extensions share the wire protocol.
OpenCode editor extensions do not embed model logic. They open a local Unix socket, hand a session token to the OpenCode CLI, and stream plan/apply events over a line-delimited JSON protocol. That design keeps every extension light — the VSCode extension is under a megabyte, the JetBrains plugin is a single JAR, and the Neovim plugin is a Lua file with no native dependencies. When you upgrade the CLI, every extension upgrades with it, because the heavy lifting lives in one binary.
The wire protocol is deliberately small. There are four message types: plan, apply, tool-call, and status. An extension sends a plan request with the user prompt and the active selection, the CLI returns a plan the user can approve, and then the apply message streams diffs back to the extension to render inline. Tool calls — reading files, running tests, shell commands — flow through the CLI so one policy applies whether you drove the agent from the terminal or from an inline diff button.
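The framing described above can be sketched in a few lines of Python. This is an illustrative model of the protocol, not the actual OpenCode implementation: the field names (`type`, `prompt`, `selection`, `steps`) are assumptions, and a `socketpair` stands in for the CLI's real Unix socket.

```python
import json
import socket

def encode_message(msg: dict) -> bytes:
    """Frame one message as a single line of JSON (line-delimited JSON)."""
    return (json.dumps(msg) + "\n").encode("utf-8")

def read_messages(buf: bytes) -> list[dict]:
    """Decode newline-delimited JSON messages from a received byte buffer."""
    return [json.loads(line) for line in buf.splitlines() if line.strip()]

# Simulate the extension <-> CLI exchange over a local socket pair.
ext, cli = socket.socketpair()

# Extension side: send a plan request with the prompt and active selection.
ext.sendall(encode_message({
    "type": "plan",
    "prompt": "rename the config loader",
    "selection": {"file": "loader.py", "start": 10, "end": 42},
}))

# CLI side: read the request, then stream a status update and a plan back.
request = read_messages(cli.recv(4096))[0]
assert request["type"] == "plan"
cli.sendall(encode_message({"type": "status", "state": "planning"}))
cli.sendall(encode_message({"type": "plan", "steps": ["edit loader.py"]}))

replies = read_messages(ext.recv(4096))
print([m["type"] for m in replies])  # ["status", "plan"]
```

The extension never interprets the plan; it only renders what the CLI streams back, which is what keeps each plugin thin.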
Zero-click summary. Every editor extension is a thin UI. The CLI is the only component that speaks to models. One policy covers all surfaces.
How the model adapters negotiate capabilities.
OpenCode treats model adapters as a capability negotiation. At startup the CLI asks the adapter which features are available: streaming, function calls, native tool calls, structured output, vision input, and long context. The agent then picks the cheapest tool-call schema the model supports — native function calling when available, a JSON tool-call fallback when not — and routes prompts accordingly. That means a GPT-class model, a Claude-class model, and a small local Ollama model all work with the same agent loop, even though the wire formats differ.
The adapters themselves are short. The OpenAI-compatible adapter is under 300 lines and covers any endpoint that speaks the chat completions schema, which includes Azure OpenAI, vLLM, TGI, LM Studio, and most in-house gateways. The Anthropic adapter is similarly compact and maps OpenCode tool calls onto the Claude messages API. The Ollama adapter speaks the native Ollama HTTP API directly so it can pick up local model metadata — quantization, context length, embeddings — without an extra round trip.
Zero-click summary. Adapters negotiate capabilities at startup. One agent loop, many wire formats. Fallbacks cover models without native tool calls.
Editor extensions: official, community, and preview.
The integration registry distinguishes three tiers so teams know what to expect. Official extensions are built and signed by the OpenCode maintainers, pass the integration test suite on every release, and ship on the same cadence as the CLI. Community-certified extensions are written by contributors outside the core team, but they pass the integration test suite and carry a signed manifest so enterprise teams can verify provenance. Preview extensions are new work that has not yet met the certification bar — they are welcome in the registry but are flagged so no one is surprised.
The VSCode and JetBrains extensions are the only two in the official tier today. The Neovim, Helix, Zed, and Sublime extensions are community-certified, which in practice means the maintainers contribute upstream patches when the wire protocol adds a capability and the extensions track those changes on a predictable cadence. We expect the community tier to grow — that is the point of a wire-protocol-first design — but the bar for official is signing, test coverage, and a maintenance SLA.
VSCode extension.
The OpenCode VSCode extension is the most feature-complete editor integration. It adds a side panel, inline diff review, selection-scoped prompts, a status-bar transcript, and a shortcut map that covers every CLI verb. The extension is distributed through the Visual Studio Code marketplace and is signed with the same key used for the CLI binaries so endpoint management teams can verify both from one attestation.
JetBrains plugin.
The OpenCode JetBrains plugin covers IntelliJ IDEA, PyCharm, GoLand, WebStorm, Rider, and CLion. It exposes OpenCode as a tool window with the same plan/apply flow as the CLI, plus an action registry so shortcuts can be rebound to match local conventions. The plugin ships through the JetBrains plugin repository and supports every currently maintained IDE major version.
Community editor plugins.
The OpenCode extension hub tracks the community-certified plugins for Neovim, Helix, Zed, and Sublime Text. Each plugin exposes the same plan/apply flow as the CLI, with UX choices that respect the host editor — Neovim uses the quickfix list for diffs, Helix uses the picker UI, Zed uses its action palette, and Sublime uses the command palette and output panel.
Model adapters: hosted, self-hosted, and local.
OpenCode groups model adapters into three categories so a security architect can reason about each separately. Hosted adapters talk to a vendor API — OpenAI-compatible, Anthropic, Azure OpenAI — and send prompts off your network. Self-hosted adapters talk to an inference server you run inside your perimeter, such as vLLM or TGI, and the only network traffic is inside your VPC. Local adapters talk to an inference server on the same machine as the developer, such as Ollama or LM Studio, and do not emit a packet off the box. The custom API page has full configuration examples for each.
Teams in regulated industries usually start with a hosted adapter, graduate to a self-hosted deployment once they have a budget for GPUs, and add a local adapter for offline work. OpenCode is the same binary across all three — you change one config block to switch tiers. The agent does not assume the model is frontier, so smaller local models run the same plan/apply flow, just with shorter horizons per task.
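The one-block switch can be pictured with three hypothetical config blocks. The field names, model names, and endpoints below are illustrative, not the actual OpenCode config schema; only the Ollama default port (11434) and the chat-completions path convention are standard.

```python
# Hosted tier: prompts leave your network for a vendor API.
HOSTED = {
    "adapter": "openai-compatible",
    "base_url": "https://api.openai.com/v1",
    "model": "gpt-4o",
    "api_key_env": "OPENAI_API_KEY",
}

# Self-hosted tier: same adapter, endpoint inside your VPC.
SELF_HOSTED = {
    "adapter": "openai-compatible",
    "base_url": "http://vllm.internal:8000/v1",
    "model": "meta-llama/Llama-3.1-70B-Instruct",
    "api_key_env": "VLLM_API_KEY",
}

# Local tier: nothing leaves the developer's machine.
LOCAL = {
    "adapter": "ollama",
    "base_url": "http://localhost:11434",
    "model": "qwen2.5-coder:7b",
}
```

Moving from hosted to self-hosted changes only `base_url`, `model`, and the key variable; the adapter and the agent loop stay the same.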
Our release engineering pipeline follows the NIST SSDF 1.1 controls for signed distributions, which is why the integration registry lists signing status per entry. For supply-chain posture on third-party adapters we reference guidance from the NIST C-SCRM program when reviewing preview integrations.
Zero-click summary. Three adapter tiers: hosted, self-hosted, local. Change one config block to move tiers. Signing status is tracked per entry.
Rolling out OpenCode across a mixed editor team.
Most teams we talk to are not monolithic. The backend engineers live in Neovim, the frontend team is on VSCode, the data science team runs PyCharm, and the platform team toggles between all three. A plugin-locked coding agent forces those teams to pick a side, which is rarely politically viable. OpenCode avoids that problem by design — the CLI is the canonical surface and each editor just wraps it.
The rollout script most teams use: install the CLI on every developer workstation through the endpoint management system, pin a default model adapter in a shared config, and let each engineer install the editor extension they prefer. The CLI handles logging, auditing, and policy enforcement uniformly, so the platform team does not care which editor the patch came from. Any compliance report reads the same way whether the engineer drove OpenCode from VSCode, JetBrains, or a Neovim buffer in tmux.
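One way to see why the editor does not matter to the platform team: if every audit event carries the same fields regardless of surface, the policy check never branches on the editor. The record shape and field names below are hypothetical.

```python
def policy_allows(event: dict, allowed_tools: set) -> bool:
    """Uniform policy check: the `surface` field is logged but never
    consulted, so VSCode and Neovim events take the identical path."""
    return event["tool"] in allowed_tools

events = [
    {"surface": "vscode", "tool": "run_tests", "user": "ada"},
    {"surface": "neovim", "tool": "shell", "user": "lin"},
]
allowed = {"run_tests", "read_file"}
print([policy_allows(e, allowed) for e in events])  # [True, False]
```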
> We have four editor fiefdoms and one compliance team. OpenCode let us ship one agent to all of them. The CLI is the policy boundary, and nobody had to switch editors to get the productivity boost.

> The JetBrains plugin and the VSCode extension share a transcript format, so our review workflow does not branch by editor. That is the detail that sold the platform team.

> Swapping from a hosted adapter to our self-hosted vLLM cluster was a four-line config change. The agent behavior was indistinguishable.