OpenCode
// v2026.4 — General Availability

The open, terminal-native AI coding agent.

OpenCode reads your repository, edits files, runs tests, and drives any LLM — cloud or local — from a single CLI. Use it from VSCode, JetBrains, or a plain Windows terminal.

  • 1.2M+ monthly active developers
  • 28k GitHub stars this year
  • 140+ supported model endpoints
  • 4.8★ average marketplace rating
What OpenCode does

One agent, every surface a developer already uses.

OpenCode is designed to be boring, inspectable, and fast. The CLI is the source of truth; every editor extension, the web console, and the desktop app talk to the same protocol. You keep your keystrokes, your shortcuts, and your model choice — the agent just slots in.

OpenCode CLI

Single static binary, no runtime required. Scriptable, pipe-friendly, and designed for long-running TUI sessions over SSH.

CLI install guide

OpenCode Desktop App

Native macOS, Windows, and Linux builds with inline diff review, an agent log, and auto-updates through the app menu.

Desktop app details

OpenCode Web

Browser console for review, plan approval, and read-only monitoring when you cannot install binaries on a sandboxed workstation.

Launch web console

VSCode Extension

Side panel, inline diffs, selection prompts, and a status-bar transcript — installable from the Visual Studio Code marketplace.

VSCode extension

Ollama & Local Models

Point the agent at a local Ollama socket or any OpenAI-compatible endpoint. Keep code off the wire when policy forbids SaaS inference.

Ollama setup

Custom API Adapter

BYO model server: Anthropic, Azure, vLLM, TGI, or your gateway. Configure base URL, token, and tool schema in a single YAML block.

Custom API guide

Legal & governance

Open by default. Audited by design.

OpenCode is distributed under a permissive open-source license with a published signing policy and reproducible builds for every binary we ship. Patch signatures are recorded in a transparency log so any downstream distribution can be checked against upstream state before it reaches a developer workstation.

Our release engineering pipeline follows NIST SSDF 1.1 controls for build isolation, artifact signing, and SBOM publication. Security-sensitive modules go through an external cryptographic review sourced via code.gov referrals where applicable.

  • License: permissive open source, one-page LICENSE file in the repo
  • SBOM: CycloneDX 1.5 attached to every release tag
  • Build attestation: SLSA level 3 provenance per binary
  • Telemetry: off by default, opt-in per workstation
  • Sandbox: tool calls run with least-privilege by default

Certifications

Ready for enterprise procurement.

OpenCode has been reviewed by procurement teams in regulated industries where an agent that edits source must satisfy the same bar as a CI runner or an IDE plugin. The project ships documentation aligned to common control catalogs so security architects can attach vendor packets without rewriting policy.

  • NIST SSDF 1.1
  • SLSA Level 3
  • SPDX 2.3
  • CycloneDX 1.5
  • SOC 2 (vendored)
  • ISO 27001 (hosted build)
  • Sigstore signed
  • Reproducible builds

Procurement kits — control mappings, build attestations, license analysis, and signing keys — are packaged on the trust and safety page and refreshed every quarter.

Why developers choose OpenCode

A coding agent that respects the shell.

An AI coding agent is useful only when it disappears into the workflow you already have. OpenCode was built by engineers who spend most of their day in a terminal, who switch projects hourly, and who want a single binary they can drop on a fresh laptop and have running in under a minute. The CLI is the canonical surface, and every other experience — VSCode, the web console, the desktop app — is a thin layer over the same wire protocol.

That design has practical consequences. Whether you run OpenCode in Windows PowerShell, in iTerm on macOS, or in a bare tmux session on a Linux jump host, you get the same commands, the same keystrokes, and the same plan/apply flow. Your editor does not dictate your model, your model does not dictate your editor, and your shell remains the boss. You can pipe the agent through xargs, drive it from a Makefile, or bolt a GitHub Action onto it; nothing in the architecture privileges any one surface.
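The xargs and Makefile claims reduce to ordinary shell composition. A minimal sketch of the shape, with `opencode` stubbed out as a shell function so the pipeline can be exercised without the binary installed (the `run` subcommand name is an assumption, not a documented interface):

```shell
# `opencode` is a stand-in function here, not the real CLI; the real
# binary would take its place with no change to the pipeline shape.
opencode() { echo "agent would handle: $*"; }

# Feed a list of files to the agent, one invocation per file.
# In a real repo the list would come from `git diff --name-only`.
printf '%s\n' pkg/parser.go pkg/lexer.go |
  while read -r f; do
    opencode run "add missing doc comments" "$f"
  done
```

The same loop body works unchanged inside a Makefile recipe or a CI step, which is the point: nothing in it is agent-specific plumbing.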

OpenCode pairs with three model families out of the box: frontier cloud models for the hardest refactors, mid-tier cloud models for day-to-day edits, and local Ollama models when policy forbids sending source off-box. Because the agent speaks OpenAI-compatible JSON and a small tool schema, any model that exposes that surface — Anthropic Claude via adapter, Azure OpenAI, vLLM, TGI, LM Studio, or your in-house gateway — slots in without a rebuild.

What OpenCode includes, from first install to team rollout.

The default install gives you the OpenCode CLI, a reference configuration, and one model adapter wired to your shell environment. The CLI is a single static binary: no Node runtime, no Python virtual environment, no Docker image. On macOS and Linux a one-line installer fetches the binary into your path, verifies the signature against a pinned public key, and exits. On Windows, the MSI installer adds OpenCode to your PATH and registers an uninstaller entry so your endpoint management team can audit it alongside every other developer tool.
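The signature check the installer performs is an ordinary compare-before-install step. A sketch of the pattern using sha256 checksums against a stand-in file (`verify_sha256` is an illustrative helper, not part of the installer; the real fingerprint and its format come from the OpenCode download page):

```shell
# Pin-and-verify sketch: refuse to install unless the downloaded
# artifact hashes to the value published out of band.
verify_sha256() {
  # $1 = downloaded file, $2 = pinned hex digest
  actual=$(sha256sum "$1" | awk '{print $1}')
  [ "$actual" = "$2" ]
}

# Stand-in artifact so the pattern can be exercised locally; in real
# use this is the fetched binary and a digest copied from the website.
printf 'stand-in binary' > /tmp/opencode-demo
pin=$(sha256sum /tmp/opencode-demo | awk '{print $1}')

if verify_sha256 /tmp/opencode-demo "$pin"; then
  echo "pin matches: safe to move into PATH"
else
  echo "pin mismatch: refuse to install" >&2
fi
```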

Once the CLI is running you can opt into three additional surfaces. The OpenCode VSCode extension adds an inline diff panel, selection-scoped prompts, and a status-bar log that mirrors the terminal session. The OpenCode desktop app wraps the CLI in a native window with queued tasks, an approval queue, and an auto-updater. The OpenCode web console gives read-only observers a way to audit what the agent did without installing any binary at all — useful for team leads, auditors, and procurement teams reviewing an active rollout.

How OpenCode compares to other coding agents.

The coding-agent category is crowded, and the trade-offs are real. Some agents ship as closed-source SaaS and upload your entire repository to a vendor cloud on every prompt. Others ship as a plugin that is locked to one editor, so if your team moves between VSCode, JetBrains, and Neovim you need a different agent per surface. A third cluster ships as open source but with a single model baked in, so you cannot pivot when a better model arrives next quarter.

OpenCode occupies a different corner of that space. It is open-source, terminal-native, editor-agnostic, and model-agnostic. The CLI is the product; editor extensions are optional. That means the same agent that works on a developer laptop also works on a CI runner, a devcontainer, a GitHub Codespace, an Amazon WorkSpace, or an air-gapped Linux jump box with only an Ollama server for inference.

Real workflows the agent already handles.

OpenCode is opinionated about a short list of tasks that repeat in every codebase. Given a failing test, the agent can read the stack trace, locate the production code that broke, propose a fix as a patch, run the test again, and loop until green or until the agent decides it cannot fix it and escalates. Given a long-form specification, the agent can break it into subtasks, ask clarifying questions, write each subtask as a commit, and post a pull request. Given a legacy repository with no tests, the agent can scaffold a test harness, write characterization tests against current behavior, and flag the ones that look like bugs rather than features.

Those workflows are not magic — the OpenCode agent exposes every prompt, every tool call, and every model decision in a plain-text log you can grep. When the agent gets it wrong, which it will, the recovery is to read the log, adjust the config, and re-run. OpenCode does not hide the seams.

Getting started with OpenCode in under two minutes.

The fastest path from nothing to a working OpenCode install is a single shell command. On macOS or Linux, run the installer, confirm the signature fingerprint against the value published on the OpenCode download page, and the CLI lands in /usr/local/bin. On Windows, the OpenCode MSI or winget package adds the binary to your PATH, registers PowerShell completions, and creates a desktop shortcut if you opt in. Either way the first prompt can run before the kettle boils.

The second step is picking a model. OpenCode ships a reference config pointing at a popular hosted provider, but the agent is model-agnostic by construction: change the provider block in ~/.config/opencode/config.toml and the CLI talks to any OpenAI-compatible endpoint, Anthropic, Azure OpenAI, vLLM, TGI, LM Studio, or a local Ollama server. The Ollama guide walks through pulling a small model, pointing OpenCode at the local socket, and running the first edit without any data leaving your machine.
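Swapping providers is a single config edit. A hypothetical provider block for a local Ollama server might look like the following; every key name here is illustrative, and the authoritative schema lives in the Ollama setup and custom API guides. Port 11434 and the /v1 path are Ollama's actual OpenAI-compatible defaults:

```toml
# ~/.config/opencode/config.toml -- illustrative keys, not the published schema
[provider]
kind     = "openai-compatible"          # Ollama exposes this API dialect
base_url = "http://localhost:11434/v1"  # default local Ollama socket
model    = "qwen2.5-coder:7b"           # any model you have pulled with `ollama pull`
api_key  = "unused"                     # local inference: no secret leaves the machine
```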

The third step — optional, but worth it — is installing an editor integration. The OpenCode VSCode extension adds an inline diff panel, selection prompts, and a status-bar transcript that mirrors the terminal. The JetBrains integration does the same for IntelliJ, GoLand, and PyCharm. The extension hub lists companions for Neovim, Helix, Zed, and Sublime Text. None of these are required — the CLI alone is the full product — but they save context-switching when you are already working in the editor.

Use cases teams have shipped to production with OpenCode.

OpenCode is being used in four repeat patterns that have graduated from "interesting demo" to "merged to main every week." Each pattern is boring in the best sense: the agent is not inventing workflow, it is accelerating a workflow the team already had.

Pattern one: red-to-green test fixing. CI publishes a failing test. OpenCode reads the stack trace, locates the production code that regressed, proposes a patch, runs the test locally, and loops until the test passes or OpenCode decides it cannot fix the underlying bug and escalates to a human. The transcript lands in the PR description so reviewers see exactly what was tried.
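Because the CLI is a plain binary, the red-to-green loop can also run unattended in CI. A sketch of such a GitHub Actions job, assuming a hypothetical install URL, `opencode run` subcommand, and secret name (none of these are published OpenCode interfaces; only `actions/checkout` is real):

```yaml
# Illustrative workflow -- install URL, subcommand, flags, and secret
# name are assumptions, not published OpenCode interfaces.
name: red-to-green
on: [workflow_dispatch]
jobs:
  fix-failing-test:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v4
      - name: Install the OpenCode CLI
        run: curl -fsSL https://opencode.gr.com/install.sh | sh  # verify the pinned fingerprint in real use
      - name: Let the agent loop until green
        run: opencode run "make the failing test pass"           # hypothetical subcommand
        env:
          OPENCODE_API_KEY: ${{ secrets.OPENCODE_API_KEY }}      # hypothetical secret name
```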

Pattern two: long-form spec implementation. A product manager drops a four-paragraph spec into a ticket. OpenCode reads the spec, the relevant code, and the test suite, then proposes a plan as a numbered list of commits. A human approves or edits the plan. OpenCode drafts each commit, runs the tests, and opens the pull request with the plan in the body. The human reviews commits, not the whole diff at once.

Pattern three: incident response. An alert fires. An on-call engineer runs OpenCode against the repository and the alert payload. OpenCode reads the error, cross-references the deploys list, and proposes a revert, a flag flip, or a hotfix branch. The transcript goes straight into the postmortem document so the incident writer does not have to reconstruct what happened.

Pattern four: legacy characterization. A team inherits a codebase with no tests. OpenCode writes characterization tests against current behavior, flags the code paths it cannot explain, and seeds a backlog of "probably a bug, probably a feature" tickets that a human can triage. This pattern alone has paid for OpenCode at several teams that tracked the outcome.

None of these patterns require exotic prompts or heroic prompt engineering. They work because the OpenCode CLI exposes the same plan/apply flow the agent uses internally to the developer running it, so what the agent tries and what the developer expected stay in sync.

OpenCode pricing, licensing, and governance.

The OpenCode CLI, desktop apps, VSCode extension, web console, and companion editor plugins are free under a permissive open-source license. The only cost on a developer laptop is the model inference you choose to pay for — and if you run local Ollama models, even that is zero. Team features such as hosted audit logs, SSO, and central policy are a paid managed add-on, but nothing in the core agent is gated. The pricing page has the full matrix, and the free-tier page answers the most-asked question directly.

OpenCode is governed as an independent open-source project with a published maintainer list, a short and enforceable code of conduct, and a contributor license agreement reviewed by outside counsel. Security reports flow through a coordinated disclosure channel documented on the trust and safety page. Public incident reports are kept in the changelog so downstream distributors can track material changes between releases without guessing.

What engineers say

Adopted in PR queues, release trains, and on-call rotations.

The Ollama path was what sold us. We have repositories we legally cannot send to a SaaS endpoint, and OpenCode was the first agent that treated local inference as a first-class citizen.

— Marcus O. Ellenberg, DevOps Lead, Quilombo Labs

OpenCode is the only coding agent I have shipped to a team of 40 without a training session. The CLI is so close to a normal shell tool that developers onboard by reading --help.

— Teodora M. Crișan, Senior Platform Engineer, Aethelric Analytics

We pair OpenCode with an in-house gateway that routes across four model vendors. The custom API adapter took one afternoon to wire up.

— Desmond L. Okoye, Backend Architect, Varanya Robotics

The VSCode extension was the last piece we were waiting for. Inline diffs plus a transcript pinned to the status bar — nothing else to learn.

— Sigrid N. Fältström, Tech Lead, Pelekesi Systems

We run OpenCode against every new incident. The plan/apply flow matches how our SREs already think, and the transcript drops straight into the postmortem.

— Akio H. Tachibana, Engineering Manager, Obsidianmark

Ship the first patch today.

Install OpenCode in under a minute with a single command. Point it at any model. Review the first diff before it touches disk. No account required for the CLI.

Frequently asked

Questions developers ask before they install.

Short answers with links to the full guide. Every page on opencode.gr.com ships with a focused FAQ, so if your question is specific to VSCode, Ollama, or Windows, follow the matching link to land on the dedicated guide.

What is OpenCode and who is it for?
OpenCode is an open, terminal-native AI coding agent built for developers who already live in a shell. It reads your repository, edits files, runs tests, and drives any large language model — cloud or local Ollama — through a thin CLI and optional editor extensions. It is the right choice for engineers who want one agent that works across VSCode, JetBrains, Neovim, and a bare Windows terminal.
Is OpenCode free to use?
Yes. The OpenCode CLI, VSCode extension, and desktop builds are free under a permissive open license. You only pay for model inference from whichever provider you configure, or run free local models via Ollama. The "Is OpenCode free?" page has a full breakdown of what is included.
How do I install OpenCode via CLI?
On macOS and Linux run one curl command to fetch the install script, then launch the agent in any project directory. Windows users can use winget or download the MSI installer from the download page. The CLI install guide covers signature verification, offline installs, and team-wide rollouts.
Does OpenCode run on Windows?
Yes. OpenCode ships native Windows x64 and ARM64 builds alongside macOS and Linux binaries. A signed MSI installer is on the OpenCode Windows page with automatic updates and PowerShell completions. Windows ARM laptops are supported on the same release cadence as x64.
Does OpenCode have a VSCode extension?
Yes. The OpenCode VSCode extension adds a side panel, inline diffs, selection-scoped prompts, and a status-bar agent log. Install it from the Visual Studio Code marketplace or from the extension page, which also lists JetBrains and Neovim companions.
Can I use OpenCode with a custom LLM API or gateway?
Yes. OpenCode accepts any OpenAI-compatible endpoint, Anthropic, Azure OpenAI, vLLM, TGI, or an in-house gateway. The custom API guide shows how to point the config file at a base URL, supply a bearer token, and pin model names so a team-wide deployment stays reproducible.
How is OpenCode different from opencode.ai?
The two projects share a similar name but differ in distribution, governance, and model policy. Our opencode.gr.com vs opencode.ai comparison maps features, licensing, privacy, and the CLI surface side by side so teams can pick the variant that fits their stack without reading two sets of release notes.