Joe Bruechner
Projects

Linear CLI

A CLI that integrates Linear's project management SDK with coding agents like Claude Code. Create issues from natural language, generate git branches, and manage workflows without leaving the terminal.

v0.3.0 (Mar 2026)
01

The Problem

Coding agents like Claude Code love generating markdown files for plans and implementations — useful, but it clutters the codebase. You end up with dozens of PLAN.md, IMPLEMENTATION.md, and TODO.md files scattered throughout your project.

Linear is an exceptional product for issue tracking. The idea was simple: what if Claude Code could create Linear tickets from proposed plans instead of giant markdown files?

02

The Solution

Linear CLI bridges the gap between AI coding agents and Linear's project management platform. It exposes a clean, composable command-line interface that agents can invoke directly — no UI interaction needed.

The CLI authenticates via personal API key with secure OS keychain storage (macOS Keychain, Windows Credential Manager, Linux Secret Service), discovers your team's projects and workflows, and lets you create fully-structured issues with labels, assignees, priorities, and parent relationships — all from a single command or natural language prompt.

  • 22 commands covering the full Linear workflow — issues, projects, cycles, documents, and batch operations
  • AI-powered issue creation from natural language via Claude Haiku 4.5 — extracts title, team, labels, and priority from freeform text
  • Auto-generate git branches linked to issues with configurable naming styles (feature, kebab, plain)
  • Bulk operations with 3-concurrent parallel processing and continue-on-error mode
  • Every command supports --json for scripting, CI/CD, and coding agent integration
  • Dual runtime support — Node.js (standard) or Bun (5-10x faster startup)
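The bulk-operation model above (3-concurrent workers, continue-on-error) can be sketched as follows. This is a minimal illustration of the pattern, not the CLI's actual internals; `runBatch` and `Result` are hypothetical names.

```typescript
// Run async tasks with a fixed concurrency limit, recording failures
// instead of aborting the whole batch (continue-on-error mode).
type Result<T> = { ok: true; value: T } | { ok: false; error: unknown };

async function runBatch<T>(
  tasks: (() => Promise<T>)[],
  concurrency = 3
): Promise<Result<T>[]> {
  const results: Result<T>[] = new Array(tasks.length);
  let next = 0;
  // Each worker pulls the next task index until the queue is drained;
  // JS is single-threaded, so the shared counter needs no locking.
  async function worker(): Promise<void> {
    while (next < tasks.length) {
      const i = next++;
      try {
        results[i] = { ok: true, value: await tasks[i]() };
      } catch (error) {
        results[i] = { ok: false, error }; // record and keep going
      }
    }
  }
  await Promise.all(Array.from({ length: concurrency }, worker));
  return results;
}
```

Returning per-task results in input order (rather than throwing on the first failure) is what makes `--json` output useful for scripting: a caller can see exactly which items succeeded.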
03

AI Agent Integration

When Claude Code wants to propose an implementation plan, instead of writing a markdown file it can call `linear agent` with a natural language description. The AI extracts structured issue data — title, description, team, project, labels, priority — using workspace context awareness. If a label like 'bug' doesn't exist, it creates one with an intelligent color assignment.
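One common way to implement stable color assignment for new labels (shown here as a hypothetical sketch, not necessarily what the CLI does) is to hash the label name into a fixed palette, so the same name always maps to the same color:

```typescript
// Illustrative palette; the real color set is an assumption.
const LABEL_PALETTE = ["#eb5757", "#f2994a", "#f2c94c", "#27ae60", "#2d9cdb", "#9b51e0"];

function colorForLabel(name: string): string {
  // Unsigned 32-bit rolling hash over the lowercased name,
  // so "Bug" and "bug" resolve to the same color.
  let hash = 0;
  for (const ch of name.toLowerCase()) {
    hash = (hash * 31 + ch.charCodeAt(0)) >>> 0;
  }
  return LABEL_PALETTE[hash % LABEL_PALETTE.length];
}
```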

A workspace context engine caches your teams, projects, labels, states, and recent issues with a 5-minute TTL, then feeds that context to Claude so it can resolve natural language references. 'Frontend team' maps to the correct team key. 'Current sprint' maps to the active cycle. The engine fetches all context data in parallel to avoid the N+1 problem with Linear's SDK, which lazy-loads most associations.

The `linear context` command exports your entire workspace state as structured JSON — teams, projects, labels, states, recent issues — optimized for feeding into AI agent contexts. Combined with --json output on every command, the CLI becomes a first-class tool in any coding agent's toolkit.

agent-usage.sh
# AI-powered issue creation from natural language
linear agent "The login page throws a 500 error when \
  the user enters an email with a plus sign. This is \
  urgent and should be assigned to the backend team."

# Batch mode — create multiple issues from a description
linear agent --batch "We need to add OAuth support: \
  1. Add Google OAuth provider \
  2. Add GitHub OAuth provider \
  3. Update the login UI with social buttons \
  4. Write integration tests for OAuth flow"

# Dry-run to preview what the AI extracts
linear agent --dry-run "Refactor the payment service \
  to use the new Stripe API v2"

# Export workspace context for AI agents
linear context --json > workspace.json

# Use templates for common issue types
linear agent --template bug "Search indexing fails \
  for documents over 10MB"
AI agent integration — natural language to structured Linear issues with workspace awareness.
04

What Made It Hard

Cross-runtime secret storage was the first real challenge. The CLI needs to store API keys securely — macOS Keychain, Windows Credential Manager, Linux Secret Service. Node.js uses keytar (a native module), but Bun has its own runtime. The solution is a SecretsProvider interface with lazy-loaded backends and runtime detection: keytar is never imported under Bun, and because both runtimes hit the same OS APIs, credentials stored by one are readable by the other. For headless environments (CI/Docker), the CLI falls back to the LINEAR_API_KEY and ANTHROPIC_API_KEY environment variables, avoiding the D-Bus dependency entirely.
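A minimal sketch of that SecretsProvider pattern might look like this. The interface and function names are illustrative rather than the CLI's real internals; keytar's `getPassword`/`setPassword` signatures are its actual API, and the service name is a placeholder.

```typescript
interface SecretsProvider {
  get(account: string): Promise<string | null>;
  set(account: string, value: string): Promise<void>;
}

const SERVICE = "linear-cli"; // hypothetical keychain service name

function envProvider(envVar: string): SecretsProvider {
  return {
    async get() {
      return process.env[envVar] ?? null;
    },
    async set() {
      throw new Error(`Read-only backend: set ${envVar} in the environment`);
    },
  };
}

async function keytarProvider(): Promise<SecretsProvider> {
  // Lazy dynamic import: the native module is only loaded on this branch,
  // so it is never touched under Bun or in headless environments.
  const mod: string = "keytar";
  const keytar = await import(mod);
  return {
    get: (account: string) => keytar.getPassword(SERVICE, account),
    set: (account: string, value: string) => keytar.setPassword(SERVICE, account, value),
  };
}

export async function loadSecretsProvider(): Promise<SecretsProvider> {
  // Headless fallback (CI/Docker): the env var wins, avoiding D-Bus entirely.
  if (process.env.LINEAR_API_KEY) return envProvider("LINEAR_API_KEY");
  const isBun = typeof (globalThis as { Bun?: unknown }).Bun !== "undefined";
  if (isBun) {
    // A Bun-native keychain backend would be returned here; both runtimes
    // ultimately hit the same OS keychain APIs, so stored credentials interop.
    throw new Error("Bun backend omitted from this sketch");
  }
  return keytarProvider();
}
```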

The Linear SDK's async design created performance problems. Properties like project.teams() require individual async calls, which means fetching context for 50 projects generates 50+ sequential API calls. The context engine batches these with Promise.all and caches the result, but getting there required understanding where the SDK's lazy loading was silently creating N+1 patterns.

AI response parsing needed more robustness than expected. Claude sometimes wraps JSON in markdown code blocks, includes preamble text, or returns subtly malformed structures. The parser strips code fences, validates each field with type checks, and uses an exponential backoff retry strategy with rate limit handling. The first 100 characters of unparseable responses are included in error messages — small details that save significant debugging time.
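The fence-stripping and validation step can be sketched as below (retry/backoff omitted). Function and field names are hypothetical; the error-message prefix mirrors the 100-character detail mentioned above.

```typescript
interface ExtractedIssue {
  title: string;
  priority?: number;
}

function parseIssueResponse(raw: string): ExtractedIssue {
  // Prefer a fenced block if present; otherwise take the outermost {...}
  // span, which skips any preamble text the model wrapped around the JSON.
  const fenced = raw.match(/```(?:json)?\s*([\s\S]*?)```/);
  const candidate = fenced
    ? fenced[1]
    : raw.slice(raw.indexOf("{"), raw.lastIndexOf("}") + 1);
  let parsed: unknown;
  try {
    parsed = JSON.parse(candidate);
  } catch {
    // Include a short prefix of the bad response to speed up debugging.
    throw new Error(`Unparseable AI response: ${raw.slice(0, 100)}`);
  }
  const obj = parsed as Record<string, unknown>;
  // Validate each field with type checks rather than trusting the shape.
  if (typeof obj.title !== "string" || obj.title.length === 0) {
    throw new Error("AI response missing string field 'title'");
  }
  return {
    title: obj.title,
    priority: typeof obj.priority === "number" ? obj.priority : undefined,
  };
}
```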

terminal-usage.sh
# Install globally
npm install -g @brueshi/linear-cli

# Authenticate — stores key in OS keychain
linear auth login

# Create an issue from natural language
linear issue create "Add dark mode support to the settings page" \
  --team "Engineering" \
  --priority urgent \
  --label "feature,frontend"

# Generate a git branch from an issue
linear branch create ENG-142

# List issues assigned to you
linear issue list --assignee me --status "In Progress"

# Full-text search with filters
linear search "auth bug" --team "Backend"

# Personal dashboard — what's on your plate
linear dashboard
Common usage patterns — designed to feel natural from the terminal.
context-engine.ts
// Workspace context engine — parallel fetching with 5-min cache
import { LinearClient, LinearDocument } from "@linear/sdk";

interface WorkspaceContext {
  teams: { id: string; name: string; key: string }[];
  projects: { id: string; name: string; teamIds: string[] }[];
  labels: { id: string; name: string; color: string }[];
  states: { id: string; name: string; type: string }[];
  recentIssues: { id: string; identifier: string; title: string }[];
}

const CACHE_TTL = 5 * 60 * 1000; // 5 minutes
let cached: { data: WorkspaceContext; timestamp: number } | null = null;

export async function getWorkspaceContext(
  client: LinearClient
): Promise<WorkspaceContext> {
  if (cached && Date.now() - cached.timestamp < CACHE_TTL) {
    return cached.data;
  }

  // Fetch all context in parallel — avoids N+1 from SDK lazy loading
  const [teams, projects, labels, states, issues] = await Promise.all([
    client.teams().then((r) => r.nodes),
    client.projects().then((r) => r.nodes),
    client.issueLabels().then((r) => r.nodes),
    client.workflowStates().then((r) => r.nodes),
    client.issues({ first: 20, orderBy: LinearDocument.PaginationOrderBy.UpdatedAt })
      .then((r) => r.nodes),
  ]);

  const data: WorkspaceContext = {
    teams: teams.map((t) => ({ id: t.id, name: t.name, key: t.key })),
    // teamIds left empty: resolving them requires per-project async calls
    projects: projects.map((p) => ({ id: p.id, name: p.name, teamIds: [] })),
    labels: labels.map((l) => ({ id: l.id, name: l.name, color: l.color })),
    states: states.map((s) => ({ id: s.id, name: s.name, type: s.type })),
    recentIssues: issues.map((i) => ({
      id: i.id, identifier: i.identifier, title: i.title,
    })),
  };

  cached = { data, timestamp: Date.now() };
  return data;
}
Context engine fetches workspace data in parallel and caches for 5 minutes — feeds AI accurate team/project/label resolution.
Built with
TypeScript/Node.js/Claude/Linear