
# Configuration

Catalyst uses a two-layer configuration system that keeps secrets out of git while allowing project metadata to be shared with your team. The setup script (setup-catalyst.sh) generates both layers automatically.
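As a sketch of how the two layers relate (the `resolve_layers` helper and the in-memory "secrets directory" here are illustrative, not Catalyst's actual loader):

```python
import json

def resolve_layers(project_config, secrets_dir):
    """Join layer 1 (committed metadata) to layer 2 (secrets) via projectKey.
    secrets_dir stands in for ~/.config/catalyst/ (hypothetical)."""
    key = project_config["catalyst"]["projectKey"]
    secrets = secrets_dir.get(f"config-{key}.json", {})
    return {"metadata": project_config["catalyst"],
            "secrets": secrets.get("catalyst", {})}

project = json.loads('{"catalyst": {"projectKey": "acme"}}')
secrets_dir = {"config-acme.json": {"catalyst": {"linear": {"apiToken": "lin_api_x"}}}}
merged = resolve_layers(project, secrets_dir)
print(merged["secrets"]["linear"]["apiToken"])  # lin_api_x
```

The point is that nothing in the committed file contains a credential; `projectKey` is the only link between the two layers.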

## Project Config (.catalyst/config.json)

Safe to commit. Contains non-sensitive project metadata that Catalyst reads to understand your project structure, ticket conventions, and workflow state mapping.

```json
{
  "catalyst": {
    "projectKey": "acme",
    "repository": {
      "org": "acme-corp",
      "name": "api"
    },
    "project": {
      "ticketPrefix": "ACME",
      "name": "Acme Corp API"
    },
    "linear": {
      "teamKey": "ACME",
      "stateMap": {
        "backlog": "Backlog",
        "todo": "Todo",
        "research": "In Progress",
        "planning": "In Progress",
        "inProgress": "In Progress",
        "inReview": "In Review",
        "done": "Done",
        "canceled": "Canceled"
      }
    },
    "thoughts": {
      "user": null
    }
  }
}
```
| Field | Type | Description |
| --- | --- | --- |
| `catalyst.projectKey` | string | Links to the secrets config file (`config-{projectKey}.json`) |
| `catalyst.repository.org` | string | GitHub organization |
| `catalyst.repository.name` | string | Repository name |
| `catalyst.project.ticketPrefix` | string | Linear ticket prefix (e.g., "ACME") |
| `catalyst.project.name` | string | Human-readable project name |
| `catalyst.linear.teamKey` | string | Linear team identifier used in ticket IDs (e.g., "ACME" for ACME-123). Must match `ticketPrefix`. |
| `catalyst.linear.teamId` | string\|null | Cached Linear team UUID. Resolved by `resolve-linear-ids.sh`. |
| `catalyst.linear.stateMap` | object | Maps workflow phases to your Linear workspace state names |
| `catalyst.linear.stateIds` | object\|null | Map of Linear state display names to UUIDs. Eliminates per-call UUID resolution. |
| `catalyst.thoughts.user` | string\|null | HumanLayer thoughts user name |

The stateMap controls automatic Linear status updates as you move through the development workflow:

| Key | Updated When | Default |
| --- | --- | --- |
| `backlog` | Initial ticket state | Backlog |
| `todo` | Acknowledged, unstarted | Todo |
| `research` | Running research-codebase | In Progress |
| `planning` | Running create-plan | In Progress |
| `inProgress` | Running implement-plan | In Progress |
| `inReview` | Running create-pr or describe-pr | In Review |
| `done` | Running merge-pr | Done |
| `canceled` | Manual cancellation | Canceled |

Set any key to null to skip that automatic transition.

stateMap values are auto-detected from Linear — when you run setup-catalyst.sh with a Linear API token, the script fetches your team’s actual workflow states and populates stateMap with the correct names. Manual customization is only needed for non-standard state names.
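The null-to-skip behavior can be pictured with a small sketch (the `target_state` helper is hypothetical; the real logic lives in Catalyst's scripts):

```python
# A stateMap fragment as it would appear after JSON parsing.
state_map = {
    "research": "In Progress",
    "inReview": "In Review",
    "done": None,  # null in JSON: skip this automatic transition
}

def target_state(phase):
    """Return the Linear state name for a workflow phase,
    or None when the transition should be skipped."""
    return state_map.get(phase)

assert target_state("research") == "In Progress"
assert target_state("done") is None  # no automatic transition fires
```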

The teamId and stateIds fields cache Linear UUIDs so that linear-transition.sh can pass them directly to the linearis CLI, skipping per-call name-to-UUID resolution. This reduces Linear API requests by ~17% per state transition — significant during orchestrator runs with parallel workers.

Populate the cache by running:

```sh
plugins/dev/scripts/resolve-linear-ids.sh
```

This makes a single Linear GraphQL query to fetch all workflow states for the configured team and writes the results to .catalyst/config.json. Re-run with --force after changing workflow states in Linear. The cache is optional — linear-transition.sh falls back to name-based calls when stateIds is absent.
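A sketch of the fallback described above (illustrative only; `transition_args` is not the script's actual interface):

```python
def transition_args(state_name, state_ids):
    """Prefer a cached UUID when the stateIds cache is populated;
    otherwise fall back to a name-based call that the CLI resolves."""
    if state_ids and state_name in state_ids:
        return {"state_id": state_ids[state_name]}  # no extra API request
    return {"state_name": state_name}               # per-call name resolution

assert transition_args("Done", {"Done": "uuid-123"}) == {"state_id": "uuid-123"}
assert transition_args("Done", None) == {"state_name": "Done"}
```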

In most teams, the intended meaning is:

  • research — Catalyst is still understanding the problem and the current code
  • planning — the implementation approach is being written and reviewed
  • inProgress — code changes are actively being made
  • inReview — a PR exists and is being worked through review and CI
  • done — the PR has merged

This is useful because the PR stage is not just “waiting on somebody else.” In Catalyst’s model, inReview still includes active follow-up work such as fixing CI, addressing automated review feedback, updating the PR description, and re-checking merge readiness.

Catalyst can open PRs, watch checks, address review comments, and try to merge safely. But GitHub decides what is actually required before anything can merge into main.

Those merge requirements live in GitHub branch protection or repository rulesets, not in .catalyst/config.json.

If you want GitHub to block merges until review is complete, configure that in GitHub:

  • require pull requests for main
  • require status checks before merge
  • require one or more approving reviews
  • require conversation resolution if review threads must be closed
  • optionally enable auto-merge once those requirements pass

Catalyst should behave as if these gates matter, but only GitHub can enforce them.

For most teams using Catalyst, the best default is autonomous mode: let Catalyst work the PR to completion, but make GitHub enforce the quality gates around checks and unresolved review comments.

  • Enable pull requests.
  • Enable squash merge.
  • Enable auto-merge.
  • Enable automatic deletion of head branches after merge.
  • Set the default branch to main.

Target refs/heads/main with an active branch ruleset that:

  • blocks direct deletion
  • blocks non-fast-forward pushes
  • requires pull requests for changes into main
  • requires review conversations to be resolved before merge
  • requires status checks to pass before merge

For autonomous mode, set:

  • required approving reviews: 0
  • required review thread resolution: true
  • required status checks: true

This gives you a fully automated merge path where Catalyst can:

  • open the PR
  • wait for checks and bot comments
  • fix actionable feedback
  • resolve review threads
  • merge once the PR is genuinely clean

without waiting for a human approval click.

For this repo shape, the recommended required check currently enabled in GitHub is:

  • Cloudflare Pages

Once your repository runs the following checks on every PR to main, you should add them as required checks too:

  • audit-references
  • check-versions
  • validate

Cloudflare Pages covers preview deploy readiness. The other three checks are repository-owned guardrails:

  • audit-references catches broken plugin references
  • check-versions verifies plugin changes are releasable through Release Please
  • validate checks release configuration consistency

If your repository has additional always-on checks, add them too. The important rule is: only mark a check as required if it runs on every PR to main.

If you want a human signoff before merge, keep everything above and additionally set:

  • required approving reviews: 1 or more

That changes the operating model from autonomous shipping to human-approved shipping. Catalyst still does the same review-follow-up work, but GitHub will not allow the merge until a human reviewer approves it.

The recommended operating model is:

  • automated reviewers can leave comments and request fixes
  • Catalyst should address actionable review feedback and resolve threads
  • GitHub should block merge until required conversations and checks are complete
  • human approval should be optional and controlled by the repository owner, not assumed by Catalyst

Catalyst can do the work of:

  • opening the PR
  • waiting for checks
  • reading bot and human review comments
  • fixing code
  • updating the PR
  • attempting the merge once the PR is clean

But the repository settings are what make those expectations enforceable for every contributor, not just when Catalyst happens to be driving.

## Secrets Config (`~/.config/catalyst/config-{projectKey}.json`)

Never committed. One file per project, linked by projectKey.

```json
{
  "catalyst": {
    "linear": {
      "apiToken": "lin_api_...",
      "teamKey": "ACME",
      "defaultTeam": "ACME"
    },
    "sentry": {
      "org": "acme-corp",
      "project": "acme-web",
      "authToken": "sntrys_..."
    },
    "posthog": {
      "apiKey": "phc_...",
      "projectId": "12345"
    },
    "exa": {
      "apiKey": "..."
    }
  }
}
```
| Integration | Required Fields | Used By |
| --- | --- | --- |
| Linear | `apiToken`, `teamKey` | catalyst-dev, catalyst-pm |
| Sentry | `org`, `project`, `authToken` | catalyst-debugging |
| PostHog | `apiKey`, `projectId` | catalyst-analytics |
| Exa | `apiKey` | catalyst-dev (external research) |

Only configure the integrations you use. The setup script prompts for each one.

The orchestration monitor reads OpenTelemetry backend endpoints from the per-project secrets file ~/.config/catalyst/config-<projectKey>.json (layer 2). If that file is not present it falls back to the global ~/.config/catalyst/config.json.

```json
{
  "otel": {
    "enabled": true,
    "prometheusUrl": "http://localhost:9090",
    "lokiUrl": "http://localhost:3100"
  }
}
```
| Field | Type | Default | Description |
| --- | --- | --- | --- |
| `otel.enabled` | boolean | false | Enable OTel proxy endpoints on orch-monitor |
| `otel.prometheusUrl` | string | null | Prometheus query URL (for `/api/otel/query` and cost/token panels) |
| `otel.lokiUrl` | string | null | Loki query URL (for `/api/otel/logs`, Tool Usage, and API Errors panels) |

Environment variable overrides: OTEL_ENABLED, PROMETHEUS_URL, LOKI_URL. Env vars take precedence over the file when both are set.
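The precedence rule (env var over file value over default) can be sketched as follows; `setting` is a hypothetical helper, not the monitor's actual code:

```python
import os

def setting(env_name, file_value, default=None):
    """Env var wins over the config file; the file wins over the default."""
    env_value = os.environ.get(env_name)
    if env_value:
        return env_value
    return file_value if file_value is not None else default

os.environ["PROMETHEUS_URL"] = "http://prom.internal:9090"
os.environ.pop("LOKI_URL", None)

# Env override beats the file-derived value.
assert setting("PROMETHEUS_URL", "http://localhost:9090") == "http://prom.internal:9090"
# No env var set: the file value is used.
assert setting("LOKI_URL", "http://localhost:3100") == "http://localhost:3100"
```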

Deprecated names: the monitor still accepts otel.prometheus and otel.loki for one release cycle, but emits a deprecation warning on startup. Rename to otel.prometheusUrl and otel.lokiUrl to silence the warning.

If you’re running the claude-code-otel Docker Compose stack locally, the defaults above match the standard ports. For hosted backends (Grafana Cloud, Datadog, etc.), point these URLs at your hosted Prometheus/Loki-compatible endpoints.

See Setting up the OTel stack for the full installation guide.

The orch-monitor daemon receives GitHub events through a smee.io tunnel — see GitHub webhooks for orch-monitor for the why and the full setup flow. The webhook config is split across two files because the channel URL is per-machine (one daemon, one tunnel, every project on the laptop) while the env-var name is team-wide.

~/.config/catalyst/config.json — cross-project, per-machine, not committed:

```json
{
  "catalyst": {
    "monitor": {
      "github": {
        "smeeChannel": "https://smee.io/<channel-id>"
      }
    }
  }
}
```

.catalyst/config.json — per-repo, committed, team-wide:

```json
{
  "catalyst": {
    "monitor": {
      "github": {
        "webhookSecretEnv": "CATALYST_WEBHOOK_SECRET",
        "watchRepos": [
          "coalesce-labs/catalyst",
          "coalesce-labs/adva"
        ]
      },
      "linear": {
        "webhookSecretEnv": "CATALYST_LINEAR_WEBHOOK_SECRET"
      }
    }
  }
}
```
| Field | Where | Type | Default | Description |
| --- | --- | --- | --- | --- |
| `catalyst.monitor.github.smeeChannel` | `~/.config/catalyst/config.json` | string | (none) | Per-machine smee.io channel URL the daemon tunnels deliveries through |
| `catalyst.monitor.github.webhookSecretEnv` | `.catalyst/config.json` | string | `"CATALYST_WEBHOOK_SECRET"` | Name of the env var the HMAC secret value is read from at runtime |
| `catalyst.monitor.github.watchRepos` | `.catalyst/config.json` | string[] | `[]` | Repos (owner/repo) subscribed at daemon startup — additive on top of worker-driven auto-discovery. See Persistent watch list. |
| `catalyst.monitor.linear.webhookSecretEnv` | `.catalyst/config.json` | string | `"CATALYST_LINEAR_WEBHOOK_SECRET"` | Name of the env var the Linear HMAC secret is read from. Empty/missing → `POST /api/webhook/linear` returns 503. See Linear webhooks. |
| `catalyst.monitor.suppressVersionWarning` | `.catalyst/config.json` | boolean | false | Suppress the version-drift warning printed by `catalyst-monitor start` / `restart` when running an older daemon version than the highest available in the plugin cache. See Version drift detection. |

Environment variable overrides:

  • CATALYST_SMEE_CHANNEL — overrides any file-derived channel.
  • The env var named by webhookSecretEnv (default CATALYST_WEBHOOK_SECRET) holds the shared GitHub HMAC secret value.
  • The env var named by monitor.linear.webhookSecretEnv (default fallback CATALYST_LINEAR_WEBHOOK_SECRET) holds the Linear HMAC secret value.

If the channel is missing from both files (and unset in env), the receiver disables itself silently and the daemon falls back to 10-minute polling. Run plugins/dev/scripts/setup-webhooks.sh to provision both files and the secret.

Deprecated location: catalyst.monitor.github.smeeChannel was originally written to .catalyst/config.json (Layer 1). The monitor still reads that location for one release cycle and emits a one-shot deprecation warning on startup if it finds a value there. Re-running setup-webhooks.sh migrates the value to the right home and clears it from the committed config.
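Putting the resolution order together — env var, per-machine file, deprecated per-repo location, then the polling fallback — a sketch (the `smee_channel` helper is hypothetical):

```python
def smee_channel(env, machine_config, repo_config):
    """Resolve the smee channel URL and report which source supplied it."""
    if env.get("CATALYST_SMEE_CHANNEL"):
        return env["CATALYST_SMEE_CHANNEL"], "env"
    if machine_config:
        return machine_config, "machine-config"
    if repo_config:
        return repo_config, "deprecated-repo-config"  # one-shot warning
    return None, "polling-fallback"                   # receiver disabled

assert smee_channel({}, "https://smee.io/abc", None)[1] == "machine-config"
assert smee_channel({}, None, None) == (None, "polling-fallback")
```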

Per-repo configuration for the orchestrator’s production deploy state machine. When a repo emits GitHub Deployments, the orchestrator’s Phase 4 loop watches deployment_status events on the merge SHA and only writes status: "done" after a success on the configured production environment. Repos that don’t emit Deployments opt out via skipDeployVerification: true (the default for unknown repos), and the orchestrator short-circuits MERGED → done immediately.

```json
{
  "catalyst": {
    "deploy": {
      "coalesce-labs/adva": {
        "timeoutSec": 1800,
        "productionEnvironment": "production",
        "stagingEnvironment": "staging",
        "skipDeployVerification": false
      },
      "coalesce-labs/catalyst": {
        "skipDeployVerification": true
      }
    }
  }
}
```
| Field | Where | Type | Default | Description |
| --- | --- | --- | --- | --- |
| `catalyst.deploy.<repo>.timeoutSec` | `.catalyst/config.json` | integer | 1800 | Hard timeout for the deploy phase. After this elapses without a `deployment_status` resolution, the orchestrator escalates with comms.attention and writes `status: "stalled"`. |
| `catalyst.deploy.<repo>.productionEnvironment` | `.catalyst/config.json` | string | `"production"` | GitHub deployment environment that gates `status: "done"`. |
| `catalyst.deploy.<repo>.stagingEnvironment` | `.catalyst/config.json` | string | `"staging"` | Optional staging environment shown in the dashboard but not gating. |
| `catalyst.deploy.<repo>.skipDeployVerification` | `.catalyst/config.json` | boolean | true | When true, MERGED → done immediately (today's CTL-133 behavior). When false, the full lifecycle (merged → deploying → done / deploy-failed) applies. |

Lifecycle states the orchestrator writes to the worker’s signal file (CTL-211):

| Status | Trigger | Notes |
| --- | --- | --- |
| `merged` | `gh pr view` returns state=MERGED and `skipDeployVerification: false` | PR landed, waiting for deploy to start |
| `deploying` | `github.deployment.created` (or `deployment_status` in_progress/pending) for production env on merge SHA | Deploy in flight |
| `done` | `github.deployment_status.success` for production env | Terminal success — Linear ticket transitions to Done |
| `deploy-failed` | `github.deployment_status.failure` or `error` for production env | |
| `stalled` | `timeoutSec` elapsed without resolution | Escalates with comms.attention `"deploy-timeout"` |

The retry budget is currently fixed at 3 attempts per worker. After the budget is exhausted, attention is raised as deploy-budget-exhausted and the worker stays at deploy-failed until a human intervenes.

Repos without GitHub Deployments: catalyst itself, repos using bare git push deploys, and most CI-only setups. Set skipDeployVerification: true (the default) for these — the worker’s terminal state will be done immediately on PR merge, matching today’s CTL-133 contract.
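The transitions above can be sketched as a tiny state machine; this is an illustration of the documented behavior, not the orchestrator's implementation:

```python
def next_status(current, event, skip_verification=False):
    """Map (current status, event) to the next deploy-phase status."""
    if event == "pr_merged":
        # Repos without Deployments short-circuit MERGED -> done.
        return "done" if skip_verification else "merged"
    table = {
        ("merged", "deployment_created"): "deploying",
        ("deploying", "deployment_success"): "done",
        ("deploying", "deployment_failure"): "deploy-failed",
    }
    if (current, event) in table:
        return table[(current, event)]
    return "stalled" if event == "timeout" else current

assert next_status(None, "pr_merged", skip_verification=True) == "done"
assert next_status("merged", "deployment_created") == "deploying"
assert next_status("deploying", "deployment_success") == "done"
```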

The monitor dashboard supports AI-powered status summaries. Configuration spans both layers:

Project config (.catalyst/config.json) — opt-in toggle:

```json
{
  "catalyst": {
    "ai": {
      "enabled": true
    }
  }
}
```

Secrets config (~/.config/catalyst/config-{projectKey}.json) — provider credentials:

```json
{
  "ai": {
    "gateway": "https://gateway.ai.cloudflare.com/v1/{account_id}/{gateway_id}",
    "provider": "anthropic",
    "model": "claude-haiku-4-5-20251001",
    "apiKey": "sk-ant-..."
  }
}
```
| Field | Required | Default | Description |
| --- | --- | --- | --- |
| `ai.enabled` | Yes (project config) | false | Master toggle. No API calls when off. |
| `ai.gateway` | Yes (secrets) | — | Cloudflare AI Gateway URL |
| `ai.provider` | No | anthropic | AI provider: `anthropic` or `openai` |
| `ai.model` | No | claude-haiku-4-5-20251001 | Model ID |
| `ai.apiKey` | Yes (secrets) | — | Provider API key |

The AI briefing generates a natural-language status summary and suggests session labels based on Linear ticket context. It is on-demand (button click) or optionally auto-refreshing. Zero cost when disabled.

The monitor exposes POST /api/summarize for on-demand orchestrator summaries. Unlike the briefing endpoint (which routes through a Cloudflare AI gateway), summarize calls each provider directly using an API key sourced from an environment variable.

Project config (.catalyst/config.json):

```json
{
  "catalyst": {
    "ai": {
      "enabled": true,
      "defaultProvider": "anthropic",
      "defaultModel": "claude-sonnet-4-6",
      "providers": {
        "anthropic": { "apiKeyEnv": "ANTHROPIC_API_KEY" },
        "openai": { "apiKeyEnv": "OPENAI_API_KEY" },
        "grok": { "apiKeyEnv": "XAI_API_KEY" }
      }
    }
  }
}
```
| Field | Required | Default | Description |
| --- | --- | --- | --- |
| `ai.defaultProvider` | No | anthropic | Provider used when request omits `provider` |
| `ai.defaultModel` | No | claude-sonnet-4-6 | Model used when request omits `model` |
| `ai.providers.{name}.apiKeyEnv` | Yes (per provider) | — | Name of the env var that holds that provider's API key |

Only providers whose apiKeyEnv resolves to a non-empty value at monitor startup are considered enabled. If no providers have their env var set, the endpoint returns 503 {"error": "AI not configured"}.

Request body (POST /api/summarize):

| Field | Required | Default | Description |
| --- | --- | --- | --- |
| `orchId` | Yes | — | Orchestrator directory name (e.g. `orch-2026-04-22-3`) |
| `template` | No | `run-summary` | `run-summary`, `attention-digest`, or `worker-status` |
| `provider` | No | config default | `anthropic`, `openai`, or `grok` |
| `model` | No | config default | Provider-specific model ID |

Response body (200 OK):

```json
{
  "summary": "string",
  "provider": "anthropic",
  "model": "claude-sonnet-4-6",
  "cost": 0.0123,
  "tokens": 1500,
  "cached": false,
  "generatedAt": "2026-04-22T20:00:00.000Z"
}
```

Results are cached in-memory for 5 minutes keyed by (orchId, template, snapshotHash, provider, model). When the cache hits, cached is true and no provider call is made. A simple per-provider rate limiter (concurrency + minimum interval) returns 429 on bursts.
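A sketch of that caching behavior, assuming a simple dict-backed TTL cache (illustrative only; the monitor's internals may differ):

```python
import time

CACHE_TTL = 300  # five minutes
_cache = {}

def summarize(orch_id, template, snapshot_hash, provider, model, generate):
    """Return a cached summary when the full key tuple hits within TTL;
    otherwise call the (injected) provider function and cache the result."""
    key = (orch_id, template, snapshot_hash, provider, model)
    hit = _cache.get(key)
    if hit and time.time() - hit["at"] < CACHE_TTL:
        return {**hit["body"], "cached": True}
    body = {"summary": generate(), "cached": False}
    _cache[key] = {"at": time.time(), "body": body}
    return body

first = summarize("orch-1", "run-summary", "abc", "anthropic", "m", lambda: "ok")
second = summarize("orch-1", "run-summary", "abc", "anthropic", "m", lambda: "new")
assert first["cached"] is False
assert second["cached"] is True and second["summary"] == "ok"  # no provider call
```

Because `snapshotHash` is part of the key, any change to the orchestrator's state naturally misses the cache and triggers a fresh summary.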

Define the commands that run when creating a new worktree via /create-worktree or /orchestrate. This replaces the default auto-detected setup (dependency install + thoughts init) with full project control — like conductor.json’s lifecycle hooks.

```json
{
  "catalyst": {
    "worktree": {
      "setup": [
        "humanlayer thoughts init --directory ${DIRECTORY} --profile ${PROFILE}",
        "humanlayer thoughts sync",
        "bun install"
      ]
    }
  }
}
```

Commands run in order, inside the new worktree directory. Each command supports variable substitution:

| Variable | Value |
| --- | --- |
| `${WORKTREE_PATH}` | Absolute path to the new worktree |
| `${BRANCH_NAME}` | Git branch name |
| `${TICKET_ID}` | Same as branch name |
| `${REPO_NAME}` | Repository name |
| `${DIRECTORY}` | Thoughts directory (from `catalyst.thoughts.directory` or repo name) |
| `${PROFILE}` | Thoughts profile (from `catalyst.thoughts.profile` or auto-detected) |
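The substitution is a plain string replacement; a sketch (the `expand` helper is hypothetical):

```python
def expand(command, variables):
    """Replace each ${NAME} placeholder in a setup command."""
    for name, value in variables.items():
        command = command.replace("${" + name + "}", value)
    return command

cmd = expand(
    "humanlayer thoughts init --directory ${DIRECTORY} --profile ${PROFILE}",
    {"DIRECTORY": "api", "PROFILE": "acme"},
)
assert cmd == "humanlayer thoughts init --directory api --profile acme"
```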

If catalyst.worktree.setup is not configured, the script falls back to auto-detected setup: make setup or bun/npm install, then humanlayer thoughts init + sync. Once you define setup, only your commands run — the auto-detection is skipped entirely.

Catalyst now pre-trusts newly created worktrees in Claude Code automatically, so you do not need to add a separate trust-workspace.sh command to your setup array.

Optional. Add this block to enable /orchestrate — see Orchestration for full documentation.

```json
{
  "catalyst": {
    "orchestration": {
      "worktreeDir": null,
      "maxParallel": 3,
      "hooks": {
        "setup": ["bun install"],
        "teardown": []
      },
      "workerCommand": "/catalyst-dev:oneshot",
      "workerModel": "opus",
      "testRequirements": {
        "backend": ["unit"],
        "frontend": ["unit"],
        "fullstack": ["unit"]
      },
      "verifyBeforeMerge": true,
      "allowSelfReportedCompletion": false
    }
  }
}
```
| Field | Type | Default | Description |
| --- | --- | --- | --- |
| `worktreeDir` | string\|null | `~/catalyst/wt/<projectKey>` | Base directory for worktrees |
| `maxParallel` | number | 3 | Max concurrent workers |
| `hooks.setup` | string[] | `[]` | Run after worktree creation (supports `${WORKTREE_PATH}`, `${BRANCH_NAME}`, `${TICKET_ID}`, `${REPO_NAME}`, `${DIRECTORY}` variables) |
| `hooks.teardown` | string[] | `[]` | Run before worktree removal |
| `workerCommand` | string | `/catalyst-dev:oneshot` | Plugin-namespaced skill to dispatch in each worker. Must be in `/<plugin>:<skill>` form — bare slashes (e.g. `/oneshot`) are rejected at dispatch. |
| `workerModel` | string | opus | Model for worker sessions |
| `testRequirements` | object | See above | Required test types by scope (backend/frontend/fullstack) |
| `verifyBeforeMerge` | boolean | true | Run adversarial verification on merged commits (post-merge) |
| `allowSelfReportedCompletion` | boolean | false | When true, verification failures are advisory (the wave advances). When false (default), verification failures block wave advancement until remediation is filed. |

Optional. Controls where Catalyst skills auto-file improvement tickets at run end, and whether they may do so without prompting. CTL-183 ships the routing layer; CTL-176 ships the findings-collection layer that populates it: skills call plugins/dev/scripts/add-finding.sh to record observations during a run, and the end-of-run hook drains the queue via file-feedback.sh.

```json
{
  "catalyst": {
    "feedback": {
      "autoFile": false,
      "githubRepo": "coalesce-labs/catalyst",
      "labels": ["auto-submitted"]
    }
  }
}
```
| Field | Type | Default | Description |
| --- | --- | --- | --- |
| `autoFile` | boolean | false | When true, skills may auto-file findings at run end without prompting. When false or absent, skills prompt before filing each run. |
| `githubRepo` | string | `"coalesce-labs/catalyst"` | `<owner>/<repo>` slug used when Linear filing fails or is unavailable. Defaults to upstream; override to redirect findings to your own fork. |
| `labels` | string[] | `["auto-submitted"]` | Base labels applied to every auto-filed ticket. The invoking skill name is appended automatically (e.g., oneshot, orchestrate, implement-plan). |

Skills attempt linearis issues create first, using catalyst.linear.teamKey. On Linear failure (no API key, team mismatch, CLI unavailable), they fall back to gh issue create --repo <feedback.githubRepo>. Destinations are never split — GitHub is used only when Linear is unavailable.
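The routing rule — Linear first, GitHub only as a whole-run fallback — can be sketched as (hypothetical helper, not the scripts' actual code):

```python
def file_ticket(finding, linear_available, github_repo):
    """Route a finding: linearis when Linear works, gh otherwise.
    Destinations are never split within a run."""
    if linear_available:
        return ("linearis", "issues", "create", finding["title"])
    return ("gh", "issue", "create", "--repo", github_repo, finding["title"])

finding = {"title": "oneshot: flaky retry in merge step"}
assert file_ticket(finding, True, "coalesce-labs/catalyst")[0] == "linearis"
assert file_ticket(finding, False, "coalesce-labs/catalyst")[4] == "coalesce-labs/catalyst"
```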

The first time a skill is ready to auto-file, it prompts:

```
Would you like us to automatically file tickets at the end of each run? [Y/n]
```

  • Yes → `autoFile` is set to `true` in `.catalyst/config.json`; no prompt on subsequent runs.
  • No → nothing is persisted; the prompt will return on the next run.

Revoke by setting autoFile to false or deleting the feedback block. The plugins/dev/scripts/feedback-consent.sh helper exposes check, grant, and status subcommands for scripted use.

See Integrations › Linear ⇄ GitHub Sync for the maintainer-side setup that mirrors auto-submitted-labeled GitHub issues back into Linear.

Skills record improvement findings the moment they are observed by calling plugins/dev/scripts/add-finding.sh with --title and --body. Each call appends one JSON line to a per-run queue; the end-of-run hook reads the queue and files one ticket per line via file-feedback.sh (respecting consent and routing above).

Queue path resolution (first match wins):

  1. $CATALYST_FINDINGS_FILE — orchestrator dispatch sets this to <orch-dir>/findings.jsonl so the orchestrator and all workers share one queue per run.
  2. .catalyst/findings/${CATALYST_SESSION_ID}.jsonl — standalone oneshot / implement-plan runs, scoped to the catalyst session id.
  3. .catalyst/findings/current.jsonl — final fallback when neither var is set.
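That first-match-wins resolution can be sketched as (the `findings_file` helper is illustrative, not `add-finding.sh` itself):

```python
def findings_file(env):
    """Resolve the findings queue path: explicit file, session-scoped
    file, then the shared fallback."""
    if env.get("CATALYST_FINDINGS_FILE"):
        return env["CATALYST_FINDINGS_FILE"]
    if env.get("CATALYST_SESSION_ID"):
        return f".catalyst/findings/{env['CATALYST_SESSION_ID']}.jsonl"
    return ".catalyst/findings/current.jsonl"

assert findings_file({"CATALYST_FINDINGS_FILE": "/orch/findings.jsonl"}) == "/orch/findings.jsonl"
assert findings_file({"CATALYST_SESSION_ID": "s1"}) == ".catalyst/findings/s1.jsonl"
assert findings_file({}) == ".catalyst/findings/current.jsonl"
```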

Each line has the shape:

```json
{"ts":"2026-04-24T20:30:00Z","skill":"oneshot","title":"","body":"","severity":"low","tags":[]}
```

The hook deletes the queue file after a successful full drain. On partial failure (some entries filed, some not), the queue is preserved so the next run can retry.

Optional. Controls where orchestrator artifacts are persisted and how long they are retained. The archive is a hybrid SQLite index plus filesystem blob store written by catalyst-archive (see ADR-009).

Goes in the global user config at ~/.config/catalyst/config.json:

```json
{
  "archive": {
    "root": "~/catalyst/archives",
    "syncToThoughts": false,
    "retention": { "days": 90 }
  }
}
```
| Field | Type | Default | Description |
| --- | --- | --- | --- |
| `root` | string | `~/catalyst/archives` | Root directory for archived blobs. One subdirectory per orchestrator id. |
| `syncToThoughts` | boolean | false | When true, `catalyst-archive sweep` also copies the top-level SUMMARY.md to `thoughts/shared/handoffs/`. |
| `retention.days` | number\|null | null (no prune) | Default threshold for `catalyst-archive prune` when `--older-than` is not supplied. |

Environment variables override these paths when set:

  • CATALYST_ARCHIVE_ROOT — overrides archive.root
  • CATALYST_RUNS_DIR — orchestrator runtime source (default ~/catalyst/runs)
  • CATALYST_DB_FILE — SQLite index path (default ~/catalyst/catalyst.db)
  • CATALYST_COMMS_DIR — catalyst-comms source (default ~/catalyst/comms/channels)

The archive root is created on first sweep and tolerates missing optional artifacts (e.g., a worker without a rollup fragment). Re-running the sweep is idempotent (all upserts).

## Workflow Context (.catalyst/.workflow-context.json)

Auto-managed by Claude Code hooks and skills. Not committed to git.

```json
{
  "lastUpdated": "2025-10-26T10:30:00Z",
  "currentTicket": "PROJ-123",
  "orchestration": null,
  "mostRecentDocument": {
    "type": "plans",
    "path": "thoughts/shared/plans/...",
    "created": "2025-10-26T10:30:00Z",
    "ticket": "PROJ-123"
  },
  "workflow": {
    "research": [],
    "plans": [],
    "handoffs": [],
    "prs": []
  }
}
```
| Field | Type | Description |
| --- | --- | --- |
| `currentTicket` | string \| null | Active ticket ID for this worktree |
| `orchestration` | string \| null | Orchestration run name (set by `create-worktree.sh --orchestration`). Groups orchestrator + workers for per-run telemetry via the `catalyst.orchestration` OTel resource attribute. |

This file is what enables skill chaining — when you save research, create-plan finds it automatically. When you save a plan, implement-plan finds it. You never need to specify file paths between workflow phases.

The workflow-context.sh script manages this file programmatically:

```sh
workflow-context.sh init                            # Create file if missing
workflow-context.sh set-ticket PROJ-123             # Set currentTicket (no document needed)
workflow-context.sh set-orchestration NAME          # Set orchestration run name
workflow-context.sh add research "path" "PROJ-123"  # Add document + set ticket
workflow-context.sh recent research                 # Get most recent document of type
workflow-context.sh most-recent                     # Get most recent document (any type)
workflow-context.sh ticket PROJ-123                 # Get all documents for a ticket
```

The workflow context file is created automatically at several points:

  • Skill prerequisites — all workflow skills call check-project-setup.sh which runs workflow-context.sh init
  • Worktree creation — create-worktree.sh initializes the file and sets currentTicket from the worktree name (e.g., worktree ENG-123 sets ticket to ENG-123)
  • Ticket-based skills — /oneshot PROJ-123 calls set-ticket immediately after parsing the ticket, before any research begins

The workflow context file is also read by direnv to populate OTEL_RESOURCE_ATTRIBUTES with the current ticket. This enables per-ticket telemetry correlation in Claude Code’s native OpenTelemetry support.

Setup: Add a .envrc to your repo root:

```sh
source_up
use_otel_context "your-project-name"
```

The use_otel_context function (from ~/.config/direnv/lib/otel.sh) sets these OTEL resource attributes:

| Attribute | Source |
| --- | --- |
| `project` | Argument to `use_otel_context` |
| `hostname` | Machine short name |
| `git.branch` | Current git branch |
| `linear.key` | Ticket from branch name, fallback to `currentTicket` in workflow context |
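The `linear.key` derivation (ticket pattern from the branch, else `currentTicket`) can be sketched as — an illustration of the documented behavior, not the contents of otel.sh:

```python
import json
import re

def linear_key(branch, workflow_context_json):
    """Extract a TICKET-123-style key from the branch name, falling
    back to currentTicket in the workflow context file."""
    match = re.search(r"[A-Z]+-\d+", branch)
    if match:
        return match.group(0)
    return json.loads(workflow_context_json).get("currentTicket")

assert linear_key("ENG-123-fix-login", "{}") == "ENG-123"
assert linear_key("main", '{"currentTicket": "PROJ-9"}') == "PROJ-9"
```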

source_up inherits environment from parent .envrc files (e.g., profile-based secrets at the workspace root). When using worktrees, create-worktree.sh generates a .envrc and runs direnv allow automatically.

direnv is recommended when working across multiple repositories. It automatically loads per-directory environment variables, keeping API keys isolated between projects and populating OTel resource attributes for observability.

```sh
brew install direnv
```

Add the shell hook to your profile (~/.zshrc or ~/.bashrc):

```sh
eval "$(direnv hook zsh)"  # or bash
```

Catalyst ships two direnv library functions. Install them to ~/.config/direnv/lib/ so they’re available in all .envrc files:

`use_profile` — loads environment variables from a named profile file:

```sh
# ~/.config/direnv/lib/profiles.sh
# Loads vars from ~/.config/direnv/profiles/{name}.env
# Later profiles override earlier ones.
```

`use_otel_context` — sets OTEL_RESOURCE_ATTRIBUTES for telemetry correlation:

```sh
# ~/.config/direnv/lib/otel.sh
# Sets project, hostname, git.branch, linear.key, catalyst.orchestration
```

Create profile files at ~/.config/direnv/profiles/ to separate credentials by project:

```
~/.config/direnv/profiles/
├── personal.env     # Global defaults (Cloudflare, AWS, PostHog)
├── adva.env         # Client-specific keys (Supabase, Postmark, geocoding APIs)
├── slides.env       # Project-specific keys (ElevenLabs, Gemini TTS)
└── accounting.env   # Project-specific keys (Wave, Monarch)
```

Each file is a simple KEY=value format — no export prefix needed (direnv handles that).
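The later-overrides-earlier layering can be sketched as (illustrative parser, not direnv's implementation):

```python
def load_profiles(*profile_texts):
    """Merge KEY=value profile files in order; later files win."""
    env = {}
    for text in profile_texts:
        for line in text.splitlines():
            line = line.strip()
            if line and not line.startswith("#"):
                key, _, value = line.partition("=")
                env[key] = value
    return env

env = load_profiles(
    "AWS_REGION=us-east-1\nPOSTHOG_KEY=phc_base",   # personal.env
    "POSTHOG_KEY=phc_client",                       # client override
)
assert env == {"AWS_REGION": "us-east-1", "POSTHOG_KEY": "phc_client"}
```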

Each project root gets an .envrc file that layers profiles and sets OTel context:

```sh
# ~/code-repos/github/acme/project/.envrc
use_profile personal     # Base credentials
use_profile acme         # Client-specific overrides
use_otel_context "acme"  # OTel resource attributes
```

Sub-directories (e.g., Conductor workspaces or worktrees) inherit from the parent:

```sh
# ~/conductor/workspaces/acme/workspace-1/.envrc
source_up                # Inherit from parent .envrc
use_otel_context "acme"  # OTel context for this workspace
```

The source_up directive walks up the directory tree until it finds a parent .envrc, chaining configurations. This means worktrees and Conductor workspaces automatically get the parent project’s API keys without duplicating them.

Without direnv, API keys end up in shell profiles (.zshrc) where they’re global — every project sees every key. With direnv profiles:

  • Credentials are scoped — cd into a project and only its keys are loaded
  • OTel attributes are automatic — every Claude Code session gets the right project and linear.key labels without manual configuration
  • Worktrees inherit — source_up means new worktrees get the right environment immediately
  • No secret leakage — .envrc files are committed (they reference profiles, not secrets); profile .env files are local-only

The thoughts system provides git-backed persistent context across sessions. The setup script handles initialization, but for manual setup:

```sh
cd /path/to/your-project
humanlayer thoughts init
# Or with a specific profile for multi-project isolation
humanlayer thoughts init --profile acme
```

Directory structure:

```
<org_root>/
├── thoughts/                 # Shared by all org projects
│   ├── repos/
│   │   ├── project-a/
│   │   │   ├── {your_name}/
│   │   │   └── shared/
│   │   └── project-b/
│   └── global/
├── project-a/
│   └── thoughts/             # Symlinks to ../thoughts/repos/project-a/
└── project-b/
    └── thoughts/             # Symlinks to ../thoughts/repos/project-b/
```
```sh
humanlayer thoughts sync                        # Sync changes
humanlayer thoughts status                      # Check status
humanlayer thoughts sync -m "Updated research"  # Sync with message
# Back up to GitHub
cd <org_root>/thoughts
gh repo create my-thoughts --private --source=. --push
```

Change projectKey in .catalyst/config.json to point to a different secrets file:

```json
{
  "catalyst": {
    "projectKey": "work"
  }
}
```

For fully isolated multi-client setups, see Multi-Project Setup.

  1. File exists: `ls .catalyst/config.json`
  2. Valid JSON: `jq . .catalyst/config.json`
  3. Correct location: must be in the `.catalyst/` directory (or `.claude/` for backward compat)
  4. Secrets file exists: `ls ~/.config/catalyst/config-{projectKey}.json`
```sh
humanlayer thoughts status
humanlayer thoughts init  # Re-initialize if needed
```