
Introduction

What is PRECC?

PRECC (Predictive Error Correction for Claude Code) is a Rust tool that intercepts Claude Code's bash commands through the official PreToolUse hook mechanism. It fixes errors before they happen, saving tokens and eliminating retry loops.

Free for community users.

The Problem

Claude Code wastes a large number of tokens on preventable errors:

  • Directory errors – running cargo build in a parent directory with no Cargo.toml, then retrying after reading the error.
  • Retry loops – a failed command produces verbose output that Claude reads, reasons about, and retries. Each loop burns hundreds of tokens.
  • Verbose output – commands like find and ls -R emit thousands of lines that Claude must then process.

The Four Pillars

Context fixing (cd-prepend)

When a command such as cargo build or npm test is detected running in the wrong directory, prepends cd /correct/path && before execution.

GDB debugging

Detects opportunities to attach GDB for deeper debugging, providing structured debug information instead of a raw core dump.

Session mining

Mines Claude Code session logs for failure-fix pairs. When the same error occurs again, PRECC already knows the fix and applies it automatically.

Automation skills

A library of built-in and mined skills that match command patterns and rewrite them. Skills are defined as TOML files or SQLite rows, so they are easy to inspect, edit, and share.

How it works (the 30-second version)

  1. Claude Code is about to execute a bash command.
  2. The PreToolUse hook sends the command as JSON to precc-hook over stdin.
  3. precc-hook runs the command through its pipeline (skills, directory correction, compression) in under 3 ms.
  4. The corrected command is returned as JSON over stdout.
  5. Claude Code executes the corrected command.

Trivial fixes are merged; the reason for each rewrite is returned with the hook response, so every correction is auditable rather than silent.

Safety boundaries

PRECC only rewrites a command when semantic equivalence is provably preserved or user-verifiable. Destructive commands (rm, git push --force, git reset --hard) are never rewritten, even when they match a skill. Every transformation must be bounded: the rewritten command must still contain the original command's core tokens. Unbounded rewrites are automatically reverted. Every applied rewrite is logged and surfaced so you can audit, disable, or undo it.
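The boundedness rule can be sketched as a token-subset check. This is an illustrative sketch only; the function names and the noise-token list are assumptions, not PRECC's actual implementation.

```python
import shlex

# Shell connectors and the cd prefix itself don't count as "core" tokens.
SHELL_NOISE = {"&&", "||", ";", "|", "cd"}

def core_tokens(command: str) -> set[str]:
    """Tokens that must survive any rewrite."""
    return {tok for tok in shlex.split(command) if tok not in SHELL_NOISE}

def is_bounded(original: str, rewritten: str) -> bool:
    # A rewrite may add a cd-prepend or extra flags, but every core token
    # of the original must still be present; otherwise it is reverted.
    return core_tokens(original) <= core_tokens(rewritten)

# A cd-prepend keeps the original command intact, so it is bounded:
assert is_bounded("cargo build", "cd /home/user/projects/myapp && cargo build")
# Swapping the subcommand drops a core token, so this rewrite is rejected:
assert not is_bounded("cargo build", "cd /home/user/projects/myapp && cargo check")
```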

Adaptive compression

If a command fails after compression, PRECC automatically skips compression on the retry so Claude gets the full, uncompressed output to debug with.

Live usage statistics

Current version:

Metric
Hook invocations
Tokens saved
Savings ratio %
RTK rewrites
CD corrections
Hook latency ms (p50)
Unique users

Measured Savings (Ground Truth)

Savings by release

These numbers are updated automatically from anonymized telemetry data.

Links

Installation

Quick install (Linux / macOS)

curl -fsSL https://peria.ai/install.sh | bash

This downloads the latest release binary for your platform, verifies its SHA256 checksum, and places it in ~/.local/bin/.

After installing, initialize PRECC:

precc init

precc init registers the PreToolUse hook with Claude Code, creates the data directory, and initializes the skills database.

Installation options

SHA256 verification

By default, the installer verifies the binary against the published SHA256 checksum. To skip verification (not recommended):

curl -fsSL https://peria.ai/install.sh | bash -s -- --no-verify

Custom install prefix

Install to a custom location:

curl -fsSL https://peria.ai/install.sh | bash -s -- --prefix /opt/precc

Extra tools (--extras)

PRECC ships with optional companion tools. Install them with --extras:

curl -fsSL https://peria.ai/install.sh | bash -s -- --extras

This installs:

Tool            Purpose
RTK             Command rewriting toolkit
lean-ctx        Context compression for CLAUDE.md and prompt files
nushell         Structured shell for advanced pipelines
cocoindex-code  Code indexing for faster context resolution

Windows (PowerShell)

irm https://peria.ai/install.ps1 | iex

Then initialize:

precc init

Manual installation

  1. Download the release binary for your platform from GitHub Releases.
  2. Verify the SHA256 checksum against the .sha256 file in the release.
  3. Place the binary in a directory on your PATH (e.g. ~/.local/bin/).
  4. Run precc init.

Updating

precc update

Force an update to a specific version:

precc update --force --version 0.3.0

Enable automatic updates:

precc update --auto

Verifying the installation

$ precc --version
precc 0.3.0

$ precc savings
Session savings: 0 tokens (no commands intercepted yet)

If precc is not found, make sure ~/.local/bin is on your PATH.

Quick start

Get PRECC running in under 5 minutes.

Step 1: Install

curl -fsSL https://peria.ai/install.sh | bash

Step 2: Initialize

$ precc init
[precc] Hook registered with Claude Code
[precc] Created ~/.local/share/precc/
[precc] Initialized heuristics.db with 8 built-in skills
[precc] Ready.

Step 3: Verify the hook is active

$ precc skills list
  # Name               Type      Triggers
  1 cargo-wrong-dir    built-in  cargo build/test/clippy outside Rust project
  2 git-wrong-dir      built-in  git * outside a repo
  3 go-wrong-dir       built-in  go build/test outside Go module
  4 make-wrong-dir     built-in  make without Makefile in cwd
  5 npm-wrong-dir      built-in  npm/npx/pnpm/yarn outside Node project
  6 python-wrong-dir   built-in  python/pytest/pip outside Python project
  7 jj-translate       built-in  git * in jj-colocated repo
  8 asciinema-gif      built-in  asciinema rec

Step 4: Use Claude Code normally

Open Claude Code and work as usual. PRECC runs silently in the background. When Claude issues a command that would fail, PRECC corrects it before execution.

Example: cargo build in the wrong directory

Suppose your project lives in ~/projects/myapp/ and Claude issues:

cargo build

from ~/projects/ (one level too high, where there is no Cargo.toml).

Without PRECC: Claude receives the error could not find Cargo.toml in /home/user/projects or any parent directory, reads it, reasons about it, then retries with cd myapp && cargo build. Cost: roughly 2,000 wasted tokens.

With PRECC: the hook detects the missing Cargo.toml, finds it in myapp/, and rewrites the command to:

cd /home/user/projects/myapp && cargo build

Claude never sees the error. Zero tokens wasted.

Step 5: Check your savings

After the session, see how many tokens PRECC saved:

$ precc savings
Session Token Savings
=====================
Total estimated savings: 4,312 tokens

Breakdown:
  Pillar 1 (cd prepends):       2,104 tokens  (3 corrections)
  Pillar 4 (skill activations):   980 tokens  (2 activations)
  RTK rewrites:                 1,228 tokens  (5 rewrites)

Next steps

  • Skills – browse every available skill and learn how to create your own.
  • Hook pipeline – see what happens under the hood.
  • Savings – a detailed token-savings breakdown.

Licensing

PRECC comes in two tiers: Community (free) and Pro.

Community tier (free)

The Community tier includes:

  • All built-in skills (wrong-directory corrections, jj translation, and more)
  • The full hook pipeline with Pillar 1 and Pillar 4 support
  • The basic precc savings summary
  • Session mining with precc ingest
  • Unlimited local use

Pro tier

Pro unlocks additional features:

  • Detailed savings analysis – precc savings --all command-by-command breakdown
  • GIF recording – precc gif for creating animated terminal GIFs
  • IP geofencing compliance – for regulated environments
  • Email reports – precc mail report sends analytics reports
  • GitHub Actions analysis – precc gha for debugging failed workflows
  • Context compression – precc compress for CLAUDE.md optimization
  • Priority support

Activating a license

$ precc license activate XXXX-XXXX-XXXX-XXXX --email you@example.com
[precc] License activated for you@example.com
[precc] Plan: Pro
[precc] Expires: 2027-04-03

Checking license status

$ precc license status
License: Pro
Email:   you@example.com
Expires: 2027-04-03
Status:  Active

GitHub Sponsors activation

If you sponsor PRECC through GitHub Sponsors, your license is activated automatically via your GitHub email. No key is needed; just make sure your sponsor email matches:

$ precc license status
License: Pro (GitHub Sponsors)
Email:   you@example.com
Status:  Active (auto-renewed)

Device fingerprint

Each license is bound to a device fingerprint. View it with:

$ precc license fingerprint
Fingerprint: a1b2c3d4e5f6...

To transfer a license to a new machine, deactivate it first:

precc license deactivate

Then activate on the new machine.

License expired?

When a Pro license expires, PRECC falls back to the Community tier. All built-in skills and core features keep working; only Pro-specific features become unavailable. See the FAQ for details.

Hook pipeline

The precc-hook binary is the core of PRECC. It sits between Claude Code and the shell, processing every bash command in under 5 ms.

How Claude Code invokes the hook

Claude Code supports PreToolUse hooks: external programs that can inspect and modify tool input before execution. When Claude is about to run a bash command, it sends JSON to precc-hook over stdin and reads the response from stdout.

Pipeline stages

Claude Code
    |
    v
+---------------------------+
| 1. Parse JSON stdin       |  Read the command from Claude Code
+---------------------------+
    |
    v
+---------------------------+
| 2. Skill matching         |  Query heuristics.db for matching skills (Pillar 4)
+---------------------------+
    |
    v
+---------------------------+
| 3. Directory correction   |  Resolve correct working directory (Pillar 1)
+---------------------------+
    |
    v
+---------------------------+
| 4. GDB check              |  Detect debug opportunities (Pillar 2)
+---------------------------+
    |
    v
+---------------------------+
| 5. RTK rewriting          |  Apply command rewrites for token savings
+---------------------------+
    |
    v
+---------------------------+
| 6. Emit JSON stdout       |  Return modified command to Claude Code
+---------------------------+
    |
    v
  Shell executes corrected command

Example: JSON input and output

Input (from Claude Code)

{
  "tool_input": {
    "command": "cargo build"
  }
}

PRECC detects that the current directory has no Cargo.toml, but ./myapp/Cargo.toml exists.

Output (to Claude Code)

{
  "hookSpecificOutput": {
    "updatedInput": {
      "command": "cd /home/user/projects/myapp && cargo build"
    }
  }
}

If no modification is needed, updatedInput.command is empty and Claude Code uses the original command.

Stage details

Stage 1: Parse JSON

Reads the complete JSON object from stdin and extracts tool_input.command. If parsing fails, the hook exits immediately and Claude Code uses the original command (fail-open design).
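The fail-open contract can be illustrated with a minimal sketch (Python here for brevity; the real hook is Rust, and rewrite below is a placeholder for the later pipeline stages):

```python
import json

def rewrite(command: str) -> str:
    # Placeholder for stages 2-5 (skills, cd-prepend, RTK).
    return command

def run_hook(stdin_text: str) -> str:
    try:
        payload = json.loads(stdin_text)
        command = payload["tool_input"]["command"]
    except (json.JSONDecodeError, KeyError, TypeError):
        return "{}"  # fail open: no output means the original command runs
    fixed = rewrite(command)
    if fixed == command:
        return "{}"  # nothing to change
    return json.dumps(
        {"hookSpecificOutput": {"updatedInput": {"command": fixed}}}
    )

# Malformed input never blocks the command:
assert run_hook("not json at all") == "{}"
assert run_hook('{"tool_input": {"command": "ls"}}') == "{}"
```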

Stage 2: Skill matching

Queries the SQLite heuristics database for skills whose trigger patterns match the command. Skills are checked in priority order; both built-in TOML skills and mined skills are evaluated.
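Priority-ordered matching can be sketched like this. The cargo-wrong-dir trigger matches the exported TOML shown elsewhere in these docs; the list structure itself is illustrative, not the real heuristics.db schema:

```python
import re

# Skills precompiled and sorted by ascending priority; the first match wins.
SKILLS = sorted(
    [
        {"name": "cargo-wrong-dir",
         "trigger": re.compile(r"^cargo\s+(build|test|clippy|run|check|bench|doc)"),
         "priority": 10},
        {"name": "asciinema-gif",
         "trigger": re.compile(r"^asciinema\s+rec"),
         "priority": 20},
    ],
    key=lambda skill: skill["priority"],
)

def match_skill(command: str):
    for skill in SKILLS:
        if skill["trigger"].search(command):
            return skill["name"]
    return None  # no skill fired; later stages may still act

assert match_skill("cargo build --release") == "cargo-wrong-dir"
assert match_skill("ls -la") is None
```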

Stage 3: Directory correction

For build commands (cargo, go, make, npm, python, etc.), checks whether the expected project file exists in the current directory. If not, scans nearby directories for the closest match and prepends cd <dir> &&.

The directory scan uses a cached filesystem index with a 5-second TTL to stay fast.
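A minimal sketch of the marker-file search (illustrative only: the function name, the one-level scan depth, and the absence of caching are simplifications of what the docs describe):

```python
from pathlib import Path

def resolve_cd_prepend(command: str, cwd: Path, marker: str) -> str:
    """Prepend `cd <dir> &&` when the project marker lives one level down."""
    if (cwd / marker).exists():
        return command  # already in the right directory
    matches = [d for d in sorted(cwd.iterdir())
               if d.is_dir() and (d / marker).exists()]
    if len(matches) == 1:
        return f"cd {matches[0]} && {command}"
    return command  # not found or ambiguous: fail open, leave untouched
```

The real scanner consults a cached filesystem index (5-second TTL) instead of hitting the disk on every call.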

Stage 4: GDB check

If a command is likely to crash (for example, running a debug binary), PRECC can suggest or inject a GDB wrapper that captures structured debug output instead of a raw crash log.

Stage 5: RTK rewriting

Applies RTK (Rewrite Toolkit) rules that shorten verbose commands, suppress noisy output, or restructure commands for token efficiency.

Stage 6: Emit JSON

Serializes the modified command back to JSON and writes it to stdout. If nothing changed, the output signals Claude Code to use the original command.

Performance

The entire pipeline completes in under 5 ms (p99). Key optimizations:

  • SQLite in WAL mode for lock-free concurrent reads
  • Precompiled regex patterns for skill matching
  • Cached filesystem scans (5-second TTL)
  • No network calls on the hot path
  • Fail-open: any error falls back to the original command

Testing the hook manually

You can invoke the hook directly:

$ echo '{"tool_input":{"command":"cargo build"}}' | precc-hook
{"hookSpecificOutput":{"updatedInput":{"command":"cd /home/user/myapp && cargo build"}}}

Skills

Skills are the pattern-matching rules PRECC uses to detect and correct commands. They are either built-in (shipped as TOML files) or mined from session logs.

Built-in skills

Skill             Trigger                                          Action
cargo-wrong-dir   cargo build/test/clippy outside a Rust project   Prepend cd to the nearest Cargo.toml directory
git-wrong-dir     git * outside a git repository                   Prepend cd to the nearest .git directory
go-wrong-dir      go build/test outside a Go module                Prepend cd to the nearest go.mod directory
make-wrong-dir    make with no Makefile in the current directory   Prepend cd to the nearest Makefile directory
npm-wrong-dir     npm/npx/pnpm/yarn outside a Node project         Prepend cd to the nearest package.json directory
python-wrong-dir  python/pytest/pip outside a Python project       Prepend cd to the nearest Python project
jj-translate      git * in a jj-colocated repository               Rewrite to the equivalent jj command
asciinema-gif     asciinema rec                                    Rewrite to precc gif

Listing skills

$ precc skills list
  # Name               Type      Triggers
  1 cargo-wrong-dir    built-in  cargo build/test/clippy outside Rust project
  2 git-wrong-dir      built-in  git * outside a repo
  3 go-wrong-dir       built-in  go build/test outside Go module
  4 make-wrong-dir     built-in  make without Makefile in cwd
  5 npm-wrong-dir      built-in  npm/npx/pnpm/yarn outside Node project
  6 python-wrong-dir   built-in  python/pytest/pip outside Python project
  7 jj-translate       built-in  git * in jj-colocated repo
  8 asciinema-gif      built-in  asciinema rec
  9 fix-pytest-path    mined     pytest with wrong test path

Showing skill details

$ precc skills show cargo-wrong-dir
Name:        cargo-wrong-dir
Type:        built-in
Source:      skills/builtin/cargo-wrong-dir.toml
Description: Detects cargo commands run outside a Rust project and prepends
             cd to the directory containing the nearest Cargo.toml.
Trigger:     ^cargo\s+(build|test|clippy|run|check|bench|doc)
Action:      prepend_cd
Marker:      Cargo.toml
Activations: 12

Exporting a skill as TOML

$ precc skills export cargo-wrong-dir
[skill]
name = "cargo-wrong-dir"
description = "Prepend cd for cargo commands outside a Rust project"
trigger = "^cargo\\s+(build|test|clippy|run|check|bench|doc)"
action = "prepend_cd"
marker = "Cargo.toml"
priority = 10

Editing a skill

$ precc skills edit cargo-wrong-dir

This opens the skill definition in your $EDITOR. After you save, the skill is reloaded automatically.

The advise command

precc skills advise analyzes your recent sessions and suggests new skills based on repeated patterns:

$ precc skills advise
Analyzed 47 commands from the last session.

Suggested skills:
  1. docker-wrong-dir: You ran `docker compose up` outside the project root 3 times.
     Suggested trigger: ^docker\s+compose
     Suggested marker: docker-compose.yml

  2. terraform-wrong-dir: You ran `terraform plan` outside the infra directory 2 times.
     Suggested trigger: ^terraform\s+(plan|apply|init)
     Suggested marker: main.tf

Accept suggestion [1/2/skip]?

Clustering skills

$ precc skills cluster

Groups similar mined skills to help identify redundant or overlapping patterns.

Mined vs. built-in skills

Built-in skills ship with PRECC and are defined in skills/builtin/*.toml. They cover the most common directory mistakes.

Mined skills are created from your session logs by precc ingest or the precc-learner daemon. They live in ~/.local/share/precc/heuristics.db and are specific to your workflow. See Mining for details.

Savings

PRECC tracks the estimated token savings of every interception. Use precc savings to see how much waste PRECC prevented.

Quick summary

$ precc savings
Session Token Savings
=====================
Total estimated savings: <span data-stat="session_tokens_saved">8,741</span> tokens

Breakdown:
  Pillar 1 (cd prepends):         <span data-stat="session_p1_tokens">3,204</span> tokens  (<span data-stat="session_p1_count">6</span> corrections)
  Pillar 4 (skill activations):   <span data-stat="session_p4_tokens">1,560</span> tokens  (<span data-stat="session_p4_count">4</span> activations)
  RTK rewrites:                   <span data-stat="session_rtk_tokens">2,749</span> tokens  (<span data-stat="session_rtk_count">11</span> rewrites)
  Lean-ctx wraps:                 <span data-stat="session_lean_tokens">1,228</span> tokens  (<span data-stat="session_lean_count">2</span> wraps)

Detailed breakdown (Pro)

$ precc savings --all
Session Token Savings (Detailed)
================================
Total estimated savings: <span data-stat="session_tokens_saved">8,741</span> tokens

Command-by-command:
  #  Time   Command                          Saving   Source
  1  09:12  cargo build                      534 tk   cd prepend (cargo-wrong-dir)
  2  09:14  cargo test                       534 tk   cd prepend (cargo-wrong-dir)
  3  09:15  git status                       412 tk   cd prepend (git-wrong-dir)
  4  09:18  npm install                      824 tk   cd prepend (npm-wrong-dir)
  5  09:22  find . -name "*.rs"              387 tk   RTK rewrite (output truncation)
  6  09:25  cat src/main.rs                  249 tk   RTK rewrite (lean-ctx wrap)
  7  09:31  cargo clippy                     534 tk   cd prepend (cargo-wrong-dir)
  ...

Pillar Breakdown:
  Pillar 1 (context resolution):   <span data-stat="session_p1_tokens">3,204</span> tokens  <span data-stat="session_p1_pct">36.6</span>%
  Pillar 2 (GDB debugging):            0 tokens   0.0%
  Pillar 3 (mined preventions):        0 tokens   0.0%
  Pillar 4 (automation skills):    <span data-stat="session_p4_tokens">1,560</span> tokens  <span data-stat="session_p4_pct">17.8</span>%
  RTK rewrites:                    <span data-stat="session_rtk_tokens">2,749</span> tokens  <span data-stat="session_rtk_pct">31.5</span>%
  Lean-ctx wraps:                  <span data-stat="session_lean_tokens">1,228</span> tokens  <span data-stat="session_lean_pct">14.1</span>%

How savings are estimated

Each correction type has an estimated token cost based on what would have happened without PRECC:

Correction type   Estimated saving   Why
cd prepend        ~500 tokens        Error output + Claude reasoning + retry
Skill activation  ~400 tokens        Error output + Claude reasoning + retry
RTK rewrite       ~250 tokens        Verbose output Claude would have to read
Lean-ctx wrap     ~600 tokens        Large file content compressed
Mined prevention  ~500 tokens        Known failure pattern avoided

These estimates are conservative. Actual savings are usually higher, because Claude's reasoning about an error can be verbose.
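As a toy illustration of how the per-correction estimates roll up into a session total (constants taken from the table above; the function is illustrative, not PRECC's actual accounting code):

```python
# Per-correction estimates from the table above, in tokens.
ESTIMATED_SAVING = {
    "cd_prepend": 500,
    "skill_activation": 400,
    "rtk_rewrite": 250,
    "lean_ctx_wrap": 600,
    "mined_prevention": 500,
}

def session_estimate(corrections: list[str]) -> int:
    """Sum the conservative estimate over a session's corrections."""
    return sum(ESTIMATED_SAVING[kind] for kind in corrections)

# Three cd prepends and two RTK rewrites:
assert session_estimate(["cd_prepend"] * 3 + ["rtk_rewrite"] * 2) == 2000
```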

Cumulative savings

Savings data persists across sessions in the PRECC database, so you can track your overall impact over time:

$ precc savings
Session Token Savings
=====================
Total estimated savings: <span data-stat="session_tokens_saved">8,741</span> tokens

Lifetime savings: <span data-stat="total_tokens_saved">142,389</span> tokens across <span data-stat="total_sessions">47</span> sessions

Status Bar

After installation, PRECC wires a statusLine entry into ~/.claude/settings.json so the Claude Code status bar shows live session metrics:

$0.42 spent | 1.2M in/out | 📊 last cmd: −1.2K | PRECC: 7 fixes | 5.8ms avg | this session: 320 saved over 7 cmds (~$0.05) | lifetime: 8.9K saved over 217 cmds (~$2.85)

Each segment:

  • $0.42 spent (Claude Code’s cost.total_cost_usd): cumulative session cost reported by Claude Code. Resets on session restart.
  • 1.2M in/out (Claude Code’s total_input_tokens + total_output_tokens): non-cached input + output tokens across the session. Resets on session restart.
  • 📊 last cmd: −1.2K (PRECC measurement of the most recent Bash command): real ground-truth saving from re-running the original. Persists across sessions.
  • PRECC: 7 fixes (PRECC session aggregate from metrics.log): number of corrections this session — fix count only, no fake token estimate. Resets on session restart.
  • 5.8ms avg (PRECC hook latency p50): time PRECC spent processing each tool call. Resets on session restart.
  • bash 18% of total (PRECC post_observations.log filtered by session window): share of session tokens that came from Bash output. This clarifies why PRECC’s savings are naturally a fraction of total cost, since PRECC only optimizes Bash output. Resets on session restart.
  • this session: 320 saved over 7 cmds (~$0.05) (~/.local/share/precc/.lifetime_summary.json minus the per-session baseline at ~/.local/share/precc/sessions/<session_id>.savings_baseline): real per-session delta. The baseline is captured the first time PRECC sees this session_id; subsequent refreshes compute current_lifetime − baseline so the value reflects savings accrued in this session only. Hidden when the delta is zero (start of session). Resets on session restart (the baseline re-snapshots).
  • lifetime: 8.9K saved over 217 cmds (~$2.85) (~/.local/share/precc/.lifetime_summary.json plus the current session’s cost.total_cost_usd / total_used_tokens rate): cumulative tokens saved and re-measured commands since PRECC was first installed, plus an estimated USD value computed from the current session’s per-token rate. The cost estimate is conservative: it uses (input+output) as the denominator while the cost includes cache tokens, so the per-token rate is overstated and the resulting savings figure is lower than actual. Persists across sessions.
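The this session: arithmetic can be sketched as follows. The paths and the tokens_saved field mirror the description above, but the code is a hypothetical illustration, not PRECC's implementation:

```python
import json
from pathlib import Path

def session_delta(lifetime_file: Path, baseline_file: Path) -> int:
    """Return lifetime savings accrued during the current session only."""
    current = json.loads(lifetime_file.read_text())["tokens_saved"]
    if not baseline_file.exists():
        # First statusline refresh of this session: snapshot the baseline.
        baseline_file.write_text(json.dumps({"tokens_saved": current}))
        return 0  # a zero delta is hidden at the start of a session
    baseline = json.loads(baseline_file.read_text())["tokens_saved"]
    return current - baseline
```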

The lifetime: segment is placed last so it’s the first to be truncated if Claude Code’s UI clips the bar at the right edge.

Why cost and token count don’t divide

The displayed 1.2M in/out is not the denominator that produced $0.42 spent. Claude Code’s cost.total_cost_usd is computed from the API’s full token breakdown — base input, output, plus cache reads and cache creations. The session-wide cumulative cache token counts are not exposed in the statusline schema, so PRECC can only show the visible (non-cache) portion.

On long sessions with heavy file rereads, cache reads can be 10× the visible token count. That’s why pairing the two as a ratio would mislead — PRECC shows them as independent segments instead.

Why PRECC doesn’t compute the cost

The cost number is authoritative. PRECC reads cost.total_cost_usd verbatim from the JSON Claude Code pipes into the status command on stdin. That’s the same number Claude Code charges against your subscription/usage budget. You can verify it any time with the built-in /cost slash command — both should agree.

What drives the cost

For Claude Opus 4.6:

Token type    Standard (≤200k context)   1M context tier
Input         $15 / MTok                 $30 / MTok
Output        $75 / MTok                 $150 / MTok
Cache write   $18.75 / MTok              $37.50 / MTok
Cache read    $1.50 / MTok               $3 / MTok

The biggest drivers on long sessions are usually:

  1. Output tokens — most expensive per-token type, especially on the 1M context tier
  2. Repeated cache reads — cheap individually but accumulate fast across many turns
  3. Cache creations — written once per file read, ~1.25× the base input rate

PRECC reduces the visible-token cost by compressing Bash output (the 📊 last cmd: segment shows the per-command saving), but it cannot reduce cache reads of files Claude has already loaded.
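A worked example with the standard-tier rates above shows how the breakdown adds up (rates from the table; the token breakdown itself is hypothetical, for illustration only):

```python
# Standard-tier rates in USD per million tokens (MTok), from the table above.
RATES = {"input": 15.0, "output": 75.0, "cache_write": 18.75, "cache_read": 1.50}

def cost_usd(mtok_by_type: dict) -> float:
    """Cost of a session given a token breakdown in MTok per type."""
    return sum(RATES[kind] * mtok for kind, mtok in mtok_by_type.items())

# Hypothetical long session: 0.2 MTok visible input, 1.0 MTok output,
# 2 MTok of cache writes, and heavy rereads at 12 MTok of cache reads.
breakdown = {"input": 0.2, "output": 1.0, "cache_write": 2.0, "cache_read": 12.0}
# input 3.00 + output 75.00 + cache writes 37.50 + cache reads 18.00 = 133.50
assert cost_usd(breakdown) == 133.50
```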

Stable session counts

The “PRECC: N fixes” segment counts events since the persisted session start, written to ~/.local/share/precc/sessions/<session_id>.start on the first statusline refresh of each session. This makes the count monotonic — it cannot drop mid-session even if cost.total_duration_ms is missing on a particular refresh (which would otherwise collapse the window to “since now” and silently drop nearly all events).

Auto-refreshed lifetime snapshot

The lifetime: segment reads ~/.local/share/precc/.lifetime_summary.json, which is rewritten:

  • On every PostToolUse measurement (so it stays current as commands accumulate)
  • On every precc savings invocation

The this session: segment reads the same lifetime file but subtracts a per-session baseline persisted to ~/.local/share/precc/sessions/<session_id>.savings_baseline on the first refresh of each session.

No need to manually refresh anything — the files update themselves.

Suppressing the status bar

If you’d rather keep your existing status bar, set your own statusLine command in ~/.claude/settings.json. PRECC’s installer will detect the custom value and leave it alone on subsequent updates.

To suppress only the per-interaction 📊 PRECC line (in additionalContext), set PRECC_QUIET=1 in your shell environment.

Compression

precc compress shrinks CLAUDE.md and other context files to reduce token usage when Claude Code loads them. This is a Pro feature.

Basic usage

$ precc compress .
[precc] Scanning directory: .
[precc] Found 3 context files:
         CLAUDE.md (2,847 tokens -> 1,203 tokens, -57.7%)
         ARCHITECTURE.md (4,112 tokens -> 2,044 tokens, -50.3%)
         ALTERNATIVES.md (3,891 tokens -> 1,967 tokens, -49.5%)
[precc] Total: 10,850 tokens -> 5,214 tokens (-51.9%)
[precc] Files compressed. Use --revert to restore originals.

Dry run

Preview what would change without modifying any files:

$ precc compress . --dry-run
[precc] Dry run -- no files will be modified.
[precc] CLAUDE.md: 2,847 tokens -> 1,203 tokens (-57.7%)
[precc] ARCHITECTURE.md: 4,112 tokens -> 2,044 tokens (-50.3%)
[precc] ALTERNATIVES.md: 3,891 tokens -> 1,967 tokens (-49.5%)
[precc] Total: 10,850 tokens -> 5,214 tokens (-51.9%)

Reverting

Original files are backed up automatically. To restore them:

$ precc compress --revert
[precc] Restored 3 files from backups.

What gets compressed

The compressor applies several transformations:

  • Removes redundant whitespace and blank lines
  • Shortens verbose phrasing while preserving meaning
  • Compacts tables and lists
  • Strips comments and decorative formatting
  • Preserves all code blocks, paths, and technical identifiers

The compressed output is still human-readable; it is not minified or obfuscated.

Targeting a specific file

$ precc compress CLAUDE.md
[precc] CLAUDE.md: 2,847 tokens -> 1,203 tokens (-57.7%)

Reports

precc report generates an analytics dashboard summarizing PRECC activity and token savings.

Generating a report

$ precc report
PRECC Report -- 2026-04-03
==========================

Sessions analyzed: 12
Commands intercepted: 87
Total token savings: 42,389

Top skills by activation:
  1. cargo-wrong-dir     34 activations   17,204 tokens saved
  2. npm-wrong-dir       18 activations    9,360 tokens saved
  3. git-wrong-dir       12 activations    4,944 tokens saved
  4. RTK rewrite         15 activations    3,750 tokens saved
  5. python-wrong-dir     8 activations    4,131 tokens saved

Savings by pillar:
  Pillar 1 (context resolution):  28,639 tokens  67.6%
  Pillar 4 (automation skills):    7,000 tokens  16.5%
  RTK rewrites:                    3,750 tokens   8.8%
  Lean-ctx wraps:                  3,000 tokens   7.1%

Recent corrections:
  2026-04-03 09:12  cargo build -> cd myapp && cargo build
  2026-04-03 09:18  npm test -> cd frontend && npm test
  2026-04-03 10:05  git status -> cd repo && git status
  ...

Emailing a report

Send the report by email (requires mail setup; see Email):

$ precc report --email
[precc] Report sent to you@example.com

The recipient address is read from ~/.config/precc/mail.toml. You can also use precc mail report EMAIL to send to a specific address.

Report data

Reports are generated from the local PRECC database at ~/.local/share/precc/history.db. No data leaves your machine unless you explicitly email a report.

Mining

PRECC mines Claude Code session logs to learn failure-fix patterns. When it sees the same error again, it applies the fix automatically.

Ingesting session logs

Ingesting a single file

$ precc ingest ~/.claude/logs/session-2026-04-03.jsonl
[precc] Parsing session-2026-04-03.jsonl...
[precc] Found 142 commands, 8 failure-fix pairs
[precc] Stored 8 patterns in history.db
[precc] 2 new skill candidates identified

Ingesting all logs

$ precc ingest --all
[precc] Scanning ~/.claude/logs/...
[precc] Found 23 session files (14 new, 9 already ingested)
[precc] Parsing 14 new files...
[precc] Found 47 failure-fix pairs across 14 sessions
[precc] Stored 47 patterns in history.db
[precc] 5 new skill candidates identified

Forcing re-ingestion

To reprocess files that have already been ingested:

$ precc ingest --all --force
[precc] Re-ingesting all 23 session files...

How mining works

  1. PRECC reads the session JSONL log files.
  2. It identifies command pairs where the first command failed and the second is the corrected retry.
  3. It extracts the pattern (what went wrong) and the fix (what Claude did differently).
  4. Patterns are stored in ~/.local/share/precc/history.db.
  5. When a pattern reaches the confidence threshold (multiple occurrences), it becomes a mined skill in heuristics.db.
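Step 2 can be sketched as a scan over adjacent log events. The command/exit_code event shape used here is an assumed simplification of the session JSONL schema, not the actual format:

```python
def failure_fix_pairs(events: list[dict]) -> list[tuple[str, str]]:
    """Pair each failed command with an immediately following successful retry."""
    pairs = []
    for prev, cur in zip(events, events[1:]):
        if prev["exit_code"] != 0 and cur["exit_code"] == 0:
            # Heuristic: the retry reruns the same program (possibly with a
            # cd-prepend or changed arguments).
            if prev["command"].split()[0] in cur["command"]:
                pairs.append((prev["command"], cur["command"]))
    return pairs

events = [
    {"command": "pytest tests/test_auth.py", "exit_code": 1},
    {"command": "cd /home/user/myapp && pytest tests/test_auth.py", "exit_code": 0},
]
assert failure_fix_pairs(events) == [
    ("pytest tests/test_auth.py",
     "cd /home/user/myapp && pytest tests/test_auth.py")
]
```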

Example pattern

Failure: pytest tests/test_auth.py
Error:   ModuleNotFoundError: No module named 'myapp'
Fix:     cd /home/user/myapp && pytest tests/test_auth.py
Pattern: pytest outside project root -> prepend cd

The precc-learner daemon

The precc-learner daemon runs in the background, watching for new session logs automatically:

$ precc-learner &
[precc-learner] Watching ~/.claude/logs/ for new sessions...
[precc-learner] Processing session-2026-04-03-1412.jsonl... 3 new patterns

The daemon uses filesystem notifications (inotify on Linux, FSEvents on macOS), so it reacts the moment a session ends.

From pattern to skill

A mined pattern is promoted to a skill when it meets all of the following:

  • Appears at least 3 times across sessions
  • Has a consistent fix pattern (the same kind of correction every time)
  • No false positives detected

You can review skill candidates with:

$ precc skills advise

See Skills for details on managing skills.

Data storage

  • Failure-fix pairs: ~/.local/share/precc/history.db
  • Promoted skills: ~/.local/share/precc/heuristics.db

Both are SQLite databases in WAL mode for safe concurrent access.

Email

PRECC can send reports and files by email. This requires a one-time SMTP setup.

Setup

$ precc mail setup
SMTP host: smtp.gmail.com
SMTP port [587]: 587
Username: you@gmail.com
Password: ********
From address [you@gmail.com]: you@gmail.com
[precc] Mail configuration saved to ~/.config/precc/mail.toml
[precc] Sending test email to you@gmail.com...
[precc] Test email sent successfully.

Configuration file

Configuration is stored in ~/.config/precc/mail.toml:

[smtp]
host = "smtp.gmail.com"
port = 587
username = "you@gmail.com"
password = "app-password-here"
from = "you@gmail.com"
tls = true

You can edit this file directly:

$EDITOR ~/.config/precc/mail.toml

For Gmail, use an app password instead of your account password.

Sending a report

$ precc mail report team@example.com
[precc] Generating report...
[precc] Sending to team@example.com...
[precc] Report sent.

Sending a file

$ precc mail send colleague@example.com output.log
[precc] Sending output.log to colleague@example.com...
[precc] Sent (14.2 KB).

SSH relay support

If your machine cannot reach the SMTP server directly (for example, behind a corporate firewall), PRECC can relay through an SSH tunnel:

[smtp]
host = "localhost"
port = 2525

[ssh_relay]
host = "relay.example.com"
user = "you"
remote_port = 587
local_port = 2525

PRECC establishes the SSH tunnel automatically before sending.

GIF recording

precc gif creates animated GIF recordings of terminal sessions from bash scripts. This is a Pro feature.

Basic usage

$ precc gif script.sh 30s
[precc] Recording script.sh (max 30s)...
[precc] Running: echo "Hello, world!"
[precc] Running: cargo build --release
[precc] Running: cargo test
[precc] Recording complete.
[precc] Output: script.gif (1.2 MB, 24s)

The first argument is a bash script containing the commands to run; the second is the maximum recording duration.

Script format

The script is a standard bash file:

#!/bin/bash
echo "Building project..."
cargo build --release
echo "Running tests..."
cargo test
echo "Done!"

Input simulation

For interactive commands, provide input values as extra arguments:

$ precc gif interactive-demo.sh 60s "yes" "my-project" "3"

Each extra argument is fed as one line of stdin when the script prompts for input.

Output options

The output file is named after the script by default (script.gif). GIFs use a dark terminal theme at the standard 80x24 size.

Why GIF instead of asciinema?

The built-in asciinema-gif skill automatically rewrites asciinema rec to precc gif. GIF files are more portable: they display inline in GitHub READMEs, Slack, and email without a player.

GitHub Actions analysis

precc gha analyzes failed GitHub Actions runs and suggests fixes. This is a Pro feature.

Usage

Pass the URL of a failed GitHub Actions run:

$ precc gha https://github.com/myorg/myrepo/actions/runs/12345678
[precc] Fetching run 12345678...
[precc] Run: CI / build (ubuntu-latest)
[precc] Status: failure
[precc] Failed step: Run cargo test

[precc] Log analysis:
  Error: test result: FAILED. 2 passed; 1 failed
  Failed test: tests::integration::test_database_connection
  Cause: thread 'tests::integration::test_database_connection' panicked at
         'called Result::unwrap() on an Err value: Connection refused'

[precc] Suggested fix:
  The test requires a database connection but the CI environment does not
  start a database service. Add a services block to your workflow:

    services:
      postgres:
        image: postgres:15
        ports:
          - 5432:5432
        env:
          POSTGRES_PASSWORD: test

What it does

  1. Parses the GitHub Actions run URL to extract the owner, repository, and run ID.
  2. Fetches the run logs through the GitHub API (using GITHUB_TOKEN if set, otherwise public access).
  3. Identifies the failed step and extracts the relevant error lines.
  4. Analyzes the error and suggests a fix based on common CI failure patterns.
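Step 1 amounts to extracting three fields from the URL. A sketch (the regex is an assumption matching the documented URL shape, not PRECC's actual parser):

```python
import re

RUN_URL = re.compile(r"github\.com/([^/]+)/([^/]+)/actions/runs/(\d+)")

def parse_run_url(url: str) -> tuple[str, str, int]:
    """Extract (owner, repo, run_id) from a GitHub Actions run URL."""
    m = RUN_URL.search(url)
    if m is None:
        raise ValueError(f"not a GitHub Actions run URL: {url}")
    return m.group(1), m.group(2), int(m.group(3))

assert parse_run_url(
    "https://github.com/myorg/myrepo/actions/runs/12345678"
) == ("myorg", "myrepo", 12345678)
```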

Supported failure patterns

  • Missing service containers (databases, Redis, etc.)
  • Wrong runner OS or architecture
  • Missing environment variables or secrets
  • Dependency installation failures
  • Test timeouts
  • Permission errors
  • Cache misses causing slow builds

Geofencing

PRECC includes IP geofencing compliance checks for regulated environments. This is a Pro feature.

Overview

Some organizations require development tools to run only in approved geographic regions. PRECC's geofencing feature verifies that the current machine's IP address is within the allowed regions.

Checking compliance

$ precc geofence check
[precc] Current IP: 203.0.113.42
[precc] Region: US-East (Virginia)
[precc] Status: COMPLIANT
[precc] Policy: us-east-1, us-west-2, eu-west-1

If the machine is outside the allowed regions:

$ precc geofence check
[precc] Current IP: 198.51.100.7
[precc] Region: AP-Southeast (Singapore)
[precc] Status: NON-COMPLIANT
[precc] Policy: us-east-1, us-west-2, eu-west-1
[precc] Warning: Current region is not in the allowed list.

Refreshing geofence data

$ precc geofence refresh
[precc] Fetching updated IP geolocation data...
[precc] Updated. Cache expires in 24h.

Viewing geofence info

$ precc geofence info
Geofence Configuration
======================
Policy file:    ~/.config/precc/geofence.toml
Allowed regions: us-east-1, us-west-2, eu-west-1
Cache age:      2h 14m
Last check:     2026-04-03 09:12:00 UTC
Status:         COMPLIANT

Clearing the cache

$ precc geofence clear
[precc] Geofence cache cleared.

Configuration

The geofence policy is defined in ~/.config/precc/geofence.toml:

[geofence]
allowed_regions = ["us-east-1", "us-west-2", "eu-west-1"]
check_on_init = true
block_on_violation = false

Set block_on_violation = true to prevent PRECC from running outside the allowed regions.

Telemetry

PRECC supports optional anonymous telemetry to help improve the tool. No data is collected unless you explicitly opt in.

Opting in

$ precc telemetry consent
[precc] Telemetry enabled. Thank you for helping improve PRECC.
[precc] You can revoke consent at any time with: precc telemetry revoke

Opting out

$ precc telemetry revoke
[precc] Telemetry disabled. No further data will be sent.

Checking status

$ precc telemetry status
Telemetry: disabled
Last sent: never

Previewing the data to be sent

Before opting in, you can see exactly what would be collected:

$ precc telemetry preview
Telemetry payload (this session):
{
  "version": "0.3.0",
  "os": "linux",
  "arch": "x86_64",
  "skills_activated": 12,
  "commands_intercepted": 87,
  "pillars_used": [1, 4],
  "avg_hook_latency_ms": 2.3,
  "session_count": 1
}

Data collected

  • PRECC version, operating system, and architecture
  • Aggregate counts: commands intercepted, skills activated, pillars used
  • Average hook latency
  • Session count

Data not collected

  • No command text or arguments
  • No file paths or directory names
  • No project names or repository URLs
  • No personally identifiable information (PII)
  • No IP addresses (the server does not log them)

Environment variable override

Disable telemetry without running a command (useful in CI or shared environments):

export PRECC_NO_TELEMETRY=1

This takes precedence over the consent setting.

Data destination

Telemetry is sent over HTTPS to https://telemetry.peria.ai/v1/precc. The data is used only to understand usage patterns and prioritize development.

Mind map

This page is generated automatically from mindmap.db, a SQLite snapshot of every PRECC development session and git commit. Each row traces back to its source (commit:<sha>, session:<id>, or doc:<path>).

Overview

  • Sessions analyzed: 22
  • Messages: 14023
  • Tool calls: 5072
  • Commits: 205
  • Time range: 2026-03-20T07:04:14.787Z → 2026-04-19T11:50:10.153Z
  • Effort (tokens):
    • Input: 27928
    • Output: 2750669
    • Cache writes: 43349705
    • Cache reads: 1936351239
Features

Scope | Title | Status | Commits | Tokens | First | Last | Source
bench | feat(bench): SWE-bench Verified/Lite driver scaffolding | stabilizing | 4 | 4344299 | 2026-04-17 | 2026-04-17 | commit:5bdd027d
benchmark_gate.sh | feat: benchmark_gate.sh + pin tb dataset to 0.1.1 | shipped | 1 | 4344299 | 2026-04-17 | 2026-04-17 | commit:99fa9a74
real | feat: real lean-ctx (not stub), wider campaign, doc updates | shipped | 2 | 29821152 | 2026-04-07 | 2026-04-17 | commit:6095720a
precc_mode=benchmark | feat: PRECC_MODE=benchmark toggle + pairwise benchmark harness | shipped | 1 | 4344299 | 2026-04-17 | 2026-04-17 | commit:50c5a30f
add | feat: add precc update self-update command | shipped | 14 | 42557107 | 2026-03-09 | 2026-04-17 | commit:e5542fba
negotiable | feat: negotiable rewrites, skill decay, explain/undo — response to critic | shipped | 1 | 4344299 | 2026-04-17 | 2026-04-17 | commit:6fda67e4
statusline | feat: statusline shows actual session token consumption + cost | stabilizing | 3 | 25424915 | 2026-04-08 | 2026-04-13 | commit:4f65556d
public | feat: public repo commits attributed to Ce-cyber-art | shipped | 1 | 25382119 | 2026-04-10 | 2026-04-10 | commit:0e4840e4
short | feat: short install URL https://peria.ai/install.sh | shipped | 1 | 25382119 | 2026-04-09 | 2026-04-09 | commit:615d3d06
rewrite | feat: rewrite Pillar 2b (ccc) and Pillar 3 (compress) in Rust for single-binary deployment | shipped | 2 | 38118074 | 2026-03-20 | 2026-04-08 | commit:78621579
shorten | feat: shorten statusline segments to fit narrower terminals | shipped | 1 | 25382119 | 2026-04-08 | 2026-04-08 | commit:ef2c88b4
drop | feat: drop fake token estimate, append cost estimate to lifetime segment | stabilizing | 2 | 25382119 | 2026-04-08 | 2026-04-08 | commit:2702f3f9
update | feat: update pricing to $5/6mo + $10/yr, add webhook server | stabilizing | 9 | 38118074 | 2026-02-25 | 2026-04-08 | commit:2d366031
clearer | feat: clearer statusline labels — meas:, drop confusing %, add bash share | shipped | 1 | 25382119 | 2026-04-08 | 2026-04-08 | commit:4cd837b7
stable | feat: stable machine_hash for telemetry dedup | stabilizing | 2 | 25382119 | 2026-04-08 | 2026-04-08 | commit:3073f428
lifetime | feat: lifetime savings segment in statusline | shipped | 1 | 25382119 | 2026-04-08 | 2026-04-08 | commit:9af422e8
precc | feat: precc analyze frequencies — data-driven rule gap discovery | shipped | 3 | 25382119 | 2026-04-07 | 2026-04-08 | commit:d6f24c50
per-interaction | feat: per-interaction PRECC savings line in PostToolUse | shipped | 1 | 25382119 | 2026-04-08 | 2026-04-08 | commit:e3bc282e
webhook | feat: webhook auto-regenerates stats.json on telemetry POST | stabilizing | 2 | 29134186 | 2026-03-31 | 2026-04-08 | commit:912b75f3
per-email | feat: per-email aggregation for telemetry | shipped | 1 | 25382119 | 2026-04-08 | 2026-04-08 | commit:14c95e7d
v0.3.3 | feat: v0.3.3 — companion tools default-on, install-script clarity | shipped | 1 | 25382119 | 2026-04-07 | 2026-04-07 | commit:48fca046
measurement | feat: measurement campaign script — real per-mode measurements | shipped | 1 | 25382119 | 2026-04-07 | 2026-04-07 | commit:36760587
quote-aware | feat: quote-aware chain split + sysadmin tool whitelist (54.2% → 55.5%) | shipped | 1 | 25382119 | 2026-04-07 | 2026-04-07 | commit:f6580598
; | feat: ; chain support + ssh inner-command parsing for measurement | shipped | 1 | 25382119 | 2026-04-07 | 2026-04-07 | commit:10093218
expand | feat: expand is_safe_to_rerun coverage + measurement timeout/cache | shipped | 1 | 25382119 | 2026-04-07 | 2026-04-07 | commit:c5a7ea79
multi-mode | feat: multi-mode adaptive compression with failure learning | shipped | 1 | 25382119 | 2026-04-07 | 2026-04-07 | commit:81475afc
measured | feat: measured savings in telemetry, detailed live stats, update nudge | shipped | 1 | 25382119 | 2026-04-06 | 2026-04-06 | commit:06907091
scientific | feat: scientific token savings measurement, telemetry dedup, 28-language docs | shipped | 1 | 25382119 | 2026-04-06 | 2026-04-06 | commit:78a20ef2
v0.3.2 | feat: v0.3.2 — hook safety, adaptive compression, on-demand metrics import | shipped | 1 | 25382119 | 2026-04-05 | 2026-04-05 | commit:a0c0c882
self-hosted | feat: self-hosted telemetry endpoint at peria.ai, install UX improvements | shipped | 1 | 2565703 | 2026-04-04 | 2026-04-04 | commit:8212a18e
auto-update | feat: auto-update consent prompt on init and manual update | shipped | 1 | 1924302 | 2026-04-02 | 2026-04-02 | commit:818be6dd
use | perf: use pre-built binaries for lean-ctx and nushell installation | stabilizing | 4 | 10170252 | 2026-03-09 | 2026-03-31 | commit:8c612e55
authorize | feat: authorize peria.ai server for license key generation | shipped | 2 | 1186364 | 2026-03-31 | 2026-03-31 | commit:53dfe832
license | feat: license keys, SMTP mail-agent, updated business plan and demos | stabilizing | 2 | 10170252 | 2026-03-09 | 2026-03-31 | commit:b07c9dfb
lean-ctx | feat: lean-ctx integration for deep output compression | shipped | 1 | 1186364 | 2026-03-31 | 2026-03-31 | commit:07361e62
integrate | feat: integrate three-pillar savings from precc-cc (cocoindex-code, token-saver, ClawHub) | shipped | 2 | 10170252 | 2026-03-20 | 2026-03-31 | commit:af4205f1
windows | feat: Windows build via CI, deploy triggers workflow | stabilizing | 2 | 2533692 | 2026-03-29 | 2026-03-29 | commit:7404761b
monthly | feat: monthly usage report via email for Pro users | shipped | 1 | 2533692 | 2026-03-28 | 2026-03-28 | commit:77ad78bc
nushell | feat: nushell what-if analysis, skill clustering, comment blocker, bash unwrap (v0.2.6) | shipped | 1 | 2337941 | 2026-03-27 | 2026-03-27 | commit:803df684
geofence | feat: geofence compliance guard, 3rd-party skill Claude interaction tracking (v0.2.5) | shipped | 1 | 2337941 | 2026-03-26 | 2026-03-26 | commit:0c9fc765
stripe | feat: Stripe payment integration, context pressure, GHA analysis | shipped | 2 | 2457088 | 2026-03-21 | 2026-03-22 | commit:8eb16f78
context | feat: context pressure warning, GHA analysis, statusline context % | shipped | 1 | 2166141 | 2026-03-20 | 2026-03-20 | commit:894621ba
statusline, | feat: statusline, squash deploy, ClaWHub metadata, SHA256 checksums | shipped | 1 | 2166141 | 2026-03-20 | 2026-03-20 | commit:7ab15883
gumroad | feat: Gumroad license verification via API (v0.2.2) | shipped | 1 | 0 | 2026-03-13 | 2026-03-13 | commit:75c5e480
per-user | feat: per-user email-based license keys with Gumroad webhook (v0.2.2) | shipped | 1 | 0 | 2026-03-13 | 2026-03-13 | commit:6d056958
posttooluse | feat: PostToolUse observability + comprehensive test coverage (v0.2.1) | shipped | 1 | 0 | 2026-03-12 | 2026-03-12 | commit:6e33b7e4
multi-tool | feat: multi-tool hook dispatch, subagent propagation & Read/Grep filters (v0.2.0) | shipped | 1 | 0 | 2026-03-12 | 2026-03-12 | commit:1bf5a108
skill | feat: skill advisor, sharing credits, telemetry & Rust actionbook (v0.1.9) | shipped | 1 | 0 | 2026-03-12 | 2026-03-12 | commit:d41d310e
fire | feat: fire anonymous update-check ping on precc update (opt-out via PRECC_NO_TELEMETRY=1) | shipped | 1 | 0 | 2026-03-10 | 2026-03-10 | commit:7acce69d
enforcefeat: enforce license tier gates (Free/Pro) on ingest, mined skills, gif, mail, savingsshipped102026-03-102026-03-10commit:a7bd23e3
translatefeat: translate git commands to jj (Jujutsu) in colocated reposshipped102026-03-092026-03-09commit:d8a29e48
rtkfeat(rtk): sync rewrite rules with upstream RTK v0.27.2shipped102026-03-092026-03-09commit:ad7dca0e
applyfeat: apply skill portfolio per command for maximum token savingsshipped102026-03-092026-03-09commit:b2490073
pitchfeat(pitch): add bilingual EN/ZH PowerPoint pitch deckshipped202026-02-272026-02-28commit:8876c4b7
hookperf(hook): skip heuristics.db open via plain-text prefix cacheshipped102026-02-272026-02-27commit:89537483
initfeat(init): embed builtin skills in binary via include_str!shipped102026-02-262026-02-26commit:3a837b13
clifeat(cli): add precc skills export commandshipped202026-02-262026-02-26commit:59beea8d
gdbfeat(gdb): re-enable Pillar 2 GDB hook suggestionshipped102026-02-262026-02-26commit:a8428025
skillsfeat(skills): add git wrong-dir skill and context mappingstabilizing202026-02-252026-02-25commit:352474e1
metricsfeat(metrics): record hook latency, rtk_rewrite, cd_prepend via append-logshipped102026-02-252026-02-25commit:9bf31d12
demofeat(demo): add investor demo suiteshipped102026-02-252026-02-25commit:c818a0ac
securityfeat(security): SQLCipher encryption, binary hardening, multi-platform CIshipped102026-02-252026-02-25commit:efd3dfc8
ingestfeat(ingest): add –force flag to re-mine already-recorded sessionsshipped102026-02-222026-02-22commit:85cc8f6f

Dependencies (precc-core modules)

  • advisor → db, promote, skills
  • diet → lean_ctx
  • metrics → db
  • mining → skills
  • mode_selector → db, mode
  • multi_probe → diet, lean_ctx, mode, nushell, post_observe, rtk
  • nushell → lean_ctx, mining, rtk
  • promote → db, skills
  • rtk → lean_ctx
  • sharing → db, license, skills
  • skill_advisor → mining, nushell
  • skills → db
  • telemetry → db, license, mining
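
The dependency list above is a directed graph from each module to the modules it uses. A small Python sketch (module names transcribed from the list; the cycle check itself is generic and not part of PRECC) confirms the graph is acyclic:

```python
# precc-core module dependencies, transcribed from the list above.
# Leaf modules (db, mode, lean_ctx, license, post_observe) have no entry.
DEPS = {
    "advisor": ["db", "promote", "skills"],
    "diet": ["lean_ctx"],
    "metrics": ["db"],
    "mining": ["skills"],
    "mode_selector": ["db", "mode"],
    "multi_probe": ["diet", "lean_ctx", "mode", "nushell", "post_observe", "rtk"],
    "nushell": ["lean_ctx", "mining", "rtk"],
    "promote": ["db", "skills"],
    "rtk": ["lean_ctx"],
    "sharing": ["db", "license", "skills"],
    "skill_advisor": ["mining", "nushell"],
    "skills": ["db"],
    "telemetry": ["db", "license", "mining"],
}

def has_cycle(graph):
    """Depth-first search; a node seen while still 'visiting' means a cycle."""
    state = {}  # node -> "visiting" | "done"
    def visit(node):
        if state.get(node) == "visiting":
            return True
        if state.get(node) == "done":
            return False
        state[node] = "visiting"
        for dep in graph.get(node, []):
            if visit(dep):
                return True
        state[node] = "done"
        return False
    return any(visit(n) for n in graph)
```

An acyclic graph is what lets the crate compile as a single workspace without forward declarations.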

Plans and Tasks

Plans (prompts requesting design or architecture work)

  • [proposed] indeed the measurement needs to be based on precc-cc’s established KPI’s. If the two ideas are so close, perhaps you can draft a plan to integrate them (algorithmatically) step-by-step, then start to use Rust (consistent with Precc) to impl… — session:905ff169 (2026-04-18)
  • [proposed] Someone left a review on the Spanish-language site: Chinese translation (Traditional): — session:781fe484 (2026-04-16)
  • [proposed] That’s a really solid framing — using pre-tool-call hooks as quality gates instead of just optimization is a big shift in mindset. You’re essentially moving from “make the model cheaper” to “make the system more correct,” whic… — session:ebd81938 (2026-04-05)
  • [proposed] Plan the integration of both tools, make sure we don’t take their credit and maintain a clear interface so that once it evolves, we can get smaller changes to integrate with their future changes — session:43541885 (2026-03-31)
  • [proposed] for the benchmark, we need to prepare a table to record the comparison for existing historical scenarios, as a “what-if” analysis because there is no way to measure the results for future usages. For this requirement, plan out a step-by-ste… — session:5761d7ca (2026-03-27)
  • [proposed] while bash could be improved using RTK, would its replacement with nushell a better choice for Claude Code? If so, plan an option for replacing bash with nushell to gain better accuracy and hence potentially more token savings by some small… — session:5761d7ca (2026-03-27)

Tasks (TaskCreate / TodoWrite items)

  • completed: 89
  • in_progress: 3
  • deleted: 2

Most recent 30 tasks:

  • [completed] Re-ingest and review residual pending — Run precc mindmap build after the fix, then classify the actually-pending tasks (done-but-unclosed vs genuinely-unfinished). — session:0925455d (2026-04-19)
  • [completed] Fold TaskCreate/TaskUpdate + dedupe TodoWrite — Replay TaskCreate/TaskUpdate events per (session_id, taskId) to derive final status. For TodoWrite, keep only the last call per session. — session:0925455d (2026-04-19)
  • [completed] Run ingest and produce MINDMAP.md — Execute ingest on local sessions + git, then render output to docs/MINDMAP.md. — session:0925455d (2026-04-19)
  • [completed] Wire precc mindmap CLI subcommand — Add ingest/render subcommands to precc-cli. — session:0925455d (2026-04-19)
  • [completed] Write mindmap render module — Query DB and render nested markdown mindmap with KPIs, features, plans, blockers. — session:0925455d (2026-04-19)
  • [completed] Write mindmap ingest module — Parse JSONL sessions + git log, extract messages/tokens/commands/decisions into SQLite. — session:0925455d (2026-04-19)
  • [completed] Design SQLite mindmap schema — Tables: sessions, messages, commands, features, plans, tasks, kpis, decisions, dependencies. Every row traces to source (session_id+uuid or commit sha). — session:0925455d (2026-04-19)
  • [in_progress] Step 4: HeaderSlicePass + kernel corpus — Shallow-clone Linux kernel, adapt filter for kernel conventions (Fixes: tag, selftests/ and kunit test-surface detection, .c/.h classification). Measure how many recent fix commits ship with a test an… — session:905ff169 (2026-04-19)
  • [completed] Step 6: concurrency extraction — Add Pipeline::run_parallel_applies that parallelizes applies() via std::thread::scope when pass count ≥ threshold. Falls back to serial below threshold (thread-spawn overhead > savings). Benchmark s… — session:905ff169 (2026-04-19)
  • [completed] [parallel] AST-aware #[test] extractor — Use syn (Rust) or tree-sitter-rust (Python) to detect added #[test] fns in a commit diff and emit a test-only patch. Gates fail→pass verification on this repo. Not blocking; parallel work for the Ru… — session:905ff169 (2026-04-19)
  • [completed] Step 7: precc skvm report tooling — Wire had_solid_hit into metrics log. Add precc skvm report that surfaces pass activation counts, cache hit rate, hook-latency percentiles. Read from metrics.db + skvm_solid_cache. Closes the observa… — session:905ff169 (2026-04-19)
  • [completed] Wire SolidificationPass into live hook — Add stage_solidification_lookup (front, short-circuits on hit) and stage_solidification_record (end) to Pipeline. Gate behind PRECC_SOLIDIFY. Add had_solid_hit flag. Open cache via db::open_metrics fo… — session:905ff169 (2026-04-19)
  • [completed] Step 3: solidification cache — skvm::solid module: Cache (SQLite-backed) with lookup/record, Key with normalization, SolidificationPass at pipeline front. Gated by PRECC_SOLIDIFY=1. Tests with in-memory DB. No wiring into live hook… — session:905ff169 (2026-04-19)
  • [completed] Wire CdPrependPass into hook’s stage_context — Replace the direct context::resolve/apply calls in precc-hook::Pipeline::stage_context with CdPrependPass via HookIR. Verify no hook tests regress; full cargo test green. — session:905ff169 (2026-04-19)
  • [completed] Step 2: migrate cd_prepend through Pass trait — Re-express the existing cd-prepend stage as a Pass impl that reuses the current context resolution. Diff-test: on a fixture corpus, the new pass must produce byte-identical output to the legacy path. … — session:905ff169 (2026-04-19)
  • [completed] Step 5 preview: CrateSlicePass sketch — Implement CrateSlicePass in precc-core::skvm::passes::crate_slice. Detects cargo <build|test|check|clippy> without -p, reads cached cargo metadata, narrows to -p when unambiguous. Wire a minimal K… — session:905ff169 (2026-04-19)
  • [completed] Step 1: Pass trait + HookIR — precc-core::skvm::{pass, ir}. Pass trait with name/capability/applies/run. HookIR holds command, cwd, and mutable output. Capability enum: Detect|Rewrite|Slice|Verify. No behavior change; no passes re… — session:905ff169 (2026-04-19)
  • [completed] Step 0: baseline harness — Add precc-core::skvm::baseline module + precc report --skvm-baseline subcommand. Snapshots K1 (hook latency p50/p99), K3 (token savings total), activation counts from metrics.db into a named baselin… — session:905ff169 (2026-04-19)
  • [completed] Build K3-only replay corpus — For each of the 82 fix-surface commits, derive ground-truth set of changed crates and emit realistic cargo commands. CrateSlicePass evaluation will read this corpus and measure narrowing precision/rec… — session:905ff169 (2026-04-18)
  • [deleted] Run verifier over 33 candidates — Execute verifier, collect verdicts. Apply size gate to verified set. Emit precc_self_corpus.jsonl. — session:905ff169 (2026-04-18)
  • [deleted] Write fail-at-parent verifier — Per candidate: git worktree at parent, apply only test-file diff, cargo test (expect added tests FAIL), reset + apply full commit, cargo test (expect PASS). Per-worktree CARGO_TARGET_DIR to avoid tras… — session:905ff169 (2026-04-18)
  • [completed] Classify test surface of 33 candidates — Split candidates into pure_test_path (tests/ only) vs mixed_file_test (production + #[test] in same file). Reports count by class. Cheap, no cargo. — session:905ff169 (2026-04-18)
  • [completed] Run first Terminal-bench batch (5 tasks) — Execute scripts/benchmark.sh --tasks 5 using OAuth token from subscription as ANTHROPIC_API_KEY. Verify arm A (vanilla) works, then arm B (PRECC), then compare.json. — session:781fe484 (2026-04-17)
  • [completed] Add precc explain and precc undo — explain --since 1h: lists recent rewrites with diff + skill + confidence (reads stash + rewrite_log). undo <id>: re-disables the skill that produced rewrite id. — session:781fe484 (2026-04-16)
  • [completed] Confidence decay on retry-after-rewrite — post_observe: if same command class is retried within 60s after a PRECC rewrite, decrement skill confidence by 0.05 (or count as false-correction event). Below SUGGEST_THRESHOLD (0.3) skill auto-disab… — session:781fe484 (2026-04-16)
  • [completed] Add precc skills disable/enable per-project — CLI commands to disable a skill in the current project (writes to .precc/disabled-skills file at project root). Hook reads this list and skips matching skills. — session:781fe484 (2026-04-16)
  • [completed] Make every rewrite visible via additionalContext — In precc-hook, whenever the pipeline produces a non-trivial rewrite (cd-prepend, skill, RTK, lean-ctx, nushell, diet), append a one-line summary “PRECC rewrote: <orig> -> <new> [reason]” to additional… — session:781fe484 (2026-04-16)
  • [completed] Soften overstated claims in intro — Replace “Claude never sees the error. No tokens wasted.” with measured language matching README. Update strings_intro.sql and re-translate the new key for all 28 langs. — session:781fe484 (2026-04-16)
  • [completed] Fix per-language html lang and dir — build-book.sh must rewrite book.toml language= and text-direction= per language so generated pages have correct lang/dir attributes. RTL for ar, fa. — session:781fe484 (2026-04-16)
  • [completed] Rebuild book and verify — Run scripts/build-book.sh to regenerate introduction.md per language, verify first lines now show translations — session:781fe484 (2026-04-16)
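
Steps 1 and 2 above describe a Pass trait over a mutable HookIR (command, cwd, output) with a Detect|Rewrite|Slice|Verify capability enum. PRECC itself is Rust; the toy Python sketch below mirrors only the shape of that design (the pass names come from the tasks above, the directory-resolution logic is invented for illustration):

```python
from dataclasses import dataclass, field

@dataclass
class HookIR:
    """Analogue of the HookIR from Step 1: command, cwd, mutable output."""
    command: str
    cwd: str
    output: list = field(default_factory=list)

class CdPrependPass:
    """Toy analogue of CdPrependPass (Step 2). The real pass resolves the
    correct crate root itself; here it is passed in as a constructor argument."""
    name = "cd_prepend"
    capability = "Rewrite"

    def __init__(self, crate_root: str):
        self.crate_root = crate_root

    def applies(self, ir: HookIR) -> bool:
        # Only cargo commands run outside the crate root need a cd prefix.
        return ir.command.startswith("cargo ") and ir.cwd != self.crate_root

    def run(self, ir: HookIR) -> None:
        ir.command = f"cd {self.crate_root} && {ir.command}"

def run_pipeline(ir: HookIR, passes) -> HookIR:
    """Each pass is consulted in order; only applicable passes mutate the IR."""
    for p in passes:
        if p.applies(ir):
            p.run(ir)
    return ir
```

The trait-style indirection is what lets later passes (SolidificationPass, CrateSlicePass) slot into the same pipeline without the hook knowing about them.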

Blockers (user-reported failure or stuck signals)

  • look at all the historical session logs and executed commands to summarize a mark down document like Mindmap showing (1) the features, status, decisions, dependencies, and effort (tokens releated to its development); (2) the plans, tasks, s… — session:0925455d (2026-04-19)
  • check if it is working? why precc savings --all doesn’t work? — session:ebd81938 (2026-04-13)
  • i tried that url it doesn’t work? — session:ebd81938 (2026-04-08)
  • why I can’t see the “last: “ messages? — session:ebd81938 (2026-04-08)
  • not yet. I would wait to get more data from telemetry to update the website. But now you need to investigate on those “unmeasured” cases, why we cannot measure them? — session:ebd81938 (2026-04-07)
  • regarding the live usage statistics https://precc.cc/en/#live-usage-statistics, we need to report the percentages based on the duration of releases, i.e., how much saving was made by which release (otherwise it is easy to mislead readers to… — session:ebd81938 (2026-04-06)
  • https://precc.cc cannot find the server — session:ebd81938 (2026-04-05)
  • can see key_id mk_1TDiUmFxhHEidPnDw5esdOMa, but cannot reveal or see the sk_live_… — session:d65ad15f (2026-04-01)
  • PS C:\Users\y00577373> iwr -useb https://raw.githubusercontent.com/peria-ai/precc-cc/main/scripts/install.ps1 | iex — session:10175339 (2026-03-30)
  • why can’t you create peria-ai or peri-a-i organizations — session:10175339 (2026-03-28)
  • the hello_world_do example has the following errors: NPU run failed. — session:3b5e2947 (2026-03-22)

Decisions and Rationale

  • feat(bench): clean-subset metrics (exclude timeouts & infra failures) — When one arm times out or the agent fails to install, the resulting tokens/pass numbers aren’t measuring PRECC — they’re measuring tb’s source: commit:5bdd027d (commit 2026-04-17)
  • fix(bench): drop --include-hook-events (causes 401 Invalid API key) — Adding --include-hook-events to the tb agent command caused Claude Code to return api_error_status=401 on first turn, even though the source: commit:025995d9 (commit 2026-04-17)
  • feat: PRECC_MODE=benchmark toggle + pairwise benchmark harness — Problem (from reviewer): the “trivial vs semantic” error-shaping claim is rhetoric without a measurable boundary. A rewriter that saves tokens source: commit:50c5a30f (commit 2026-04-17)
  • docs: update savings.md.tpl + README to match new statusline labels — - Σ → meas: throughout - New ‘bash X% of total’ segment row in segment table source: commit:2d366031 (commit 2026-04-08)
  • feat: clearer statusline labels — meas:, drop confusing %, add bash share — Three statusline UX changes from user feedback: 1. Lifetime segment renamed from ‘Σ 8.9K (22% over 217)’ to source: commit:4cd837b7 (commit 2026-04-08)
  • docs: explain statusline cost vs token semantics in book + README — Adds a ‘Status Bar’ section to docs/book/templates/savings.md.tpl and README.md explaining: source: commit:6028b64c (commit 2026-04-08)
  • feat: v0.3.3 — companion tools default-on, install-script clarity — The single biggest change: install.sh now installs companion tools (lean-ctx, RTK, nushell, cocoindex-code) BY DEFAULT instead of source: commit:48fca046 (commit 2026-04-07)
  • feat: quote-aware chain split + sysadmin tool whitelist (54.2% → 55.5%) — Three improvements that increase measurable Bash invocation coverage: 1. Quote-aware top-level chain split source: commit:f6580598 (commit 2026-04-07)
  • fix: command_class env stripping, skill validation, ssh/journalctl/kubectl diet rules — 1. command_class strips env prefixes and noise: - RUST_BACKTRACE=1 cargo test → “cargo test” source: commit:f4220343 (commit 2026-04-07)
  • feat: multi-mode adaptive compression with failure learning — New modules: - mode.rs: CompressionMode enum (basic/diet/nushell/lean-ctx/rtk/adaptive-expand) source: commit:81475afc (commit 2026-04-07)
  • test: comprehensive tests for ccc and compress modules (319 → 386 tests) — ccc.rs: +20 tests covering edge cases for is_eligible (flags, whitespace, empty input), extract_pattern (no path, multiple flags, boundary length), source: commit:448430e2 (commit 2026-03-20)
  • feat(gdb): re-enable Pillar 2 GDB hook suggestion — - Add open_history_readonly() to db.rs (same pattern as heuristics) - Add count_recent_failures() to gdb.rs: queries failure_fix_pairs for source: commit:a8428025 (commit 2026-02-26)
  • fix(mining): correct summary counters and orphaned events on –force re-mine — Three bugs fixed: 1. mine_session returned Skipped for sessions with no Bash events even source: commit:3ef089d8 (commit 2026-02-22)
  • 1. Compiled Rust Binary vs Shell Script. Decision: Replace the rtk-rewrite.sh shell script hook with a compiled Rust binary (precc-hook). Alternatives considered: source: doc:ALTERNATIVES.md
  • 2. SQLite vs Key-Value Store. Decision: Use SQLite for both history.db and heuristics.db. Alternatives considered: source: doc:ALTERNATIVES.md
  • 3. Workspace of 4 Crates vs Monolith. Decision: Structure the project as a Cargo workspace with 4 crates: precc-core, precc-hook, precc-cli, precc-learner. Alternatives considered: source: doc:ALTERNATIVES.md
  • 4. GDB Hook Integration vs Standalone CLI. Decision: Implement GDB debugging as a CLI command (precc debug) rather than as an automatic hook rewrite. Alternatives considered: source: doc:ALTERNATIVES.md
  • 5. Background Daemon vs On-Demand Mining. Decision: Support both modes — precc-learner daemon for continuous mining, precc ingest for on-demand. Alternatives considered: source: doc:ALTERNATIVES.md
  • 6. Confidence Thresholds. Decision: Three-tier confidence system: auto-apply (≥ 0.7), suggest (0.3-0.7), hidden (< 0.3). Alternatives considered: source: doc:ALTERNATIVES.md
  • 7. RTK Subsumption Strategy. Decision: Port RTK’s rewriting logic into precc-core as the final pipeline stage, rather than running both hooks in sequence. Alternatives considered: source: doc:ALTERNATIVES.md
  • 8. Skill Storage Format. Decision: TOML files for built-in skills, SQLite rows for mined/user skills. Alternatives considered: source: doc:ALTERNATIVES.md
  • 9. Session Log Format. Decision: Read Claude Code’s native JSONL format directly rather than converting to a custom format. Rationale: Claude Code already writes detailed session logs in JSONL format at ~/.claude/projects/*/. Creating a custom format would mean: source: doc:ALTERNATIVES.md
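
Decision 6 fixes the skill lifecycle to three confidence tiers, and the task log above adds a decay rule (retry within 60s after a rewrite costs 0.05 confidence). A minimal sketch of that gating logic, with the thresholds taken from decision 6 (function names are illustrative, not PRECC source):

```python
AUTO_APPLY = 0.7  # decision 6: auto-apply at or above this
SUGGEST = 0.3     # suggest in [0.3, 0.7); hidden (and auto-disabled) below

def tier(confidence: float) -> str:
    """Map a skill's confidence score to its three-tier treatment."""
    if confidence >= AUTO_APPLY:
        return "auto-apply"
    if confidence >= SUGGEST:
        return "suggest"
    return "hidden"

def decay_on_retry(confidence: float, step: float = 0.05) -> float:
    """Penalty when the same command class is retried soon after a rewrite,
    per the 'confidence decay on retry-after-rewrite' task above."""
    return confidence - step
```

Repeated false corrections therefore walk a skill from auto-apply down through suggest to hidden without any manual intervention.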

Key metrics over time

| Metric | Unit | First | Latest | Δ | Samples | Latest source |
|---|---|---|---|---|---|---|
| at | x | 0.1 | 1.25 | +1.15 | 2 | commit:4f65556d |
| build | ms | 3 | 480 | +477 | 2 | commit:f84bab49 |
| hook | ms | 5 | 3 | -2 | 2 | commit:f81e4543 |
| precc | tokens | 423 | 87 | -336 | 2 | commit:e3bc282e |
| saved | ms | 4.8 | 6.3 | +1.5 | 2 | commit:ec17f16c |

Per-session effort (top 10 by token count)

| Session | First → Last | Messages, input, output, cache write, cache read |
|---|---|---|
| ebd81938 | 2026-04-04 → 2026-04-13 | 45174547686622246909501020430414 |
| 781fe484 | 2026-04-16 → 2026-04-17 | 143413416035963739362259708120 |
| 10175339 | 2026-03-28 → 2026-03-30 | 131811761024692430047110606429 |
| 5761d7ca | 2026-03-26 → 2026-03-28 | 118043631370562196522116605673 |
| 550c7bab | 2026-03-20 → 2026-03-22 | 10641466104943205973292991217 |
| 905ff169 | 2026-04-18 → 2026-04-19 | 6501698496929157266863432376 |
| d65ad15f | 2026-03-31 → 2026-04-04 | 75255878099184564558334554 |
| 3b5e2947 | 2026-03-22 → 2026-03-23 | 11628961280681526203102403205 |
| 0925455d | 2026-04-19 → 2026-04-19 | 440830262128122605432943523 |
| 43541885 | 2026-03-31 → 2026-03-31 | 566735382683109632841667559 |

Command Reference

Complete reference for all PRECC commands.


precc init

Initialize PRECC and register the hook with Claude Code.

precc init

Options:
  (none)

Effects:
  - Registers PreToolUse:Bash hook with Claude Code
  - Creates ~/.local/share/precc/ data directory
  - Initializes heuristics.db with built-in skills
  - Prompts for telemetry consent
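
For orientation, the PreToolUse entry that `precc init` adds to Claude Code's settings looks roughly like this (layout follows Claude Code's hooks schema; the exact entry PRECC writes may differ):

```json
{
  "hooks": {
    "PreToolUse": [
      {
        "matcher": "Bash",
        "hooks": [
          { "type": "command", "command": "precc-hook" }
        ]
      }
    ]
  }
}
```

Removing this entry from ~/.claude/settings.json is what disables PRECC without uninstalling it.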

precc ingest

Mine failure-fix patterns from session logs.

precc ingest [FILE] [--all] [--force]

Arguments:
  FILE            Path to a session log file (.jsonl)

Options:
  --all           Ingest all session logs from ~/.claude/logs/
  --force         Re-process files that were already ingested

Examples:
  precc ingest session.jsonl
  precc ingest --all
  precc ingest --all --force

precc skills

Manage automation skills.

precc skills list

precc skills list

List all active skills (built-in and mined).

precc skills show

precc skills show NAME

Show detailed information about a specific skill.

Arguments:
  NAME            Skill name (e.g., cargo-wrong-dir)

precc skills export

precc skills export NAME

Export a skill definition as TOML.

Arguments:
  NAME            Skill name

precc skills edit

precc skills edit NAME

Open a skill definition in $EDITOR.

Arguments:
  NAME            Skill name

precc skills advise

precc skills advise

Analyze recent sessions and suggest new skills based on repeated patterns.

precc skills cluster

precc skills cluster

Group similar mined skills to identify redundant or overlapping patterns.

precc report

Generate analytics reports.

precc report [--email]

Options:
  --email         Send the report via email (requires mail setup)

precc savings

Show token savings.

precc savings [--all]

Options:
  --all           Show detailed per-command breakdown (Pro)

precc compress

Compress context files to reduce token usage.

precc compress [DIR] [--dry-run] [--revert]

Arguments:
  DIR             Directory or file to compress (default: current directory)

Options:
  --dry-run       Preview changes without modifying files
  --revert        Restore files from backup

precc license

Manage your PRECC license.

precc license activate

precc license activate KEY --email EMAIL

Arguments:
  KEY             License key (XXXX-XXXX-XXXX-XXXX)

Options:
  --email EMAIL   Email address associated with the license

precc license status

precc license status

Display current license status, plan, and expiration.

precc license deactivate

precc license deactivate

Deactivate the license on this machine.

precc license fingerprint

precc license fingerprint

Display the device fingerprint for this machine.

precc mail

Email features.

precc mail setup

precc mail setup

Interactive SMTP configuration. Saves to ~/.config/precc/mail.toml.

precc mail report

precc mail report EMAIL

Send a PRECC analytics report to the specified email address.

Arguments:
  EMAIL           Recipient email address

precc mail send

precc mail send EMAIL FILE

Send a file as an email attachment.

Arguments:
  EMAIL           Recipient email address
  FILE            Path to the file to send

precc update

Update PRECC to the latest version.

precc update [--force] [--version VERSION] [--auto]

Options:
  --force             Force update even if already on latest
  --version VERSION   Update to a specific version
  --auto              Enable automatic updates

precc telemetry

Manage anonymous telemetry.

precc telemetry consent

precc telemetry consent

Opt in to anonymous telemetry.

precc telemetry revoke

precc telemetry revoke

Opt out of telemetry. No further data will be sent.

precc telemetry status

precc telemetry status

Show current telemetry consent status.

precc telemetry preview

precc telemetry preview

Display the telemetry payload that would be sent (without sending it).

precc geofence

IP geofence compliance (Pro).

precc geofence check

precc geofence check

Check if the current machine is in an allowed region.

precc geofence refresh

precc geofence refresh

Refresh the IP geolocation cache.

precc geofence clear

precc geofence clear

Clear the geofence cache.

precc geofence info

precc geofence info

Display geofence configuration and current status.

precc gif

Record animated GIFs from bash scripts (Pro).

precc gif SCRIPT LENGTH [INPUTS...]

Arguments:
  SCRIPT          Path to a bash script
  LENGTH          Maximum recording duration (e.g., 30s, 2m)
  INPUTS...       Optional input lines for interactive prompts

Examples:
  precc gif demo.sh 30s
  precc gif interactive.sh 60s "yes" "my-project"

precc gha

Analyze failed GitHub Actions runs (Pro).

precc gha URL

Arguments:
  URL             GitHub Actions run URL

Example:
  precc gha https://github.com/org/repo/actions/runs/12345678

precc cache-hint

Show cache-hint information for the current project.

precc cache-hint

precc trial

Start a Pro trial.

precc trial EMAIL

Arguments:
  EMAIL           Email address for the trial

precc nushell

Launch a Nushell session with PRECC integration.

precc nushell

FAQ

Is PRECC safe?

Yes. PRECC uses Claude Code's official PreToolUse hook mechanism, an extension point Anthropic designed for exactly this purpose. The hook:

  • Runs fully offline (no network calls in the hot path)
  • Completes within 5 milliseconds
  • Fails open: if anything goes wrong, the original command runs unmodified
  • Only modifies commands; it never executes them itself
  • Stores its data in local SQLite databases

Does PRECC work with other AI coding tools?

PRECC is built specifically for Claude Code. It relies on the PreToolUse hook protocol that Claude Code provides. It does not work with Cursor, Copilot, Windsurf, or other AI coding tools.

What data does telemetry send?

Telemetry is enabled only after you opt in. When enabled, it sends:

  • PRECC version, operating system, and architecture
  • Aggregate counts (commands intercepted, skills activated)
  • Average hook latency

It never sends command text, file paths, project names, or any personally identifiable information. You can preview the exact payload with precc telemetry preview before opting in. See Telemetry for details.
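
As a sketch of the payload's shape (field names here are illustrative, not PRECC's actual schema; run precc telemetry preview to see the real thing), note what the bullet list above includes and, just as importantly, what it omits:

```python
# Hypothetical opt-in telemetry payload matching the bullets above.
payload = {
    "version": "0.3.3",            # PRECC version
    "os": "linux",                 # operating system
    "arch": "x86_64",              # architecture
    "commands_intercepted": 1234,  # aggregate count only
    "skills_activated": 56,        # aggregate count only
    "avg_hook_latency_ms": 3.1,
}

# Deliberately absent: anything command-level or identifying.
sensitive = {"command_text", "file_path", "project_name", "email"}
leaks = sensitive & set(payload)  # empty set: nothing sensitive present
```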

How do I uninstall PRECC?

To remove PRECC from your system:

  1. Remove the hook registration:

    # Delete the hook entry from Claude Code's settings
    # (precc init added it; removing it disables PRECC)
    
  2. Delete the binaries:

    rm ~/.local/bin/precc ~/.local/bin/precc-hook ~/.local/bin/precc-learner
    
  3. Delete the data (optional):

    rm -rf ~/.local/share/precc/
    rm -rf ~/.config/precc/
    

My license expired. What happens?

PRECC reverts to the Community edition. All core features keep working:

  • Built-in skills stay active
  • The hook pipeline runs normally
  • precc savings shows the summary view
  • precc ingest and session mining keep working

Pro features are unavailable until you renew:

  • precc savings --all (detailed breakdown)
  • precc compress
  • precc gif
  • precc gha
  • precc geofence
  • Email reports

The hook doesn't seem to be running. How do I debug it?

Work through these checks:

  1. Check that the hook is registered:

    precc init
    
  2. Test the hook manually:

    echo '{"tool_input":{"command":"cargo build"}}' | precc-hook
    
  3. Check that the binary is on your PATH:

    which precc-hook
    
  4. Inspect the Claude Code hook configuration in ~/.claude/settings.json.
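
Step 2's manual test exercises the hook's contract: a JSON payload with tool_input.command on stdin, a (possibly rewritten) payload on stdout. A Python stand-in shows the round-trip shape (the rewrite rule and the /path/to/crate target are invented for illustration; only the input payload comes from the example above):

```python
import json

def toy_hook(stdin_payload: str) -> str:
    """Illustrative stand-in for precc-hook: JSON in, JSON out.
    Real PRECC runs its full pipeline (skills, cd-fix, compression) here."""
    data = json.loads(stdin_payload)
    cmd = data["tool_input"]["command"]
    if cmd == "cargo build":
        # Real PRECC resolves the actual crate root; this path is made up.
        data["tool_input"]["command"] = "cd /path/to/crate && cargo build"
    return json.dumps(data)
```

If the real precc-hook prints nothing or exits non-zero on step 2's input, the fail-open design means Claude Code runs the original command unmodified, which is exactly the symptom of a broken installation.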

Does PRECC slow down Claude Code?

No. The hook completes within 5 milliseconds (p99). Compared with the time Claude spends reasoning and generating a reply, this is imperceptible.

Can I use PRECC in CI/CD?

PRECC is designed for interactive Claude Code sessions. In CI/CD there is no Claude Code instance to hook into. However, precc gha can analyze failed GitHub Actions runs from any environment.

How do mined skills differ from built-in skills?

Built-in skills ship with PRECC and cover common wrong-directory patterns. Mined skills are learned from your own session logs; they capture patterns unique to your workflow. Both are stored in SQLite and evaluated identically by the hook pipeline.

Can I share skills with my team?

Yes. Use precc skills export NAME to export any skill as TOML and share the file. Teammates can drop it into their skills/ directory or import it into their heuristics database.

Other languages