
Introduction

What is PRECC?

PRECC (Predictive Error Correction for Claude Code) is a Rust tool that intercepts Claude Code's bash commands through the official PreToolUse hook mechanism. It fixes errors before they happen, saving tokens and eliminating retry loops.

Free for community users.

The Problem

Claude Code wastes a large number of tokens on preventable errors:

  • Directory errors – running cargo build in a parent directory with no Cargo.toml, then retrying after reading the error.
  • Retry loops – a failed command produces verbose output that Claude reads, reasons about, and retries. Each loop burns hundreds of tokens.
  • Verbose output – commands such as find and ls -R emit thousands of lines that Claude must process.

The Four Pillars

Context correction (cd-prepend)

When a command such as cargo build or npm test is detected running in the wrong directory, prepends cd /correct/path && before execution.

GDB debugging

Detects opportunities to attach GDB for deeper debugging, delivering structured debug information instead of a raw core dump.

Session mining

Mines Claude Code session logs for failure-fix pairs. When the same error occurs again, PRECC already knows the fix and applies it automatically.

Automation skills

A library of built-in and mined skills that match command patterns and rewrite them. Skills are defined as TOML files or SQLite rows, so they are easy to inspect, edit, and share.

How It Works (the 30-second version)

  1. Claude Code is about to run a bash command.
  2. The PreToolUse hook sends the command as JSON to precc-hook via stdin.
  3. precc-hook runs the command through its pipeline (skills, directory correction, compression) in under 3 ms.
  4. The corrected command is returned as JSON via stdout.
  5. Claude Code executes the corrected command.

Trivial fixes are coalesced; the reason for each rewrite is returned with the hook response, so every correction is auditable rather than silent.

Safety Boundaries

PRECC rewrites a command only when semantic equivalence is provably preserved or user-verifiable. Destructive commands (rm, git push --force, git reset --hard) are never rewritten, even when they match a skill. Every transformation must be bounded: the rewritten command must still contain the core tokens of the original. Unbounded rewrites are reverted automatically. Every applied rewrite is logged and displayed so you can audit, disable, or undo it.

Adaptive Compression

If a command fails after compression, PRECC automatically skips compression on the retry so Claude gets the full, uncompressed output for debugging.

Live Usage Stats

Current release:

Metric
Hook invocations
Tokens saved
Savings ratio %
RTK rewrites
CD corrections
Hook latency ms (p50)
Unique users

Measured Savings (real data)

Savings by Release

These figures update automatically from anonymous telemetry.

Links

Installation

Quick Install (Linux / macOS)

curl -fsSL https://peria.ai/install.sh | bash

This downloads the latest release binary for your platform, verifies its SHA256 checksum, and places it in ~/.local/bin/.

After installation, initialize PRECC:

precc init

precc init registers the PreToolUse hook with Claude Code, creates the data directories, and initializes the skills database.
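Concretely, registration means adding a PreToolUse entry to Claude Code's settings. The snippet below is a sketch of what such an entry looks like in ~/.claude/settings.json; the exact shape precc init writes may differ by version:

```json
{
  "hooks": {
    "PreToolUse": [
      {
        "matcher": "Bash",
        "hooks": [
          { "type": "command", "command": "precc-hook" }
        ]
      }
    ]
  }
}
```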

Install Options

SHA256 Verification

By default, the installer verifies the binary against the published SHA256 checksum. To skip verification (not recommended):

curl -fsSL https://peria.ai/install.sh | bash -s -- --no-verify

Custom Install Prefix

Install to a custom location:

curl -fsSL https://peria.ai/install.sh | bash -s -- --prefix /opt/precc

Extra Tools (--extras)

PRECC ships with optional companion tools. Install them with --extras:

curl -fsSL https://peria.ai/install.sh | bash -s -- --extras

This installs:

Tool            Purpose
RTK             Command rewriting toolkit
lean-ctx        Context compression for CLAUDE.md and prompt files
nushell         Structured shell for advanced pipelines
cocoindex-code  Code indexing for faster context resolution

Windows (PowerShell)

irm https://peria.ai/install.ps1 | iex

Then initialize:

precc init

Manual Installation

  1. Download the release binary for your platform from GitHub Releases.
  2. Verify the SHA256 checksum against the .sha256 file in the release.
  3. Place the binary in a directory on your PATH (e.g. ~/.local/bin/).
  4. Run precc init.
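The verification in step 2 can be sketched with sha256sum. The snippet below uses a local stand-in file, since actual release file names vary by platform:

```shell
# Stand-in for the downloaded binary (real releases publish a .sha256
# file alongside each binary).
printf 'precc-binary-bytes' > precc

# A .sha256 file contains "<hash>  <filename>" lines.
sha256sum precc > precc.sha256

# Step 2: verify -- prints "precc: OK" on a match, fails otherwise.
sha256sum -c precc.sha256

# Step 3: place the binary on your PATH.
mkdir -p "$HOME/.local/bin"
install -m 755 precc "$HOME/.local/bin/precc"
```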

Updating

precc update

Force an update to a specific version:

precc update --force --version 0.3.0

Enable automatic updates:

precc update --auto

Verifying the Installation

$ precc --version
precc 0.3.0

$ precc savings
Session savings: 0 tokens (no commands intercepted yet)

If precc is not found, make sure ~/.local/bin is on your PATH.

Quickstart

Get PRECC running in under 5 minutes.

Step 1: Install

curl -fsSL https://peria.ai/install.sh | bash

Step 2: Initialize

$ precc init
[precc] Hook registered with Claude Code
[precc] Created ~/.local/share/precc/
[precc] Initialized heuristics.db with 8 built-in skills
[precc] Ready.

Step 3: Verify the Hook Is Active

$ precc skills list
  # Name               Type      Triggers
  1 cargo-wrong-dir    built-in  cargo build/test/clippy outside Rust project
  2 git-wrong-dir      built-in  git * outside a repo
  3 go-wrong-dir       built-in  go build/test outside Go module
  4 make-wrong-dir     built-in  make without Makefile in cwd
  5 npm-wrong-dir      built-in  npm/npx/pnpm/yarn outside Node project
  6 python-wrong-dir   built-in  python/pytest/pip outside Python project
  7 jj-translate       built-in  git * in jj-colocated repo
  8 asciinema-gif      built-in  asciinema rec

Step 4: Use Claude Code Normally

Open Claude Code and work as usual. PRECC runs silently in the background. When Claude issues a command that would fail, PRECC corrects it before execution.

Example: Cargo Build in the Wrong Directory

Suppose your project lives in ~/projects/myapp/ and Claude issues:

cargo build

from ~/projects/ (one level too high, where there is no Cargo.toml).

Without PRECC: Claude receives the error could not find Cargo.toml in /home/user/projects or any parent directory, reads it, reasons about it, and retries with cd myapp && cargo build. Cost: roughly 2,000 wasted tokens.

With PRECC: the hook detects the missing Cargo.toml, finds it in myapp/, and rewrites the command to:

cd /home/user/projects/myapp && cargo build

Claude never sees the error. Zero tokens wasted.

Step 5: Check Your Savings

After a session, see how many tokens PRECC saved:

$ precc savings
Session Token Savings
=====================
Total estimated savings: 4,312 tokens

Breakdown:
  Pillar 1 (cd prepends):       2,104 tokens  (3 corrections)
  Pillar 4 (skill activations):   980 tokens  (2 activations)
  RTK rewrites:                 1,228 tokens  (5 rewrites)

Next Steps

  • Skills – browse all available skills and learn how to create your own.
  • Hook pipeline – understand what happens under the hood.
  • Savings – detailed token-savings analysis.

License

PRECC comes in two tiers: Community (free) and Pro.

Community Tier (free)

The Community tier includes:

  • All built-in skills (wrong-directory correction, jj translation, and more)
  • The full hook pipeline with Pillar 1 and Pillar 4 support
  • The basic precc savings summary
  • Session mining with precc ingest
  • Unlimited local use

Pro Tier

Pro unlocks additional features:

  • Detailed savings analysis – precc savings --all with a per-command breakdown
  • GIF recording – precc gif for creating animated terminal GIFs
  • IP geofencing compliance – for regulated environments
  • Email reports – precc mail report sends analysis reports
  • GitHub Actions analysis – precc gha for debugging failed workflows
  • Context compression – precc compress for CLAUDE.md optimization
  • Priority support

Activating a License

$ precc license activate XXXX-XXXX-XXXX-XXXX --email you@example.com
[precc] License activated for you@example.com
[precc] Plan: Pro
[precc] Expires: 2027-04-03

Checking License Status

$ precc license status
License: Pro
Email:   you@example.com
Expires: 2027-04-03
Status:  Active

GitHub Sponsors Activation

If you sponsor PRECC through GitHub Sponsors, your license activates automatically via your GitHub email. No key is needed; just make sure your sponsor email matches:

$ precc license status
License: Pro (GitHub Sponsors)
Email:   you@example.com
Status:  Active (auto-renewed)

Device Fingerprint

Each license is bound to a device fingerprint. View it with:

$ precc license fingerprint
Fingerprint: a1b2c3d4e5f6...

To transfer a license to a new machine, deactivate it first:

precc license deactivate

Then activate it on the new machine.

License Expired?

When a Pro license expires, PRECC falls back to the Community tier. All built-in skills and core features keep working; only Pro-specific features become unavailable. See the FAQ for details.

Hook Pipeline

The precc-hook binary is the core of PRECC. It sits between Claude Code and the shell, processing every bash command in under 5 ms.

How Claude Code Invokes the Hook

Claude Code supports PreToolUse hooks: external programs that can inspect and modify tool inputs before execution. When Claude is about to run a bash command, it sends JSON to precc-hook via stdin and reads the response from stdout.

Pipeline Stages

Claude Code
    |
    v
+---------------------------+
| 1. Parse JSON stdin       |  Read the command from Claude Code
+---------------------------+
    |
    v
+---------------------------+
| 2. Skill matching         |  Query heuristics.db for matching skills (Pillar 4)
+---------------------------+
    |
    v
+---------------------------+
| 3. Directory correction   |  Resolve correct working directory (Pillar 1)
+---------------------------+
    |
    v
+---------------------------+
| 4. GDB check              |  Detect debug opportunities (Pillar 2)
+---------------------------+
    |
    v
+---------------------------+
| 5. RTK rewriting          |  Apply command rewrites for token savings
+---------------------------+
    |
    v
+---------------------------+
| 6. Emit JSON stdout       |  Return modified command to Claude Code
+---------------------------+
    |
    v
  Shell executes corrected command

Example: JSON In and Out

Input (from Claude Code)

{
  "tool_input": {
    "command": "cargo build"
  }
}

PRECC detects that the current directory has no Cargo.toml, but ./myapp/Cargo.toml exists.

Output (to Claude Code)

{
  "hookSpecificOutput": {
    "updatedInput": {
      "command": "cd /home/user/projects/myapp && cargo build"
    }
  }
}

If no modification is needed, updatedInput.command is empty and Claude Code uses the original command.

Stage Details

Stage 1: Parse JSON

Read the complete JSON object from stdin and extract tool_input.command. If parsing fails, the hook exits immediately and Claude Code uses the original command (fail-open design).
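The fail-open contract can be sketched in shell. Here, broken-hook is a hypothetical stand-in for a hook that crashes or is missing; the caller keeps the original input whenever the hook fails:

```shell
original='{"tool_input":{"command":"cargo build"}}'

# Run the hook; on any failure (crash, bad JSON, missing binary) discard
# its output so the fallback below takes over.
out=$(printf '%s' "$original" | broken-hook 2>/dev/null) || out=""

# Fail-open: an empty or failed hook response keeps the original command.
result=${out:-$original}
printf '%s\n' "$result"
```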

Stage 2: Skill Matching

Query the SQLite heuristics database for skills whose trigger patterns match the command. Skills are checked in priority order. Both built-in TOML skills and mined skills are evaluated.

Stage 3: Directory Correction

For build commands (cargo, go, make, npm, python, and so on), check whether the expected project file exists in the current directory. If it does not, scan nearby directories for the nearest match and prepend cd <dir> &&.

Directory scanning uses a cached filesystem index with a 5-second TTL to stay fast.
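A minimal sketch of that marker lookup (the real index is cached Rust code; this only illustrates the search order, current directory first, then immediate subdirectories):

```shell
# find_project_dir MARKER -- print the directory containing MARKER,
# checking the cwd first, then one level of subdirectories.
find_project_dir() {
  marker=$1
  if [ -f "$marker" ]; then
    pwd
    return 0
  fi
  for d in */; do
    if [ -f "$d$marker" ]; then
      printf '%s/%s\n' "$PWD" "${d%/}"
      return 0
    fi
  done
  return 1
}

# Example: from a parent directory with no Cargo.toml, the nearest match
# in a subdirectory is found.
mkdir -p demo/myapp && touch demo/myapp/Cargo.toml
cd demo && find_project_dir Cargo.toml
```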

Stage 4: GDB Check

If the command is likely to produce a crash (for example, running a debug binary), PRECC can suggest or inject a GDB wrapper that captures structured debug output instead of a raw crash log.

Stage 5: RTK Rewriting

Apply RTK (Rewrite Toolkit) rules that shorten verbose commands, suppress noisy output, or restructure commands for token efficiency.

Stage 6: Emit JSON

Serialize the modified command back to JSON and write it to stdout. If nothing changed, the output signals Claude Code to use the original command.

Performance

The entire pipeline completes within 5 ms (p99). Key optimizations:

  • SQLite in WAL mode for lock-free concurrent reads
  • Precompiled regex patterns for skill matching
  • Cached filesystem scans (5-second TTL)
  • No network calls on the hot path
  • Fail-open: any error falls back to the original command

Testing the Hook Manually

You can invoke the hook directly:

$ echo '{"tool_input":{"command":"cargo build"}}' | precc-hook
{"hookSpecificOutput":{"updatedInput":{"command":"cd /home/user/myapp && cargo build"}}}

Skills

Skills are the pattern-matching rules PRECC uses to detect and correct commands. They are either built-in (shipped as TOML files) or mined from your session logs.

Built-in Skills

Skill              Trigger                                         Action
cargo-wrong-dir    cargo build/test/clippy outside a Rust project  Prepend cd to the nearest Cargo.toml directory
git-wrong-dir      git * outside a git repo                        Prepend cd to the nearest .git directory
go-wrong-dir       go build/test outside a Go module               Prepend cd to the nearest go.mod directory
make-wrong-dir     make with no Makefile in the cwd                Prepend cd to the nearest Makefile directory
npm-wrong-dir      npm/npx/pnpm/yarn outside a Node project        Prepend cd to the nearest package.json directory
python-wrong-dir   python/pytest/pip outside a Python project      Prepend cd to the nearest Python project
jj-translate       git * in a jj-colocated repo                    Rewrite to the equivalent jj command
asciinema-gif      asciinema rec                                   Rewrite to precc gif

Listing Skills

$ precc skills list
  # Name               Type      Triggers
  1 cargo-wrong-dir    built-in  cargo build/test/clippy outside Rust project
  2 git-wrong-dir      built-in  git * outside a repo
  3 go-wrong-dir       built-in  go build/test outside Go module
  4 make-wrong-dir     built-in  make without Makefile in cwd
  5 npm-wrong-dir      built-in  npm/npx/pnpm/yarn outside Node project
  6 python-wrong-dir   built-in  python/pytest/pip outside Python project
  7 jj-translate       built-in  git * in jj-colocated repo
  8 asciinema-gif      built-in  asciinema rec
  9 fix-pytest-path    mined     pytest with wrong test path

Showing Skill Details

$ precc skills show cargo-wrong-dir
Name:        cargo-wrong-dir
Type:        built-in
Source:      skills/builtin/cargo-wrong-dir.toml
Description: Detects cargo commands run outside a Rust project and prepends
             cd to the directory containing the nearest Cargo.toml.
Trigger:     ^cargo\s+(build|test|clippy|run|check|bench|doc)
Action:      prepend_cd
Marker:      Cargo.toml
Activations: 12

Exporting a Skill as TOML

$ precc skills export cargo-wrong-dir
[skill]
name = "cargo-wrong-dir"
description = "Prepend cd for cargo commands outside a Rust project"
trigger = "^cargo\\s+(build|test|clippy|run|check|bench|doc)"
action = "prepend_cd"
marker = "Cargo.toml"
priority = 10

Editing a Skill

$ precc skills edit cargo-wrong-dir

This opens the skill definition in your $EDITOR. After you save, the skill is reloaded automatically.

The Advise Command

precc skills advise analyzes your recent sessions and suggests new skills based on recurring patterns:

$ precc skills advise
Analyzed 47 commands from the last session.

Suggested skills:
  1. docker-wrong-dir: You ran `docker compose up` outside the project root 3 times.
     Suggested trigger: ^docker\s+compose
     Suggested marker: docker-compose.yml

  2. terraform-wrong-dir: You ran `terraform plan` outside the infra directory 2 times.
     Suggested trigger: ^terraform\s+(plan|apply|init)
     Suggested marker: main.tf

Accept suggestion [1/2/skip]?

Clustering Skills

$ precc skills cluster

Groups similar mined skills together, helping you spot redundant or overlapping patterns.

Mined vs. Built-in Skills

Built-in skills ship with PRECC and are defined in skills/builtin/*.toml. They cover the most common directory mistakes.

Mined skills are created from your session logs by precc ingest or the precc-learner daemon. They are stored in ~/.local/share/precc/heuristics.db and are specific to your workflow. See Mining for details.

Savings

PRECC tracks an estimated token saving for every interception. Use precc savings to see how much waste PRECC prevented.

Quick Summary

$ precc savings
Session Token Savings
=====================
Total estimated savings: 8,741 tokens

Breakdown:
  Pillar 1 (cd prepends):         3,204 tokens  (6 corrections)
  Pillar 4 (skill activations):   1,560 tokens  (4 activations)
  RTK rewrites:                   2,749 tokens  (11 rewrites)
  Lean-ctx wraps:                 1,228 tokens  (2 wraps)

Detailed Breakdown (Pro)

$ precc savings --all
Session Token Savings (Detailed)
================================
Total estimated savings: 8,741 tokens

Command-by-command:
  #  Time   Command                          Saving   Source
  1  09:12  cargo build                      534 tk   cd prepend (cargo-wrong-dir)
  2  09:14  cargo test                       534 tk   cd prepend (cargo-wrong-dir)
  3  09:15  git status                       412 tk   cd prepend (git-wrong-dir)
  4  09:18  npm install                      824 tk   cd prepend (npm-wrong-dir)
  5  09:22  find . -name "*.rs"              387 tk   RTK rewrite (output truncation)
  6  09:25  cat src/main.rs                  249 tk   RTK rewrite (lean-ctx wrap)
  7  09:31  cargo clippy                     534 tk   cd prepend (cargo-wrong-dir)
  ...

Pillar Breakdown:
  Pillar 1 (context resolution):   3,204 tokens  36.6%
  Pillar 2 (GDB debugging):            0 tokens   0.0%
  Pillar 3 (mined preventions):        0 tokens   0.0%
  Pillar 4 (automation skills):    1,560 tokens  17.8%
  RTK rewrites:                    2,749 tokens  31.5%
  Lean-ctx wraps:                  1,228 tokens  14.1%

How Savings Are Estimated

Each correction type has an estimated token cost based on what would have happened without PRECC:

Correction type    Estimated saving   Rationale
cd prepend         ~500 tokens        Error output + Claude reasoning + retry
Skill activation   ~400 tokens        Error output + Claude reasoning + retry
RTK rewrite        ~250 tokens        Verbose output Claude would have to read
Lean-ctx wrap      ~600 tokens        Large file contents compressed away
Mined prevention   ~500 tokens        Known failure pattern avoided

These are conservative estimates. Actual savings are often higher, because Claude's reasoning about an error can be verbose.

Cumulative Savings

Savings persist across sessions in the PRECC database, so you can track your overall impact over time:

$ precc savings
Session Token Savings
=====================
Total estimated savings: 8,741 tokens

Lifetime savings: 142,389 tokens across 47 sessions

Status Bar

After installation, PRECC wires a statusLine entry into ~/.claude/settings.json so the Claude Code status bar shows live session metrics:

$0.42 spent | 1.2M in/out | 📊 last cmd: −1.2K | PRECC: 7 fixes | 5.8ms avg | this session: 320 saved over 7 cmds (~$0.05) | lifetime: 8.9K saved over 217 cmds (~$2.85)

Each segment:

• $0.42 spent
  Source: Claude Code’s cost.total_cost_usd
  Meaning: cumulative session cost reported by Claude Code
  Resets on session restart: yes

• 1.2M in/out
  Source: Claude Code’s total_input_tokens + total_output_tokens
  Meaning: non-cached input + output tokens across the session
  Resets on session restart: yes

• 📊 last cmd: −1.2K
  Source: PRECC measurement of the most recent Bash command
  Meaning: real ground-truth saving from re-running the original
  Resets on session restart: no (persists across sessions)

• PRECC: 7 fixes
  Source: PRECC session aggregate from metrics.log
  Meaning: number of corrections this session — fix count only, no fake token estimate
  Resets on session restart: yes

• 5.8ms avg
  Source: PRECC hook latency p50
  Meaning: time PRECC spent processing each tool call
  Resets on session restart: yes

• bash 18% of total
  Source: PRECC post_observations.log filtered by the session window
  Meaning: share of session tokens that came from Bash output — clarifies why PRECC’s savings are naturally a fraction of total cost (PRECC only optimizes Bash output)
  Resets on session restart: yes

• this session: 320 saved over 7 cmds (~$0.05)
  Source: ~/.local/share/precc/.lifetime_summary.json minus the per-session baseline at ~/.local/share/precc/sessions/<session_id>.savings_baseline
  Meaning: real per-session delta. The baseline is captured the first time PRECC sees this session_id; subsequent refreshes compute current_lifetime − baseline, so the value reflects savings accrued in this session only. Hidden when the delta is zero (start of session)
  Resets on session restart: yes (baseline re-snapshots)

• lifetime: 8.9K saved over 217 cmds (~$2.85)
  Source: ~/.local/share/precc/.lifetime_summary.json plus the current session’s cost.total_cost_usd / total_used_tokens rate
  Meaning: cumulative tokens saved and re-measured commands since PRECC was first installed, plus an estimated USD value computed from the current session’s per-token rate. The cost estimate is conservative — it uses (input+output) as the denominator while the cost includes cache tokens, so the per-token rate is overstated and the resulting savings figure is lower than actual
  Resets on session restart: no

The lifetime: segment is placed last so it’s the first to be truncated if Claude Code’s UI clips the bar at the right edge.

Why cost and token count don’t divide

The displayed 1.2M in/out is not the denominator that produced $0.42 spent. Claude Code’s cost.total_cost_usd is computed from the API’s full token breakdown — base input, output, plus cache reads and cache creations. The session-wide cumulative cache token counts are not exposed in the statusline schema, so PRECC can only show the visible (non-cache) portion.

On long sessions with heavy file rereads, cache reads can be 10× the visible token count. That’s why pairing the two as a ratio would mislead — PRECC shows them as independent segments instead.

Why PRECC doesn’t compute the cost

The cost number is authoritative. PRECC reads cost.total_cost_usd verbatim from the JSON Claude Code pipes into the status command on stdin. That’s the same number Claude Code charges against your subscription/usage budget. You can verify it any time with the built-in /cost slash command — both should agree.

What drives the cost

For Claude Opus 4.6:

Token type    Standard (≤200k context)   1M context tier
Input         $15 / MTok                 $30 / MTok
Output        $75 / MTok                 $150 / MTok
Cache write   $18.75 / MTok              $37.50 / MTok
Cache read    $1.50 / MTok               $3 / MTok

The biggest drivers on long sessions are usually:

  1. Output tokens — most expensive per-token type, especially on the 1M context tier
  2. Repeated cache reads — cheap individually but accumulate fast across many turns
  3. Cache creations — written once per file read, ~1.25× the base input rate

PRECC reduces the visible-token cost by compressing Bash output (the 📊 last cmd: segment shows the per-command saving), but it cannot reduce cache reads of files Claude has already loaded.
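As a rough worked example at the standard-tier rates above, for a hypothetical session with 200k input tokens, 50k output tokens, and 1M cache-read tokens:

```shell
# Cost in USD = token counts (in MTok) times the per-MTok rates from
# the table above.
awk 'BEGIN {
  input      = 0.20 * 15.00   # 200k input tokens at $15/MTok
  output     = 0.05 * 75.00   # 50k output tokens at $75/MTok
  cache_read = 1.00 *  1.50   # 1M cache-read tokens at $1.50/MTok
  printf "%.2f\n", input + output + cache_read
}'
# prints: 8.25
```

Even in this modest example the output tokens cost as much as all the input, which is why output is listed first among the drivers above.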

Stable session counts

The “PRECC: N fixes” segment counts events since the persisted session start, written to ~/.local/share/precc/sessions/<session_id>.start on the first statusline refresh of each session. This makes the count monotonic — it cannot drop mid-session even if cost.total_duration_ms is missing on a particular refresh (which would otherwise collapse the window to “since now” and silently drop nearly all events).

Auto-refreshed lifetime snapshot

The lifetime: segment reads ~/.local/share/precc/.lifetime_summary.json, which is rewritten:

  • On every PostToolUse measurement (so it stays current as commands accumulate)
  • On every precc savings invocation

The this session: segment reads the same lifetime file but subtracts a per-session baseline persisted to ~/.local/share/precc/sessions/<session_id>.savings_baseline on the first refresh of each session.

No need to manually refresh anything — the files update themselves.

Suppressing the status bar

If you’d rather keep your existing status bar, set your own statusLine command in ~/.claude/settings.json. PRECC’s installer will detect the custom value and leave it alone on subsequent updates.

To suppress only the per-interaction 📊 PRECC line (in additionalContext), set PRECC_QUIET=1 in your shell environment.

Compression

precc compress shrinks CLAUDE.md and other context files to cut the token cost of loading them into Claude Code. This is a Pro feature.

Basic Usage

$ precc compress .
[precc] Scanning directory: .
[precc] Found 3 context files:
         CLAUDE.md (2,847 tokens -> 1,203 tokens, -57.7%)
         ARCHITECTURE.md (4,112 tokens -> 2,044 tokens, -50.3%)
         ALTERNATIVES.md (3,891 tokens -> 1,967 tokens, -49.5%)
[precc] Total: 10,850 tokens -> 5,214 tokens (-51.9%)
[precc] Files compressed. Use --revert to restore originals.

Dry Run

Preview what would change without modifying any files:

$ precc compress . --dry-run
[precc] Dry run -- no files will be modified.
[precc] CLAUDE.md: 2,847 tokens -> 1,203 tokens (-57.7%)
[precc] ARCHITECTURE.md: 4,112 tokens -> 2,044 tokens (-50.3%)
[precc] ALTERNATIVES.md: 3,891 tokens -> 1,967 tokens (-49.5%)
[precc] Total: 10,850 tokens -> 5,214 tokens (-51.9%)

Reverting

The original files are backed up automatically. To restore them:

$ precc compress --revert
[precc] Restored 3 files from backups.

What Gets Compressed

The compressor applies several transformations:

  • Removes redundant whitespace and blank lines
  • Shortens verbose phrasing while preserving meaning
  • Condenses tables and lists
  • Strips comments and decorative formatting
  • Preserves all code blocks, paths, and technical identifiers

The compressed output is still human-readable; it is not minified or obfuscated.

Targeting a Specific File

$ precc compress CLAUDE.md
[precc] CLAUDE.md: 2,847 tokens -> 1,203 tokens (-57.7%)

Reports

precc report generates an analytics dashboard summarizing PRECC activity and token savings.

Generating a Report

$ precc report
PRECC Report -- 2026-04-03
==========================

Sessions analyzed: 12
Commands intercepted: 87
Total token savings: 42,389

Top skills by activation:
  1. cargo-wrong-dir     34 activations   17,204 tokens saved
  2. npm-wrong-dir       18 activations    9,360 tokens saved
  3. git-wrong-dir       12 activations    4,944 tokens saved
  4. RTK rewrite         15 activations    3,750 tokens saved
  5. python-wrong-dir     8 activations    4,131 tokens saved

Savings by pillar:
  Pillar 1 (context resolution):  28,639 tokens  67.6%
  Pillar 4 (automation skills):    7,000 tokens  16.5%
  RTK rewrites:                    3,750 tokens   8.8%
  Lean-ctx wraps:                  3,000 tokens   7.1%

Recent corrections:
  2026-04-03 09:12  cargo build -> cd myapp && cargo build
  2026-04-03 09:18  npm test -> cd frontend && npm test
  2026-04-03 10:05  git status -> cd repo && git status
  ...

Emailing a Report

Send the report by email (requires mail setup; see Email):

$ precc report --email
[precc] Report sent to you@example.com

The recipient address is read from ~/.config/precc/mail.toml. You can also send to a specific address with precc mail report EMAIL.

Report Data

Reports are generated from the local PRECC database at ~/.local/share/precc/history.db. No data leaves your machine unless you explicitly email a report.

Mining

PRECC mines Claude Code session logs to learn failure-fix patterns. When it sees the same error again, it applies the fix automatically.

Ingesting Session Logs

Ingesting a Single File

$ precc ingest ~/.claude/logs/session-2026-04-03.jsonl
[precc] Parsing session-2026-04-03.jsonl...
[precc] Found 142 commands, 8 failure-fix pairs
[precc] Stored 8 patterns in history.db
[precc] 2 new skill candidates identified

Ingesting All Logs

$ precc ingest --all
[precc] Scanning ~/.claude/logs/...
[precc] Found 23 session files (14 new, 9 already ingested)
[precc] Parsing 14 new files...
[precc] Found 47 failure-fix pairs across 14 sessions
[precc] Stored 47 patterns in history.db
[precc] 5 new skill candidates identified

Forcing Re-ingestion

To reprocess files that have already been ingested:

$ precc ingest --all --force
[precc] Re-ingesting all 23 session files...

How Mining Works

  1. PRECC reads the session JSONL log files.
  2. It identifies command pairs where the first command failed and the second was a corrected retry.
  3. It extracts the pattern (what went wrong) and the fix (what Claude did differently).
  4. Patterns are stored in ~/.local/share/precc/history.db.
  5. When a pattern reaches the confidence threshold (multiple occurrences), it becomes a mined skill in heuristics.db.

Example Pattern

Failure: pytest tests/test_auth.py
Error:   ModuleNotFoundError: No module named 'myapp'
Fix:     cd /home/user/myapp && pytest tests/test_auth.py
Pattern: pytest outside project root -> prepend cd

The precc-learner Daemon

The precc-learner daemon runs in the background and watches for new session logs automatically:

$ precc-learner &
[precc-learner] Watching ~/.claude/logs/ for new sessions...
[precc-learner] Processing session-2026-04-03-1412.jsonl... 3 new patterns

The daemon uses filesystem notifications (inotify on Linux, FSEvents on macOS), so it reacts the moment a session ends.

From Pattern to Skill

A mined pattern is promoted to a skill when:

  • It has occurred at least 3 times across sessions
  • Its fix pattern is consistent (the same kind of correction every time)
  • No false positives have been detected

You can review skill candidates with:

$ precc skills advise

See Skills for details on managing skills.

Data Storage

  • Failure-fix pairs: ~/.local/share/precc/history.db
  • Promoted skills: ~/.local/share/precc/heuristics.db

Both are SQLite databases in WAL mode for safe concurrent access.
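WAL mode is what lets the hook read while the learner writes. You can watch the pragma take effect with the sqlite3 CLI (assuming it is installed); table and column names here are illustrative, not PRECC's actual schema:

```shell
db=$(mktemp)

# Switch the database to write-ahead logging; sqlite3 echoes the new mode.
sqlite3 "$db" 'PRAGMA journal_mode=WAL;'   # prints: wal

# Writers append to the side -wal file, so concurrent readers never block.
sqlite3 "$db" "CREATE TABLE patterns(p TEXT); INSERT INTO patterns VALUES ('x');"
sqlite3 "$db" 'SELECT count(*) FROM patterns;'   # prints: 1
```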

Email

PRECC can send reports and files by email. This requires a one-time SMTP setup.

Setup

$ precc mail setup
SMTP host: smtp.gmail.com
SMTP port [587]: 587
Username: you@gmail.com
Password: ********
From address [you@gmail.com]: you@gmail.com
[precc] Mail configuration saved to ~/.config/precc/mail.toml
[precc] Sending test email to you@gmail.com...
[precc] Test email sent successfully.

Configuration File

The configuration is stored in ~/.config/precc/mail.toml:

[smtp]
host = "smtp.gmail.com"
port = 587
username = "you@gmail.com"
password = "app-password-here"
from = "you@gmail.com"
tls = true

You can edit this file directly:

$EDITOR ~/.config/precc/mail.toml

For Gmail, use an app password rather than your account password.

Sending a Report

$ precc mail report team@example.com
[precc] Generating report...
[precc] Sending to team@example.com...
[precc] Report sent.

Sending a File

$ precc mail send colleague@example.com output.log
[precc] Sending output.log to colleague@example.com...
[precc] Sent (14.2 KB).

SSH Relay Support

If your machine cannot reach an SMTP server directly (for example, behind a corporate firewall), PRECC can relay through an SSH tunnel:

[smtp]
host = "localhost"
port = 2525

[ssh_relay]
host = "relay.example.com"
user = "you"
remote_port = 587
local_port = 2525

PRECC establishes the SSH tunnel automatically before sending.

GIF Recording

precc gif creates animated GIF recordings of terminal sessions from bash scripts. This is a Pro feature.

Basic Usage

$ precc gif script.sh 30s
[precc] Recording script.sh (max 30s)...
[precc] Running: echo "Hello, world!"
[precc] Running: cargo build --release
[precc] Running: cargo test
[precc] Recording complete.
[precc] Output: script.gif (1.2 MB, 24s)

The first argument is a bash script containing the commands to run; the second is the maximum recording duration.

Script Format

Scripts are ordinary bash files:

#!/bin/bash
echo "Building project..."
cargo build --release
echo "Running tests..."
cargo test
echo "Done!"

Input Simulation

For interactive commands, supply input values as extra arguments:

$ precc gif interactive-demo.sh 60s "yes" "my-project" "3"

Each extra argument is fed as one line of stdin when the script prompts for input.

Output Options

The output file is named after the script by default (script.gif). GIFs use a dark terminal theme at the standard 80x24 size.

Why GIF Instead of asciinema?

The built-in asciinema-gif skill automatically rewrites asciinema rec to precc gif. GIF files are more portable: they render inline in GitHub READMEs, Slack, and email without a player.

GitHub Actions Analysis

precc gha analyzes failed GitHub Actions runs and suggests fixes. This is a Pro feature.

Usage

Pass the URL of a failed GitHub Actions run:

$ precc gha https://github.com/myorg/myrepo/actions/runs/12345678
[precc] Fetching run 12345678...
[precc] Run: CI / build (ubuntu-latest)
[precc] Status: failure
[precc] Failed step: Run cargo test

[precc] Log analysis:
  Error: test result: FAILED. 2 passed; 1 failed
  Failed test: tests::integration::test_database_connection
  Cause: thread 'tests::integration::test_database_connection' panicked at
         'called Result::unwrap() on an Err value: Connection refused'

[precc] Suggested fix:
  The test requires a database connection but the CI environment does not
  start a database service. Add a services block to your workflow:

    services:
      postgres:
        image: postgres:15
        ports:
          - 5432:5432
        env:
          POSTGRES_PASSWORD: test

What It Does

  1. Parses the GitHub Actions run URL to extract the owner, repo, and run ID.
  2. Fetches the run logs via the GitHub API (using GITHUB_TOKEN if set, otherwise public access).
  3. Identifies the failed step and extracts the relevant error lines.
  4. Analyzes the error and suggests a fix based on common CI failure patterns.
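Step 1 can be sketched in bash, since run URLs have a fixed slash-delimited layout:

```shell
url="https://github.com/myorg/myrepo/actions/runs/12345678"

# Split on "/": https: | (empty) | github.com | owner | repo | actions | runs | id
IFS=/ read -r _scheme _ _host owner repo _ _ run_id <<< "$url"

echo "owner=$owner repo=$repo run_id=$run_id"
# prints: owner=myorg repo=myrepo run_id=12345678
```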

Supported Failure Patterns

  • Missing service containers (databases, Redis, etc.)
  • Wrong runner OS or architecture
  • Missing environment variables or secrets
  • Dependency installation failures
  • Test timeouts
  • Permission errors
  • Cache misses causing slow builds

Geofencing

PRECC includes IP geofencing compliance checks for regulated environments. This is a Pro feature.

Overview

Some organizations require development tools to run only within approved geographic regions. PRECC's geofencing feature verifies that the current machine's IP address falls within the list of allowed regions.

Checking Compliance

$ precc geofence check
[precc] Current IP: 203.0.113.42
[precc] Region: US-East (Virginia)
[precc] Status: COMPLIANT
[precc] Policy: us-east-1, us-west-2, eu-west-1

If the machine is outside the allowed regions:

$ precc geofence check
[precc] Current IP: 198.51.100.7
[precc] Region: AP-Southeast (Singapore)
[precc] Status: NON-COMPLIANT
[precc] Policy: us-east-1, us-west-2, eu-west-1
[precc] Warning: Current region is not in the allowed list.

Refreshing Geofence Data

$ precc geofence refresh
[precc] Fetching updated IP geolocation data...
[precc] Updated. Cache expires in 24h.

Viewing Geofence Info

$ precc geofence info
Geofence Configuration
======================
Policy file:    ~/.config/precc/geofence.toml
Allowed regions: us-east-1, us-west-2, eu-west-1
Cache age:      2h 14m
Last check:     2026-04-03 09:12:00 UTC
Status:         COMPLIANT

Clearing the Cache

$ precc geofence clear
[precc] Geofence cache cleared.

Configuration

The geofence policy is defined in ~/.config/precc/geofence.toml:

[geofence]
allowed_regions = ["us-east-1", "us-west-2", "eu-west-1"]
check_on_init = true
block_on_violation = false

Set block_on_violation = true to prevent PRECC from running outside the allowed regions.

Telemetry

PRECC supports optional anonymous telemetry to help improve the tool. No data is collected unless you explicitly opt in.

Opting In

$ precc telemetry consent
[precc] Telemetry enabled. Thank you for helping improve PRECC.
[precc] You can revoke consent at any time with: precc telemetry revoke

Opting Out

$ precc telemetry revoke
[precc] Telemetry disabled. No further data will be sent.

Checking Status

$ precc telemetry status
Telemetry: disabled
Last sent: never

Previewing the Payload

Before opting in, you can inspect exactly what would be collected:

$ precc telemetry preview
Telemetry payload (this session):
{
  "version": "0.3.0",
  "os": "linux",
  "arch": "x86_64",
  "skills_activated": 12,
  "commands_intercepted": 87,
  "pillars_used": [1, 4],
  "avg_hook_latency_ms": 2.3,
  "session_count": 1
}

What Is Collected

  • PRECC version, operating system, and architecture
  • Aggregate counts: commands intercepted, skills activated, pillars used
  • Average hook latency
  • Session count

What Is Never Collected

  • No command text or arguments
  • No file paths or directory names
  • No project names or repository URLs
  • No personally identifiable information (PII)
  • No IP addresses (the server does not log them)

Environment Variable Override

Disable telemetry without running a command (useful for CI or shared environments):

export PRECC_NO_TELEMETRY=1

This takes precedence over the consent setting.
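That precedence can be sketched as follows; this is a behavioral sketch, not PRECC's actual code:

```shell
# Returns success (0) only when telemetry should be sent: the environment
# override always wins over recorded consent.
telemetry_enabled() {
  [ -n "$PRECC_NO_TELEMETRY" ] && return 1   # hard off, regardless of consent
  [ "$consent" = "granted" ]                 # otherwise honor stored consent
}

consent=granted
PRECC_NO_TELEMETRY=1
if telemetry_enabled; then echo "telemetry on"; else echo "telemetry off"; fi
# prints: telemetry off
```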

Data Destination

Telemetry is sent over HTTPS to https://telemetry.peria.ai/v1/precc. The data is used solely to understand usage patterns and prioritize development.

Mind Map

This page is generated automatically from mindmap.db — a SQLite snapshot of every PRECC development session and git commit. Each row is traceable to its source (commit:<sha>, session:<id>, or doc:<path>).

Overview

  • Sessions analyzed: 22
  • Messages: 14,023
  • Tool calls: 5,072
  • Commits: 205
  • Time range: 2026-03-20T07:04:14.787Z → 2026-04-19T11:50:10.153Z
  • Effort (tokens):
    • Input: 27,928
    • Output: 2,750,669
    • Cache writes: 43,349,705
    • Cache reads: 1,936,351,239

Features

Scope | Title | Status | Commits | Tokens | First | Latest | Source
bench | feat(bench): SWE-bench Verified/Lite driver scaffolding | stabilizing | 4 | 4344299 | 2026-04-17 | 2026-04-17 | commit:5bdd027d
benchmark_gate.sh | feat: benchmark_gate.sh + pin tb dataset to 0.1.1 | shipped | 1 | 4344299 | 2026-04-17 | 2026-04-17 | commit:99fa9a74
real | feat: real lean-ctx (not stub), wider campaign, doc updates | shipped | 2 | 29821152 | 2026-04-07 | 2026-04-17 | commit:6095720a
precc_mode=benchmark | feat: PRECC_MODE=benchmark toggle + pairwise benchmark harness | shipped | 1 | 4344299 | 2026-04-17 | 2026-04-17 | commit:50c5a30f
add | feat: add precc update self-update command | shipped | 14 | 42557107 | 2026-03-09 | 2026-04-17 | commit:e5542fba
negotiable | feat: negotiable rewrites, skill decay, explain/undo — response to critic | shipped | 1 | 4344299 | 2026-04-17 | 2026-04-17 | commit:6fda67e4
statusline | feat: statusline shows actual session token consumption + cost | stabilizing | 3 | 25424915 | 2026-04-08 | 2026-04-13 | commit:4f65556d
public | feat: public repo commits attributed to Ce-cyber-art | shipped | 1 | 25382119 | 2026-04-10 | 2026-04-10 | commit:0e4840e4
short | feat: short install URL https://peria.ai/install.sh | shipped | 1 | 25382119 | 2026-04-09 | 2026-04-09 | commit:615d3d06
rewrite | feat: rewrite Pillar 2b (ccc) and Pillar 3 (compress) in Rust for single-binary deployment | shipped | 2 | 38118074 | 2026-03-20 | 2026-04-08 | commit:78621579
shorten | feat: shorten statusline segments to fit narrower terminals | shipped | 1 | 25382119 | 2026-04-08 | 2026-04-08 | commit:ef2c88b4
drop | feat: drop fake token estimate, append cost estimate to lifetime segment | stabilizing | 2 | 25382119 | 2026-04-08 | 2026-04-08 | commit:2702f3f9
update | feat: update pricing to $5/6mo + $10/yr, add webhook server | stabilizing | 9 | 38118074 | 2026-02-25 | 2026-04-08 | commit:2d366031
clearer | feat: clearer statusline labels — meas:, drop confusing %, add bash share | shipped | 1 | 25382119 | 2026-04-08 | 2026-04-08 | commit:4cd837b7
stable | feat: stable machine_hash for telemetry dedup | stabilizing | 2 | 25382119 | 2026-04-08 | 2026-04-08 | commit:3073f428
lifetime | feat: lifetime savings segment in statusline | shipped | 1 | 25382119 | 2026-04-08 | 2026-04-08 | commit:9af422e8
precc | feat: precc analyze frequencies — data-driven rule gap discovery | shipped | 3 | 25382119 | 2026-04-07 | 2026-04-08 | commit:d6f24c50
per-interaction | feat: per-interaction PRECC savings line in PostToolUse | shipped | 1 | 25382119 | 2026-04-08 | 2026-04-08 | commit:e3bc282e
webhook | feat: webhook auto-regenerates stats.json on telemetry POST | stabilizing | 2 | 29134186 | 2026-03-31 | 2026-04-08 | commit:912b75f3
per-email | feat: per-email aggregation for telemetry | shipped | 1 | 25382119 | 2026-04-08 | 2026-04-08 | commit:14c95e7d
v0.3.3 | feat: v0.3.3 — companion tools default-on, install-script clarity | shipped | 1 | 25382119 | 2026-04-07 | 2026-04-07 | commit:48fca046
measurement | feat: measurement campaign script — real per-mode measurements | shipped | 1 | 25382119 | 2026-04-07 | 2026-04-07 | commit:36760587
quote-aware | feat: quote-aware chain split + sysadmin tool whitelist (54.2% → 55.5%) | shipped | 1 | 25382119 | 2026-04-07 | 2026-04-07 | commit:f6580598
; | feat: ; chain support + ssh inner-command parsing for measurement | shipped | 1 | 25382119 | 2026-04-07 | 2026-04-07 | commit:10093218
expand | feat: expand is_safe_to_rerun coverage + measurement timeout/cache | shipped | 1 | 25382119 | 2026-04-07 | 2026-04-07 | commit:c5a7ea79
multi-mode | feat: multi-mode adaptive compression with failure learning | shipped | 1 | 25382119 | 2026-04-07 | 2026-04-07 | commit:81475afc
measured | feat: measured savings in telemetry, detailed live stats, update nudge | shipped | 1 | 25382119 | 2026-04-06 | 2026-04-06 | commit:06907091
scientific | feat: scientific token savings measurement, telemetry dedup, 28-language docs | shipped | 1 | 25382119 | 2026-04-06 | 2026-04-06 | commit:78a20ef2
v0.3.2 | feat: v0.3.2 — hook safety, adaptive compression, on-demand metrics import | shipped | 1 | 25382119 | 2026-04-05 | 2026-04-05 | commit:a0c0c882
self-hosted | feat: self-hosted telemetry endpoint at peria.ai, install UX improvements | shipped | 1 | 2565703 | 2026-04-04 | 2026-04-04 | commit:8212a18e
auto-update | feat: auto-update consent prompt on init and manual update | shipped | 1 | 1924302 | 2026-04-02 | 2026-04-02 | commit:818be6dd
use | perf: use pre-built binaries for lean-ctx and nushell installation | stabilizing | 4 | 10170252 | 2026-03-09 | 2026-03-31 | commit:8c612e55
authorize | feat: authorize peria.ai server for license key generation | shipped | 2 | 1186364 | 2026-03-31 | 2026-03-31 | commit:53dfe832
license | feat: license keys, SMTP mail-agent, updated business plan and demos | stabilizing | 2 | 10170252 | 2026-03-09 | 2026-03-31 | commit:b07c9dfb
lean-ctx | feat: lean-ctx integration for deep output compression | shipped | 1 | 1186364 | 2026-03-31 | 2026-03-31 | commit:07361e62
integrate | feat: integrate three-pillar savings from precc-cc (cocoindex-code, token-saver, ClawHub) | shipped | 2 | 10170252 | 2026-03-20 | 2026-03-31 | commit:af4205f1
windows | feat: Windows build via CI, deploy triggers workflow | stabilizing | 2 | 2533692 | 2026-03-29 | 2026-03-29 | commit:7404761b
monthly | feat: monthly usage report via email for Pro users | shipped | 1 | 2533692 | 2026-03-28 | 2026-03-28 | commit:77ad78bc
nushell | feat: nushell what-if analysis, skill clustering, comment blocker, bash unwrap (v0.2.6) | shipped | 1 | 2337941 | 2026-03-27 | 2026-03-27 | commit:803df684
geofence | feat: geofence compliance guard, 3rd-party skill Claude interaction tracking (v0.2.5) | shipped | 1 | 2337941 | 2026-03-26 | 2026-03-26 | commit:0c9fc765
stripe | feat: Stripe payment integration, context pressure, GHA analysis | shipped | 2 | 2457088 | 2026-03-21 | 2026-03-22 | commit:8eb16f78
context | feat: context pressure warning, GHA analysis, statusline context % | shipped | 1 | 2166141 | 2026-03-20 | 2026-03-20 | commit:894621ba
statusline, | feat: statusline, squash deploy, ClaWHub metadata, SHA256 checksums | shipped | 1 | 2166141 | 2026-03-20 | 2026-03-20 | commit:7ab15883
gumroad | feat: Gumroad license verification via API (v0.2.2) | shipped | 1 | 0 | 2026-03-13 | 2026-03-13 | commit:75c5e480
per-user | feat: per-user email-based license keys with Gumroad webhook (v0.2.2) | shipped | 1 | 0 | 2026-03-13 | 2026-03-13 | commit:6d056958
posttooluse | feat: PostToolUse observability + comprehensive test coverage (v0.2.1) | shipped | 1 | 0 | 2026-03-12 | 2026-03-12 | commit:6e33b7e4
multi-tool | feat: multi-tool hook dispatch, subagent propagation & Read/Grep filters (v0.2.0) | shipped | 1 | 0 | 2026-03-12 | 2026-03-12 | commit:1bf5a108
skill | feat: skill advisor, sharing credits, telemetry & Rust actionbook (v0.1.9) | shipped | 1 | 0 | 2026-03-12 | 2026-03-12 | commit:d41d310e
fire | feat: fire anonymous update-check ping on precc update (opt-out via PRECC_NO_TELEMETRY=1) | shipped | 1 | 0 | 2026-03-10 | 2026-03-10 | commit:7acce69d
enforcefeat: enforce license tier gates (Free/Pro) on ingest, mined skills, gif, mail, savingsshipped102026-03-102026-03-10commit:a7bd23e3
translatefeat: translate git commands to jj (Jujutsu) in colocated reposshipped102026-03-092026-03-09commit:d8a29e48
rtkfeat(rtk): sync rewrite rules with upstream RTK v0.27.2shipped102026-03-092026-03-09commit:ad7dca0e
applyfeat: apply skill portfolio per command for maximum token savingsshipped102026-03-092026-03-09commit:b2490073
pitchfeat(pitch): add bilingual EN/ZH PowerPoint pitch deckshipped202026-02-272026-02-28commit:8876c4b7
hookperf(hook): skip heuristics.db open via plain-text prefix cacheshipped102026-02-272026-02-27commit:89537483
initfeat(init): embed builtin skills in binary via include_str!shipped102026-02-262026-02-26commit:3a837b13
clifeat(cli): add precc skills export commandshipped202026-02-262026-02-26commit:59beea8d
gdbfeat(gdb): re-enable Pillar 2 GDB hook suggestionshipped102026-02-262026-02-26commit:a8428025
skillsfeat(skills): add git wrong-dir skill and context mappingstabilizing202026-02-252026-02-25commit:352474e1
metricsfeat(metrics): record hook latency, rtk_rewrite, cd_prepend via append-logshipped102026-02-252026-02-25commit:9bf31d12
demofeat(demo): add investor demo suiteshipped102026-02-252026-02-25commit:c818a0ac
securityfeat(security): SQLCipher encryption, binary hardening, multi-platform CIshipped102026-02-252026-02-25commit:efd3dfc8
ingestfeat(ingest): add –force flag to re-mine already-recorded sessionsshipped102026-02-222026-02-22commit:85cc8f6f

Dependencies (precc-core modules)

  • advisor → db, promote, skills
  • diet → lean_ctx
  • metrics → db
  • mining → skills
  • mode_selector → db, mode
  • multi_probe → diet, lean_ctx, mode, nushell, post_observe, rtk
  • nushell → lean_ctx, mining, rtk
  • promote → db, skills
  • rtk → lean_ctx
  • sharing → db, license, skills
  • skill_advisor → mining, nushell
  • skills → db
  • telemetry → db, license, mining

Plans & Tasks

Plans (prompts requesting design / architecture)

  • [proposed] indeed the measurement needs to be based on precc-cc’s established KPI’s. If the two ideas are so close, perhaps you can draft a plan to integrate them (algorithmatically) step-by-step, then start to use Rust (consistent with Precc) to impl… — session:905ff169 (2026-04-18)
  • [proposed] Someone left a review on the Spanish-language site: Chinese translation (Traditional): — session:781fe484 (2026-04-16)
  • [proposed] That’s a really solid framing — using pre-tool-call hooks as quality gates instead of just optimization is a big shift in mindset. You’re essentially moving from “make the model cheaper” to “make the system more correct,” whic… — session:ebd81938 (2026-04-05)
  • [proposed] Plan the integration of both tools, make sure we don’t take their credit and maintain a clear interface so that once it evolves, we can get smaller changes to integrate with their future changes — session:43541885 (2026-03-31)
  • [proposed] for the benchmark, we need to prepare a table to record the comparison for existing historical scenarios, as a “what-if” analysis because there is no way to measure the results for future usages. For this requirement, plan out a step-by-ste… — session:5761d7ca (2026-03-27)
  • [proposed] while bash could be improved using RTK, would its replacement with nushell a better choice for Claude Code? If so, plan an option for replacing bash with nushell to gain better accuracy and hence potentially more token savings by some small… — session:5761d7ca (2026-03-27)

Tasks (TaskCreate / TodoWrite entries)

  • completed: 89
  • in_progress: 3
  • deleted: 2

Most recent 30 tasks:

  • [completed] Re-ingest and review residual pending — Run precc mindmap build after the fix, then classify the actually-pending tasks (done-but-unclosed vs genuinely-unfinished). — session:0925455d (2026-04-19)
  • [completed] Fold TaskCreate/TaskUpdate + dedupe TodoWrite — Replay TaskCreate/TaskUpdate events per (session_id, taskId) to derive final status. For TodoWrite, keep only the last call per session. — session:0925455d (2026-04-19)
  • [completed] Run ingest and produce MINDMAP.md — Execute ingest on local sessions + git, then render output to docs/MINDMAP.md. — session:0925455d (2026-04-19)
  • [completed] Wire precc mindmap CLI subcommand — Add ingest/render subcommands to precc-cli. — session:0925455d (2026-04-19)
  • [completed] Write mindmap render module — Query DB and render nested markdown mindmap with KPIs, features, plans, blockers. — session:0925455d (2026-04-19)
  • [completed] Write mindmap ingest module — Parse JSONL sessions + git log, extract messages/tokens/commands/decisions into SQLite. — session:0925455d (2026-04-19)
  • [completed] Design SQLite mindmap schema — Tables: sessions, messages, commands, features, plans, tasks, kpis, decisions, dependencies. Every row traces to source (session_id+uuid or commit sha). — session:0925455d (2026-04-19)
  • [in_progress] Step 4: HeaderSlicePass + kernel corpus — Shallow-clone Linux kernel, adapt filter for kernel conventions (Fixes: tag, selftests/ and kunit test-surface detection, .c/.h classification). Measure how many recent fix commits ship with a test an… — session:905ff169 (2026-04-19)
  • [completed] Step 6: concurrency extraction — Add Pipeline::run_parallel_applies that parallelizes applies() via std::thread::scope when pass count ≥ threshold. Falls back to serial below threshold (thread-spawn overhead > savings). Benchmark s… — session:905ff169 (2026-04-19)
  • [completed] [parallel] AST-aware #[test] extractor — Use syn (Rust) or tree-sitter-rust (Python) to detect added #[test] fns in a commit diff and emit a test-only patch. Gates fail→pass verification on this repo. Not blocking; parallel work for the Ru… — session:905ff169 (2026-04-19)
  • [completed] Step 7: precc skvm report tooling — Wire had_solid_hit into metrics log. Add precc skvm report that surfaces pass activation counts, cache hit rate, hook-latency percentiles. Read from metrics.db + skvm_solid_cache. Closes the observa… — session:905ff169 (2026-04-19)
  • [completed] Wire SolidificationPass into live hook — Add stage_solidification_lookup (front, short-circuits on hit) and stage_solidification_record (end) to Pipeline. Gate behind PRECC_SOLIDIFY. Add had_solid_hit flag. Open cache via db::open_metrics fo… — session:905ff169 (2026-04-19)
  • [completed] Step 3: solidification cache — skvm::solid module: Cache (SQLite-backed) with lookup/record, Key with normalization, SolidificationPass at pipeline front. Gated by PRECC_SOLIDIFY=1. Tests with in-memory DB. No wiring into live hook… — session:905ff169 (2026-04-19)
  • [completed] Wire CdPrependPass into hook’s stage_context — Replace the direct context::resolve/apply calls in precc-hook::Pipeline::stage_context with CdPrependPass via HookIR. Verify no hook tests regress; full cargo test green. — session:905ff169 (2026-04-19)
  • [completed] Step 2: migrate cd_prepend through Pass trait — Re-express the existing cd-prepend stage as a Pass impl that reuses the current context resolution. Diff-test: on a fixture corpus, the new pass must produce byte-identical output to the legacy path. … — session:905ff169 (2026-04-19)
  • [completed] Step 5 preview: CrateSlicePass sketch — Implement CrateSlicePass in precc-core::skvm::passes::crate_slice. Detects cargo <build|test|check|clippy> without -p, reads cached cargo metadata, narrows to -p when unambiguous. Wire a minimal K… — session:905ff169 (2026-04-19)
  • [completed] Step 1: Pass trait + HookIR — precc-core::skvm::{pass, ir}. Pass trait with name/capability/applies/run. HookIR holds command, cwd, and mutable output. Capability enum: Detect|Rewrite|Slice|Verify. No behavior change; no passes re… — session:905ff169 (2026-04-19)
  • [completed] Step 0: baseline harness — Add precc-core::skvm::baseline module + precc report --skvm-baseline subcommand. Snapshots K1 (hook latency p50/p99), K3 (token savings total), activation counts from metrics.db into a named baselin… — session:905ff169 (2026-04-19)
  • [completed] Build K3-only replay corpus — For each of the 82 fix-surface commits, derive ground-truth set of changed crates and emit realistic cargo commands. CrateSlicePass evaluation will read this corpus and measure narrowing precision/rec… — session:905ff169 (2026-04-18)
  • [deleted] Run verifier over 33 candidates — Execute verifier, collect verdicts. Apply size gate to verified set. Emit precc_self_corpus.jsonl. — session:905ff169 (2026-04-18)
  • [deleted] Write fail-at-parent verifier — Per candidate: git worktree at parent, apply only test-file diff, cargo test (expect added tests FAIL), reset + apply full commit, cargo test (expect PASS). Per-worktree CARGO_TARGET_DIR to avoid tras… — session:905ff169 (2026-04-18)
  • [completed] Classify test surface of 33 candidates — Split candidates into pure_test_path (tests/ only) vs mixed_file_test (production + #[test] in same file). Reports count by class. Cheap, no cargo. — session:905ff169 (2026-04-18)
  • [completed] Run first Terminal-bench batch (5 tasks) — Execute scripts/benchmark.sh --tasks 5 using OAuth token from subscription as ANTHROPIC_API_KEY. Verify arm A (vanilla) works, then arm B (PRECC), then compare.json. — session:781fe484 (2026-04-17)
  • [completed] Add precc explain and precc undo — explain --since 1h: lists recent rewrites with diff + skill + confidence (reads stash + rewrite_log). undo <id>: re-disables the skill that produced rewrite id. — session:781fe484 (2026-04-16)
  • [completed] Confidence decay on retry-after-rewrite — post_observe: if same command class is retried within 60s after a PRECC rewrite, decrement skill confidence by 0.05 (or count as false-correction event). Below SUGGEST_THRESHOLD (0.3) skill auto-disab… — session:781fe484 (2026-04-16)
  • [completed] Add precc skills disable/enable per-project — CLI commands to disable a skill in the current project (writes to .precc/disabled-skills file at project root). Hook reads this list and skips matching skills. — session:781fe484 (2026-04-16)
  • [completed] Make every rewrite visible via additionalContext — In precc-hook, whenever the pipeline produces a non-trivial rewrite (cd-prepend, skill, RTK, lean-ctx, nushell, diet), append a one-line summary “PRECC rewrote: <orig> -> <new> [reason]” to additional… — session:781fe484 (2026-04-16)
  • [completed] Soften overstated claims in intro — Replace “Claude never sees the error. No tokens wasted.” with measured language matching README. Update strings_intro.sql and re-translate the new key for all 28 langs. — session:781fe484 (2026-04-16)
  • [completed] Fix per-language html lang and dir — build-book.sh must rewrite book.toml language= and text-direction= per language so generated pages have correct lang/dir attributes. RTL for ar, fa. — session:781fe484 (2026-04-16)
  • [completed] Rebuild book and verify — Run scripts/build-book.sh to regenerate introduction.md per language, verify first lines now show translations — session:781fe484 (2026-04-16)

Blockers (user-reported failure / stuck signals)

  • look at all the historical session logs and executed commands to summarize a mark down document like Mindmap showing (1) the features, status, decisions, dependencies, and effort (tokens releated to its development); (2) the plans, tasks, s… — session:0925455d (2026-04-19)
  • check if it is working? why precc savings --all doesn’t work? — session:ebd81938 (2026-04-13)
  • i tried that url it doesn’t work? — session:ebd81938 (2026-04-08)
  • why I can’t see the “last: “ messages? — session:ebd81938 (2026-04-08)
  • not yet. I would wait to get more data from telemetry to update the website. But now you need to investigate on those “unmeasured” cases, why we cannot measure them? — session:ebd81938 (2026-04-07)
  • regarding the live usage statistics https://precc.cc/en/#live-usage-statistics, we need to report the percentages based on the duration of releases, i.e., how much saving was made by which release (otherwise it is easy to mislead readers to… — session:ebd81938 (2026-04-06)
  • https://precc.cc cannot find the server — session:ebd81938 (2026-04-05)
  • can see key_id mk_1TDiUmFxhHEidPnDw5esdOMa, but cannot reveal or see the sk_live_… — session:d65ad15f (2026-04-01)
  • PS C:\Users\y00577373> iwr -useb https://raw.githubusercontent.com/peria-ai/precc-cc/main/scripts/install.ps1 | iex — session:10175339 (2026-03-30)
  • why can’t you create peria-ai or peri-a-i organizations — session:10175339 (2026-03-28)
  • the hello_world_do example has the following errors: NPU run failed. — session:3b5e2947 (2026-03-22)

Decisions & Rationale

  • feat(bench): clean-subset metrics (exclude timeouts & infra failures) — When one arm times out or the agent fails to install, the resulting tokens/pass numbers aren’t measuring PRECC — they’re measuring tb’s source: commit:5bdd027d (commit 2026-04-17)
  • fix(bench): drop –include-hook-events (causes 401 Invalid API key) — Adding --include-hook-events to the tb agent command caused Claude Code to return api_error_status=401 on first turn, even though the source: commit:025995d9 (commit 2026-04-17)
  • feat: PRECC_MODE=benchmark toggle + pairwise benchmark harness — Problem (from reviewer): the “trivial vs semantic” error-shaping claim is rhetoric without a measurable boundary. A rewriter that saves tokens source: commit:50c5a30f (commit 2026-04-17)
  • docs: update savings.md.tpl + README to match new statusline labels — - Σ → meas: throughout - New ‘bash X% of total’ segment row in segment table source: commit:2d366031 (commit 2026-04-08)
  • feat: clearer statusline labels — meas:, drop confusing %, add bash share — Three statusline UX changes from user feedback: 1. Lifetime segment renamed from ‘Σ 8.9K (22% over 217)’ to source: commit:4cd837b7 (commit 2026-04-08)
  • docs: explain statusline cost vs token semantics in book + README — Adds a ‘Status Bar’ section to docs/book/templates/savings.md.tpl and README.md explaining: source: commit:6028b64c (commit 2026-04-08)
  • feat: v0.3.3 — companion tools default-on, install-script clarity — The single biggest change: install.sh now installs companion tools (lean-ctx, RTK, nushell, cocoindex-code) BY DEFAULT instead of source: commit:48fca046 (commit 2026-04-07)
  • feat: quote-aware chain split + sysadmin tool whitelist (54.2% → 55.5%) — Three improvements that increase measurable Bash invocation coverage: 1. Quote-aware top-level chain split source: commit:f6580598 (commit 2026-04-07)
  • fix: command_class env stripping, skill validation, ssh/journalctl/kubectl diet rules — 1. command_class strips env prefixes and noise: - RUST_BACKTRACE=1 cargo test → “cargo test” source: commit:f4220343 (commit 2026-04-07)
  • feat: multi-mode adaptive compression with failure learning — New modules: - mode.rs: CompressionMode enum (basic/diet/nushell/lean-ctx/rtk/adaptive-expand) source: commit:81475afc (commit 2026-04-07)
  • test: comprehensive tests for ccc and compress modules (319 → 386 tests) — ccc.rs: +20 tests covering edge cases for is_eligible (flags, whitespace, empty input), extract_pattern (no path, multiple flags, boundary length), source: commit:448430e2 (commit 2026-03-20)
  • feat(gdb): re-enable Pillar 2 GDB hook suggestion — - Add open_history_readonly() to db.rs (same pattern as heuristics) - Add count_recent_failures() to gdb.rs: queries failure_fix_pairs for source: commit:a8428025 (commit 2026-02-26)
  • fix(mining): correct summary counters and orphaned events on –force re-mine — Three bugs fixed: 1. mine_session returned Skipped for sessions with no Bash events even source: commit:3ef089d8 (commit 2026-02-22)
  • 1. Compiled Rust Binary vs Shell Script. Decision: Replace the rtk-rewrite.sh shell script hook with a compiled Rust binary (precc-hook). Alternatives considered: source: doc:ALTERNATIVES.md
  • 2. SQLite vs Key-Value Store. Decision: Use SQLite for both history.db and heuristics.db. Alternatives considered: source: doc:ALTERNATIVES.md
  • 3. Workspace of 4 Crates vs Monolith. Decision: Structure the project as a Cargo workspace with 4 crates: precc-core, precc-hook, precc-cli, precc-learner. Alternatives considered: source: doc:ALTERNATIVES.md
  • 4. GDB Hook Integration vs Standalone CLI. Decision: Implement GDB debugging as a CLI command (precc debug) rather than as an automatic hook rewrite. Alternatives considered: source: doc:ALTERNATIVES.md
  • 5. Background Daemon vs On-Demand Mining. Decision: Support both modes — precc-learner daemon for continuous mining, precc ingest for on-demand. Alternatives considered: source: doc:ALTERNATIVES.md
  • 6. Confidence Thresholds. Decision: Three-tier confidence system: auto-apply (≥ 0.7), suggest (0.3-0.7), hidden (< 0.3). Alternatives considered: source: doc:ALTERNATIVES.md
  • 7. RTK Subsumption Strategy. Decision: Port RTK’s rewriting logic into precc-core as the final pipeline stage, rather than running both hooks in sequence. Alternatives considered: source: doc:ALTERNATIVES.md
  • 8. Skill Storage Format. Decision: TOML files for built-in skills, SQLite rows for mined/user skills. Alternatives considered: source: doc:ALTERNATIVES.md
  • 9. Session Log Format. Decision: Read Claude Code’s native JSONL format directly rather than converting to a custom format. Rationale: Claude Code already writes detailed session logs in JSONL format at ~/.claude/projects/*/. Creating a custom format would mean: source: doc:ALTERNATIVES.md
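The three-tier thresholds from decision 6 can be sketched in a few lines of shell. The `tier` helper below is purely illustrative (it is not part of PRECC); only the tier names and cutoffs come from the decision above:

```shell
# Map a skill confidence score to its tier, per decision 6:
# auto-apply (>= 0.7), suggest (0.3 to 0.7), hidden (< 0.3).
tier() {
  awk -v c="$1" 'BEGIN {
    if (c >= 0.7)      print "auto-apply"
    else if (c >= 0.3) print "suggest"
    else               print "hidden"
  }'
}

tier 0.85   # prints "auto-apply"
tier 0.29   # prints "hidden"
```

Note that both boundaries are inclusive on the higher tier: a score of exactly 0.7 auto-applies, and exactly 0.3 is still suggested.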

Key metrics over time

| Metric | Unit | First | Latest | Δ | Samples | Latest source |
|---|---|---|---|---|---|---|
| atx | | 0.1 | 1.25 | +1.15 | 2 | commit:4f65556d |
| build | ms | 3 | 480 | +477 | 2 | commit:f84bab49 |
| hook | ms | 5 | 3 | -2 | 2 | commit:f81e4543 |
| precc | tokens | 423 | 87 | -336 | 2 | commit:e3bc282e |
| saved | ms | 4.8 | 6.3 | +1.5 | 2 | commit:ec17f16c |

Per-session effort (top 10 by tokens)

| Session | First → Last | Messages · Input · Output · Cache write · Cache read |
|---|---|---|
| ebd81938 | 2026-04-04 → 2026-04-13 | 45174547686622246909501020430414 |
| 781fe484 | 2026-04-16 → 2026-04-17 | 143413416035963739362259708120 |
| 10175339 | 2026-03-28 → 2026-03-30 | 131811761024692430047110606429 |
| 5761d7ca | 2026-03-26 → 2026-03-28 | 118043631370562196522116605673 |
| 550c7bab | 2026-03-20 → 2026-03-22 | 10641466104943205973292991217 |
| 905ff169 | 2026-04-18 → 2026-04-19 | 6501698496929157266863432376 |
| d65ad15f | 2026-03-31 → 2026-04-04 | 75255878099184564558334554 |
| 3b5e2947 | 2026-03-22 → 2026-03-23 | 11628961280681526203102403205 |
| 0925455d | 2026-04-19 → 2026-04-19 | 440830262128122605432943523 |
| 43541885 | 2026-03-31 → 2026-03-31 | 566735382683109632841667559 |

Command Reference

A complete reference for every PRECC command.


precc init

Initialize PRECC and register the hook with Claude Code.

precc init

Options:
  (none)

Effects:
  - Registers PreToolUse:Bash hook with Claude Code
  - Creates ~/.local/share/precc/ data directory
  - Initializes heuristics.db with built-in skills
  - Prompts for telemetry consent

precc ingest

Mine session logs for failure-fix patterns.

precc ingest [FILE] [--all] [--force]

Arguments:
  FILE            Path to a session log file (.jsonl)

Options:
  --all           Ingest all session logs from ~/.claude/logs/
  --force         Re-process files that were already ingested

Examples:
  precc ingest session.jsonl
  precc ingest --all
  precc ingest --all --force

precc skills

Manage automation skills.

precc skills list

precc skills list

List all active skills (built-in and mined).

precc skills show

precc skills show NAME

Show detailed information about a specific skill.

Arguments:
  NAME            Skill name (e.g., cargo-wrong-dir)

precc skills export

precc skills export NAME

Export a skill definition as TOML.

Arguments:
  NAME            Skill name
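As a rough sketch of what an exported definition might contain, the snippet below writes a hypothetical skill file. Every field name in it (`pattern`, `rewrite`, `confidence`) is an assumption for illustration, not PRECC's actual schema; run `precc skills export` on a real skill to see the true format:

```shell
# Write a hypothetical skill definition to disk. All field names below are
# illustrative assumptions; check a real `precc skills export` for the schema.
cat > cargo-wrong-dir.toml <<'EOF'
name = "cargo-wrong-dir"
description = "Prepend cd to cargo commands run outside the crate root"

# Assumed regex-style matcher for cargo invocations.
pattern = "^cargo (build|test|check|clippy)"

# Assumed rewrite template with placeholder syntax.
rewrite = "cd {crate_root} && {command}"

confidence = 0.85
EOF

head -n 2 cargo-wrong-dir.toml
```

Because skills are plain TOML, a file like this can be reviewed in a pull request before a teammate imports it.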

precc skills edit

precc skills edit NAME

Open a skill definition in $EDITOR.

Arguments:
  NAME            Skill name

precc skills advise

precc skills advise

Analyze recent sessions and suggest new skills based on repeated patterns.

precc skills cluster

precc skills cluster

Group similar mined skills to identify redundant or overlapping patterns.

precc report

Generate analytics reports.

precc report [--email]

Options:
  --email         Send the report via email (requires mail setup)

precc savings

Show token savings.

precc savings [--all]

Options:
  --all           Show detailed per-command breakdown (Pro)

precc compress

Compress context files to reduce token usage.

precc compress [DIR] [--dry-run] [--revert]

Arguments:
  DIR             Directory or file to compress (default: current directory)

Options:
  --dry-run       Preview changes without modifying files
  --revert        Restore files from backup

precc license

Manage your PRECC license.

precc license activate

precc license activate KEY --email EMAIL

Arguments:
  KEY             License key (XXXX-XXXX-XXXX-XXXX)

Options:
  --email EMAIL   Email address associated with the license

precc license status

precc license status

Display current license status, plan, and expiration.

precc license deactivate

precc license deactivate

Deactivate the license on this machine.

precc license fingerprint

precc license fingerprint

Display the device fingerprint for this machine.

precc mail

Email features.

precc mail setup

precc mail setup

Interactive SMTP configuration. Saves to ~/.config/precc/mail.toml.

precc mail report

precc mail report EMAIL

Send a PRECC analytics report to the specified email address.

Arguments:
  EMAIL           Recipient email address

precc mail send

precc mail send EMAIL FILE

Send a file as an email attachment.

Arguments:
  EMAIL           Recipient email address
  FILE            Path to the file to send

precc update

Update PRECC to the latest version.

precc update [--force] [--version VERSION] [--auto]

Options:
  --force             Force update even if already on latest
  --version VERSION   Update to a specific version
  --auto              Enable automatic updates

precc telemetry

Manage anonymous telemetry.

precc telemetry consent

precc telemetry consent

Opt in to anonymous telemetry.

precc telemetry revoke

precc telemetry revoke

Opt out of telemetry. No further data will be sent.

precc telemetry status

precc telemetry status

Show current telemetry consent status.

precc telemetry preview

precc telemetry preview

Display the telemetry payload that would be sent (without sending it).

precc geofence

IP geofencing compliance (Pro).

precc geofence check

precc geofence check

Check if the current machine is in an allowed region.

precc geofence refresh

precc geofence refresh

Refresh the IP geolocation cache.

precc geofence clear

precc geofence clear

Clear the geofence cache.

precc geofence info

precc geofence info

Display geofence configuration and current status.

precc gif

Record an animated GIF from a bash script (Pro).

precc gif SCRIPT LENGTH [INPUTS...]

Arguments:
  SCRIPT          Path to a bash script
  LENGTH          Maximum recording duration (e.g., 30s, 2m)
  INPUTS...       Optional input lines for interactive prompts

Examples:
  precc gif demo.sh 30s
  precc gif interactive.sh 60s "yes" "my-project"

precc gha

Analyze failed GitHub Actions runs (Pro).

precc gha URL

Arguments:
  URL             GitHub Actions run URL

Example:
  precc gha https://github.com/org/repo/actions/runs/12345678

precc cache-hint

Show cache-hint information for the current project.

precc cache-hint

precc trial

Start a Pro trial.

precc trial EMAIL

Arguments:
  EMAIL           Email address for the trial

precc nushell

Launch a Nushell session with PRECC integration.

precc nushell

FAQ

Is PRECC safe?

Yes. PRECC uses Claude Code's official PreToolUse hook mechanism, the extension point Anthropic designed for exactly this purpose. The hook:

  • runs fully offline (no network calls in the hot path)
  • completes within 5 milliseconds
  • fails open: if anything goes wrong, the original command runs unmodified
  • only modifies commands and never executes them itself
  • stores its data in local SQLite databases
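The fail-open property can be pictured as a small wrapper. Claude Code enforces this contract internally, so the snippet below is a sketch of the behavior rather than something you need to run; it is safe to execute even without PRECC installed:

```shell
# Fail-open contract: if the hook is missing, errors, or exits non-zero,
# the original command JSON is used unchanged.
input='{"tool_input":{"command":"cargo build"}}'
if output=$(printf '%s' "$input" | precc-hook 2>/dev/null); then
  : # hook succeeded; use its (possibly rewritten) output
else
  output="$input"   # hook unavailable or failed: fall back to the original
fi
printf '%s\n' "$output"
```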

Does PRECC work with other AI coding tools?

PRECC is built specifically for Claude Code and depends on the PreToolUse hook protocol that Claude Code provides. It does not work with Cursor, Copilot, Windsurf, or other AI coding tools.

What data does telemetry send?

Telemetry is strictly opt-in. Once enabled, it sends:

  • PRECC version, operating system, and architecture
  • aggregate counts (commands intercepted, skills activated)
  • average hook latency

Command text, file paths, project names, and other personally identifiable information are never sent. You can preview the exact payload with precc telemetry preview before opting in. See Telemetry for details.

How do I uninstall PRECC?

Uninstalling takes three steps:

  1. Remove the hook registration:

    # Delete the hook entry from Claude Code's settings
    # (precc init added it; removing it disables PRECC)
    
  2. Delete the binaries:

    rm ~/.local/bin/precc ~/.local/bin/precc-hook ~/.local/bin/precc-learner
    
  3. Delete the data (optional):

    rm -rf ~/.local/share/precc/
    rm -rf ~/.config/precc/
    

My license expired. What happens?

PRECC falls back to the Community edition. All core features keep working:

  • built-in skills stay active
  • the hook pipeline runs normally
  • precc savings shows the summary view
  • precc ingest and session mining keep working

Pro features stay unavailable until you renew:

  • precc savings --all (detailed breakdown)
  • precc compress
  • precc gif
  • precc gha
  • precc geofence
  • email reports

The hook doesn't seem to run. How do I debug it?

Work through these checks in order:

  1. Check that the hook is registered:

    precc init
    
  2. Test the hook manually:

    echo '{"tool_input":{"command":"cargo build"}}' | precc-hook
    
  3. Check that the binaries are on PATH:

    which precc-hook
    
  4. Inspect the Claude Code hook configuration in ~/.claude/settings.json.
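The checks above can be bundled into one snippet that degrades gracefully when PRECC is absent; the diagnostic messages are illustrative:

```shell
# Run the hook debug checklist in one pass. Each check degrades gracefully,
# so the script is safe to run whether or not PRECC is installed.
if command -v precc-hook >/dev/null 2>&1; then
  echo "precc-hook found at: $(command -v precc-hook)"
  # Feed the hook the same JSON shape Claude Code sends for a Bash command.
  printf '%s' '{"tool_input":{"command":"cargo build"}}' | precc-hook \
    || echo "hook exited non-zero: try 'precc init' to re-register it"
else
  echo "precc-hook is not on PATH: re-run the installer or 'precc init'"
fi
status_msg=$(command -v precc-hook >/dev/null 2>&1 && echo present || echo missing)
echo "hook binary: $status_msg"
```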

Does PRECC slow Claude Code down?

No. The hook completes within 5 milliseconds (p99), which is imperceptible next to the time Claude spends reasoning and generating replies.

Can I use PRECC in CI/CD?

PRECC is designed for interactive Claude Code sessions; in CI/CD there is no Claude Code instance to hook into. However, precc gha can analyze failed GitHub Actions runs from any environment.

How do mined skills differ from built-in skills?

Built-in skills ship with PRECC and cover common wrong-directory patterns. Mined skills are learned from your own session logs, capturing the patterns unique to your workflow. Both are stored in SQLite and evaluated identically by the hook pipeline.

Can I share skills with my team?

Yes. Use precc skills export NAME to export any skill as TOML and share the file. Teammates can drop it into their skills/ directory or import it into their heuristics database.

Other languages