A Prompt Pack is a hash-only policy artifact that commits an OpenClaw job to a specific set of prompts and routing rules, then binds execution approvals to that commitment. Instead of trusting whatever text happens to be in the live prompt, you approve a stable digest and require the runtime to prove it used that exact pack.

In Claw EA, the Prompt Pack digest is referenced by a WPC and optionally pinned in the CST so the run fails closed if the prompts change. Model calls are routed through clawproxy to produce gateway receipts, and the job output is packaged as a proof bundle for audit.

Step-by-step runbook

  1. Author the Prompt Pack content and compute a digest. Treat the pack as the minimal set of prompt text plus a small amount of structured metadata (version, intended tools, model class, and redaction expectations). Only the digest is used for approvals so the approval record is stable and reviewable.

  2. Create or update a WPC that references the Prompt Pack digest. The WPC is the permission boundary: it ties “what this job is allowed to do” to a signed, hash-addressed policy artifact served by clawcontrols. Reviewers approve the WPC and the Prompt Pack digest together as the execution contract.

  3. Issue a CST with scope hash and optional policy hash pinning. The CST is issued by clawscope and can carry a scope hash that constrains what the runtime may request. If you pin the policy hash, the token becomes unusable for any run that cannot present the matching WPC and Prompt Pack commitment.

  4. Run the job in OpenClaw with the WPC enforced at the execution layer. OpenClaw is the baseline agent runtime, and it already separates sandboxing from tool policy. Keep local tool policy and sandboxing tight, then use the WPC to make remote approvals and policy checks deterministic.

  5. Route model calls through clawproxy for gateway receipts. When the agent calls a model, clawproxy emits signed gateway receipts for the model call. This lets you verify what model endpoint was contacted and bind the receipts to the job context you approved.

  6. Collect the proof bundle and publish to Trust Pulse as needed. A proof bundle packages the gateway receipts plus related metadata (job identity, policy references, and integrity fields) so auditors can verify the run. If you use Trust Pulse, you can store and view the artifact for later review.
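Step 1 can be made concrete with a small digest helper. This is a sketch under assumptions: the pack layout and field names are illustrative, not a fixed schema; only sha256 and unpadded base64url encoding are taken from the digest_alg/digest_b64u convention in the policy-as-code example later in this page.

```python
import base64
import hashlib
import json

def prompt_pack_digest(pack: dict) -> str:
    """Digest a Prompt Pack: canonical JSON -> sha256 -> base64url (no padding).

    Canonical JSON (sorted keys, no extra whitespace) keeps the digest
    deterministic across serializers, so the approval record is stable.
    """
    canonical = json.dumps(pack, sort_keys=True, separators=(",", ":")).encode("utf-8")
    raw = hashlib.sha256(canonical).digest()
    return base64.urlsafe_b64encode(raw).rstrip(b"=").decode("ascii")

# Illustrative pack layout; the real schema is whatever your team standardizes on.
pack = {
    "version": "1",
    "prompts": {"system": "You triage support tickets. Classify and draft; never send."},
    "metadata": {"model_class": "text", "intended_tools": ["read_ticket", "write_draft"]},
}
print(prompt_pack_digest(pack))
```

Reviewers approve the printed digest; any byte-level change to the pack, including whitespace inside prompt text, produces a different digest.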

Threat model

Prompt-only controls fail in predictable ways because prompts are easy to edit, easy to inject into, and hard to audit after the fact. A permissioned execution layer uses policy-as-code so the runtime can fail closed on mismatches and produce verifiable evidence.

Threat: Silent prompt drift between approval and execution
What happens: A developer “tweaks” wording, a template expands differently, or a runtime fetch returns a new prompt revision. The job still runs, but the approved intent is no longer what executed.
Control: Approve a Prompt Pack digest and reference it from a WPC. Optionally pin the policy hash in the CST so the run fails closed if the WPC or Prompt Pack commitment does not match.

Threat: Prompt injection that changes tool intent
What happens: Untrusted input convinces the agent to perform a high-impact action even though the system prompt said “be careful.” Post-incident, you cannot prove which tool policy was effectively in force.
Control: Put tool and action constraints in the WPC, not just in text. Keep OpenClaw tool policy and sandbox settings restrictive, and require WPC verification before execution proceeds.

Threat: Receipt gap (no way to verify model calls)
What happens: After a bad output, you cannot prove which model was called, what parameters were used, or whether a different provider was substituted.
Control: Route model calls through clawproxy and collect gateway receipts. Verify receipts during audit and include them in the proof bundle.

Threat: Token replay across jobs
What happens: A CST leaked from logs or CI is reused to run a different job with a different payload. The run appears authorized because the token is still valid.
Control: Use marketplace anti-replay binding (job-scoped CST binding). Bind the CST to the job identity so it cannot be reused for a different run context.
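The anti-replay control can be sketched as a job-scoped binding check. Everything here is an assumption for illustration: the token and job identifiers, the HMAC construction, and the key handling do not describe clawscope's actual wire format.

```python
import hashlib
import hmac

ISSUER_KEY = b"demo-issuer-key"  # placeholder; a real issuer would use managed key material

def bind_cst(token_id: str, job_id: str) -> str:
    """Derive a job-scoped binding tag so a CST is only valid for one job identity."""
    return hmac.new(ISSUER_KEY, f"{token_id}|{job_id}".encode(), hashlib.sha256).hexdigest()

def verify_binding(token_id: str, job_id: str, tag: str) -> bool:
    # Constant-time comparison avoids leaking tag bytes through timing.
    return hmac.compare_digest(bind_cst(token_id, job_id), tag)

tag = bind_cst("cst-123", "job-alpha")
assert verify_binding("cst-123", "job-alpha", tag)    # original job context: accepted
assert not verify_binding("cst-123", "job-beta", tag) # replayed on another job: rejected
```

The design point is that a leaked token plus its tag still cannot authorize a different run context, because the tag commits to the job identity.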

Policy-as-code example

This example shows a minimal “Prompt Pack commitment” embedded as a reference inside a WPC. The key idea is that reviewers approve hashes and invariants, not free-form text that can drift.

{
  "kind": "wpc",
  "version": "1",
  "work": {
    "name": "prompt-pack:customer-support-triage",
    "purpose": "classify + draft response, no outbound actions"
  },
  "prompt_pack": {
    "digest_alg": "sha256",
    "digest_b64u": "m9f...E2Q",
    "declared_inputs": ["ticket_text", "account_tier"],
    "declared_outputs": ["label", "draft_reply"]
  },
  "execution": {
    "models": [{
      "route": "openrouter_via_fal_through_clawproxy",
      "family": "text",
      "max_output_tokens": 600
    }],
    "tools": {
      "allow": ["read_ticket", "write_draft"],
      "deny": ["send_email", "exec", "browser"]
    }
  },
  "verification": {
    "require_gateway_receipts": true,
    "require_policy_hash_match": true
  }
}

Validation rules (fail closed): if the presented Prompt Pack digest does not match the WPC reference, execution stops. If policy hash pinning is used in the CST and the runtime cannot prove it is operating under that policy hash, the run stops.

If the run is configured to require gateway receipts and clawproxy is unreachable, the run should be treated as non-compliant and halted or quarantined, depending on your operating procedure. The goal is to avoid “best effort” execution for workloads that require approvals.

What proof do you get?

You get gateway receipts for each model call, emitted by clawproxy and signed so they can be verified later. These receipts are the evidence that model traffic went through the controlled path, not a direct provider call that bypasses controls.

You also get a proof bundle that packages the receipts with binding metadata, including what WPC was used and what job context the CST was bound to. In audits, you verify that the Prompt Pack digest, WPC reference, and receipts all agree, and that the run is not a replay.

If you store the bundle in Trust Pulse, the artifact can be viewed later for investigation and review. Keep the Prompt Pack itself in your internal repository; the approval and audit path only needs the digest and the proof bundle.
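An audit pass over a bundle might look like the following sketch. The bundle layout (wpc_ref, pack_digest, job_id, receipts) is assumed for illustration; a real verifier would also check receipt signatures against clawproxy's public key, which this consistency check omits.

```python
def audit_proof_bundle(bundle: dict, approved_wpc_hash: str,
                       approved_pack_digest: str) -> list[str]:
    """Return audit findings; an empty list means the bundle is internally consistent."""
    findings = []
    if bundle.get("wpc_ref") != approved_wpc_hash:
        findings.append("WPC reference does not match the approved WPC hash")
    if bundle.get("pack_digest") != approved_pack_digest:
        findings.append("Prompt Pack digest does not match the approved digest")
    receipts = bundle.get("receipts", [])
    if not receipts:
        findings.append("no gateway receipts present")
    for i, receipt in enumerate(receipts):
        # Every receipt must be bound to this bundle's job identity (anti-replay).
        if receipt.get("job_id") != bundle.get("job_id"):
            findings.append(f"receipt {i} bound to a different job (possible replay)")
    return findings
```

Returning findings rather than a boolean lets an auditor see every discrepancy in one pass instead of stopping at the first mismatch.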

Rollback posture

Prompt Pack rollbacks should be operationally boring: swap a digest, update the WPC reference, and invalidate old approvals. Do not “hot edit” prompts and hope people notice, since that defeats permissioned execution approvals.

Action: Bad prompt revision produces unsafe outputs
Safe rollback: Revert to the last known-good Prompt Pack digest and update the WPC to reference it. Issue new CSTs pinned to the reverted policy hash.
Evidence: New proof bundles show the reverted digest and the new WPC reference. Old runs remain auditable and distinguishable by digest.

Action: Discovered tool overreach in policy
Safe rollback: Tighten the WPC tool allow list and deny risky tools explicitly. Re-check OpenClaw tool policy and sandbox settings to confirm the local blast radius is still limited.
Evidence: Proof bundles show the updated WPC hash and continued presence of gateway receipts. OpenClaw configuration audits can be run separately for local posture.

Action: Suspected CST leakage
Safe rollback: Revoke affected CSTs via clawscope operations and rotate issuance practices. Require job-scoped CST binding so stolen tokens cannot authorize new jobs.
Evidence: Subsequent runs require new CSTs and produce proof bundles bound to the new job identities. Investigation uses issuance and revocation records plus proof bundles.

Action: Need to pause all runs under a prompt family
Safe rollback: Stop issuing CSTs for that scope hash and require a new WPC version for resumption. Treat this as a controlled change, not an ad hoc prompt edit.
Evidence: The absence of valid CSTs blocks new runs. Later runs show the new WPC hash and Prompt Pack digest in their proof bundles.
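The "operationally boring" digest swap in the first row reduces to a few lines. This sketch assumes the WPC shape from the policy example above and a simple integer version field; both are illustrative, not a mandated schema.

```python
import json

def rollback_prompt_pack(wpc: dict, known_good_digest: str) -> dict:
    """Return a new WPC revision that points back at a known-good Prompt Pack digest."""
    new_wpc = json.loads(json.dumps(wpc))  # deep copy; the old approval record stays intact
    new_wpc["prompt_pack"]["digest_b64u"] = known_good_digest
    new_wpc["version"] = str(int(wpc["version"]) + 1)  # bump so revisions are distinguishable
    return new_wpc
```

Because old and new WPCs differ by hash, runs under each remain separately auditable, which is the evidence property the table above relies on.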

FAQ

Why is permissioned execution required instead of a strong system prompt?

A system prompt is not an enforcement boundary; it is mutable text that can drift and can be overridden by injection. Permissioned execution ties approvals to WPC constraints and a Prompt Pack digest so the runtime can detect changes and fail closed.

What exactly is “hash-only” about a Prompt Pack?

The approval object is the digest, not the raw prompt text. You can keep the full Prompt Pack in your repository, but the execution layer only needs the digest and a deterministic way to confirm the runtime used it.

How does a CST relate to the Prompt Pack approval?

A CST can carry a scope hash and optionally pin a policy hash, which makes the token unusable if the runtime cannot present the intended policy commitment. This is how you prevent a valid token from being used with a different WPC or different Prompt Pack digest.

What do gateway receipts prove and what do they not prove?

Gateway receipts prove that model calls went through clawproxy and provide verifiable metadata for those calls. They do not prove the model’s internal reasoning, and they do not guarantee that a user did not feed malicious input, so you still need tool policy and sandboxing.

Can Prompt Packs be used with Microsoft prompt tooling?

Yes, you can treat prompt assets authored in Microsoft tooling as inputs to your Prompt Pack, then approve and run by digest. The binding and verification still happen at the execution layer using WPC, CST, gateway receipts, and proof bundles, independent of how the prompts were authored.
