Claw EA turns Telegram into a controlled command surface for enterprise agents: it runs them on OpenClaw as the baseline runtime and enforces real permissions at execution time instead of relying on prompt text. You use a WPC (Work Policy Contract) to define what the agent is allowed to do, a CST (scoped token) to bind each run to that policy, and clawproxy to emit gateway receipts for every model call.
In Telegram, approvals show up as explicit “approve or deny” steps in the chat, and the run produces a proof bundle you can verify later. This is a good fit for on-call and field workflows where Telegram is already adopted, but it is not a substitute for a full enterprise collaboration suite’s compliance controls.
Step-by-step runbook
1) Create a Telegram bot and lock down who can talk to it. Configure the bot via the official API, then restrict usage to known user IDs and specific group chats where possible. Treat the bot token as a production secret and rotate it on any suspicion of exposure.
2) Stand up the OpenClaw Gateway with the Telegram channel enabled. Keep the Gateway private and run `openclaw security audit` regularly to catch common footguns like open group policies or unsafe filesystem permissions. Use Docker sandboxing for tool execution unless you have a documented reason not to.
3) Write and publish a WPC that matches your Telegram operating model. Put concrete constraints in the WPC: which Telegram chats are allowed, whether a mention is required, what tools can run, and which actions require an approval message. Publish the signed WPC to the WPC registry (served by clawcontrols), and treat the policy hash as the immutable identifier.
4) Issue a CST for the agent run, pinned to the policy. Use clawscope to issue a CST with a scope hash aligned to your tool and channel permissions, optionally pinning it to the WPC policy hash. Use short TTLs for Telegram agents, and issue a new CST per job or on-call shift.
5) Route model calls through clawproxy for receipts. Configure the OpenClaw provider path so model traffic is proxied via clawproxy, which emits gateway receipts for each model call. If you use OpenRouter via fal, keep it behind clawproxy so the receipts cover the call boundary.
6) Implement “approval-in-chat” as a hard gate, not a suggestion. In Telegram, have the agent post a structured approval request (what action, which target, and the policy hash), then wait for an explicit /approve or /deny from an allowlisted approver. The execution layer should refuse to proceed without the approval event bound to the current job context.
7) Export and store proof for audit. At the end of a run, collect the proof bundle that includes gateway receipts and metadata needed for verification. Store it internally and optionally publish a Trust Pulse artifact for easier viewing during audits or incident reviews.
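The hard gate in step 6 can be sketched as plain Python. The event store, field names, and helper functions below are hypothetical (real Telegram update handling depends on your OpenClaw channel configuration); the point is that execution checks for a recorded approval event bound to the current job, rather than trusting chat text.

```python
# Hypothetical in-memory approval log; in practice this would be the
# Telegram update stream as persisted by the gateway.
APPROVAL_EVENTS: list[dict] = []

def record_event(user_id: str, chat_id: str, command: str, job_id: str) -> None:
    """Append a chat command event (e.g. /approve or /deny) to the log."""
    APPROVAL_EVENTS.append(
        {"user_id": user_id, "chat_id": chat_id, "command": command, "job_id": job_id}
    )

def approval_granted(job_id: str, chat_id: str, approvers: set[str]) -> bool:
    """True only if an allowlisted approver sent /approve for this exact job.

    An explicit /deny from any allowlisted approver wins over any approval.
    """
    relevant = [
        ev for ev in APPROVAL_EVENTS
        if ev["job_id"] == job_id
        and ev["chat_id"] == chat_id
        and ev["user_id"] in approvers
    ]
    if any(ev["command"] == "/deny" for ev in relevant):
        return False
    return any(ev["command"] == "/approve" for ev in relevant)

def execute_gated(job_id: str, chat_id: str, approvers: set[str], action):
    """Refuse to run the action unless an approval event is bound to this job."""
    if not approval_granted(job_id, chat_id, approvers):
        raise PermissionError(f"no approval bound to job {job_id!r}")
    return action()
```

The key design choice is that approval is a stored event keyed by job ID, chat ID, and approver, so an approval from the wrong chat, the wrong user, or a previous job never unlocks execution.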
Threat model
Telegram is convenient, but it is a high-risk input channel because untrusted text is the primary interface. The control plane has to assume adversarial prompts, forwarded messages, and accidental operator commands.
| Threat | What happens | Control |
|---|---|---|
| Prompt injection via chat content | A user pastes “ignore your rules” content and the agent attempts a sensitive tool action. | Permissioned execution with a WPC and OpenClaw tool policy. The model can ask, but the runtime denies calls outside the allowlist. |
| Impersonation or wrong-chat activation | The bot acts on commands from the wrong user, or in an unapproved group chat. | WPC chat and user allowlists, plus “require mention” semantics at the channel layer. Separate CST per job to prevent reuse across contexts. |
| Credential misuse after token leakage | A leaked CST or bot token is reused to run unauthorized actions. | Short CST TTLs and revocation via clawscope. Job-scoped CST binding provides anti-replay, so a leaked token cannot be reused across jobs. |
| Silent model changes or disputed outputs | Teams cannot prove what model was called or what inputs produced an instruction. | Gateway receipts from clawproxy for each model call, assembled into a proof bundle for verification and audit. |
| Excessive tool blast radius | A “helpful” agent starts writing files, running shell commands, or accessing broad network resources. | OpenClaw sandboxing for tool execution and minimal tool allowlists. Use “elevated” mode only with explicit gating and a written operational runbook. |
Policy-as-code example
This is a simplified, JSON-like WPC sketch for a Telegram-operated agent. The goal is to make Telegram a request and approval surface, while keeping execution bounded by explicit tool and data permissions.
```json
{
  "wpc_version": "v1",
  "policy_name": "telegram-oncall-agent",
  "channel": {
    "type": "telegram",
    "allowed_chat_ids": ["-1001234567890"],
    "allowed_user_ids": ["11111111", "22222222"],
    "require_mention": true
  },
  "session": {
    "job_scoped": true,
    "max_ttl_seconds": 3600
  },
  "tools": {
    "allow": ["http_get", "ticket_create", "ticket_comment", "runbook_search"],
    "deny": ["shell_exec", "filesystem_write", "secrets_dump"]
  },
  "approvals": [
    {
      "action": "ticket_create",
      "required_from_user_ids": ["22222222"],
      "telegram_command": "/approve"
    },
    {
      "action": "any_external_change",
      "required_from_user_ids": ["22222222"],
      "telegram_command": "/approve"
    }
  ],
  "model_calls": {
    "must_proxy_via": "clawproxy",
    "receipts_required": true
  }
}
```
Prompt-only controls fail here because the model can be convinced to “agree” to constraints while still calling tools. A WPC makes the constraint enforceable at the tool boundary, and the CST binds the running agent to that exact policy hash.
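Enforcement at the tool boundary can be illustrated with a few lines of Python. This is a sketch against the JSON-like WPC shape above, not the actual OpenClaw runtime: the deny list wins unconditionally, and anything not explicitly allowed is refused regardless of what the model says.

```python
def tool_allowed(policy: dict, tool: str) -> bool:
    """Deny list wins; otherwise the tool must appear on the allow list."""
    tools = policy.get("tools", {})
    if tool in tools.get("deny", []):
        return False
    return tool in tools.get("allow", [])

def call_tool(policy: dict, tool: str, fn, *args):
    """Gate every tool invocation at the runtime, not in the prompt."""
    if not tool_allowed(policy, tool):
        raise PermissionError(f"tool {tool!r} is outside the WPC allowlist")
    return fn(*args)
```

Because the check runs in the execution layer, a prompt-injected "please run shell_exec" produces a denied call and a log entry, not an executed command.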
What proof do you get?
Every model call routed through clawproxy produces gateway receipts that you can verify later, even if the chat transcript is incomplete. Those receipts are then packaged into a proof bundle along with run metadata so you can answer: what was called, under which policy, and in which job context.
Because runs can be bound to a job-scoped CST, you get practical anti-replay properties for audits. If you publish to Trust Pulse, you can keep an external, marketplace-stored artifact for viewing and review without re-running the job.
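An auditor-side verification pass can be sketched as follows. The bundle and receipt field names here are hypothetical (the real clawproxy receipt format is product-specific); the sketch assumes each receipt carries a canonical-JSON digest of its payload and the policy hash it ran under, so tampering with either is detectable offline.

```python
import hashlib
import json

def payload_digest(payload: dict) -> str:
    """Hash a canonical JSON encoding so key order cannot change the digest."""
    blob = json.dumps(payload, sort_keys=True, separators=(",", ":")).encode()
    return hashlib.sha256(blob).hexdigest()

def verify_bundle(bundle: dict, expected_policy_hash: str) -> bool:
    """Check every receipt's digest and its binding to the pinned policy."""
    for receipt in bundle.get("receipts", []):
        if receipt["policy_hash"] != expected_policy_hash:
            return False  # receipt was produced under a different WPC
        if receipt["digest"] != payload_digest(receipt["payload"]):
            return False  # payload was altered after the receipt was emitted
    return True
```

Run against a stored bundle, this answers the audit questions directly: any edit to a call payload or a policy-hash mismatch fails verification without re-running the job.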
Rollback posture
Telegram agents should be operated like production automation: assume you will need to stop runs quickly and prove what happened. Rollback is a mix of token revocation, channel disablement, and tightening the WPC and tool policy for the next run.
| Action | Safe rollback | Evidence |
|---|---|---|
| Compromised run or suspicious behavior | Revoke the CST in clawscope and stop accepting new jobs for that agent. | CST revocation record and the proof bundle for the run in question. |
| Wrong-chat activation | Disable the Telegram channel entry for the agent and rotate the Telegram bot token. | Chat IDs in the WPC, plus Telegram transcript correlation with the run’s proof bundle metadata. |
| Tool policy too permissive | Update the WPC to remove tools, then issue a new CST pinned to the new policy hash. | Policy hash change history, plus receipts showing attempted tool calls being denied in the next run. |
| Disputed model output | Re-verify the proof bundle and review gateway receipts for the exact model call sequence. | Gateway receipts emitted by clawproxy, bundled and verifiable. |
FAQ
Why use Telegram as a control plane at all?
Telegram is fast for on-call and field teams, and it works well for approvals and status updates. It is weaker for enterprise retention and compliance than dedicated enterprise collaboration platforms, so treat it as an operational surface, not your system of record.
What makes this permissioned execution instead of “a careful prompt”?
A careful prompt is advisory and can be overridden by chat content. A WPC plus CST makes constraints enforceable at execution time, so the runtime can deny tool calls even when the model tries to proceed.
How do approvals work in the chat UI?
The agent posts a structured approval request message that includes the action, target, and the policy hash, then pauses. Only an allowlisted approver can send /approve in the same chat to unlock the next step, and that approval is bound to the current job context.
Do you provide a native Telegram connector?
Claw EA focuses on secure execution, policy, and proof for OpenClaw agents. Telegram integration is typically done via OpenClaw’s channel support and the Telegram official API, and deeper enterprise workflows can be implemented via an MCP server or an enterprise buildout.
What can auditors verify after the fact?
They can verify gateway receipts for model calls and review the proof bundle metadata that ties the run to a specific WPC and CST context. This is designed to reduce “we think the model did X” uncertainty during incident review.