Claw EA turns Google Chat into a permissioned control plane for enterprise agents, where actions are executed only under a Work Policy Contract (WPC) and every model call is receipted. OpenClaw is the baseline agent runtime, and Claw EA adds policy-as-code, scoped tokens (CST), gateway receipts, and proof bundles so your security team can verify what happened after the fact.

Prompt-only rules are not enough in chat environments because any user message can carry prompt injection, tool-jailbreak attempts, or “act-as-admin” social engineering. The execution layer must be permissioned so the agent cannot exceed a machine-enforced tool policy even when the prompt tries to override it.

Step-by-step runbook

1) Define the Google Chat surface you will allow: which spaces, which user identities, and what triggers count as valid work. If you are using a Google Chat app, implement this with the official Chat API's handling of messages, mentions, slash commands, and interactive cards.
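
As a concrete gate, the sketch below accepts only messages from allowlisted spaces and senders and requires an explicit mention or slash command before any work starts. The event shape follows the Google Chat app event payload; the allowlist values are placeholders that mirror the WPC example later on this page.

// Sketch: gate incoming Google Chat events before any agent work starts.
const ALLOWED_SPACES = new Set(["spaces/AAA...", "spaces/BBB..."]);
const ALLOWED_SENDERS = new Set(["users/alice@corp", "users/oncall@corp"]);

interface ChatEvent {
  type: "MESSAGE" | "ADDED_TO_SPACE" | "CARD_CLICKED";
  space: { name: string };
  user: { name: string };
  message?: { text: string; annotations?: { type: string }[] };
}

function isValidWork(event: ChatEvent): boolean {
  if (event.type !== "MESSAGE") return false;
  if (!ALLOWED_SPACES.has(event.space.name)) return false;
  if (!ALLOWED_SENDERS.has(event.user.name)) return false;
  // require_explicit_trigger: only act on an explicit @mention or slash command.
  const triggered = event.message?.annotations?.some(
    (a) => a.type === "USER_MENTION" || a.type === "SLASH_COMMAND"
  );
  return Boolean(triggered);
}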

2) Start from a minimal OpenClaw tool profile and enable sandboxing for channel sessions, then deny elevated execution by default. This constrains what the agent can do even if a chat prompt tries to escalate into filesystem or shell actions.
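
A deny-by-default profile might look like the sketch below. The key names are illustrative rather than OpenClaw's actual configuration schema; what matters is the posture: explicit allow, sandbox every channel session, elevated execution off.

// Illustrative deny-by-default tool profile (key names are assumptions).
const toolProfile = {
  tools: {
    allow: ["summarize", "draft_reply", "lookup_internal_doc"],
    deny: ["shell.exec", "filesystem.write", "browser.remote_control"],
  },
  sandbox: {
    mode: "all",           // sandbox every channel session
    workspaceAccess: "ro", // read-only workspace access
  },
  elevated: {
    enabled: false,        // no escalation path from chat input
  },
};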

3) Write a WPC that describes allowed tools, data handling, and approval requirements, then publish it to the WPC registry (served by clawcontrols) so it is signed and hash-addressed. In Claw EA runs, pin the policy by hash so the runtime fails closed if the policy changes unexpectedly.
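
A fail-closed pinning check can be as simple as recomputing the policy hash at job start and refusing to run on any mismatch. In this sketch the registry URL, endpoint path, and hashing of the raw policy body are assumptions; substitute however clawcontrols actually serves and addresses the WPC.

// Sketch: fetch the signed WPC, recompute its hash, and fail closed on mismatch.
import { createHash } from "node:crypto";

const PINNED_WPC_HASH = "sha256:..."; // recorded when the policy was published

async function loadPinnedPolicy(registryUrl: string): Promise<unknown> {
  const res = await fetch(`${registryUrl}/wpc/google-chat-agent-prod`);
  const body = await res.text();
  const hash = "sha256:" + createHash("sha256").update(body).digest("hex");
  if (hash !== PINNED_WPC_HASH) {
    // Fail closed: a changed policy must never be picked up silently.
    throw new Error(`WPC hash mismatch: got ${hash}, pinned ${PINNED_WPC_HASH}`);
  }
  return JSON.parse(body);
}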

4) Configure Claw EA to mint a CST (issued by clawscope) for each job, carrying a scope hash and, optionally, a pinned policy hash. Bind the CST to the job so replaying the token in a different chat or a later run is rejected.
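
The sketch below shows the kind of claims a job-scoped CST might carry and the replay check a runtime would apply. The field names are assumptions for illustration; the anti-replay property comes from binding the token to exactly one job and giving it a short expiry.

// Sketch: job-scoped CST claims and a minimal replay check.
interface CstClaims {
  job_id: string;      // unique per run; reuse in another run is rejected
  space: string;       // the Chat space this job was approved for
  scope_hash: string;  // hash of the allowed tool scope
  policy_hash: string; // pinned WPC hash, when pinning is enabled
  exp: number;         // short lifetime, epoch seconds
}

function isReplay(claims: CstClaims, currentJobId: string, nowSec: number): boolean {
  return claims.job_id !== currentJobId || claims.exp < nowSec;
}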

5) Route model calls through clawproxy, including OpenRouter via fal when you use that path, so you get gateway receipts for every model call. Keep the Google Chat side “thin” and treat it as an input and approval surface, not the enforcement point.
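
Routing through the gateway usually means pointing the model client's base URL at clawproxy instead of the provider. The URL, header, and response handling below are assumptions modeled on a typical OpenAI-style completions call; the point is that every call traverses the gateway, so a signed receipt exists for it.

// Sketch: send model traffic through the gateway, authenticated by the job CST.
async function callModel(prompt: string, cst: string): Promise<string> {
  const res = await fetch("https://clawproxy.internal/v1/chat/completions", {
    method: "POST",
    headers: {
      "content-type": "application/json",
      authorization: `Bearer ${cst}`, // job-scoped CST, not a long-lived key
    },
    body: JSON.stringify({
      model: "MODEL_ID", // e.g. the OpenRouter-via-fal path mentioned above
      messages: [{ role: "user", content: prompt }],
    }),
  });
  if (!res.ok) throw new Error(`gateway rejected call: ${res.status}`);
  const data = await res.json();
  return data.choices[0].message.content;
}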

6) Implement approvals in the chat UI: the agent posts an approval request (for example, “Send message to space X” or “Create ticket with text Y”), and only after approval do you start a job that has the CST pinned to the WPC. This can be implemented with interactive cards via the official Chat API, or via an MCP server that fronts your internal approval system.
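
For the card path, the agent posts a message whose body follows the Google Chat cardsV2 shape, as in the sketch below; the action function name and parameters are assumptions wired to your own CARD_CLICKED handler.

// Sketch: an approval request rendered as an interactive card.
function approvalCard(action: string, jobId: string, wpcHash: string) {
  return {
    cardsV2: [{
      cardId: `approve-${jobId}`,
      card: {
        header: { title: "Approval required", subtitle: action },
        sections: [{
          widgets: [{
            buttonList: {
              buttons: [{
                text: "Approve",
                onClick: {
                  action: {
                    function: "approveJob", // your CARD_CLICKED handler
                    parameters: [
                      { key: "job_id", value: jobId },
                      { key: "wpc_hash", value: wpcHash },
                    ],
                  },
                },
              }],
            },
          }],
        }],
      },
    }],
  };
}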

Threat model

Google Chat is a high-risk input channel because many people can type into the same space and past messages get quoted. Treat every message as untrusted input and assume prompt injection is routine, not exceptional.

Threat: Prompt injection in a space
What happens: A user message instructs the agent to ignore policy, exfiltrate data, or run unsafe tools.
Control: Permissioned execution: OpenClaw tool policy plus sandboxing, with the WPC defining allowed tools and required approvals, so “ignore instructions” cannot expand permissions.

Threat: Tool misuse via “helpful” automation
What happens: The agent acts on the wrong target (wrong space, wrong customer, wrong system) because context is ambiguous.
Control: The WPC requires explicit target binding (space IDs, project IDs) and forces approvals for cross-space posting or external side effects.

Threat: Credential leakage in chat
What happens: Secrets or tokens get pasted into the conversation or echoed back by the agent.
Control: Redaction discipline plus narrow tool outputs; keep secrets out of chat and use short-lived CSTs with a pinned scope hash for execution identity.

Threat: Model call disputes
What happens: Teams cannot prove which model was called, what inputs were sent, or whether a run respected policy.
Control: Gateway receipts from clawproxy and proof bundles for each run, suitable for verification and later audit.

Threat: Replay of an approval or token
What happens: An attacker reuses an earlier approval artifact or token to re-trigger an action later.
Control: Marketplace anti-replay protections plus job-scoped CST binding and policy hash pinning, so “approve once, run different code later” is rejected.

Policy-as-code example

This example shows the intent for a Google Chat controlled agent: accept work only from specific spaces, require mention to reduce accidental triggers, and force approvals for external side effects. The WPC is signed and hash-addressed, and the run pins to that hash.

{
  "wpc": {
    "name": "google-chat-agent-prod",
    "version": "2026-02-11",
    "channel": "google_chat",
    "inputs": {
      "allowed_spaces": ["spaces/AAA...", "spaces/BBB..."],
      "allowed_senders": ["users/alice@corp", "users/oncall@corp"],
      "require_explicit_trigger": true
    },
    "tools": {
      "allow": ["summarize", "draft_reply", "lookup_internal_doc"],
      "deny": ["shell.exec", "filesystem.write", "browser.remote_control"],
      "sandbox": { "mode": "all", "workspaceAccess": "ro" }
    },
    "approvals": [
      { "when": "post_to_other_space", "required": true },
      { "when": "external_ticket_create", "required": true }
    ],
    "data_handling": {
      "no_secrets_in_chat": true,
      "redact_outputs": ["tokens", "api_keys"]
    }
  },
  "execution": {
    "cst": {
      "scope_hash": "sha256:...",
      "policy_hash_pinning": "sha256:... (WPC hash)"
    }
  }
}

In practice, the agent can still draft text and propose actions in Google Chat, but it cannot execute the gated actions until an approval event results in a new job with the pinned WPC. This prevents “approve the idea” from becoming “approve any future action.”
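
Concretely, the approval click should create a fresh job rather than mutate a running one. In the sketch below, mintCst and startJob are placeholders for your clawscope and scheduler integrations, and the click payload is simplified from the real CARD_CLICKED event.

// Sketch: an approval click starts a new job pinned to the approved WPC hash.
declare function mintCst(args: { jobId: string; policyHash: string }): Promise<string>;
declare function startJob(args: { jobId: string; cst: string; pinnedWpcHash: string }): Promise<void>;

async function onApprovalClick(parameters: Record<string, string>): Promise<void> {
  const jobId = parameters["job_id"];
  const wpcHash = parameters["wpc_hash"];
  // The click itself is only an event; enforcement happens at job start,
  // where the CST and the pinned policy hash are bound together.
  const cst = await mintCst({ jobId, policyHash: wpcHash });
  await startJob({ jobId, cst, pinnedWpcHash: wpcHash });
}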

What proof do you get?

For each run triggered from Google Chat, Claw EA can produce a proof bundle that ties together identity, policy, and model activity. The proof bundle includes gateway receipts, which are signed receipts emitted by clawproxy for model calls, so you can verify the call sequence and metadata without trusting the chat transcript.

Operationally, you can attach a short “run receipt” summary back into the Chat thread (for example: run ID, WPC hash, and a verification status), and store the full proof bundle for audit. If you publish the bundle to the marketplace, you can view it as a Trust Pulse artifact for audit and review.
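
If you want to check a receipt offline, verification can reduce to a plain signature check, as in the sketch below. The receipt fields and signature scheme (Ed25519 over a canonical JSON body) are assumptions; substitute whatever clawproxy actually signs and publishes as its verification key.

// Sketch: offline verification of a single gateway receipt.
import { createPublicKey, verify } from "node:crypto";

interface GatewayReceipt {
  body: string;      // canonical JSON: model, request hash, timestamp, job_id
  signature: string; // base64 signature over body
}

function verifyReceipt(receipt: GatewayReceipt, publicKeyPem: string): boolean {
  const key = createPublicKey(publicKeyPem);
  return verify(
    null, // Ed25519 takes no separate digest algorithm
    Buffer.from(receipt.body),
    key,
    Buffer.from(receipt.signature, "base64")
  );
}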

Rollback posture

Rollbacks in chat need to be fast because mistakes propagate in front of users. Aim for a posture where you can stop execution immediately, then prove what happened using receipts and the pinned WPC hash.

Action: Bad agent behavior in a space
Safe rollback: Disable the Google Chat app in the space and stop issuing new jobs for that channel.
Evidence: Proof bundles for the last runs; the WPC hash shows the exact policy in force.

Action: Policy bug (too-permissive tools)
Safe rollback: Publish a tighter WPC and require policy hash pinning so old jobs cannot silently use the new policy.
Evidence: WPC registry record plus run metadata showing which policy hash was pinned.

Action: Suspected token misuse
Safe rollback: Move to shorter CST lifetimes and rotate the job-scoped binding; revocation workflows can be implemented if you need immediate invalidation.
Evidence: Job-scoped CST binding and run logs showing which CST scope hash was accepted.

Action: Dispute about a model-generated message
Safe rollback: Freeze changes, replay from the proof bundle, and verify gateway receipts for the exact model calls used.
Evidence: Gateway receipts and the proof bundle verification output.

FAQ

How does Google Chat change agent security compared to a web app?

Chat adds untrusted, multi-party input and encourages users to “try stuff” until it works. That is why prompt-only controls fail and why you need permissioned execution under a WPC with enforced tool policy.

Can approvals happen directly inside Google Chat?

Yes. Approvals can be implemented with interactive cards via the official Chat API, or by routing approval events through an MCP server that talks to your internal system. The key is that an approval results in a job that pins the WPC hash and mints a job-scoped CST, rather than toggling a prompt flag.

What do I show an auditor after an incident?

Provide the proof bundle for the run, including gateway receipts for the model calls, and the WPC hash that was pinned during execution. This makes it clear what the agent was allowed to do and what model traffic actually occurred.

Do I need to sandbox if I already have a WPC?

Yes, because policy describes intent, but sandboxing reduces blast radius when a tool behaves unexpectedly or a configuration is wrong. OpenClaw separates sandboxing, tool policy, and elevated execution, and you should keep elevated paths closed unless you have a concrete reason.

What if we want additional controls like egress allowlists or cost budgets?

Egress allowlists enforced outside clawproxy and automatic cost budget enforcement are optional or planned items. If you need them now, they can be implemented as an enterprise buildout around the execution environment and job scheduler.
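
As a stopgap, a coarse egress allowlist can sit next to the job runner, as in the sketch below. The hostnames are illustrative; in production this control belongs at the network layer (proxy or firewall), not inside the agent process.

// Sketch: deny any outbound URL whose host is not explicitly allowlisted.
const EGRESS_ALLOWLIST = new Set([
  "clawproxy.internal",
  "chat.googleapis.com",
]);

function egressAllowed(url: string): boolean {
  try {
    return EGRESS_ALLOWLIST.has(new URL(url).hostname);
  } catch {
    return false; // unparseable URLs are denied
  }
}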

Ready to put this workflow into production?

Get a scoped deployment plan with Work Policy Contracts, approval gates, and cryptographic proof bundles for your team.
