Attorney-client privilege
ends when data
hits an LLM.

Law firms and legal departments use AI for contract review, case research, and drafting. TapPass ensures confidential client data never reaches the model unprotected.

The risks of ungoverned AI in legal practice

Legal confidentiality is absolute. One leaked client name in an LLM prompt can trigger malpractice claims.

📜

Privileged information in prompts

Your contract review agent sends client agreements, including party names and deal terms, to an external LLM. Privilege may be waived.

🔀

Cross-client contamination

An associate uses the same AI tool for client A (acquiring) and client B (target). The model's context window leaks deal information between adverse parties.

🎯

Opposing counsel injection

A document from opposing counsel contains hidden instructions that manipulate your AI's contract analysis. Favourable terms get flagged as risky.

📋

No audit trail for AI-assisted work

Courts increasingly require disclosure of AI use. Without logging, you can't prove what the AI contributed to a brief.

👥

Associates using consumer AI

Junior associates paste client documents into ChatGPT. No governance, no policy enforcement, no way to know it happened.

💰

Unbounded AI costs per matter

AI agents make hundreds of LLM calls per case with no budget controls. Client billing disputes follow.

Runtime governance for legal AI

Protect privileged information, isolate client matters, and create a complete audit trail of every AI interaction.

🔒

Client data protection

Detect and redact client names, case numbers, party names, and financial terms before they reach the LLM.

  • PII tokenisation preserves document structure
  • Secret scanning catches credentials
  • Configurable per practice group
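
TapPass's actual redaction engine isn't shown here, but the idea behind PII tokenisation can be sketched. This is a minimal illustration with hypothetical patterns (a US docket-style case number and dollar amounts); real deployments would use far richer detectors. Each match becomes a stable placeholder token, and the mapping lets responses be de-tokenised afterwards, preserving document structure:

```python
import re

# Hypothetical illustration: swap matter-specific identifiers for
# placeholder tokens before the prompt leaves the firm.
PATTERNS = {
    "CASE_NO": re.compile(r"\b\d{2}-cv-\d{4,5}\b"),       # e.g. federal docket style
    "MONEY":   re.compile(r"\$\d[\d,]*(?:\.\d{2})?"),     # dollar amounts
}

def tokenise(text: str, client_names: list[str]) -> tuple[str, dict[str, str]]:
    """Return (redacted_text, mapping) so responses can be de-tokenised."""
    mapping: dict[str, str] = {}
    counter = 0

    def make_token(original: str, label: str) -> str:
        nonlocal counter
        counter += 1
        token = f"[{label}_{counter}]"
        mapping[token] = original
        return token

    for name in client_names:  # names known from the matter file
        if name in text:
            text = text.replace(name, make_token(name, "PARTY"))
    for label, pattern in PATTERNS.items():
        text = pattern.sub(lambda m, l=label: make_token(m.group(0), l), text)
    return text, mapping
```

Because each original value maps to one token, "[PARTY_1]" reads the same everywhere it appears, so the model still sees a coherent agreement without ever seeing the client's name.
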
🏛️

Matter-level isolation

Session-scoped taint tracking ensures client A's data never bleeds into client B's AI session.

  • Per-matter agent boundaries
  • Chinese wall enforcement via policy
  • Conflict check integration
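
The enforcement idea is simple to sketch, though the class below is an assumption for illustration, not TapPass's API: every piece of context carries a matter label, and a session hard-fails when data tainted with another matter is attached.

```python
class MatterIsolationError(Exception):
    pass

class AgentSession:
    """Sketch: a session is scoped to one matter, and attaching data
    labelled with any other matter is blocked outright."""

    def __init__(self, matter_id: str):
        self.matter_id = matter_id
        self.context: list[str] = []

    def attach(self, data: str, source_matter: str) -> None:
        if source_matter != self.matter_id:
            raise MatterIsolationError(
                f"data from matter {source_matter!r} blocked "
                f"from session scoped to {self.matter_id!r}"
            )
        self.context.append(data)
```

Failing loudly, rather than silently dropping the data, is what turns taint tracking into a Chinese wall: the attempted crossover itself becomes an auditable event.
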
📋

AI disclosure audit trail

Hash-chained, tamper-evident logs of every AI interaction. Prove what the AI contributed and whether human review occurred.

  • Court-ready documentation
  • Per-matter audit export
  • Human approval gates for filings
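
Hash chaining is a standard construction, and a minimal sketch (not TapPass's actual log format) shows why it is tamper-evident: each entry's hash covers the previous entry's hash, so editing any one record breaks verification of the whole chain.

```python
import hashlib
import json

class AuditLog:
    """Minimal hash-chained log: each entry commits to the one before it."""

    GENESIS = "0" * 64

    def __init__(self):
        self.entries: list[dict] = []
        self._prev = self.GENESIS

    def record(self, event: dict) -> None:
        body = json.dumps(event, sort_keys=True)
        digest = hashlib.sha256((self._prev + body).encode()).hexdigest()
        self.entries.append({"event": event, "prev": self._prev, "hash": digest})
        self._prev = digest

    def verify(self) -> bool:
        prev = self.GENESIS
        for e in self.entries:
            body = json.dumps(e["event"], sort_keys=True)
            expected = hashlib.sha256((prev + body).encode()).hexdigest()
            if e["prev"] != prev or e["hash"] != expected:
                return False
            prev = e["hash"]
        return True
```

For a court, the point is that the log can be exported per matter and re-verified by anyone: a single altered or deleted entry makes `verify()` fail.
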
🌐

Firm-wide governance

Route all AI tools through a single governance layer. Associates keep using Copilot or ChatGPT, but every call is governed.

  • Works with existing tools
  • Shadow mode tests without blocking
  • Per-attorney API keys with audit
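
The gateway pattern can be sketched in a few lines. The key table, log, and function names below are illustrative assumptions: the shape that matters is one choke point that authenticates the attorney and records the call before anything is forwarded to the provider.

```python
# Illustrative gateway sketch: every model call passes through one
# governed choke point, whatever tool the attorney is using.
ATTORNEY_KEYS = {"key-ab12": "a.smith", "key-cd34": "b.jones"}  # hypothetical
CALL_LOG: list[dict] = []

def governed_call(api_key: str, prompt: str, forward) -> str:
    """Authenticate, log, then hand off to the real provider via `forward`."""
    attorney = ATTORNEY_KEYS.get(api_key)
    if attorney is None:
        raise PermissionError("unknown API key: call blocked")
    CALL_LOG.append({"attorney": attorney, "chars": len(prompt)})
    return forward(prompt)
```

In shadow mode the same wrapper would log without ever blocking, which is how a firm can trial governance before enforcing it.
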
⚔️

Document injection defence

Scan incoming documents for hidden instructions before your AI processes them.

  • Tool result scanning (indirect injection)
  • External data taint labelling
  • Exfiltration path detection
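
At its simplest, scanning means checking inbound text against known injection phrasings before the AI ever reads it. The patterns below are illustrative only; a production scanner would combine a trained classifier with taint labels rather than a fixed list.

```python
import re

# Illustrative heuristics only, not an exhaustive pattern set.
SUSPECT_PATTERNS = [
    re.compile(r"ignore (all|any|previous) instructions", re.I),
    re.compile(r"you are now", re.I),
    re.compile(r"do not (flag|report|mention)", re.I),
]

def scan_document(text: str) -> list[str]:
    """Return the suspicious phrases found in an incoming document."""
    return [m.group(0) for p in SUSPECT_PATTERNS for m in p.finditer(text)]
```

A non-empty result quarantines the document for human review instead of feeding it to the contract-analysis agent.
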
💰

Per-matter cost tracking

Track AI costs per agent, per matter, and per attorney. Set budget limits and alerts.

  • Budget enforcement with hard caps
  • Token usage per matter for billing
  • Cost anomaly detection
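
A hard cap is the simplest of these controls to sketch. This hypothetical class (again, not TapPass's API) tallies spend per matter and rejects any call that would breach the cap before the call is made, so the cost never hits the client's bill:

```python
class BudgetExceeded(Exception):
    pass

class MatterBudget:
    """Sketch of a hard per-matter cap: spend is tallied per matter and
    a charge that would breach the cap is rejected up front."""

    def __init__(self, cap_usd: float):
        self.cap = cap_usd
        self.spent: dict[str, float] = {}

    def charge(self, matter_id: str, cost_usd: float) -> float:
        total = self.spent.get(matter_id, 0.0) + cost_usd
        if total > self.cap:
            raise BudgetExceeded(f"matter {matter_id}: cap ${self.cap:.2f} reached")
        self.spent[matter_id] = total
        return total
```

Because spend is keyed by matter, the same per-matter totals double as the token-usage line items for client billing.
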

Privilege isn't just a doctrine. It's a technical requirement.

Govern every AI interaction. Full audit trail. Zero client data leakage.