Patient data and LLMs
don't mix unless
you govern them.

Hospitals, clinics, and health-tech companies deploy AI agents for triage, documentation, and clinical decision support. TapPass ensures patient data never reaches the model unprotected.

Why healthcare AI needs a governance layer

Patient data is GDPR Art. 9 special category data, the highest protection level. AI agents in healthcare operate in a regulatory minefield.

🩺

Patient PII in LLM prompts

Your documentation agent sends patient names, diagnoses, and treatment plans to an external LLM. GDPR Art. 9 requires explicit consent and strict processing safeguards.

📋

No auditability for clinical AI

The EU AI Act classifies medical AI as high-risk. You need full logging of every AI decision: what data went in, what came out.

💊

Prompt injection in clinical tools

A patient enters malicious text in a symptom field. Your triage agent follows the injected instruction instead of analysing symptoms.

🔀

Cross-patient data leakage

AI agents with shared context windows can leak patient A's history into patient B's session. Without session isolation, there is no data separation.

🌍

Health data leaving the EU

Routing patient data to a US-hosted LLM risks breaching GDPR cross-border transfer rules. Health data carries the highest protection level.

🔬

Research–clinical contamination

AI agents that process both clinical and research data risk cross-contamination: de-identified research data can become re-identifiable once combined with clinical records in an LLM's context.

Runtime governance for healthcare AI

Detect, block, and audit every patient data touchpoint, every LLM call, every tool invocation.

🔍

Patient data detection

Detects names, MRNs, ICD codes, and medications, including dozens of obfuscation techniques, before data reaches the LLM.

  • GDPR Art. 9 special category awareness
  • PII tokenisation so the model never sees real data
  • Configurable: block, redact, or tokenise
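As a rough illustration of the tokenisation idea, here is a minimal sketch: detected identifiers are swapped for stable opaque tokens before the prompt leaves your network. The MRN pattern, the vault class, and all names are hypothetical, not TapPass's actual API.

```python
import hashlib
import re

# Illustrative MRN pattern; a real detector covers names, ICD codes, etc.
MRN_PATTERN = re.compile(r"\bMRN[- ]?\d{6,10}\b")

class TokenVault:
    """Maps real values to opaque tokens; the mapping never leaves the vault."""
    def __init__(self):
        self._forward = {}
        self._reverse = {}

    def tokenise(self, value: str) -> str:
        if value not in self._forward:
            token = "TOK_" + hashlib.sha256(value.encode()).hexdigest()[:12]
            self._forward[value] = token
            self._reverse[token] = value
        return self._forward[value]

    def detokenise(self, token: str) -> str:
        return self._reverse.get(token, token)

def redact_prompt(prompt: str, vault: TokenVault) -> str:
    # Replace every detected identifier with its token before the LLM call.
    return MRN_PATTERN.sub(lambda m: vault.tokenise(m.group()), prompt)
```

Using stable tokens (rather than fresh random ones per call) keeps referential integrity: the model can still reason about "the same patient" across a conversation without ever seeing the real identifier.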
📋

EU AI Act compliance logging

A hash-chained audit trail captures every AI decision: classification, detections, data categories, and timing.

  • Art. 12 record-keeping, tamper-evident
  • Art. 14 human oversight with approval gates
  • SIEM export for hospital SOC
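The tamper-evident logging described above follows a standard hash-chain pattern; the sketch below shows the principle (field names and structure are assumptions for illustration, not TapPass's log format).

```python
import hashlib
import json
import time

class AuditChain:
    """Append-only log where each entry hashes the previous entry's hash."""
    def __init__(self):
        self.entries = []
        self._last_hash = "0" * 64  # genesis value

    def record(self, event: dict) -> dict:
        entry = {
            "ts": time.time(),
            "event": event,
            "prev": self._last_hash,
        }
        payload = json.dumps(entry, sort_keys=True).encode()
        entry["hash"] = hashlib.sha256(payload).hexdigest()
        self._last_hash = entry["hash"]
        self.entries.append(entry)
        return entry

    def verify(self) -> bool:
        # Recompute every hash; any edit to any past entry breaks the chain.
        prev = "0" * 64
        for entry in self.entries:
            body = {k: v for k, v in entry.items() if k != "hash"}
            if entry["prev"] != prev:
                return False
            payload = json.dumps(body, sort_keys=True).encode()
            if hashlib.sha256(payload).hexdigest() != entry["hash"]:
                return False
            prev = entry["hash"]
        return True
```

Because each entry commits to its predecessor, an auditor who trusts only the latest hash can detect retroactive edits anywhere in the log.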
πŸ₯

Clinical safety guardrails

Healthcare guardrail rules prevent AI agents from making unsupported clinical claims or overriding human judgment.

  • Human approval for clinical decisions
  • Output scanning for medical claims
  • Behavioural pacts enforce agent scope
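One way to picture the output-scanning gate: flag any agent output that asserts a diagnosis or drug instruction and hold it for human sign-off. The patterns and function below are purely illustrative; a production guardrail would use a richer classifier.

```python
import re

# Hypothetical patterns for definitive clinical claims an agent must not
# release without human approval.
CLAIM_PATTERNS = [
    re.compile(r"\byou (have|are diagnosed with)\b", re.I),
    re.compile(r"\b(start|stop|double) (taking )?\w+ immediately\b", re.I),
]

def requires_human_approval(output: str) -> bool:
    """Return True if the agent's output contains an unsupported clinical claim."""
    return any(p.search(output) for p in CLAIM_PATTERNS)
```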
🔒

Session isolation

Session-scoped taint tracking ensures patient A's data never leaks into patient B's session.

  • Per-patient session boundaries
  • Taint propagation tracking
  • Memory poisoning detection
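The taint-tracking idea can be sketched in a few lines: every sensitive value read inside a patient session is tagged with that session, and output destined for a different session is blocked if it contains a tagged value. Class and method names here are hypothetical, not TapPass's API.

```python
class TaintError(Exception):
    pass

class SessionGuard:
    def __init__(self):
        self._owner: dict[str, str] = {}  # sensitive value -> owning session

    def read(self, session_id: str, value: str) -> str:
        # Tag the value as belonging to this patient's session.
        self._owner[value] = session_id
        return value

    def check_output(self, session_id: str, text: str) -> str:
        # Refuse to emit text containing data owned by another session.
        for value, owner in self._owner.items():
            if owner != session_id and value in text:
                raise TaintError(f"data from {owner} leaking into {session_id}")
        return text
```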
🇪🇺

EU-only data routing

Force all patient data to EU-hosted providers. Self-hosted deployment for maximum control.

  • Classification-based provider routing
  • Self-hosted deployment option
  • Zero data leaves your network
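Classification-based routing reduces to a simple rule: if a prompt touches special category data, it may only go to an EU-hosted endpoint. The endpoints and keyword classifier below are placeholders, not TapPass configuration.

```python
# Placeholder endpoints for illustration only.
ROUTES = {
    "special_category": "https://llm.eu-central.example/v1",  # EU-hosted
    "general": "https://llm.global.example/v1",
}

# A real classifier is far richer; a keyword check shows the shape.
HEALTH_TERMS = {"diagnosis", "medication", "mrn", "icd"}

def classify(prompt: str) -> str:
    lowered = prompt.lower()
    if any(term in lowered for term in HEALTH_TERMS):
        return "special_category"
    return "general"

def route(prompt: str) -> str:
    """Pick the provider endpoint based on the prompt's data classification."""
    return ROUTES[classify(prompt)]
```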
🗑

GDPR erasure (Art. 17)

Cryptographic erasure with tombstones honours the patient's right to erasure while preserving audit-chain integrity.

  • Signed erasure receipts
  • Hash chain recomputation
  • Configurable retention policies
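Cryptographic erasure works by encrypting each patient's records under a per-patient key; destroying the key renders the ciphertext unreadable while the stored bytes (and any hashes over them) stay put, leaving a tombstone in their place. The sketch below uses a toy XOR keystream in place of a real AEAD cipher, and every name is an assumption; it is not production crypto and not TapPass's implementation.

```python
import hashlib
import secrets

def _keystream(key: bytes, n: int) -> bytes:
    # SHA-256 in counter mode as a stand-in for a real cipher.
    out = b""
    counter = 0
    while len(out) < n:
        out += hashlib.sha256(key + counter.to_bytes(8, "big")).digest()
        counter += 1
    return out[:n]

class ErasableStore:
    def __init__(self):
        self._keys: dict[str, bytes] = {}
        self._records: dict[str, bytes] = {}
        self.tombstones: dict[str, str] = {}

    def put(self, patient_id: str, plaintext: bytes) -> None:
        key = self._keys.setdefault(patient_id, secrets.token_bytes(32))
        ks = _keystream(key, len(plaintext))
        self._records[patient_id] = bytes(a ^ b for a, b in zip(plaintext, ks))

    def get(self, patient_id: str) -> bytes:
        key = self._keys[patient_id]  # KeyError once the key is erased
        ct = self._records[patient_id]
        ks = _keystream(key, len(ct))
        return bytes(a ^ b for a, b in zip(ct, ks))

    def erase(self, patient_id: str) -> str:
        del self._keys[patient_id]  # key destruction = cryptographic erasure
        receipt = hashlib.sha256(self._records[patient_id]).hexdigest()
        self.tombstones[patient_id] = receipt
        return receipt
```

Because only the key is destroyed, hashes computed over the ciphertext remain valid, which is what lets an audit chain survive an Art. 17 erasure.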

Patient data deserves more than a privacy policy.

Scan, classify, and audit every AI interaction with patient data. EU AI Act ready.