OnceOnly
Client-side input safety demo

Prompt Anonymizer

Remove personal data from AI prompts — locally in your browser.

Runs 100% in your browser. No data leaves this page.
Your data never leaves the browser
Stable mapping: John → [PERSON_1]
Restore: paste LLM output → one click
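The core of the demo is the stable-mapping idea: the same original value always gets the same placeholder, so repeated mentions stay consistent across the whole prompt. A minimal sketch of that idea in Python, using emails only (the regex, function name, and placeholder format here are illustrative, not OnceOnly's actual implementation; person names need the heuristic detection mentioned below):

```python
import re

def anonymize(text, mapping=None):
    """Replace email addresses with stable placeholders.

    The same original value always maps to the same placeholder,
    so repeated mentions stay consistent across the prompt.
    """
    mapping = mapping if mapping is not None else {}

    def repl(match):
        original = match.group(0)
        if original not in mapping:
            # First sighting: assign the next placeholder number.
            mapping[original] = f"[EMAIL_{len(mapping) + 1}]"
        return mapping[original]

    masked = re.sub(r"[\w.+-]+@[\w-]+\.\w+", repl, text)
    return masked, mapping
```

Because the mapping is returned alongside the masked text, it can be kept locally and used later to restore the originals.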
1) Paste your prompt
Auto-anonymizes as you type. Nothing leaves your browser.
2) Copy safe prompt
What to mask
Optional (heuristic): enable if you need to discuss people/orgs/projects without leaking identities.
Advanced
Not a bug: the anonymizer adds a mapping ID so you can restore placeholders in the LLM output later.
Runs locally (no server). Mapping is saved only in your browser.
Show mapping table
Type Placeholder Original
No mapping yet.

3) Restore entities (optional)

Paste the LLM output and restore real entities using the latest mapping.

LLM output (paste)
Restored output
Use a specific mapping ID
Doing this manually?

OnceOnly applies anonymization automatically before your AI agent executes any task: it enforces input policies, masks PII at runtime, audits prompt safety, and prevents accidental leaks.

onceonly.execute(
  input=prompt,
  pii_masking="advanced"
)
Protect your AI agents with OnceOnly

How to use ChatGPT with sensitive data

If you paste internal logs, customer tickets, or API snippets into an LLM, you may unintentionally leak PII and secrets. A safer flow is to anonymize first (locally), then restore only what you need after the model responds.
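The anonymize-then-restore round trip described above can be sketched in a few lines. This is a simplified illustration with a single phone-number pattern (the regex, names, and placeholder format are assumptions, not OnceOnly's implementation):

```python
import re

PHONE = re.compile(r"\+?\d[\d\s-]{7,}\d")

def mask(text):
    """Replace phone numbers with stable placeholders; return text + mapping."""
    mapping = {}

    def repl(match):
        value = match.group(0)
        if value not in mapping:
            mapping[value] = f"[PHONE_{len(mapping) + 1}]"
        return mapping[value]

    return PHONE.sub(repl, text), mapping

def restore(llm_output, mapping):
    """Put the original values back into the model's response.

    Placeholders survive the model round trip verbatim, so plain
    string replacement is enough.
    """
    for value, placeholder in mapping.items():
        llm_output = llm_output.replace(placeholder, value)
    return llm_output
```

The key property is that the mapping never leaves your machine: only the masked text is sent to the model, and restoration happens locally afterwards.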

Hide API keys from LLM prompts

Secrets such as sk-… keys, OAuth tokens, and webhook signing keys often appear in debug logs. Mask them with stable placeholders ([API_KEY_1]) so you can still reason about the model's output without exposing credentials.
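Secret masking follows the same pattern as entity masking, just with credential-shaped regexes. A rough sketch, assuming a few common key formats (these patterns are illustrative and deliberately incomplete, not a complete secret scanner):

```python
import re

# Common secret shapes; patterns are illustrative, not exhaustive.
SECRET_PATTERNS = [
    re.compile(r"sk-[A-Za-z0-9]{20,}"),               # OpenAI-style keys
    re.compile(r"(?i)bearer\s+[A-Za-z0-9._-]{20,}"),  # OAuth bearer tokens
    re.compile(r"whsec_[A-Za-z0-9]{20,}"),            # webhook signing secrets
]

def mask_secrets(text):
    """Replace matched secrets with stable [API_KEY_n] placeholders."""
    mapping = {}

    def repl(match):
        secret = match.group(0)
        if secret not in mapping:
            mapping[secret] = f"[API_KEY_{len(mapping) + 1}]"
        return mapping[secret]

    for pattern in SECRET_PATTERNS:
        text = pattern.sub(repl, text)
    return text, mapping
```

Running a curl command through it leaves the structure of the log intact while removing the credential itself.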

GDPR compliance for AI prompts

GDPR and similar regimes require minimizing personal data. Client-side anonymization helps you reduce what you send to processors, especially when prompts include emails, phone numbers, or customer identifiers.

Is it safe to send customer data to OpenAI?

Treat any prompt as potential disclosure. The safest approach is to avoid sending personal data at all. If you must, use masking and strict policies so sensitive fields are never transmitted.

How to anonymize prompts before AI agent execution

Humans make mistakes under pressure. OnceOnly bridges input safety to execution safety by enforcing prompt policies and PII masking automatically at runtime, with audit logs for review.