3) Restore entities (optional)
Paste the LLM output and restore real entities using the latest mapping.
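The restore step can be sketched in a few lines: walk the saved placeholder→original mapping and substitute each placeholder back into the model's output. The function and mapping below are illustrative, not the tool's actual API.

```python
# Sketch of the restore step: replace placeholders in the LLM's output
# with the original values from the saved mapping.
def restore_entities(text: str, mapping: dict) -> str:
    # Replace longer placeholders first so [EMAIL_10] is not
    # partially matched by [EMAIL_1].
    for placeholder in sorted(mapping, key=len, reverse=True):
        text = text.replace(placeholder, mapping[placeholder])
    return text

mapping = {"[EMAIL_1]": "jane@example.com", "[API_KEY_1]": "sk-test-123"}
print(restore_entities("Contact [EMAIL_1] and rotate [API_KEY_1].", mapping))
# Contact jane@example.com and rotate sk-test-123.
```

Because only the mapping holds the real values, restoration stays entirely on your machine.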
OnceOnly applies anonymization automatically before your AI agent executes any task: it enforces input policies, masks PII at runtime, audits prompt safety, and prevents accidental leaks.
```python
onceonly.execute(
    input=prompt,
    pii_masking="advanced"
)
```
Protect your AI agents with OnceOnly
How to use ChatGPT with sensitive data
If you paste internal logs, customer tickets, or API snippets into an LLM, you may unintentionally leak PII and secrets. A safer flow is to anonymize first (locally), then restore only what you need after the model responds.
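The anonymize-then-restore flow described above can be sketched with a simple email masker; the regex and function names here are illustrative, not a production-grade PII detector.

```python
import re

# Illustrative anonymize-first flow: mask locally, send the masked prompt,
# then restore entities in the model's reply.
EMAIL_RE = re.compile(r"[\w.+-]+@[\w-]+\.[\w.]+")

def anonymize(text: str):
    mapping = {}
    def repl(m: re.Match) -> str:
        placeholder = f"[EMAIL_{len(mapping) + 1}]"
        mapping[placeholder] = m.group(0)
        return placeholder
    return EMAIL_RE.sub(repl, text), mapping

def restore(text: str, mapping: dict) -> str:
    for placeholder, original in mapping.items():
        text = text.replace(placeholder, original)
    return text

masked, mapping = anonymize("Ticket from jane@example.com: login fails.")
print(masked)  # Ticket from [EMAIL_1]: login fails.

# ...send `masked` to the LLM; suppose it replies with the placeholder...
reply = "Tell [EMAIL_1] to reset their password."
print(restore(reply, mapping))  # Tell jane@example.com to reset their password.
```

The real address never leaves the machine; only the placeholder travels to the model.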
Hide API keys from LLM prompts
Secrets like sk-…, OAuth tokens, and webhook signing keys often appear in debug logs. Mask them with stable placeholders (e.g., [API_KEY_1]) so you can still reason about the output without exposing credentials.
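A minimal sketch of stable placeholders: every occurrence of the same secret maps to the same [API_KEY_n], so the masked log stays readable. The pattern below is a simplified example, not a complete secret detector.

```python
import re

# Mask secret-looking tokens with stable placeholders: the same key
# always becomes the same [API_KEY_n] within a pass.
SECRET_RE = re.compile(r"\bsk-[A-Za-z0-9]{8,}\b")

def mask_secrets(text: str):
    mapping = {}
    def repl(m: re.Match) -> str:
        secret = m.group(0)
        if secret not in mapping:
            mapping[secret] = f"[API_KEY_{len(mapping) + 1}]"
        return mapping[secret]
    return SECRET_RE.sub(repl, text), mapping

log = "retry with sk-abc12345678; old key sk-abc12345678 revoked"
masked, mapping = mask_secrets(log)
print(masked)  # retry with [API_KEY_1]; old key [API_KEY_1] revoked
```

Stability matters: if each occurrence got a fresh placeholder, you could no longer tell from the masked log that both lines refer to one key.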
GDPR compliance for AI prompts
GDPR and similar regimes require minimizing personal data. Client-side anonymization helps you reduce what you send to processors, especially when prompts include emails, phone numbers, or customer identifiers.
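Data minimization can be as simple as a client-side pass that strips the identifier types you listed before the prompt leaves the machine. The patterns below are deliberately simplified illustrations, not legal-grade detectors.

```python
import re

# Illustrative client-side minimization: redact emails and phone numbers
# before a prompt is sent to any processor.
PATTERNS = {
    "EMAIL": re.compile(r"[\w.+-]+@[\w-]+\.[\w.]+"),
    "PHONE": re.compile(r"\+?\d[\d\s().-]{7,}\d"),
}

def minimize(text: str) -> str:
    for label, pattern in PATTERNS.items():
        text = pattern.sub(f"[{label}]", text)
    return text

print(minimize("Call +1 (555) 010-7788 or mail jane@example.com"))
# Call [PHONE] or mail [EMAIL]
```

Running this before transmission reduces what the processor ever sees, which is the minimization posture GDPR asks for.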
Is it safe to send customer data to OpenAI?
Treat any prompt as potential disclosure. The safest approach is to avoid sending personal data at all. If you must, use masking and strict policies so sensitive fields are never transmitted.
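A strict policy can be expressed as a pre-send gate: if a prompt still contains fields your policy marks as sensitive, refuse to transmit it at all. The policy names and patterns below are hypothetical.

```python
import re

# Hypothetical pre-send policy gate: block transmission entirely if the
# prompt still contains sensitive fields.
POLICY = {
    "email": re.compile(r"[\w.+-]+@[\w-]+\.[\w.]+"),
    "api_key": re.compile(r"\bsk-[A-Za-z0-9]{8,}\b"),
}

def violations(prompt: str) -> list:
    return [name for name, pattern in POLICY.items() if pattern.search(prompt)]

def safe_to_send(prompt: str) -> bool:
    return not violations(prompt)

print(safe_to_send("Summarize this ticket about checkout errors"))  # True
print(violations("Key sk-abc12345678 failed for jane@example.com"))
# ['email', 'api_key']
```

Blocking is stricter than masking: a gate like this treats any detection as a hard stop rather than trying to clean the prompt in place.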
How to anonymize prompts before AI agent execution
Humans make mistakes under pressure. OnceOnly bridges input safety and execution safety by enforcing prompt policies and PII masking automatically at runtime, with audit logs for review.
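Runtime enforcement can be pictured as a wrapper that masks PII before the agent task runs and records an audit entry for each call. This is a sketch of the pattern, not OnceOnly's implementation; the wrapper, agent, and log shape are all hypothetical.

```python
import re
import time

# Sketch of runtime enforcement: mask PII before the wrapped agent task
# executes, and append an audit entry for later review.
EMAIL_RE = re.compile(r"[\w.+-]+@[\w-]+\.[\w.]+")
AUDIT_LOG = []

def enforce_masking(task):
    def wrapper(prompt: str):
        masked = EMAIL_RE.sub("[EMAIL]", prompt)
        AUDIT_LOG.append({
            "ts": time.time(),
            "task": task.__name__,
            "masked": masked != prompt,  # True if anything was redacted
        })
        return task(masked)
    return wrapper

@enforce_masking
def run_agent(prompt: str) -> str:
    # Stand-in for the real agent call.
    return f"agent saw: {prompt}"

print(run_agent("Escalate jane@example.com"))  # agent saw: Escalate [EMAIL]
print(AUDIT_LOG[0]["masked"])  # True
```

Because the wrapper runs on every call, masking no longer depends on a human remembering to sanitize the prompt first.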