The average developer building with AI today is juggling credentials for OpenAI, Anthropic, ElevenLabs, Replicate, Perplexity, and half a dozen others. Each one is a bearer token — meaning whoever has the key has full, unrestricted access to your account. No password. No 2FA. Just the key.
Managing those keys securely isn't complicated once you understand the threat model. But most developers never stop to think about it. They're in flow, moving fast, and the fastest path — pasting a key into chat, exporting it in a shell command, dropping it in .env — is exactly the wrong path.
Secure API key management isn't about being paranoid. It's about understanding which surfaces are logged and keeping secrets off all of them.
This guide covers the full picture: where keys leak, the right architecture to prevent it, key rotation and audit trails, and how to integrate secure practices into your daily Cursor, Claude, and OpenClaw workflows using ipeaky.
Every Surface Where API Keys Leak
Before you can plug the holes, you need to know where they are. There are five primary leak surfaces in AI-assisted development — and most developers are exposed on at least three of them.
1. AI Chat History
When you paste an API key into a Cursor chat, a Claude.ai conversation, or a ChatGPT prompt, it is stored. Not cached — stored. Associated with your account. Retained according to the provider's data policy, which you agreed to and didn't read.
The risk isn't just the provider seeing your key. It's the downstream chain: if your AI account is compromised, every key you ever pasted is now in an attacker's hands. If there's a provider breach, the same. Context windows also mean keys travel further than you realize — a key pasted in a conversation might appear in a summarized context window, a fine-tuning dataset, or a retrieved memory chunk months later.
2. Shell History
Every command you type is written to ~/.zsh_history or ~/.bash_history. This includes the inline export pattern that feels so convenient:
export OPENAI_API_KEY=sk-proj-abc123xyz # Now it's in your shell history forever
Shell history persists across sessions, syncs across devices if you use dotfile managers, and is readable by any process running as your user. On shared or managed machines, it may also be captured by endpoint detection tools.
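The safer pattern is to read the key interactively instead of typing it on the command line. A minimal sketch of the idea, using bash's built-in silent read:

```shell
# Prompt for the key without echoing it; the value never appears in
# shell history or in any process argument list.
read -rs -p "Key: " OPENAI_API_KEY; echo
export OPENAI_API_KEY   # in memory for this session only
```

The key exists only in the shell's memory, so closing the terminal discards it.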
3. Command Arguments
Passing keys as command-line arguments is even worse than shell history, because the key is visible in real time to anyone who can run ps aux:
curl -H "Authorization: Bearer sk-proj-abc123xyz" https://api.openai.com/v1/models # Visible in ps aux to all users on the machine while this runs
On macOS, process listings are world-readable by default. On Linux it depends on kernel configuration, but many systems — including most dev machines — expose process arguments to all local users. Your key, in a command argument, is broadcasting itself to the entire box.
4. .env Files in Version Control
This is the most common breach vector, and the most embarrassing. You create a .env file. You forget to add it to .gitignore, or you add it only after the file has already been committed, or a teammate's IDE creates a local copy that gets staged automatically.
GitHub's secret scanning helps — but it's reactive. By the time a scan fires, automated bots watching public repo push events have already harvested the key. The window between "accidentally committed" and "key abused" is measured in seconds, not minutes.
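A proactive guard is a local pre-commit hook that refuses to commit anything resembling a provider key. A rough sketch (the prefix patterns shown are illustrative, not an exhaustive list):

```shell
#!/usr/bin/env bash
# .git/hooks/pre-commit — block commits whose staged diff adds
# something that looks like an OpenAI or Anthropic key.
if git diff --cached -U0 | grep -Eq '^\+.*(sk-proj-|sk-ant-)[A-Za-z0-9_-]{10,}'; then
  echo "Refusing to commit: staged changes appear to contain an API key." >&2
  exit 1
fi
```

Unlike GitHub's scanning, this runs before anything leaves your machine, closing the seconds-wide abuse window entirely.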
5. AI Agent Traces and Logs
This one is new and underappreciated. Modern AI agents — including OpenClaw, Continue.dev, and custom LangChain pipelines — log their tool calls. If your agent reads a file that contains an API key, or if a key is passed as a tool parameter, it will appear in the trace log.
Agent frameworks are designed for observability. Tracing is a feature. But tracing a key is a liability — and most developers don't audit their agent logs with the same care they'd apply to their production database.
The Right Architecture: Stdin-Only, Encrypted at Rest, Never in Args
There's a clean, simple architecture that eliminates all five leak surfaces. It's not new — it's the same pattern that password managers and secret management services use. The principles:
Read via stdin, never via args. When a key is provided via stdin, it doesn't appear in the process argument list, it isn't visible to ps, and it never lands in shell history (unless you echo the key on the command line to pipe it in, which defeats the purpose). Every secure secret input path uses stdin.
Store with restrictive file permissions. chmod 600 means only the owning user can read the file. No group access, no world access. The key exists on disk, but only accessible to your session.
Never in environment variables set inline. The pattern KEY=val command puts the key in shell history. Instead, source a credentials file at runtime: the key is in memory for the process duration, not in the history log.
Never in files that touch version control. Credentials live in a dedicated directory outside your project tree — never inside a repo, never in a location that could accidentally be staged.
Mask on display, validate on demand. Any tool that shows keys should mask them. Any tool that claims a key is valid should actually test it against the API, not just confirm the file exists.
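Put together, the principles above amount to only a few lines of shell. A minimal sketch, not ipeaky's actual implementation (the directory, file name, and variable name are all illustrative):

```shell
#!/usr/bin/env bash
CRED_DIR="$HOME/.credentials"            # outside any project tree
CRED_FILE="$CRED_DIR/keys.env"

mkdir -p "$CRED_DIR"
read -rs -p "Key: " value; echo          # stdin only: invisible to history and ps
printf 'export MY_API_KEY=%q\n' "$value" >> "$CRED_FILE"
chmod 600 "$CRED_FILE"                   # owner-only read/write
```

At runtime, `source "$CRED_FILE"` puts the key into the process environment without it ever crossing a logged surface.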
Key Rotation Practices and Audit Trails
Storing keys securely is step one. Keeping them fresh is step two. API keys should be rotated on a regular schedule, and immediately when any of these events occur:
Rotate immediately when: you accidentally pasted a key into chat, a coworker's machine was compromised, you revoked access for a contractor who had the key, a provider announces a breach, or you suspect unusual usage on your account.
Rotate on schedule: Every 90 days is a reasonable baseline for high-value keys (OpenAI, Anthropic, payment APIs). Every 180 days for lower-risk services. Set a calendar reminder — rotation discipline is what keeps old, forgotten keys from becoming active liabilities.
An audit trail means you know, at any point in time, which keys exist, when they were last stored, and whether they're still valid. Without this, you accumulate stale keys that you've forgotten about — keys that are still active on the provider side, still billed against your account, and still vulnerable if any machine that ever had them is compromised.
# What a healthy key audit looks like:
$ ipeaky list
OPENAI_API_KEY     = sk-proj-7r****   [valid]
ANTHROPIC_API_KEY  = sk-ant-a****     [valid]
ELEVENLABS_API_KEY = 3rIC****         [valid]
REPLICATE_API_KEY  = r8_abc****       [untested]

# Test a key on demand:
$ ipeaky test OPENAI_API_KEY
✓ OpenAI key is valid.
The goal of an audit trail isn't to track your own behavior — it's to give you confidence that your key inventory is accurate and healthy. Untested keys should be retested or rotated. Keys you don't recognize should be revoked.
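One simple way to surface stale keys is to check how long it has been since the credentials file was last written. A rough staleness check, assuming credentials live in a dedicated directory (the path is illustrative):

```shell
# Flag credential files not touched in 90+ days, matching the
# rotation baseline suggested above.
find "$HOME/.credentials" -name '*.env' -mtime +90 | while read -r f; do
  echo "Stale (>90 days): $f (keys inside are due for rotation)" >&2
done
```

Run it from cron or a shell-profile hook so rotation reminders find you, rather than the other way around.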
How ipeaky Solves Each Problem
ipeaky is a pure-bash, zero-dependency CLI that implements the secure architecture above as the default and only path. There's nothing to configure, no database to set up, no cloud account to create. It runs wherever bash runs.
Installation (30 seconds)
curl -fsSL https://raw.githubusercontent.com/christiancattaneo/ipeaky/main/install.sh | bash
That's it. The installer drops a single bash script into /usr/local/bin/ipeaky (or ~/.local/bin/ipeaky on systems where you don't have root). No package manager required. No dependencies beyond bash and curl.
Storing a Key (stdin-only, never logged)
$ ipeaky store OPENAI_API_KEY
Key: ••••••••••••••••
✓ Stored securely → ~/.openclaw/credentials/ipeaky-keys.env (chmod 600)
The key is read via stdin — not as a command argument. It never appears in your shell history, in ps aux, or in any process log. It's written directly to a credentials file with chmod 600 permissions. The file is outside your project directory, so it will never accidentally end up in version control.
Listing Keys (masked display)
$ ipeaky list
OPENAI_API_KEY     = sk-proj-7r****
ANTHROPIC_API_KEY  = sk-ant-a****
ELEVENLABS_API_KEY = 3rIC****
Masked output shows enough to confirm which key is present without ever displaying the full value. Safe to run in a terminal that's being screen-shared. Safe to paste in a bug report. The actual key value never touches stdout.
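Masking is cheap to implement: show a short prefix, replace the rest with a fixed mask. A bash sketch of the idea (the function name and exact format are illustrative, not ipeaky's implementation):

```shell
# Show the first 10 characters of a key, mask everything after.
mask() { printf '%s****\n' "${1:0:10}"; }

mask "sk-proj-7rAbCdEfGh"   # prints: sk-proj-7r****
```

Ten characters is enough to identify which key you're looking at while revealing nothing useful to an attacker.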
Testing Keys (validate without exposing)
$ ipeaky test OPENAI_API_KEY
✓ OpenAI key is valid.
$ ipeaky test ANTHROPIC_API_KEY
✓ Anthropic key is valid.
Built-in validation calls the provider's API to confirm the key works. This is how you catch rotated-but-not-updated keys before they cause production failures. The test reads the key from the credentials file — it never asks you to type or paste anything.
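Conceptually, this kind of validation is just an authenticated request plus a status check. A sketch of the idea (the endpoint is OpenAI's real models endpoint; the function and flow are an assumption, not ipeaky's source):

```shell
# Returns 0 if the key authenticates, nonzero otherwise. The key is
# piped to curl as config (-K -), so it never appears in argv.
check_openai_key() {
  local status
  status=$(printf 'header = "Authorization: Bearer %s"\n' "$1" |
    curl -sS -o /dev/null -w '%{http_code}' -K - https://api.openai.com/v1/models)
  [ "$status" = "200" ]
}
```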
Integration Patterns for Cursor, Claude, and OpenClaw
The value of ipeaky isn't just secure storage — it's that the keys become available to your tools without ever traveling through chat. Here's how to wire it up for the three most common AI development environments.
Cursor
Cursor's AI features use API keys configured in settings, but for your own project's API calls (the code you're writing), the integration is straightforward. Source your credentials file in your shell profile so every Cursor terminal session has the keys available:
# Add to ~/.zshrc or ~/.bashrc
source ~/.openclaw/credentials/ipeaky-keys.env

# Now in any Cursor terminal:
echo $OPENAI_API_KEY  # works, without ever pasting into chat
More importantly: when Cursor's AI asks you to help debug an authentication error, you never need to paste the key into the chat. The agent can run the test command and report the result — the key stays in the file, not in the conversation.
Claude (via API or Claude.ai)
If you're building an application that calls Claude, store your Anthropic key in ipeaky and source it in your scripts. The critical pattern is: never share the key value in the prompt you send to Claude when asking it to help write the integration code.
# Test your Anthropic key before building:
$ ipeaky test ANTHROPIC_API_KEY
✓ Anthropic key is valid.

# Source in your build scripts:
source ~/.openclaw/credentials/ipeaky-keys.env
node your-claude-app.js  # ANTHROPIC_API_KEY is in the environment
The discipline here is: when you ask Claude to help you write API integration code, use a placeholder in the prompt (YOUR_API_KEY), not the real value. The real value never enters the conversation.
OpenClaw Agents
OpenClaw has native ipeaky integration. When an OpenClaw agent needs an API key — for ElevenLabs TTS, for OpenAI calls, for any configured service — it reads from the ipeaky credentials file directly. It never asks you to type the key into chat.
# Store all keys once at setup:
$ ipeaky store ELEVENLABS_API_KEY
$ ipeaky store OPENAI_API_KEY

# OpenClaw agents source the credentials file automatically.
# The keys are in the agent's environment — not in the conversation log.
This is the core value proposition: the AI agent that needs your API key to do its job should never need to ask you for it in a message. It should read it from a secure file. ipeaky is what makes that possible.
For multi-agent setups where you're spawning sub-agents, the same credentials file is sourced by all agent processes running under your user account. One setup, all agents covered, no key ever in a prompt.
The Standard Approach Going Forward
Secure API key management isn't a nice-to-have for developers who use AI agents daily — it's a baseline requirement. The attack surface has grown enormously: more keys, more tools with read access to your filesystem, longer context windows that can accidentally slurp credentials, and agent traces that log everything.
The good news is that the solution is simple, and the setup takes 30 seconds. Store keys via stdin. Keep them in a chmod 600 file outside your project directories. Source the file at runtime. Test keys before you need them. Rotate on a schedule and immediately after any incident.
ipeaky wraps all of that into a single CLI that makes the secure path the easy path. When secure is also the fastest option, you stop needing to remember to do the right thing — it just happens.
Install ipeaky in 30 seconds
Pure bash. Zero dependencies. Makes secure key management the default.
Learn more at ipeaky.com →