We all know better. Don’t hardcode secrets. Use a vault. Rotate your keys. We’ve been saying this for years.
And then the agentic coding boom happened.
Suddenly every tool wants an API key. OpenAI, Anthropic, Gemini, Groq, Mistral, Replicate—the list grows weekly. And where do those keys end up? Right there in .zshrc, in plain text, because you needed it working right now and you were going to fix it later.
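The pattern looks something like this (keys truncated, but you get the idea):

```zsh
# ~/.zshrc — secrets in plain text, the thing we all said we'd never do
export OPENAI_API_KEY="sk-proj-..."
export ANTHROPIC_API_KEY="sk-ant-..."
```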
I caught myself doing exactly this. Two API keys, sitting in my dotfiles, probably backed up to Time Machine, possibly in shell history, definitely in my terminal scrollback. Let’s fix this properly.
The Problem
Plain text API keys in shell configs are bad for reasons you already know:
- Shell history — `~/.zsh_history` records commands, and sometimes you `echo $OPENAI_API_KEY` to debug something
- Backup snapshots — Time Machine, cloud backups, dotfile repos all capture the file
- Shoulder surfing — `cat ~/.zshrc` during a screen share or a pairing session
- Terminal scrollback — the key is sitting in your terminal buffer right now
And this isn’t just a theoretical risk. Attackers actively scan repos and backups for unprotected credentials — and when they find stolen API keys, they rack up thousands of dollars in charges. The platform bills the original owner.
That “later” never comes. Meanwhile, these keys have billing attached to them.
The Fix: 1Password CLI
If you use 1Password, you already have a secret manager with biometric unlock, audit logging, and team sharing. The op CLI lets you pull secrets into your shell without ever writing them to disk.
Step 1: Install the CLI
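On macOS with Homebrew this is a one-liner (other platforms have their own packages; check 1Password's install docs):

```zsh
# Install the 1Password CLI and confirm it's on your PATH
brew install 1password-cli
op --version
```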
Enable the CLI integration in 1Password desktop app: Settings > Developer > Connect with 1Password CLI. This lets the CLI authenticate via the desktop app (Touch ID on Mac) instead of requiring a separate login.
Step 2: Store Your Keys
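You can add the keys through the 1Password app, or from the terminal with `op item create`. A sketch, assuming a vault named `Private` and the item titles used in the rest of this post:

```zsh
# One "API Credential" item per key, each with a single `credential` field
op item create --category="API Credential" --title="OpenAI" --vault="Private" credential="sk-proj-..."
op item create --category="API Credential" --title="Anthropic" --vault="Private" credential="sk-ant-..."
```

If typing the key on the command line bothers you (it lands in shell history, which is exactly the problem we're solving), create the item in the app and paste the key there instead.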
Step 3: Replace Hardcoded Values
In your .zshrc (or .bashrc, .profile, whatever you use):
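The paths below assume the `Private` vault and the item titles from Step 2; swap in your own:

```zsh
# ~/.zshrc — resolve the keys from 1Password at shell startup
export OPENAI_API_KEY="$(op read --no-newline 'op://Private/OpenAI/credential' 2>/dev/null)"
export ANTHROPIC_API_KEY="$(op read --no-newline 'op://Private/Anthropic/credential' 2>/dev/null)"
```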
That’s it. Three steps. The keys now live in 1Password, protected by your master password and biometric auth.
One catch: this triggers a 1Password biometric prompt every time you open a terminal. If that bothers you (it bothered me), see Shell Startup Speed for the lazy-loading version that only prompts when you actually run a command.
Step 4: Rotate the Old Keys
This is the step people skip. Do it now. The old keys have been in plaintext. Assume they’re compromised.
- OpenAI: platform.openai.com/api-keys
- Google AI: aistudio.google.com/apikey
- Anthropic: console.anthropic.com/settings/keys
Generate new keys, update the 1Password items with op item edit, and you’re done.
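Updating an item is one command per key (titles as in Step 2):

```zsh
# Point the existing item at the newly generated key
op item edit "OpenAI" credential="sk-proj-NEW..."
```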
The Details Worth Knowing
Why --no-newline?
op read appends a trailing newline by default. API keys with a stray newline cause cryptic authentication failures—the kind where the key “looks right” but every request returns 401. The --no-newline flag strips it.
Why 2>/dev/null?
If 1Password is locked or the CLI isn’t authenticated, op read writes an error to stderr. The redirect silences that so you don’t get a wall of errors every time you open a terminal without 1Password unlocked. The variable simply becomes empty.
The tradeoff: a misconfigured vault path also fails silently. Test it once after setup, and you’re fine.
What About Shell Startup Speed?
The eager approach above runs op read at shell init, which means every new terminal triggers a 1Password biometric prompt. If you open terminals frequently, this gets old fast.
The fix is lazy loading with command-specific triggers. In zsh, the preexec hook fires right before a command executes and receives the command string — perfect for deciding which secrets to load when:
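Here's a minimal version of that hook. The two maps are the whole configuration: `_op_refs` pairs each environment variable with its secret reference, and `_op_cmd_keys` pairs each command with the variables it needs (the entries shown are examples; extend both to taste):

```zsh
# Environment variable -> 1Password secret reference
typeset -A _op_refs=(
  OPENAI_API_KEY     "op://Private/OpenAI/credential"
  ANTHROPIC_API_KEY  "op://Private/Anthropic/credential"
)

# Command -> space-separated keys that command needs
typeset -A _op_cmd_keys=(
  codex   "OPENAI_API_KEY"
  claude  "ANTHROPIC_API_KEY"
)

_op_lazy_load() {
  local cmd=${1%% *}                  # first word of the command line
  local keys=${_op_cmd_keys[$cmd]}
  local key
  [[ -z $keys ]] && return
  for key in ${=keys}; do
    [[ -n ${(P)key} ]] && continue    # already loaded this session
    export "$key=$(op read --no-newline "${_op_refs[$key]}" 2>/dev/null)"
  done
}

autoload -Uz add-zsh-hook
add-zsh-hook preexec _op_lazy_load
```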
This gives you three properties:
- No startup cost — terminal opens instantly, no biometric prompt
- Least privilege — `codex` only loads `OPENAI_API_KEY`, not every secret you have
- Load once — each key is fetched at most once per session (the `${(P)key}` guard skips keys that are already set)
Adding a new tool is one line in _op_cmd_keys. Adding a new key is one line in _op_refs.
If you have multiple 1Password accounts (personal + work), add --account=my.1password.com to the op read calls to avoid vault name collisions.
For even more granularity:
- `op run` — inject secrets into a specific command rather than the global environment:
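The trick is that the exported value is a reference, not the secret; `op run` resolves it only for the child process:

```zsh
# The environment holds a pointer, never the key itself
export OPENAI_API_KEY="op://Private/OpenAI/credential"

# op run swaps the reference for the real key in codex's environment only
op run -- codex
```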
- `op inject` — when you have a dozen keys, individual `op read` calls add up. With `op inject`, you define all your secrets in a single template and load them in one shot:
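Roughly: keep a template of exports whose values are secret references, then resolve and source it in one shot. The path and names here are examples:

```zsh
# ~/.config/op/ai-keys.env — references only, safe to keep in a dotfiles repo
export OPENAI_API_KEY="op://Private/OpenAI/credential"
export ANTHROPIC_API_KEY="op://Private/Anthropic/credential"
export GEMINI_API_KEY="op://Private/Gemini/credential"
```

And in `.zshrc`:

```zsh
# One op call resolves every reference, then the shell sources the result
source <(op inject -i ~/.config/op/ai-keys.env)
```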
This is substantially faster than N individual op read calls — the CLI resolves all references in a single authentication round-trip.
- Scoped injection — skip the global environment entirely and inject a key for exactly one command’s lifetime:
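For example, handing the key to a single invocation:

```zsh
# The reference is resolved inside op run; only codex ever sees the real key
OPENAI_API_KEY="op://Private/OpenAI/credential" op run -- codex
```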
The key exists only in that command’s process environment. Nothing touches your shell, nothing lingers after the process exits. This is the most paranoid option, and it’s great for CI scripts or one-off runs.
What About macOS Keychain?
macOS Keychain (security find-generic-password) works too and has zero startup overhead since it’s always unlocked when you’re logged in. I use it for some tokens:
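Storing and reading a token looks like this (the service name and variable are just examples):

```zsh
# Store once; the trailing -w prompts for the value, keeping it out of shell history
security add-generic-password -a "$USER" -s "github-token" -w

# ~/.zshrc — read it back
export GITHUB_TOKEN="$(security find-generic-password -a "$USER" -s "github-token" -w 2>/dev/null)"
```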
The advantage of 1Password over Keychain: cross-device sync, team sharing, audit logs, and a UI that doesn’t make you question your life choices. Use whichever fits your workflow. The point is to stop storing secrets in plain text.
The Agentic Boom Made This Worse
A year ago, most developers had maybe one or two API keys. Now? I know people with six or more AI service keys in their shell config. Coding agents need them. MCP servers need them. Every new tool in the ecosystem asks you to “just export your API key” and the docs always show the hardcoded version because it’s simpler to explain.
MCP servers are the newest vector here. Tools like Claude Code, Cursor, and Windsurf use configuration files (claude_desktop_config.json, mcp.json) that store API keys for tool servers. The LLM itself never sees the secret values — the MCP server process does — but only if you inject them properly. Hardcoding keys in MCP configs is the same mistake as hardcoding them in .zshrc, just in a newer file. The op CLI works here too: use op run or environment variable references in your MCP server configs instead of raw keys.
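A sketch of the idea for one server entry in `claude_desktop_config.json` (the server package, item path, and variable name are placeholders; the pattern is what matters): wrap the server command in `op run`, and put a secret reference rather than the raw token in the `env` block.

```json
{
  "mcpServers": {
    "github": {
      "command": "op",
      "args": ["run", "--", "npx", "-y", "@modelcontextprotocol/server-github"],
      "env": {
        "GITHUB_PERSONAL_ACCESS_TOKEN": "op://Private/GitHub/credential"
      }
    }
  }
}
```

`op run` resolves the reference before launching the server, so the config file on disk never contains the token.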
This is a tooling culture problem. The default getting-started experience for almost every AI API is:
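```zsh
# Straight from a typical quickstart: paste your key into your shell config and move on
export OPENAI_API_KEY="sk-proj-..."
```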
We should normalize showing the secure version in documentation. Until that happens, take five minutes and move your keys to a vault. Your future self (and your billing page) will thank you.
TL;DR
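Condensed, with the same names as above:

```zsh
brew install 1password-cli
op item create --category="API Credential" --title="OpenAI" --vault="Private" credential="sk-proj-..."

# ~/.zshrc — replace the hardcoded export
export OPENAI_API_KEY="$(op read --no-newline 'op://Private/OpenAI/credential' 2>/dev/null)"

# then generate a fresh key and update the item
op item edit "OpenAI" credential="sk-proj-NEW..."
```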
Install op, store your keys, replace the exports, rotate the old keys. Five minutes. Zero excuses.
Further Reading
- Securing MCP Servers with 1Password — 1Password’s take on stopping credential exposure in agent configurations
- Secure Environment Variables for LLMs, MCPs, and AI Tools — William Callahan’s walkthrough of using 1Password CLI and Doppler for AI tool secrets
- Where MCP Fits and Where It Doesn’t — 1Password on the security model of MCP and credential boundaries
- 1Password CLI: Secret References — official docs on the `op://` URI scheme
- 1Password CLI: `op inject` — batch-load secrets from template files
- 1Password Shell Plugins — native integrations for CLI tools like `gh`, `aws`, and `stripe`