Your .env file is not private from your AI agent. It's private from git. Those are different boundaries, and most developers are only protecting one of them.
When a coding agent opens your project, it reads the working directory to build context. That context goes into the inference call, which leaves your machine. .gitignore doesn't touch that path. Neither does your CLAUDE.md.
## CLAUDE.md won't protect you
Adding "never read .env files" to CLAUDE.md is the most common approach and the least reliable one. CLAUDE.md is advisory. Claude follows it most of the time, but under complex tasks, long context windows, or ambiguous instructions, it can and does deviate. A GitHub issue confirmed it in April 2026: Claude reads and echoes .env contents into the conversation even when CLAUDE.md explicitly prohibits it.
The reliable protection is a deny rule in `settings.json`. Deny rules are enforced at the system level, before Claude processes any file. It's the difference between "please don't read this" and "you physically cannot read this."
## The three paths secrets leave
Most developers protect against the obvious one. The other two are where it actually happens.
### Direct file read
The agent scans your project, opens .env, and the contents become part of the conversation context. Blockable with deny rules in settings.json.
### Runtime output capture
The agent runs your tests or starts your server. A failed HTTP request logs the full Authorization: Bearer sk-live-abc123... header. A database timeout dumps the connection string with the password in it. Claude captures all command output. Your secrets are now in the conversation, even though the agent never opened .env directly.
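A toy sketch of this failure mode, with an invented connection string; nothing here reads `.env`, but the error path prints the secret anyway:

```shell
# Invented credential for the demo; in real projects this comes from .env.
DATABASE_URL="postgres://app:s3cretpw@db.internal:5432/prod"

# A naive error handler that dumps the full URL "for debugging".
connect_db() {
  echo "connect failed: $DATABASE_URL" >&2
  return 1
}

# The agent runs this, captures stderr, and the password is now in its context.
connect_db || true
```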
### Search and grep
The agent uses grep to search the codebase for a function name. The search hits a config file containing credentials. The grep output includes the matched lines with secrets visible. No file read required.
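A minimal reproduction, with invented file contents, of how a routine search drags a credential into the output:

```shell
# Throwaway fixture: a config file holding a fake key next to ordinary settings.
mkdir -p demo/config
printf 'api_key: sk-live-fake12345\nretry_limit: 3\n' > demo/config/app.yml

# The agent searches for where retries are configured; with context lines,
# the match pulls the neighboring credential into the captured output.
grep -rn -C 1 "retry" demo/
```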
## The deny rules that actually work
Add these to `~/.claude/settings.json` for global protection across every project. This applies before Claude sees any file.

```json
{
  "permissions": {
    "deny": [
      // Environment and secrets files
      "Read(**/.env*)",
      "Read(**/.dev.vars*)",
      "Write(**/.env*)",
      // Key material
      "Read(**/*.pem)",
      "Read(**/*.key)",
      "Read(**/secrets/**)",
      "Read(**/credentials/**)",
      // Cloud and system credential files
      "Read(**/.aws/**)",
      "Read(**/.ssh/**)",
      "Read(**/.npmrc)",
      "Read(**/.pypirc)",
      // Database and app credentials
      "Read(**/config/database.yml)",
      "Read(**/config/credentials.json)",
      // Dangerous write and exec operations
      "Write(**/.ssh/**)",
      "Write(.github/workflows/*)",
      "Bash(rm -rf *)",
      "Bash(sudo *)",
      "Bash(curl * | sh)",
      "Bash(wget *)",
      "Bash(npm publish *)"
    ]
  }
}
```
The `**` wildcard covers every subdirectory. Claude physically cannot read any of these files, regardless of what instructions it's given or what's in CLAUDE.md.
## MCP configs and CLI auth files
MCP server configurations hold credentials too: search API keys, database connection strings, OAuth tokens for whatever services you've wired in. Those configs live in `mcp.json` or in agent settings directories the agent can read. They don't fall under the `.env` deny pattern above, which means they need their own entries.
Same applies to CLI tools that store auth in project-level config files. In my case I have Fizzy, my project board CLI, which writes a YAML config that holds auth tokens. Any CLI tool that persists config to the project directory is the same category of problem: it's a credential-bearing file that the agent can reach.
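For illustration, a hypothetical `mcp.json` in the common `mcpServers` shape; the server name, package, and variable are invented, but this is where the credential sits:

```json
{
  "mcpServers": {
    "search": {
      "command": "npx",
      "args": ["-y", "example-search-server"],
      "env": {
        "SEARCH_API_KEY": "sk-live-..."
      }
    }
  }
}
```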
Add these to `.gitignore` by name, not by assumed pattern coverage:

```
# MCP configs with credentials
mcp.json
.mcp.json
.claude/settings.local.json

# CLI tool auth (add yours by name)
.fizzy.yaml
.fizzy/
*.local.yaml
*.local.json

# Ignore values, keep templates
.env
!.env.example
```

And add the matching entries to the deny block in `settings.json`:

```
"Read(**/mcp.json)",
"Read(**/.mcp.json)",
"Read(**/.claude/settings.local.json)",
"Read(**/*.local.yaml)"
```
One thing worth watching: agents sometimes modify .gitignore during a session without flagging it. When the agent scaffolds a new feature or fixes a config, check the diff on .gitignore before committing. Entries you had in there may have quietly dropped.
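One way to catch that drift is a quick check that the entries you rely on are still present. A sketch, assuming the filenames from the examples above; adjust the list to your project:

```shell
# check_gitignore: report any critical ignore entry that has been dropped.
check_gitignore() {
  missing=0
  for entry in ".env" "mcp.json" ".mcp.json" ".claude/settings.local.json"; do
    grep -qxF "$entry" .gitignore || { echo "MISSING from .gitignore: $entry"; missing=1; }
  done
  return $missing
}

check_gitignore || echo "fix .gitignore before committing"
```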
## Blocking runtime leaks
Deny rules stop direct file reads. They don't stop your service key appearing in a curl error log the agent captures when a test fails. For that, use a test-specific environment file, `.env.test`, with dummy values, and point your test runner at it instead of `.env`:

```
# Dummy values for test environments. No real credentials.
SUPABASE_URL=http://localhost:54321
SUPABASE_ANON_KEY=eyJhbGciOiJIUzI1NiIsInR5cCI6IkpXVCJ9.test
SUPABASE_SERVICE_ROLE_KEY=eyJhbGciOiJIUzI1NiIsInR5cCI6IkpXVCJ9.test
STRIPE_SECRET_KEY=sk_test_not_a_real_key
OPENAI_API_KEY=sk-test-dummy-key-for-mocking
AWS_ACCESS_KEY_ID=AKIAIOSFODNN7EXAMPLE
```
When the agent runs tests and something fails, the only credentials visible in the output are dummies. The actual key patterns never appear.
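If your test runner doesn't load env files itself, a portable way to point a run at the dummy file is to export it before invoking the tests. A sketch; the final `sh -c` line stands in for your real test command, such as `npm test`:

```shell
# Demo setup: a dummy-value file like the .env.test above.
printf 'STRIPE_SECRET_KEY=sk_test_not_a_real_key\n' > .env.test

set -a           # auto-export every variable assigned while this is on
. ./.env.test    # source the dummy values
set +a

# Stand-in for the real test command: child processes see only dummies.
sh -c 'echo "tests see: $STRIPE_SECRET_KEY"'
```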
A brief note on key scope for Supabase specifically: the anon key respects RLS and is safe for agent-assisted development work. The service role key bypasses RLS entirely. Keep it out of any file in directories where you're running agents. If it's needed for a privileged operation, inject it at runtime through the Supabase CLI or a secrets tool, not from a .env file the agent can reach.
## Pre-commit hook
A last-line catch for when something slips through. This scans staged content for credential patterns before any commit reaches the repository:

```bash
#!/bin/bash
PATTERNS=(
  'sk-ant-'                # Anthropic API keys
  'sk-live-'               # Stripe live keys
  'sk_live_'               # Stripe live keys, alternate format
  'ghp_'                   # GitHub personal tokens
  'gho_'                   # GitHub OAuth tokens
  'AKIA'                   # AWS access keys
  'xox[bpors]-'            # Slack tokens
  'SG\.'                   # SendGrid keys
  'eyJ'                    # JWTs, base64 header
  'BEGIN.*PRIVATE KEY'     # Private key material
  'supabase_service_role'  # Supabase service keys
)

for pattern in "${PATTERNS[@]}"; do
  if git diff --cached --diff-filter=ACM | grep -qE "$pattern"; then
    echo "BLOCKED: Found pattern matching '$pattern'"
    echo "Remove the credential before committing."
    exit 1
  fi
done

echo "Pre-commit check passed."
exit 0
```

Save it as `.git/hooks/pre-commit` and make it executable:

```bash
chmod +x .git/hooks/pre-commit
```
Or use gitleaks through the pre-commit framework if you want maintained pattern coverage without managing the script yourself. In `.pre-commit-config.yaml`:

```yaml
repos:
  - repo: https://github.com/gitleaks/gitleaks
    rev: v8.22.1
    hooks:
      - id: gitleaks
```
## Runtime injection for production credentials
The pre-commit hook and deny rules handle what the agent can see. For credentials that shouldn't exist as files at all, runtime injection is the right model. The agent works with references, and the actual value gets resolved at process start from a store the agent can't reach.
```bash
# Secrets injected at process start, never in a file.
infisical run --env=production -- node server.js
```

In the application code, only references remain:

```js
const supabase = createClient(
  process.env.SUPABASE_URL,
  process.env.SUPABASE_ANON_KEY
);
```
The agent generates code that references `process.env.SUPABASE_URL`. Even if that code ends up somewhere public, the reference is useless without access to the store. Infisical, which is MIT licensed, and Doppler both do this. For Supabase-specific secret management, `supabase secrets set` pushes values directly to the project without ever touching a local file.
## Before your next session
- Deny rules for `.env*`, `*.pem`, `*.key`, `.aws/**`, and `.ssh/**` in `~/.claude/settings.json`.
- MCP config files, including `mcp.json` and `.mcp.json`, in both `.gitignore` and the deny list.
- CLI auth files for any tool that stores tokens in the project directory, added by name to `.gitignore`.
- `.env.test` with dummy values for test runs, so agent-triggered test output exposes those instead of real keys.
- A pre-commit hook, either gitleaks or the script above, scanning staged content for credential patterns.
- Production credentials kept out of files and injected at runtime through Infisical, Doppler, or `supabase secrets set`.
- `.gitignore` audited after any session where the agent touched project config files.
The mental model worth internalizing: your agent's context window is a pipe to a third-party server. Anything it can reach from disk may go into that pipe. .gitignore handles the repository. The deny rules handle what the agent can read. You have to handle what it runs.