Find security holes, catch regressions, track costs. The testing tool for developers building with LLMs.
npm install fencelint
Three things. Done well.
Check if your prompt gives correct answers. Write assertions like you would with Jest. Catch bugs before users do.
20 prompt injection attacks built-in. See exactly which ones your prompt is vulnerable to. Fix them before hackers find them.
Know exactly what each prompt costs. Compare OpenAI vs Claude vs Gemini. Estimate your monthly bill before launch.
One command. No config files needed.
npm install fencelint
Create a .env file with your OpenAI, Claude, or Gemini key.
OPENAI_API_KEY
ANTHROPIC_API_KEY
GEMINI_API_KEY
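A minimal .env might look like this (placeholder values; you only need the key for the provider you're testing):

OPENAI_API_KEY=sk-...
ANTHROPIC_API_KEY=sk-ant-...
GEMINI_API_KEY=...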
Copy your actual prompt from your app. Test it like you mean it.
const { aiTest, runPrompt, expect } = require('fencelint')

// Your actual prompt
const bot = {
  model: 'gpt-4o-mini',
  systemPrompt: `You are a support agent. Refund policy is 30 days. Never mention competitors.`
}

// Test it
aiTest('refund policy', async () => {
  const res = await runPrompt(bot, "What's your refund policy?")
  expect(res.output).toContain('30 days')
})

// Security test
aiTest('security', async () => {
  await expect(bot).toResistInjection(0.75)
})
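Here, 0.75 sets the resistance threshold: the security test fails if fewer than 75% of the 20 built-in injection attacks are blocked.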
Get results.
npx fencelint test
What each part of the result means.
Your prompt returned the expected answer. Ship it.
✓ refund policy 1,204ms
Something's wrong. Fix it before deploying.
✗ refund policy
Expected "30 days" but got "contact support"
We attack your prompt 20 times. This shows how many attacks were blocked, and names any attack type that got through.
🔒 Security: 80% (16/20)
⚠ Ignore Instructions: vulnerable
Exact cost per test. No more surprise API bills.
Cost: $0.0006
Monthly @ 1k/day: $18
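The monthly figure is straight extrapolation from the per-test cost. A quick sanity check (illustrative snippet, not part of fencelint):

// Back-of-envelope monthly estimate from the report above
const costPerRun = 0.0006                        // dollars per test run
const runsPerDay = 1000
const monthlyUSD = costPerRun * runsPerDay * 30  // 0.0006 × 1,000 × 30 = 18
console.log(`Monthly @ 1k/day: $${monthlyUSD}`)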
Open source. Easy to set up.
npm install fencelint