The API proxy that teaches your agents how to fix their own mistakes
SelfHeal intercepts failed API calls, analyzes the error with an LLM, and returns structured, actionable correction instructions — so your agents can self-heal and retry autonomously.
Install: pip install graceful-fail or npm install graceful-fail
No credit card required. 500 requests/month free forever.
Key stats
- Under 200 ms average analysis latency on failed requests
- Zero credentials exposed — by design
- 100% pass-through transparency on successful requests
- Free on successful requests — credits only used on failures
Built for production AI workflows
Every design decision optimized for autonomous agents running unsupervised at 3 AM.
- Instant Error Intelligence — When your agent hits a 4xx or 5xx error, our LLM engine analyzes the exact payload and returns a precise, actionable fix — not a generic error message.
- Security-First Design — Sensitive headers (Authorization, Cookie, API keys) are stripped before any data reaches the LLM. Your credentials never leave the proxy layer.
- Zero-Overhead Pass-Through — Successful requests (2xx/3xx) are forwarded transparently with no latency overhead. You only pay when the LLM is actually invoked on a failed request.
- Agent-Native JSON Schema — Every intercepted error returns a structured JSON envelope with is_retriable, actionable_fix_for_agent, and suggested_payload_diff — designed for autonomous agent consumption.
- Full Observability — Track every request, intercepted error, credit usage, and success rate from your developer dashboard. Filter by API key, date, or error type.
- Tier-Based Rate Limiting — Hobby, Pro, and Agency tiers with monthly request limits. Upgrade anytime. Credits only consumed on failed requests that trigger LLM analysis.
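As a concrete sketch of the envelope an agent receives: the fields is_retriable, actionable_fix_for_agent, and suggested_payload_diff are documented above, while the exact error_category field name and all sample values here are illustrative assumptions.

```python
import json

# Illustrative envelope for an intercepted 422. The fields is_retriable,
# actionable_fix_for_agent, and suggested_payload_diff are documented;
# the exact field name "error_category" and every value are assumptions.
envelope = json.loads("""
{
  "is_retriable": true,
  "error_category": "validation_error",
  "actionable_fix_for_agent": "Send 'quantity' as an integer, not a string.",
  "suggested_payload_diff": {"quantity": 2}
}
""")

payload = {"email": "user@example.com", "quantity": "two"}
if envelope["is_retriable"]:
    # Apply the suggested diff before the agent retries the call.
    payload = {**payload, **envelope["suggested_payload_diff"]}
print(payload)  # {'email': 'user@example.com', 'quantity': 2}
```

An agent can branch on is_retriable directly instead of pattern-matching free-text error messages.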
Drop-in integration
Point your requests at the SelfHeal proxy, add a few headers, and you're live.
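A minimal sketch using Python's standard library. Only the X-Destination-URL header is documented above; the proxy endpoint URL and the auth header name below are hypothetical placeholders, so substitute the real values from your dashboard.

```python
import json
import urllib.request

# Hypothetical proxy endpoint -- replace with the real SelfHeal endpoint.
PROXY_URL = "https://proxy.selfheal.example/v1"

req = urllib.request.Request(
    PROXY_URL,
    data=json.dumps({"quantity": 2}).encode(),
    headers={
        "Content-Type": "application/json",
        "X-Destination-URL": "https://api.example.com/orders",  # the real API
        "X-SelfHeal-Key": "sk_live_...",  # assumed auth header name
    },
    method="POST",
)

# The request is only constructed here; urllib.request.urlopen(req)
# would send it through the proxy to the destination API.
print(req.get_full_url())
```

The only change from calling the API directly is swapping the URL and adding the two headers; the request body stays untouched.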
How it works
- Your agent sends API calls through the proxy. Replace the destination URL with the SelfHeal endpoint and set X-Destination-URL to point at the real API.
- Successful responses pass through with zero overhead. 2xx and 3xx responses are returned verbatim. No credits consumed, no added latency.
- Failed responses get LLM-analyzed. 4xx and 5xx errors are intercepted, analyzed, and returned with structured fix instructions including retriability, error category, and a suggested payload diff.
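The three steps above can be sketched as an agent-side retry loop. The call_through_proxy function simulates the proxy so the flow runs offline; in real usage it would be an HTTP call with X-Destination-URL set, and the envelope fields follow the documented schema.

```python
def call_through_proxy(payload):
    # Simulated proxy: a stand-in for an HTTP POST through SelfHeal.
    if not isinstance(payload.get("quantity"), int):
        # 4xx path: the proxy intercepts and returns a structured fix envelope.
        return 422, {
            "is_retriable": True,
            "actionable_fix_for_agent": "Send 'quantity' as an integer.",
            "suggested_payload_diff": {"quantity": 2},
        }
    # 2xx path: passed through verbatim, no credits consumed.
    return 200, {"status": "created"}

def send_with_self_heal(payload, max_retries=3):
    """Retry loop: apply the suggested diff and resend while retriable."""
    for _ in range(max_retries):
        status, body = call_through_proxy(payload)
        if status < 400:
            return body
        if not body.get("is_retriable"):
            break
        payload = {**payload, **body.get("suggested_payload_diff", {})}
    raise RuntimeError("could not self-heal request")

result = send_with_self_heal({"quantity": "two"})
print(result)  # {'status': 'created'}
```

The first attempt fails with a 422, the diff corrects the payload, and the second attempt passes through cleanly.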
SDKs for every stack
Integrate in minutes with official SDKs for Python and Node.js. LangChain and CrewAI integrations included.
Python SDK
Install: pip install 'graceful-fail[langchain]'
Available on PyPI. Supports sync, async, LangChain, and CrewAI.
Node.js / TypeScript SDK
Install: npm install graceful-fail
Available on npm. Full TypeScript support, ESM and CJS.
Simple, usage-based pricing
Credits only consumed when the LLM is invoked. Successful pass-through requests are always free.
- Hobby — Free: 500 requests/month, LLM error analysis, API key management, request logs (7 days).
- Pro — $29/month: 10,000 requests/month, multiple API keys, 30-day logs, usage analytics.
- Agency — $99/month: 50,000 requests/month, unlimited API keys, 90-day logs, $0.005 per extra request, priority support.
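As a worked example of the overage model: only the Agency tier lists a per-extra-request rate, so this sketch computes that tier alone.

```python
def agency_monthly_cost(requests_analyzed: int) -> float:
    """Agency tier: $99/month includes 50,000 requests; $0.005 each beyond."""
    base, included, overage_rate = 99.0, 50_000, 0.005
    extra = max(0, requests_analyzed - included)
    return base + extra * overage_rate

print(agency_monthly_cost(50_000))  # 99.0 (within the included quota)
print(agency_monthly_cost(60_000))  # 149.0 = $99 + 10,000 extra * $0.005
```

Remember that only failed requests that trigger LLM analysis count against the quota, so successful pass-through traffic never appears in this calculation.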
It's 3 AM. Your agent just hit a 422. Does it crash — or fix itself?
Stop babysitting your AI workflows. SelfHeal gives your agents the intelligence to recover on their own.
Get Started Free