When should you use Nango for AI tool calling?
Use Nango whenever your AI needs to take action in the real world:
- You want your agent or LLM workflow to create, update, or fetch data from real APIs.
- You want to avoid building your own auth, tool infrastructure, or execution sandbox.
- You need reliability, observability, and scalability for tool calls.
Common examples
- Letting an AI agent create or update a CRM opportunity.
- Having a chatbot fetch live context from Notion, Linear, or HubSpot.
- Running automations triggered by an LLM (e.g., “schedule a meeting” or “create a task”).
For RAG-style use cases where data must be replicated locally, use Syncs instead.
Key facts
- Framework-agnostic: works with any agentic or LLM framework (or none at all). Compatible with OpenAI Agents SDK, Vercel AI SDK, LangChain, LlamaIndex, and more.
- Unified API auth: securely authorize access to third-party APIs via Nango’s Auth system. You can even surface Connect Links directly in your chat UI for user-driven OAuth.
- Tool execution via Actions: your agent invokes Nango-hosted Actions, which perform the underlying API operations on its behalf.
- MCP-compatible:
  - Expose your integrations through Nango’s built-in MCP Server.
  - Follow our guide to implement the MCP Server.
  - Check out the MCP server demo.
  - Connect to existing official MCP servers (e.g., Notion, Linear) with Nango’s MCP auth.
 
- Secure, scalable, and observable:
  - Tool code runs in sandboxed, elastic environments.
  - Under 100ms overhead per tool execution, fast enough for human-in-the-loop workflows.
  - Logs and metrics to monitor and optimize your tool calls.
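The Unified API auth item above mentions surfacing Connect Links directly in your chat UI for user-driven OAuth. A minimal sketch of the server side, assuming a `POST /connect/sessions` endpoint with an `end_user` / `allowed_integrations` body shape (verify names and fields against the Auth API reference; the integration keys below are examples):

```typescript
// Build the body for a Connect session scoped to one end user and the
// integrations your agent needs. Field names are assumptions -- check
// Nango's Auth docs for the exact contract.
interface ConnectSessionRequest {
  end_user: { id: string; display_name?: string };
  allowed_integrations: string[];
}

function buildConnectSession(
  userId: string,
  integrations: string[],
): ConnectSessionRequest {
  return {
    end_user: { id: userId },
    allowed_integrations: integrations,
  };
}

const body = buildConnectSession("user-123", ["hubspot", "linear"]);

// In a real app, exchange this body for a session token, then render the
// resulting Connect Link inside your chat UI:
//
// const res = await fetch("https://api.nango.dev/connect/sessions", {
//   method: "POST",
//   headers: {
//     Authorization: `Bearer ${process.env.NANGO_SECRET_KEY}`,
//     "Content-Type": "application/json",
//   },
//   body: JSON.stringify(body),
// });
```

Keeping the payload construction separate from the HTTP call makes it easy to unit-test which integrations each user is allowed to connect.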
 
How it works
1. Authorize: your users connect their accounts via Nango.
2. Expose tools: you configure which Actions your LLM or agent can call.
3. Call tools: the agent invokes Nango-hosted Actions via API or MCP.
4. Observe & optimize: track results and performance in Nango’s Logs.
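The "Call tools" step above can be sketched over plain HTTP. This assumes a `POST /action/trigger` endpoint with `Connection-Id` and `Provider-Config-Key` headers, and a hypothetical `create-opportunity` Action; verify all of these against the Actions API reference:

```typescript
// Sketch: an agent triggering a Nango-hosted Action for one user connection.
// Endpoint URL, header names, and the action name are assumptions.
interface ActionCall {
  url: string;
  headers: Record<string, string>;
  body: { action_name: string; input: unknown };
}

function buildActionCall(
  secretKey: string,
  connectionId: string, // the user's connection, created in the Authorize step
  providerConfigKey: string, // which integration, e.g. "hubspot"
  actionName: string,
  input: unknown,
): ActionCall {
  return {
    url: "https://api.nango.dev/action/trigger",
    headers: {
      Authorization: `Bearer ${secretKey}`,
      "Connection-Id": connectionId,
      "Provider-Config-Key": providerConfigKey,
      "Content-Type": "application/json",
    },
    body: { action_name: actionName, input },
  };
}

const call = buildActionCall(
  "nango-secret-key",
  "conn-42",
  "hubspot",
  "create-opportunity", // hypothetical Action name
  { name: "Acme renewal", amount: 5000 },
);

// To actually execute the tool call:
// await fetch(call.url, {
//   method: "POST",
//   headers: call.headers,
//   body: JSON.stringify(call.body),
// });
```

Because each call carries a per-user `Connection-Id`, the same Action definition serves every connected user, and every invocation shows up in Nango’s Logs for the "Observe & optimize" step.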