What are Function Calling & MCP for LLMs?
(explained with visuals and code)
Before MCP became popular, AI workflows relied on traditional Function Calling for tool access. Now, MCP is standardizing it for Agents/LLMs.
The visual below explains how Function Calling and MCP work under the hood.
Today, let's learn:
- Function Calling, by building custom tools for Agents.
- How MCP helps, by building a local MCP client with mcp-use and using tools from the Browserbase MCP server.
In Function Calling:
- The LLM receives a prompt.
- The LLM decides the tool.
- The programmer implements a procedure that parses the tool call request out of the LLM's response and prepares the actual function call.
- A backend service executes the tool.
This Function Calling takes place within our stack:
- We host the tool.
- We implement the logic to determine the tool to invoke and its parameters.
- We execute it.
So Function Calling requires us to wire everything manually.
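To make that manual wiring concrete, here's a minimal sketch of Function Calling with the OpenAI Python SDK. The get_weather tool, its schema, and the model name are illustrative assumptions, not part of the workflow above:

```python
import json
from openai import OpenAI

client = OpenAI()

# 1) We describe the tool so the LLM can decide when to call it.
tools = [{
    "type": "function",
    "function": {
        "name": "get_weather",  # hypothetical tool for illustration
        "description": "Get the current weather for a city",
        "parameters": {
            "type": "object",
            "properties": {"city": {"type": "string"}},
            "required": ["city"],
        },
    },
}]

# 2) We host the tool: a stub standing in for a real backend service.
def get_weather(city: str) -> str:
    return f"Sunny in {city}"

# 3) The LLM receives the prompt and decides which tool to call.
response = client.chat.completions.create(
    model="gpt-4o",
    messages=[{"role": "user", "content": "What's the weather in Paris?"}],
    tools=tools,
)

# 4) We parse the tool call request from the LLM's response
#    (assuming the model chose to call a tool) and execute it ourselves.
tool_call = response.choices[0].message.tool_calls[0]
args = json.loads(tool_call.function.arguments)
print(get_weather(**args))
```

Every step after the schema definition is glue code we own and maintain.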
MCP simplifies this!
Instead of hard-wiring tools, MCP:
- Standardizes defining, hosting, and exposing tools.
- Makes it easy to discover tools, understand schemas, and use them.
- Demands approval before invoking them.
- Detaches implementation from consumption.
For instance, when you integrate an MCP server, you never write a line of Python code to wire up its tools.
Instead, you just integrate the MCP server and everything beyond this follows a standard protocol handled by the MCP client and the LLM:
- They identify the MCP tool.
- They prepare the input argument.
- They invoke the tool.
- They use the tool’s output to generate a response.
Everything happens through a standard (but abstracted) protocol; the sketch below shows what the client side of it looks like in code.
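Here's a rough sketch using the official MCP Python SDK. The server launch command, tool name, and arguments are placeholders, not a specific server's API:

```python
import asyncio
from mcp import ClientSession, StdioServerParameters
from mcp.client.stdio import stdio_client

# Placeholder launch command; any MCP server is consumed the same way.
server = StdioServerParameters(command="npx", args=["some-mcp-server"])

async def main():
    async with stdio_client(server) as (read, write):
        async with ClientSession(read, write) as session:
            await session.initialize()

            # Discover the tools and their schemas; no custom glue code.
            tools = await session.list_tools()
            for tool in tools.tools:
                print(tool.name, tool.inputSchema)

            # Invoke a tool by name with arguments the LLM prepared.
            result = await session.call_tool("search", {"query": "MCP"})
            print(result.content)

asyncio.run(main())
```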
So here’s the key point: MCP and Function Calling are not in conflict. They’re two sides of the same workflow.
- Function Calling helps an LLM decide what it wants to do.
- MCP ensures that tools are reliably available, discoverable, and executable, without you needing to custom-integrate everything.
For example, an agent might say, “I need to search the web,” using function calling.
That request can be routed through MCP to select from available web search tools, invoke the correct one, and return the result.
Check the workflow in the diagram below.
In this setup, I used mcp-use to build the local MCP client. Unlike Claude/Cursor, it lets us connect any LLM to MCP servers and build fully private MCP clients.
- Compatible with Ollama & LangChain
- Stream Agent output async
- Built-in debugging mode, and more.
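Here's a minimal sketch of that setup, following mcp-use's client + agent pattern. The Browserbase server command, env vars, and model choice are assumptions to verify against the respective docs:

```python
import asyncio
from langchain_openai import ChatOpenAI
from mcp_use import MCPAgent, MCPClient

async def main():
    # Assumed config for the Browserbase MCP server; check its README
    # for the exact package name and required credentials.
    config = {
        "mcpServers": {
            "browserbase": {
                "command": "npx",
                "args": ["@browserbasehq/mcp-server-browserbase"],
                "env": {
                    "BROWSERBASE_API_KEY": "...",
                    "BROWSERBASE_PROJECT_ID": "...",
                },
            }
        }
    }
    client = MCPClient.from_dict(config)

    # Any LangChain-compatible LLM works here, including local Ollama models.
    agent = MCPAgent(llm=ChatOpenAI(model="gpt-4o"), client=client, max_steps=20)

    result = await agent.run("Open example.com and summarize the page")
    print(result)

asyncio.run(main())
```

Swap ChatOpenAI for ChatOllama to keep the whole client local.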
Find the mcp-use GitHub repo in the comments!
____
Find me → Avi Chawla
Every day, I share tutorials and insights on DS, ML, LLMs, and RAGs.