Exploring MCP Primitive Usage Beyond Designed Use Cases: Tool Calling in Python
Unlocking the Power of Agent-to-Agent Communication with MCP: A Game-Changer for Autonomous Agents and Observability
- 1. The speaker discusses MCP (Model Context Protocol) and how it can be used for agent-to-agent communication.
- 2. MCP was designed primarily for UI-driven coding agents such as desktop applications and browser-based assistants; the speaker explores using it beyond that design, for autonomous agents, i.e., code that talks to other code.
- 3. The speaker is the creator of Pydantic, a data validation library for Python, and is also involved with the MCP Python SDK.
- 4. Pydantic has built two additional tools: Pydantic AI, an agent framework for Python, and Pydantic Logfire, a commercial observability platform.
- 5. The speaker clarifies that neither MCP nor Pydantic is an all-encompassing tool, but each can handle a significant amount of work in the right context.
- 6. MCP consists of three main primitives: prompts, resources, and tools (tool calling). The last of these is the focus of the talk; a minimal server sketch exposing all three follows this list.
- 7. Tool calling is more complex than it might initially seem due to factors like dynamic tools, logging, sampling, tracing, and observability.
- 8. MCP allows tool servers to run as subprocesses communicating over standard input and output (stdio), which solves problems that a plain OpenAPI specification does not address.
- 9. The speaker shows the prototypical picture of MCP: an agent connects to a variety of tools without prior knowledge of them or bespoke integration work.
- 10. Tools within the system can also act as agents, connecting to other tools over MCP or directly. This creates a challenge in managing LLM (large language model) access for each agent.
- 11. Sampling is a powerful feature of MCP that enables subagents to use an LLM from the original agent, reducing resource usage and management overhead.
- 12. Sampling is not yet widely supported but allows the server to make a request back through the client to the LLM for specific tasks.
- 13. The speaker demonstrates sampling using Pydantic AI, which supports this feature (see the server-side sketch after this list).
- 14. An example use case presented is building a research agent to gather information about open-source packages or libraries.
- 15. One tool implemented in this example runs a BigQuery query to get download numbers for a specific package.
- 16. The speaker highlights the importance of providing type safety when accessing the MCP context within an agent validator or tool call.
- 17. Logging progress updates while the tool call is still executing provides value both for Cursor-style coding agents and for web application users.
- 18. The output from the query is formatted as XML, which is well-suited for LLM interpretation.
- 19. The speaker sets up an MCP server with FastMCP (from the MCP Python SDK) to register tools and pass the user's question through.
- 20. By performing inference inside a tool, context window overhead can be minimized, keeping the main agent's context lean.
- 21. The main application defines an agent that registers the PyPI research MCP server as a callable toolset (see the client-side sketch after this list).
- 22. An example question is posed to the agent, asking how many downloads Pydantic had this year, and the result is presented.
- 23. Observability of the MCP calls, including the SQL query that was generated, is available in Logfire or similar tools (a minimal instrumentation sketch follows this list).
- 24. The speaker invites the audience to visit the Pydantic booth for further discussion and demonstration.
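
To make the three primitives and the stdio transport concrete, here is a minimal sketch (not from the talk) of an MCP server built with FastMCP from the official MCP Python SDK. It exposes one prompt, one resource, and one tool, and runs over stdin/stdout so a client can launch it as a subprocess; all names (`research_prompt`, `docs://readme`, `add`) are illustrative.

```python
"""Minimal MCP server sketch: one prompt, one resource, one tool, served over stdio.

Illustrative only -- assumes the official MCP Python SDK (`pip install mcp`);
none of these names come from the talk.
"""
from mcp.server.fastmcp import FastMCP

mcp = FastMCP("demo-server")


@mcp.prompt()
def research_prompt(package: str) -> str:
    """A reusable prompt template the client can fetch and fill in."""
    return f"Summarise what you can find out about the Python package {package!r}."


@mcp.resource("docs://readme")
def readme() -> str:
    """A static resource the client can read into the model's context."""
    return "This server exposes research tools for open-source packages."


@mcp.tool()
def add(a: int, b: int) -> int:
    """A trivial tool call; real servers do the interesting work here."""
    return a + b


if __name__ == "__main__":
    # stdio transport: the client launches this file as a subprocess and talks
    # to it over stdin/stdout, so no network service needs to be deployed.
    mcp.run()  # defaults to the stdio transport
```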
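The research-tool demo (BigQuery downloads, progress updates, XML output, and inference inside the tool) can be sketched roughly as below. The talk uses Pydantic AI's sampling support; to keep this self-contained, the sketch instead shows the raw MCP SDK sampling call (`ctx.session.create_message`). The SQL, the public `bigquery-public-data.pypi.file_downloads` dataset, and all function names here are assumptions, not the speaker's actual code.

```python
"""Sketch of a research tool: progress updates, a BigQuery download count,
XML output, and MCP sampling so the inference happens inside the tool call."""
from google.cloud import bigquery
from mcp import types
from mcp.server.fastmcp import Context, FastMCP

mcp = FastMCP("package-research")

# Public PyPI downloads dataset on BigQuery; counts downloads so far this year.
DOWNLOADS_SQL = """
SELECT COUNT(*) AS downloads
FROM `bigquery-public-data.pypi.file_downloads`
WHERE file.project = @package
  AND DATE(timestamp) >= DATE_TRUNC(CURRENT_DATE(), YEAR)
"""


@mcp.tool()
async def package_downloads(package: str, question: str, ctx: Context) -> str:
    """Answer a question about a package's download numbers this year."""
    await ctx.info(f"Querying BigQuery for {package!r}")  # log while still executing
    await ctx.report_progress(0, 2)                       # progress update to the client

    client = bigquery.Client()
    job = client.query(
        DOWNLOADS_SQL,
        job_config=bigquery.QueryJobConfig(
            query_parameters=[bigquery.ScalarQueryParameter("package", "STRING", package)]
        ),
    )
    downloads = next(iter(job.result())).downloads
    await ctx.report_progress(1, 2)

    # XML is easy for an LLM to interpret reliably.
    xml = f'<downloads package="{package}">{downloads}</downloads>'

    # Sampling: ask the *client's* LLM to interpret the result, so this server
    # needs no model credentials of its own and the raw data never enters the
    # main agent's context window -- only the short answer does.
    result = await ctx.session.create_message(
        messages=[
            types.SamplingMessage(
                role="user",
                content=types.TextContent(type="text", text=f"{question}\n\n{xml}"),
            )
        ],
        max_tokens=200,
    )
    await ctx.report_progress(2, 2)
    if isinstance(result.content, types.TextContent):
        return result.content.text
    return str(result.content)


if __name__ == "__main__":
    mcp.run()
```

Because the model call happens inside the tool via sampling, only the short final answer is returned to the caller, which is what keeps the main agent's context window lean.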
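On the client side, the main application looks roughly like this: a Pydantic AI agent that launches the server above as a stdio subprocess and exposes its tools to the model. Exact parameter and attribute names (`toolsets`, `result.output`, the context-manager form) vary between Pydantic AI versions, and the file name `package_research_server.py` is hypothetical.

```python
"""Sketch of the main application: a Pydantic AI agent that launches the MCP
server above as a stdio subprocess and uses its tools."""
import asyncio

from pydantic_ai import Agent
from pydantic_ai.mcp import MCPServerStdio

# Launch the (hypothetical) research server as a subprocess over stdin/stdout.
research_server = MCPServerStdio("python", args=["package_research_server.py"])

agent = Agent(
    "anthropic:claude-3-5-sonnet-latest",  # any model supported by Pydantic AI
    toolsets=[research_server],            # the MCP server's tools become agent tools
)


async def main() -> None:
    async with agent:  # starts and stops the MCP subprocess around the run
        result = await agent.run("How many downloads did pydantic have this year?")
    print(result.output)


if __name__ == "__main__":
    asyncio.run(main())
```

If the server relies on sampling, the client must support it; Pydantic AI does, per the talk, though how it is enabled may differ by version, so check its MCP client documentation.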
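Finally, a rough sketch of the observability wiring. It assumes `logfire.instrument_pydantic_ai()` is available in the installed Logfire version; if not, Pydantic AI's instrumentation settings offer an equivalent OpenTelemetry setup.

```python
import logfire

logfire.configure()               # requires a Logfire project / write token
logfire.instrument_pydantic_ai()  # assumed helper: traces agent runs, model and tool calls

# Then run the agent from the previous sketch as usual; agent runs, MCP tool
# calls, and anything the tool logs (such as the generated SQL) appear as spans.
```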
Source: AI Engineer via YouTube