A Model Context Protocol (MCP) server written in Go for debugging Vector Packet Processing (VPP) instances running in Kubernetes pods.
This MCP server provides tools to interact with VPP instances for debugging purposes. It executes VPP commands on Kubernetes pods and exposes VPP functionality through MCP tools that can be used by AI agents and other MCP clients.
- Kubernetes Integration: Executes VPP commands on Kubernetes pods running VPP
- Multiple Transport Modes:
  - Stdio for local client-server communication
  - HTTP/SSE for remote network access between machines
- 34 Debugging Tools: Comprehensive toolset for VPP and BGP debugging
  - Pod management (list all CalicoVPP pods)
  - Version information
  - Interface statistics and addresses
  - Error counters and error clearing
  - Session information and statistics
  - TCP statistics
  - NPOL rules and policies
  - CNAT translations and sessions
  - Runtime statistics
  - IP routing tables and FIBs
  - VPP logs
  - Packet trace, PCAP, and dispatch trace capture
  - BGP neighbors and global information
  - BGP RIB queries (IPv4/IPv6, IPs, prefixes)
- Official MCP Go SDK: Uses the official Model Context Protocol Go SDK, maintained in collaboration with Google
- Go Implementation: Fast, efficient, and easy to deploy
- Extensible Architecture: Easy to add more VPP debugging tools
- Remote Access: Connect from any machine to debug VPP instances on remote servers
- Go 1.24+
- kubectl installed and configured with access to your Kubernetes cluster
- VPP running in Kubernetes pods (e.g., Calico VPP dataplane)
- MCP client (like Claude Desktop, Cline, or other MCP-compatible tools)
- Clone or navigate to the project directory:
cd /home/aritrbas/vpp/vpp-mcp
- Download Go dependencies:
go mod tidy
- Build the server:
go build -o vpp-mcp-server main.go
The server supports two transport modes: stdio (local) and http (network).
Start the server using stdio transport (default):
./vpp-mcp-server
Or with explicit flag:
./vpp-mcp-server --transport=stdio
Or run directly with Go:
go run main.go
Start the server with HTTP transport for remote access:
./vpp-mcp-server --transport=http --port=8080
This exposes the following endpoints:
- http://localhost:8080/sse - MCP SSE endpoint for client connections
- http://localhost:8080/health - Health check endpoint
- http://localhost:8080/ - Server information page
For remote access, replace localhost with the server's IP address or hostname.
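Before wiring up a remote MCP client, you can sanity-check connectivity by polling the health endpoint listed above. The following is a minimal sketch in Go; the host and port are placeholders for your server's address:
package main

import (
	"fmt"
	"net/http"
	"time"
)

func main() {
	client := &http.Client{Timeout: 5 * time.Second}

	// Probe the /health endpoint exposed by the HTTP transport.
	resp, err := client.Get("http://localhost:8080/health")
	if err != nil {
		fmt.Println("server not reachable:", err)
		return
	}
	defer resp.Body.Close()

	fmt.Println("health endpoint returned:", resp.Status)
}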
Note: All VPP tools use namespace calico-vpp-dataplane and container vpp.
- Description: Get VPP version information
- Command: vppctl show version
- Parameters:
  - pod_name (required): Name of the Kubernetes pod running VPP
- Description: Get VPP interface information
- Command: vppctl show int
- Parameters:
  - pod_name (required): Name of the Kubernetes pod running VPP
- Description: Get VPP interface address information
- Command: vppctl show int addr
- Parameters:
  - pod_name (required): Name of the Kubernetes pod running VPP
- Description: Get VPP error counters
- Command: vppctl show errors
- Parameters:
  - pod_name (required): Name of the Kubernetes pod running VPP
- Description: Get VPP session information with verbose output
- Command: vppctl show session verbose 2
- Parameters:
  - pod_name (required): Name of the Kubernetes pod running VPP
- Description: List rules that are referenced by policies
- Command: vppctl show npol rules
- Parameters:
  - pod_name (required): Name of the Kubernetes pod running VPP
- Description: List all policies that are referenced on interfaces
- Command: vppctl show npol policies
- Parameters:
  - pod_name (required): Name of the Kubernetes pod running VPP
- Description: List ipsets that are referenced by rules (IP sets are simply lists of IPs)
- Command: vppctl show npol ipset
- Parameters:
  - pod_name (required): Name of the Kubernetes pod running VPP
- Description: Show the resulting policies configured for every interface in VPP. The first IPv4 address of every pod is provided to help identify which pod each interface belongs to.
- Command: vppctl show npol interfaces
- Parameters:
  - pod_name (required): Name of the Kubernetes pod running VPP
- Output interpretation:
  - tx: rules applied to packets that LEAVE VPP on a given interface. Rules are applied top to bottom.
  - rx: rules applied to packets that ENTER VPP on a given interface. Rules are applied top to bottom.
  - profiles: specific rules that are enforced when a matched rule action is PASS or when no policies are configured.
- Description: Capture VPP packet traces
- Command: vppctl trace add
- Parameters:
  - pod_name (required): Name of the Kubernetes pod running VPP
  - count (optional): Number of packets to capture (default: 500)
  - interface (optional): Interface type - phy|af_xdp|af_packet|avf|vmxnet3|virtio|rdma|dpdk|memif|vcl (default: virtio)
- Description: Capture VPP packets to a pcap file
- Command: vppctl pcap trace
- Parameters:
  - pod_name (required): Name of the Kubernetes pod running VPP
  - count (optional): Number of packets to capture (default: 500)
  - interface (optional): Interface name (e.g., host-eth0) or 'any' (default: 'any')
- Description: Capture a VPP dispatch trace to a pcap file
- Command: vppctl pcap dispatch trace
- Parameters:
  - pod_name (required): Name of the Kubernetes pod running VPP
  - count (optional): Number of packets to capture (default: 500)
  - interface (optional): Interface type - phy|af_xdp|af_packet|avf|vmxnet3|virtio|rdma|dpdk|memif|vcl (default: virtio)
- Description: List all CalicoVPP pods with their IPs and the nodes they are running on
- Command: kubectl get pods -n calico-vpp-dataplane -owide
- Parameters: None required
- Description: Reset the error counters
- Command: vppctl clear errors
- Parameters:
  - pod_name (required): Name of the Kubernetes pod running VPP
- Description: Display global statistics reported by TCP
- Command: vppctl show tcp stats
- Parameters:
  - pod_name (required): Name of the Kubernetes pod running VPP
- Description: Display global statistics reported by the session layer
- Command: vppctl show session stats
- Parameters:
  - pod_name (required): Name of the Kubernetes pod running VPP
- Description: Display VPP logs
- Command: vppctl show logging
- Parameters:
  - pod_name (required): Name of the Kubernetes pod running VPP
- Description: Show the active CNAT translations
- Command: vppctl show cnat translation
- Parameters:
  - pod_name (required): Name of the Kubernetes pod running VPP
- Description: List the active CNAT sessions, mapping each established 5-tuple to its 5-tuple rewrite
- Command: vppctl show cnat session
- Parameters:
  - pod_name (required): Name of the Kubernetes pod running VPP
- Output interpretation: The output first shows the incoming 5-tuple used to match packets, along with the protocol. It then displays the 5-tuple after dNAT & sNAT, followed by the direction and finally the age in seconds. The direction is input for PRE-ROUTING sessions and output for POST-ROUTING sessions.
- Description: Clear the live runtime stats in VPP
- Command: vppctl clear run
- Parameters:
  - pod_name (required): Name of the Kubernetes pod running VPP
- Description: Show the live runtime stats in VPP
- Command: vppctl show run
- Parameters:
  - pod_name (required): Name of the Kubernetes pod running VPP
- Debugging workflow: To debug an issue, you may need to run vpp_clear_run to erase historic stats, wait a few seconds in the issue state (or run some tests) so that the runtime stats are repopulated, and then run vpp_show_run to diagnose what is going on in the system.
- Output interpretation: A loaded VPP will typically show (1) a high Vectors/Call, maxing out at 256, and (2) a low loops/sec, struggling around 10000. The Clocks column gives the average consumption in cycles per node; anything beyond 1e3 is expensive.
- Description: Print all available IPv4 VRFs
- Command: vppctl show ip table
- Parameters:
  - pod_name (required): Name of the Kubernetes pod running VPP
- Description: Print all available IPv6 VRFs
- Command: vppctl show ip6 table
- Parameters:
  - pod_name (required): Name of the Kubernetes pod running VPP
- Description: Print all routes in a given pod's IPv4 VRF
- Command: vppctl show ip fib index <idx>
- Parameters:
  - pod_name (required): Name of the Kubernetes pod running VPP
  - fib_index (required): The FIB table index
- Description: Print all routes in a given pod's IPv6 VRF
- Command: vppctl show ip6 fib index <idx>
- Parameters:
  - pod_name (required): Name of the Kubernetes pod running VPP
  - fib_index (required): The FIB table index
- Description: Print information about a specific prefix in a given pod's IPv4 VRF
- Command: vppctl show ip fib index <idx> <prefix>
- Parameters:
  - pod_name (required): Name of the Kubernetes pod running VPP
  - fib_index (required): The FIB table index
  - prefix (required): The IP prefix to query (e.g., 10.0.0.0/24)
- Description: Print information about a specific prefix in a given pod's IPv6 VRF
- Command: vppctl show ip6 fib index <idx> <prefix>
- Parameters:
  - pod_name (required): Name of the Kubernetes pod running VPP
  - fib_index (required): The FIB table index
  - prefix (required): The IPv6 prefix to query (e.g., 2001:db8::/32)
- Description: Show BGP peers
- Command: gobgp neighbor
- Parameters:
  - pod_name (required): Name of the Kubernetes pod running the agent container with gobgp
- Description: Show BGP global information
- Command: gobgp global
- Parameters:
  - pod_name (required): Name of the Kubernetes pod running the agent container with gobgp
- Description: Show BGP IPv4 RIB information
- Command: gobgp global rib -a 4
- Parameters:
  - pod_name (required): Name of the Kubernetes pod running the agent container with gobgp
- Description: Show BGP IPv6 RIB information
- Command: gobgp global rib -a 6
- Parameters:
  - pod_name (required): Name of the Kubernetes pod running the agent container with gobgp
- Description: Show BGP RIB entry for a specific IP
- Command: gobgp global rib <ip>
- Parameters:
  - pod_name (required): Name of the Kubernetes pod running the agent container with gobgp
  - parameter (required): The IP address to query
- Description: Show BGP RIB entry for a specific prefix
- Command: gobgp global rib <prefix>
- Parameters:
  - pod_name (required): Name of the Kubernetes pod running the agent container with gobgp
  - parameter (required): The prefix to query (e.g., 10.0.0.0/24)
- Description: Show detailed information for a specific BGP neighbor
- Command: gobgp neighbor <neighborIP>
- Parameters:
  - pod_name (required): Name of the Kubernetes pod running the agent container with gobgp
  - parameter (required): The neighbor IP address to query
The server executes VPP commands on existing Kubernetes pods:
- Connects to specified pods via kubectl
- Executes vppctl commands in the VPP container
- Executes gobgp commands in the agent container
- Returns results via MCP protocol
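Conceptually, each tool call reduces to a kubectl exec into the target pod. The sketch below illustrates that flow; the function name, example pod name, and error handling are illustrative and not the exact code in main.go:
package main

import (
	"context"
	"fmt"
	"os/exec"
	"strings"
)

// runVPPCommand sketches how a vppctl command can be executed inside the vpp
// container of a CalicoVPP pod via kubectl exec.
func runVPPCommand(ctx context.Context, pod, namespace, container, vppctlArgs string) (string, error) {
	args := []string{"exec", "-n", namespace, pod, "-c", container, "--", "vppctl"}
	args = append(args, strings.Fields(vppctlArgs)...)

	out, err := exec.CommandContext(ctx, "kubectl", args...).CombinedOutput()
	if err != nil {
		return "", fmt.Errorf("kubectl exec failed: %w: %s", err, string(out))
	}
	return string(out), nil
}

func main() {
	// Example call with a hypothetical pod name.
	out, err := runVPPCommand(context.Background(), "calico-vpp-node-xxxxx", "calico-vpp-dataplane", "vpp", "show version")
	if err != nil {
		fmt.Println(err)
		return
	}
	fmt.Println(out)
}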
To use this server with an MCP client on the same machine, add it to your client's configuration. For example, with Claude Desktop, add to your claude_desktop_config.json:
{
  "mcpServers": {
    "vpp-debug": {
      "command": "/home/aritrbas/vpp/vpp-mcp/vpp-mcp-server",
      "cwd": "/home/aritrbas/vpp/vpp-mcp"
    }
  }
}
For remote access from Machine Y to Machine X:
On Machine X (Server):
- Start the server with HTTP transport:
./vpp-mcp-server --transport=http --port=8080
- Ensure the port is accessible (check firewall rules):
# Example: Allow port 8080 on Ubuntu/Debian
sudo ufw allow 8080/tcp
On Machine Y (Client): Configure your MCP client to connect to the HTTP endpoint. For example, with Claude Desktop:
{
  "mcpServers": {
    "vpp-debug-remote": {
      "url": "http://<machine-x-ip>:8080/sse",
      "transport": "sse"
    }
  }
}
Replace <machine-x-ip> with the actual IP address or hostname of Machine X.
Security Considerations:
- The HTTP transport does not include authentication by default
- For production use, consider adding:
  - Reverse proxy with TLS (nginx, Apache)
  - API authentication (API keys, OAuth)
  - Network security (VPN, SSH tunneling)
  - Firewall rules to restrict access
Example with SSH Tunnel (Secure Alternative):
# On Machine Y, create SSH tunnel
ssh -L 8080:localhost:8080 user@machine-x
# Then configure client to use localhost:8080
You can modify the constants in main.go to:
- Change default namespace
- Change default container name
- Add additional VPP commands as tools
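For reference, the defaults could look something like the constants below; the identifier names are illustrative, so check main.go for the real ones:
// Illustrative defaults; the actual constant names in main.go may differ.
const (
	defaultNamespace     = "calico-vpp-dataplane" // namespace all VPP tools target
	defaultContainerName = "vpp"                  // container where vppctl runs
	agentContainerName   = "agent"                // container where gobgp runs
)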
vpp-mcp/
├── main.go # Main MCP server implementation
├── go.mod # Go module definition
├── go.sum # Go module checksums
├── Makefile # Build automation
├── README.md # This file
├── .gitignore # Git ignore rules
├── vpp-mcp-server # Compiled binary
├── docs/ # Documentation
│ ├── QUICK_START.md # Quick reference
│ ├── REMOTE_ACCESS.md # Remote access guide
│ └── TEST_SUMMARY.md # Test results
├── tests/ # Test scripts
│ ├── test_mcp_server.sh # Test MCP server setup in stdio transport
│ ├── demo_test.sh # Demo all tools
│ ├── test_tool.sh # Test individual tools
│ └── test_http_server.sh # Test MCP server setup in HTTP transport
└── examples/ # Example files
└── example_mcp_requests.json # JSON-RPC examples
Build the server:
go build -o vpp-mcp-server main.go
Build for different platforms:
# Linux
GOOS=linux GOARCH=amd64 go build -o vpp-mcp-server-linux main.go
# macOS
GOOS=darwin GOARCH=amd64 go build -o vpp-mcp-server-macos main.go
# Windows
GOOS=windows GOARCH=amd64 go build -o vpp-mcp-server.exe main.go
To add new VPP debugging tools:
- Define your tool input structure:
type YourToolInput struct {
    // Fields must match what the handler below reads; pod_name mirrors the
    // parameter name used by the existing VPP tools.
    PodName       string `json:"pod_name"`
    Namespace     string `json:"namespace,omitempty"`
    ContainerName string `json:"container_name,omitempty"`
    Parameter     string `json:"parameter,omitempty"`
}
- Create a tool handler function:
func (s *VPPMCPServer) handleYourTool(ctx context.Context, req *mcp.CallToolRequest, input YourToolInput) (*mcp.CallToolResult, any, error) {
    genericInput := VPPCommandInput{
        PodName:       input.PodName,
        Namespace:     input.Namespace,
        ContainerName: input.ContainerName,
    }
    return s.handleVPPCommand(ctx, genericInput, "your vppctl subcommand", "Your tool description")
}
- Add the tool to the server in main():
tool := &mcp.Tool{
    Name:        "your_tool_name",
    Description: "Tool description",
}
mcp.AddTool(vppServer.server, tool, vppServer.handleYourTool)
- Test server functionality:
# stdio transport
./tests/test_mcp_server.sh
# HTTP transport
./tests/test_http_server.sh
- Demo all tools:
./tests/demo_test.sh <pod-name>
- Test individual tool:
./tests/test_tool.sh vpp_show_int <pod-name>
This project uses:
- github.com/modelcontextprotocol/go-sdk - Official Model Context Protocol Go SDK maintained in collaboration with Google
- Standard Go libraries for kubectl command execution
- kubectl access fails:
  - Verify kubectl is installed and configured
  - Check that you have access to the VPP namespace
  - Ensure proper RBAC permissions for pod exec
- vppctl commands fail:
  - Verify VPP is running in the target pod
  - Check that the pod name is correct
- MCP connection issues:
  - Verify the binary is built correctly (go build)
  - Check MCP client configuration
  - Review server logs for errors
- Build issues:
  - Ensure Go 1.24+ is installed
  - Run make deps to download dependencies
  - Check for any compilation errors
The server logs important events to help with debugging:
- Container lifecycle events
- Command execution results
- Error conditions
View logs by running the server and monitoring stdout/stderr output.
- Fork the repository
- Create a feature branch
- Make your changes
- Add tests if applicable
- Submit a pull request
Planned features:
- More VPP debugging tools
- Configuration management tools
- Log analysis capabilities
- Performance monitoring tools
- Configuration file support
- Workflow support
- Workflow visualization tools
- Automated workflow execution engine