An Agent makes a call to an LLM. The LLM decides which MCP server tool should be used for the task, and the Agent then calls that tool. This can happen several times in a loop: each tool result is fed back into the LLM's context before it makes its next decision.
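The loop above can be sketched in a few lines. This is a stand-in, not a real MCP client: `fake_llm` and the `TOOLS` registry are hypothetical stubs that stand in for the actual LLM call and the tools an MCP server would advertise.

```python
def fake_llm(context: str) -> dict:
    """Stand-in for the LLM call: decides whether a tool is still needed."""
    if "Sunny" in context:  # the tool result is already in context, so finish
        return {"tool": None, "answer": context}
    return {"tool": "get_weather", "args": {"city": "Lisbon"}}

# Stand-in for the tool list an MCP server would expose to the Agent.
TOOLS = {
    "get_weather": lambda city: f"Sunny in {city}",
}

def run_agent(task: str, max_steps: int = 5) -> str:
    context = [task]
    for _ in range(max_steps):  # the LLM -> tool cycle can repeat
        decision = fake_llm(" ".join(context))
        if decision["tool"] is None:
            return decision["answer"]
        result = TOOLS[decision["tool"]](**decision["args"])
        context.append(result)  # feed the tool result back to the LLM
    return "step limit reached"
```

A call like `run_agent("what's the weather?")` takes one trip through the loop (LLM picks the tool, the Agent runs it, the result goes back into context) before the LLM decides it is done.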
AI is only as good as the information you provide it. Aside from the hallucinations and wild outcomes we sometimes see from LLMs, an Agent not performing as expected usually comes down to the context it was given.
Your Agent has a "mind of its own" (well, it was programmed to act a particular way). For example, Claude Code is known to downgrade your Model for particular tasks to save on cost and latency.
Three big topics when it comes to MCP:
1. How do you know the MCP Server is secure?
2. Where is it stored?
3. Is it version-controlled, or can anyone just change it?
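Question 3 is partly addressable in the client config itself. As a sketch: MCP clients such as Claude Desktop register servers in a JSON file, and when a server is launched via `npx`, the npm package can be pinned to an exact version rather than floating to the latest. The package name is real; the version number and path here are placeholders.

```json
{
  "mcpServers": {
    "filesystem": {
      "command": "npx",
      "args": ["-y", "@modelcontextprotocol/server-filesystem@1.2.3", "/path/to/allowed/dir"]
    }
  }
}
```

Checking this file into version control, and pinning exact versions instead of letting `npx` pull whatever is newest, means a change to the server your Agent talks to shows up as a reviewable diff rather than happening silently.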
Output quality from an LLM is the difference between a task done well and a hallucination. That level of accuracy is top of mind for everyone using Agents.