The quality of an LLM's output is the difference between a task performed well and a hallucination. Output accuracy is top of mind for everyone using Agents.
Architecture diagrams always look something like this:
Agent -> Gateway -> LLM (or MCP Server).
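The flow above can be sketched in a few lines of Python. This is purely illustrative: the `Agent`, `Gateway`, and `Request` names, and the stub backends, are assumptions for the sake of the sketch, not a real library API. The point is that the gateway sits between the Agent and every backend, which makes it the natural enforcement point for auth, logging, and policy.

```python
# Minimal sketch of the Agent -> Gateway -> LLM (or MCP Server) flow.
# All names here are illustrative, not a real framework's API.
from dataclasses import dataclass


@dataclass
class Request:
    prompt: str
    target: str  # "llm" or "mcp" -- tells the gateway where to route


class Gateway:
    """Sits between the Agent and its backends; the natural place to
    enforce auth, logging, and policy on every outbound call."""

    def __init__(self, backends):
        self.backends = backends  # e.g. {"llm": ..., "mcp": ...}

    def route(self, request: Request) -> str:
        backend = self.backends.get(request.target)
        if backend is None:
            raise ValueError(f"no backend registered for {request.target!r}")
        # Auth checks, rate limits, and audit logging would live here.
        return backend(request.prompt)


class Agent:
    """Performs an action via a prompt, always going through the gateway."""

    def __init__(self, gateway: Gateway):
        self.gateway = gateway

    def act(self, prompt: str, target: str = "llm") -> str:
        return self.gateway.route(Request(prompt=prompt, target=target))


# Stub backends standing in for a real LLM endpoint and an MCP Server.
def fake_llm(prompt: str) -> str:
    return f"llm-response: {prompt}"


def fake_mcp(prompt: str) -> str:
    return f"mcp-result: {prompt}"


gateway = Gateway({"llm": fake_llm, "mcp": fake_mcp})
agent = Agent(gateway)
print(agent.act("summarize this document"))   # routed to the LLM
print(agent.act("list_files", target="mcp"))  # routed to the MCP Server
```

Because the Agent never talks to a backend directly, swapping the stubs for a real LLM endpoint or MCP Server changes nothing in the Agent itself, only the gateway's backend table.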
The Agents that organizations typically refer to are Agents that perform an action on behalf of a user via prompts.
As teams and enterprises figure out ways to secure traffic from Agents to LLMs, other Agents, or MCP Servers, what about the lowest barrier to entry? Someone's local machine.
Ensuring that Agents have the proper tools and information they need to perform a specialized action on behalf of a user or a system will be necessary for AI to meet the needs of the enterprise.
Although Agents, MCP Servers, and Agentic workflows are just about all anyone is talking about, it's important to remember that this cohort of work has only been around for a short time.