Field Note
AI Agents and Tools
Agent tools aren’t new primitives—they’re APIs with a layer that decides what to do next.
There’s a useful way to think about agent tools: they’re not that different from APIs.
At a surface level, a tool is just a structured interface. You give it inputs, it returns outputs. Same contract, same idea. The difference isn’t in the tool itself, it’s in what’s sitting on top of it.
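A minimal sketch of that idea: a tool is just typed inputs and typed outputs, with no reasoning anywhere in the contract. The names below (`WeatherQuery`, `get_weather`) are hypothetical stand-ins, not any real API.

```python
# A tool as a structured interface: typed input, typed output, no decisions.
from dataclasses import dataclass

@dataclass
class WeatherQuery:
    city: str

@dataclass
class WeatherReport:
    city: str
    temperature_c: float

def get_weather(query: WeatherQuery) -> WeatherReport:
    # Stand-in for a real API call; returns canned data for illustration.
    return WeatherReport(city=query.city, temperature_c=21.0)

report = get_weather(WeatherQuery(city="Oslo"))
```

Nothing here distinguishes a "tool" from any function behind an API gateway. The distinction only appears when something sits above it and interprets `report`.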
An agent doesn’t just call a tool; it reasons over the response.
That small shift matters. Instead of hardcoding a sequence of API calls, you’re allowing a system to interpret results, decide what matters, and choose what to do next. The flow becomes less about predefined orchestration and more about conditional movement through information.
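The shift can be sketched as a loop where the next step is chosen from the observation rather than fixed in advance. This is a toy illustration, assuming a hypothetical `interpret` function standing in for the model's reasoning; no real framework is implied.

```python
# Conditional movement through information: the path is chosen at runtime,
# not hardcoded as a fixed sequence of API calls.

def search(query: str) -> list[str]:
    # Stand-in tool: pretend search results.
    return [f"result about {query}"]

def summarize(docs: list[str]) -> str:
    # Stand-in tool: trivial summary.
    return "; ".join(docs)

def interpret(observation) -> str:
    # In a real agent this is the model deciding what matters next.
    # Here: a toy rule -- summarize once we have any results.
    return "summarize" if observation else "search"

state = {"docs": [], "answer": None}
next_step = "search"
while state["answer"] is None:
    if next_step == "search":
        state["docs"] = search("agent tools")
        next_step = interpret(state["docs"])
    elif next_step == "summarize":
        state["answer"] = summarize(state["docs"])
```

The control flow lives in `interpret`, not in the script. That is exactly the flexibility the note describes, and exactly where determinism leaks out.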
That flexibility is the appeal, but it’s also where things start to break.
Once you introduce reasoning, you lose determinism. The same input doesn’t always produce the same path. Debugging becomes less about tracing a fixed pipeline and more about understanding a decision process that may not be fully visible. It’s easy to overcomplicate simple problems, wrapping basic API calls in layers of agent logic that don’t actually add value.
So the question isn’t just “can an agent use this tool?” but “should it?”
Frameworks like LangGraph help by reintroducing structure. They don’t remove reasoning, but they constrain it. You define states, transitions, and boundaries, which makes the system more legible. It’s closer to a graph of decisions than a freeform loop of thoughts.
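That constraint can be illustrated in plain Python: named nodes, an explicit edge table, and a bounded loop. This is a sketch of the structural idea, not LangGraph’s actual API; all names here are made up.

```python
# A graph of decisions instead of a freeform loop of thoughts:
# nodes are named states, transitions are explicit and inspectable.

def plan(state: dict) -> dict:
    state["steps"] = ["lookup", "answer"]
    return state

def lookup(state: dict) -> dict:
    state["facts"] = ["tools are typed interfaces"]
    return state

def answer(state: dict) -> dict:
    state["answer"] = state["facts"][0]
    return state

NODES = {"plan": plan, "lookup": lookup, "answer": answer}

# The edge table makes the system legible: you can read every
# possible transition without running anything.
EDGES = {"plan": "lookup", "lookup": "answer", "answer": None}

def run(start: str, state: dict, max_steps: int = 10) -> dict:
    node = start
    for _ in range(max_steps):  # hard boundary on the loop
        state = NODES[node](state)
        node = EDGES[node]
        if node is None:
            return state
    raise RuntimeError("exceeded step budget")

final = run("plan", {})
```

Reasoning still happens inside each node; the graph only constrains where it can go next, which is the legibility the note is pointing at.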
In that sense, agent systems aren’t replacing APIs. They’re sitting one layer above them, turning interfaces into something navigable rather than just callable.
The work is figuring out when that extra layer actually earns its keep.