Why Agentic AI Needs Old-School Agent Theory

A lot of "agentic AI" hype today treats LLM-powered agents as autonomous black boxes. But decades of multi-agent research have already worked through many of the underlying problems. Here's how.


Introduction

Everyone building "agentic AI" right now follows roughly the same recipe: call an LLM repeatedly, chain a few tools, and call the result autonomy. That treats agency as a byproduct of scale, and it ignores serious pitfalls: unpredictability, lack of accountability, brittle coordination. The paper argues that to build real agents, especially ones that must cooperate, negotiate, or adhere to norms, we should bring back tools from the multi-agent systems world: structured architectures, explicit communication protocols, and governance models. In short: big models plus old-school agent engineering.


The Mechanism

At its core, the authors propose marrying two traditions. On one side, large neural / foundation models provide generative reasoning and flexible behavior. On the other, the conceptual machinery of the AAMAS (Autonomous Agents and Multiagent Systems) community: explicit state (beliefs, desires, intentions), formal communication protocols, coordination mechanisms, norms, trust models, and more.

Concretely:

  • Use a BDI-style internal structure: the agent tracks what it believes about the world (beliefs), what it aims for (desires), and what it has committed to do next (intentions). That lets the agent reason about whether a plan still makes sense, and abandon or replan it if not.
  • Employ formal communication protocols instead of free-form natural-language dialogue between agents. This gives clarity: when one agent asks another for data, it is a "request for information" act, not an ambiguous chat message.
  • Add mechanism design / incentive alignment: when multiple agents coexist, design rules (or payoffs) so that individual actions align with global objectives and selfish, degenerate behaviors are avoided.
  • Optionally embed norms, roles, reputation, and institutions, so agents don't just act, they act socially: with trust, accountability, norm compliance, and shared governance.

Together, this hybrid architecture aims to produce agents that are not just "LLM loops" but socially aware actors: predictable, auditable, and cooperative.
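
As a rough illustration of the last two bullets, here is a minimal sketch, assuming a hypothetical Norm type and an exponential-moving-average trust score (none of this is an API from the paper): actions are checked against declared norms before execution, and each agent's reputation is updated from observed compliance.

# Illustrative sketch only: Norm, compliant, and the reputation update rule
# are assumptions for this post, not machinery defined in the paper.
from dataclasses import dataclass
from typing import Callable, Dict, List

@dataclass
class Norm:
    name: str
    forbids: Callable[[dict], bool]   # returns True if an action violates the norm

def compliant(action: dict, norms: List[Norm]) -> bool:
    # An action is only executable if no declared norm forbids it.
    return not any(norm.forbids(action) for norm in norms)

def update_reputation(rep: Dict[str, float], agent_id: str, complied: bool,
                      lr: float = 0.1) -> None:
    # Exponential moving average of observed compliance: a crude trust model.
    rep[agent_id] = (1 - lr) * rep.get(agent_id, 0.5) + lr * (1.0 if complied else 0.0)

# Example: a norm against overbooking, and a reputation update for agent_B.
no_overbooking = Norm("no_overbooking",
                      lambda a: a.get("seats_requested", 0) > a.get("seats_free", 0))
reputation: Dict[str, float] = {}
ok = compliant({"seats_requested": 2, "seats_free": 5}, [no_overbooking])
update_reputation(reputation, "agent_B", complied=ok)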


Comparison

Compared to vanilla LLM-agent systems:

  • More predictable behavior: Because there is explicit state and structured reasoning, you can inspect (or even debug) the chain from belief → plan → action.
  • Safer coordination in multi-agent setups: Communication protocols and incentives help avoid miscoordination, destructive interference between agents, and strategic failures.
  • Accountability, governance, and social compliance: Norms, roles, and reputation systems enable long-term cooperation, which pure LLM agents rarely support.

That said, the proposal is conceptual: the paper does not provide a full implementation, so the hybrid is untested at scale and emergent issues may arise (performance overhead, complexity explosion, protocol brittleness).


The Playground

Below are illustrative sketches of how one might combine LLM reasoning with structured agent logic:

# Pseudocode: hybrid agent skeleton
# LLM.generate_plans, check_constraints, select_best, and execute stand in for
# the LLM call, the norm/constraint filter, plan ranking, and actuators.
class Agent:
    def __init__(self):
        self.beliefs = {}        # world model
        self.desires = []        # goals
        self.intentions = []     # committed next actions

    def perceive(self, observation):
        self.beliefs.update(observation)   # fold new observations into beliefs

    def plan(self):
        # use the LLM to propose possible plans
        proposals = LLM.generate_plans(self.beliefs, self.desires)
        # filter proposals via explicit constraints / norms
        valid = [p for p in proposals if check_constraints(p, self.beliefs)]
        self.intentions = select_best(valid)

    def act(self):
        if self.intentions:                # only act on an actual commitment
            execute(self.intentions.pop(0))
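
A driver loop tying the skeleton together might look like the following; goals_satisfied and environment are assumed placeholders, in keeping with the pseudocode above.

# Hypothetical sense-plan-act loop for the skeleton above
agent = Agent()
agent.desires.append("book_conference_travel")

while not goals_satisfied(agent.beliefs, agent.desires):
    agent.perceive(environment.observe())   # update beliefs from the world
    agent.plan()                            # LLM proposes, constraints filter
    agent.act()                             # execute one committed step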

Example of structured inter-agent request

Agent A -> Agent B:
REQUEST_INFO: { "what_is": "arrival_time", "entity": "guest_JohnDoe" }
Agent B -> Agent A:
INFORM: { "arrival_time": "2025-12-02T15:30Z" }
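
One plausible way to encode such an exchange in code is a small set of performatives plus a typed message envelope, loosely in the spirit of FIPA-ACL speech acts; the class and field names here are illustrative, not prescribed by the paper.

from dataclasses import dataclass, field
from enum import Enum, auto

class Performative(Enum):
    REQUEST_INFO = auto()   # ask another agent for a piece of information
    INFORM = auto()         # assert a piece of information as believed true

@dataclass
class Message:
    sender: str
    receiver: str
    performative: Performative
    content: dict = field(default_factory=dict)

# The exchange above, as typed messages rather than free-form chat:
ask = Message("agent_A", "agent_B", Performative.REQUEST_INFO,
              {"what_is": "arrival_time", "entity": "guest_JohnDoe"})
answer = Message("agent_B", "agent_A", Performative.INFORM,
                 {"arrival_time": "2025-12-02T15:30Z"})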

Example prompt for planning with constraints

You are scheduling a flight for user X:
- Departure city: Amsterdam
- Destination: Paris
- Avoid layovers via Rome
Return flights must:
  - arrive no earlier than the last day of the conference
  - cost less than 300 EUR

Plan a valid flight schedule or explicitly reply: NO_VALID_FLIGHT_FOUND
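
On the checking side, whatever flights the LLM proposes can be accepted or rejected by a deterministic validator before the agent commits. The flight fields and the conference end date below are placeholders for this sketch, not output from any real system.

from datetime import datetime

CONFERENCE_END = datetime.fromisoformat("2025-12-05T00:00:00")  # placeholder date

def valid_return_flight(flight: dict) -> bool:
    # Mirrors the prompt's hard constraints; reject anything that fails a check.
    arrives = datetime.fromisoformat(flight["arrival_time"])
    return (
        flight["price_eur"] < 300
        and "Rome" not in flight.get("layovers", [])
        and arrives >= CONFERENCE_END
    )

proposals = [  # hypothetical LLM output, already parsed into structured records
    {"id": "AF1234", "price_eur": 210, "layovers": [], "arrival_time": "2025-12-05T18:00:00"},
    {"id": "AZ7890", "price_eur": 180, "layovers": ["Rome"], "arrival_time": "2025-12-06T09:00:00"},
]
accepted = [f for f in proposals if valid_return_flight(f)]
print(accepted or "NO_VALID_FLIGHT_FOUND")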

Insights

  • This hybrid approach offers theoretical guardrails that pure LLM agents lack. For critical multi-step or multi-agent tasks (scheduling, governance, negotiation, group planning), it could significantly reduce strange emergent failures.
  • Implementation won't be trivial. You'll need plumbing: state management, protocol enforcement, constraint solvers, maybe even a lightweight agent runtime. That adds engineering overhead.
  • It may not be worth it for simple or toy tasks. But for multi-agent coordination, safety-sensitive automation, or long-lived assistants, this seems like a credible path toward robustness and accountability.

Conclusion

The current wave of "agentic AI" risks reducing agency to "lots of LLM calls." This paper reminds us that real agency entails structure: beliefs, commitments, norms, coordination. If we want reliable, cooperative, accountable agents, we'd better dust off the old multi-agent playbook. For serious systems, this hybrid architecture deserves careful exploration.

