You should also try to make the context query a first-class primitive.
The context-query parameter can be a natural-language instruction for how to compact the current context before it's passed to the subagent.
When invoking, you can use values like "empty" (nothing, start fresh), "summary" (summarize everything), "relevant information from a web designer's PoV" (a specific one: extract only what's relevant to that role), "bullet points about X", etc.
This way the LLM can decide what's relevant and express it tersely, and the compaction itself won't clutter the current context – it's handled by a compaction subagent in isolation and discarded on completion.
What makes it first class is that it has to be a built-in tool with access to the context (the client itself), i.e. it can't be implemented by an isolated MCP server, because you want to avoid rendering the context as an input parameter during the tool call – you just want a short query.
I.e., you could add something like depends_on, also based on context queries: a map whose keys are the subagent conversation IDs that block the handed-over task, and whose values are context queries describing what to extract from each blocker and inject.
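A minimal Python strawman of the suggestion (every name here is hypothetical, not the project's API):

```python
# Hypothetical sketch: a spawn tool with a first-class context query.
# The harness (the client, which owns the context) resolves context_query;
# the model only ever emits the short query string.
from dataclasses import dataclass, field

@dataclass
class SpawnRequest:
    task: str
    # Natural-language compaction instruction, e.g. "empty", "summary",
    # "relevant information from a web designer's PoV", "bullet points about X"
    context_query: str = "empty"
    # Blocker conversation id -> query for what to extract once it finishes,
    # e.g. {"conv-42": "the final schema, verbatim"}
    depends_on: dict[str, str] = field(default_factory=dict)

def compact(context: list[str], query: str) -> list[str]:
    # Stand-in: a real harness would run a compaction subagent on `query`
    # in isolation and discard its transcript on completion.
    return [] if query == "empty" else context[-5:]

def spawn(req: SpawnRequest, parent_context: list[str]) -> list[str]:
    seed = compact(parent_context, req.context_query)
    # depends_on entries are resolved later, as each blocker finishes.
    return seed  # the child agent starts from this seeded context
```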
Thank you for the suggestion, I will explore this in the next iteration. I'm learning how to translate how humans do context management into how agents should do it.
Imposing a strict, discrete topology—like a tree or a DAG—is the only viable way to build reliable systems on top of LLMs.
If you leave agent interaction unconstrained, the probabilistic variance compounds into chaos. By encapsulating non-deterministic nodes within a rigidly defined graph structure, you regain control over the state machine. Coordination requires deterministic boundaries.
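A toy sketch of what I mean (Python, illustrative only): the LLM step inside a node is free to vary, but the node set and legal transitions are fixed, so every run walks the same state machine.

```python
# Non-deterministic nodes, deterministic boundaries: the model proposes
# the next node, but the harness clamps it to a fixed edge set.
ALLOWED = {
    "plan":     {"research", "write"},
    "research": {"write"},
    "write":    set(),  # terminal
}

def run(llm, node: str = "plan") -> None:
    while ALLOWED[node]:
        proposed = llm(node)  # non-deterministic step, contained in the node
        legal = ALLOWED[node]
        # The model chooses among existing edges; it can never invent one.
        node = proposed if proposed in legal else next(iter(legal))
```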
I have yet to read this article in full, but as an amateur AST-transformation nerd, I love trees! Kinda related, but I've been trying to figure out how to generalize the lessons learned from this experiment in autogenerating massive bilingual dictionary and phrasebook datasets: https://youtu.be/nofJLw51xSk
Into a general-purpose markup language + runtime for multi-step LLM invocations, although efforts so far have gotten nowhere. I have some notes on my GitHub profile readme if anyone's curious: https://github.com/colbyn
Here's a working example: https://github.com/colbyn/AgenticWorkflow
(I really dislike the ‘agentic’ term since in my mind it’s just compilers and a runtime all the way down.)
But that’s more serial, procedural work; what I want is full-blown recursion, in some generalized way (and without the Liquid templating hacks I keep resorting to): deeply nested LLM invocations, akin to how my dataset-generation pipeline works.
PS: I also really dislike prompt text in source code. I prefer to factor it out into standalone prompt files, using the XML format in my case.
This kind of research is underrated. I have a strong feeling that these kinds of harness improvements will lead to solving whole classes of problems reliably, and matter just as much as model training.
Not exactly a surprise that Claude did this out of the box with minimal prompting, considering they’ve presumably been RLing the hell out of it for agent teams: https://code.claude.com/docs/en/agent-teams
Why can’t you just give access to all tools to all subagents? That’s more general than what you’ve done. Surely it can figure out how to backtrack or keep context?
But I do like your approach and I feel this is the next step.
I've been playing with a closely related idea of treating the context as a graph, inspired by the KGoT paper: https://arxiv.org/abs/2504.02670
I call this "live context" because it's the living brain of my agents.
Neat concept though, would be cool to see some tests of performance on some tasks.
Historically, Claude Code used sequential planning with linear dependencies via tools like TodoWrite and TodoRead. There are open-source MCP equivalents of TodoWrite.
I’ve found both the open-source TodoWrite and building your own TodoWrite with a backing store surprisingly effective for planning, and for avoiding the developer-defined roles and developer-defined plans/workflows that the author calls out in the blog, for AI-SRE use cases. It also stops the agent from looping indefinitely.
Cord is a clever model and protocol for tree-like dependencies, using Spawn and Fork for clean context and prior context, respectively.
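A bare-bones illustration of the "build your own TodoWrite with a backing store" idea (file name and schema are made up):

```python
# Minimal TodoWrite-style tool with a persistent backing store. The fixed
# lifecycle (pending -> done, never re-opened) is what stops infinite loops.
import json, pathlib

STORE = pathlib.Path("todos.json")  # hypothetical backing store

def _load() -> list[dict]:
    return json.loads(STORE.read_text()) if STORE.exists() else []

def _save(todos: list[dict]) -> None:
    STORE.write_text(json.dumps(todos, indent=2))

def todo_write(items: list[str]) -> None:
    _save(_load() + [{"task": t, "status": "pending"} for t in items])

def todo_done(task: str) -> None:
    todos = _load()
    for t in todos:
        if t["task"] == task:
            t["status"] = "done"  # one-way transition
    _save(todos)

def todo_read() -> list[dict]:
    return [t for t in _load() if t["status"] == "pending"]
```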
Claude basically does this now (including deciding when to use subagents, tools, and agent teams). I built a similar thing a month ago and saw the writing on the wall.
We built something like this by hand without much difficulty for a product concept. We'd initially used LangGraph but we ditched it and built our own out of revenge for LangGraph wasting our time with what could've simply been an ordinary Python function.
Never again committing to any "framework", especially when something like Claude Code can write one for you from scratch exactly for what you want.
We have code on demand. Shallow libraries and frameworks are dead.
There's a reason industries have standards. If you replace established libraries with vibecoded alternatives you will have:
- less documentation
- less tested code
- no guarantees it's doing the right thing
- a dice roll for whether it works this time on this project
- a bad time in general
The spawn/fork primitives are interesting but I think the harder problem in multi-agent coordination isn't the topology — it's the overhead.
In my experience with multi-agent systems, about 40% of total tokens go to coordination rather than actual task completion once you get past 3-4 agents. Status checking, conflict resolution, and duplicate work detection dominate. A tree structure helps with authority (parent delegates to children) but doesn't solve the fundamental problem of agents doing redundant work because they can't observe each other's progress in real-time.
The "context query" suggestion in this thread is the right instinct. What you really want is something like claim-before-act: agents announce what they're about to work on before starting, so others can avoid duplication. That's a coordination primitive that matters more than topology.
The other missing piece: what happens when agent count exceeds the coordination capacity of the system? Trees scale better than flat structures, but even trees break down when you have 5-6 leaf agents all needing to share state. At that point you need something closer to structured channels or topic-based routing, not just parent-child relationships.
Remarkably similar to humans.
If context window is infinite and performance isn't constrained, the subagent stuff isn't necessary. Until then, harnesses are for context management and parallelism.
In the short run, I've found the OpenAI agents one to be the best.
This approach seems interesting, but in my experience, a single "agent" with proper context management is better than a complicated agent graph. Dealing with hand-off (+ hand back) and multiple levels of conversations just leaves too much room for critical information to get siloed.
If you have a narrow task that doesn't need full context, then agent delegation (putting an agent or inference behind a simple tool call) can be effective. A good example is to front your RAG with a search() tool with a simple "find the answer" agent that deals with the context and can run multiple searches if needed.
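Roughly this shape (the llm and rag_lookup callables stand in for whatever model client and retriever you use):

```python
# Agent delegation: a whole "find the answer" agent behind one tool call.
# The outer agent sees only search(query) -> str; the inner agent's retries
# and extra retrieval rounds never leak into the outer context.
def search(query: str, llm, rag_lookup, max_rounds: int = 5) -> str:
    messages = [{"role": "user", "content": f"Find the answer to: {query}"}]
    for _ in range(max_rounds):  # the inner agent may search repeatedly
        reply = llm(messages)
        if reply.get("tool_call"):  # model asked for another retrieval
            hits = rag_lookup(reply["tool_call"]["query"])
            messages.append({"role": "tool", "content": hits})
        else:
            return reply["content"]  # only the final answer escapes
    return "no answer found"
```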
I think the PydanticAI framework has the right approach of encouraging agent delegation and a sequential workflow first, and trying to steer you away from graphs [0].
[0]: https://ai.pydantic.dev/graph/
I wonder if the “spawn” API is ever preferable over “fork”. Do we really want to remove context if we can help it? There will certainly be situations where we have to, but then what you want is good compaction for the subagent. “Clean-slate” compaction seems like it would always be suboptimal.
Is there any reason to explicitly have this binary decision, instead of a single primitive where the parent dynamically defines the child's context? That would naturally yield spawn, fork, or anything in between.
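Purely illustrative (my framing, not the article's): make the context filter itself the parameter, and spawn/fork fall out as the two endpoints.

```python
# One delegation primitive; spawn and fork are just the extreme filters.
from typing import Callable

Message = dict
ContextFilter = Callable[[list[Message]], list[Message]]

spawn_filter: ContextFilter = lambda msgs: []          # clean slate
fork_filter:  ContextFilter = lambda msgs: list(msgs)  # full copy

def delegate(task: str, parent_msgs: list[Message],
             ctx: ContextFilter = spawn_filter) -> list[Message]:
    # Anything between "nothing" and "everything" is just another filter.
    return ctx(parent_msgs) + [{"role": "user", "content": task}]
```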
This is a vibeslop project with a vibeslop write-up.
Trees? Trees aren't expressive enough to capture all dependency structures. You either need directed acyclic graphs or general directed graphs (for iterative problems); a diamond dependency, sketched below, is the classic structure a tree can't express.
Based on the terminology you use, it seems you've conflated the graphs used in task scheduling with the trees used in OS process management. The only reason process trees are trees is OS-specific (the need for a single initializing root process, the need to propagate process properties safely). But here you're just solving a generic problem; trees are the wrong data structure.
- You have no metrics for what this can do
- No reason given for why you use trees (the text just jumps from graph to trees at one point)
- None of the concepts are explained, but it's clearly just the UNIX process model applied to task management (and you call this 60-year-old idea "genuinely new"!)
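To make the tree-vs-DAG point concrete, a hypothetical diamond dependency; no tree can represent it, because "D" has two parents:

```python
# Diamond dependency: a DAG, but impossible as a tree (max one parent each).
deps = {
    "A": [],          # fetch data
    "B": ["A"],       # analyze
    "C": ["A"],       # summarize
    "D": ["B", "C"],  # merge both results -> two parents
}
```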
The tasks tool [1] is designed to validate a DAG as input; its non-blocked tasks become cheap parallel subagent spawns using Erlang/OTP.
It works quite well. The only problem I’ve faced is getting it to break down tasks using the tool consistently. I guess it might be a matter of experimenting further with the system prompt.
[1]: https://github.com/matteing/opal
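The shape of that validation step, sketched in Python rather than their Erlang (illustrative only):

```python
# Validate a task DAG, then surface every non-blocked task for parallel spawn.
def validate(dag: dict[str, list[str]]) -> None:
    """Reject unknown dependencies and cycles (DFS with a visiting mark)."""
    state: dict[str, int] = {}  # 0 = visiting, 1 = done
    def visit(node: str) -> None:
        if state.get(node) == 1:
            return
        if state.get(node) == 0:
            raise ValueError(f"cycle through {node!r}")
        state[node] = 0
        for dep in dag[node]:
            if dep not in dag:
                raise ValueError(f"unknown dependency {dep!r}")
            visit(dep)
        state[node] = 1
    for n in dag:
        visit(n)

def ready(dag: dict[str, list[str]], done: set[str]) -> list[str]:
    """Tasks whose dependencies are all complete -> spawn these in parallel."""
    return [t for t, deps in dag.items()
            if t not in done and all(d in done for d in deps)]
```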
Opencode getting fork was such a huge win. It's great to be able to build something out, then keep iterating by launching new forks that still have plenty of context space available, but which saw the original thing get built!