I’ve been developing an open-source version of something similar[1] and have used it quite extensively (well over 1k PRs)[2]. I’m definitely a believer in the “prompt to PR” model. It is very liberating to not have to think about managing agent sessions. It seems you have built a lot of useful tooling (e.g., session videos) around this core idea.
Couple of learnings to share that I hope could be of use:
1) Execution sandboxing is just the start. For any enterprise usage you want fairly tight network egress control as well, to limit the chances of accidental leaks or malicious exfiltration if there's any risk of untrusted material getting into the model context. Speaking as a decision maker at a tech company: we do actually review things like this when evaluating tools.
2) Once you have proper network sandboxing, you can secure credentials much better: give the agent only dummy surrogates and swap them for the real creds on the way out.
3) Sandboxed agents with automatic provisioning of the workspace from git can be used for more than just development tasks. In fact, it might be easier to find initial traction with more constrained, and thus more predictable, tasks. E.g., “ask my codebase” or “debug CI failures”.
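The credential-surrogate idea in (2) could be sketched roughly like this (the `SURROGATES` mapping and env var names are hypothetical; in practice the rewrite would live inside an egress proxy that the sandbox is forced through, e.g. a mitmproxy addon):

```python
import os

# Hypothetical mapping: placeholder values the agent sees inside the
# sandbox -> env var holding the real secret, known only to the proxy.
SURROGATES = {
    "sk-dummy-llm-key": "REAL_LLM_API_KEY",
    "ghp_dummy_token": "REAL_GITHUB_TOKEN",
}

def swap_credentials(headers: dict) -> dict:
    """Rewrite outbound request headers, replacing dummy surrogates
    with the real credentials on the way out of the sandbox."""
    rewritten = {}
    for name, value in headers.items():
        for dummy, env_var in SURROGATES.items():
            if dummy in value:
                value = value.replace(dummy, os.environ[env_var])
        rewritten[name] = value
    return rewritten
```

The agent never sees the real values, and only allowlisted destinations pass through the proxy at all, which is what makes the swap safe.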
I love the idea of emailing agents like we email humans! Thank you for sharing your learnings:
1. Network constraints vary quite a bit from one enterprise customer to another, so right now this is something we handle on a case-by-case basis with them.
2. We came to the same conclusion. For sensitive credentials like LLM API keys, we generate ephemeral keys so the real keys never touch the sandbox.
3. Totally right, we support constrained tasks too (ask mode, automated CI fixes). We've gone back and forth on whether to go vertical-first or stay generic. We're still figuring out where the sweet spot is. The constrained tasks are more reliable today, but the open-ended ones are where teams get the most leverage.
Edit: just noticed this is a semi-duplicate of the question at https://news.ycombinator.com/item?id=47723506 so rephrasing my question: will you have computer use, and will you have a self-hosted runners option? (you being just the control plane / task orchestrator, which is apparently the hardest problem...)
Additional question: what types of sandboxes do you use? (just Docker, or also Firecracker, etc.?)
Original comment:
Congrats on the launch!
What's the benefit over Cursor cloud agents with computer use? (other than preventing vendor lock-in?)
We already support computer use out of the box (Linux sandboxes). Self-hosted runners are not available yet, but Twill is built on a runtime-agnostic layer (see https://github.com/TwillAI/agentbox-sdk) so it is feasible!
Anthropic recently killed the ability for third parties to use the Claude Code subscription, and it's assumed they're subsidising that price heavily. Which is fine, but it's a good reminder of the vendor lock-in risk. One policy change and your workflow breaks. Twill is agent-agnostic (Claude Code, Codex CLI, OpenCode), so you're not betting on any single vendor's pricing decisions.
On the cost for solo devs, yeah, if you're one person running one agent at a time on your laptop, the sub is probably the better deal today. No argument there. The cloud agent model starts to make sense when you want to fire off multiple tasks in parallel.
Yes, the difference is that Twill launches dedicated infra in each sandbox for each task. This means you can work on multiple tasks that each require a DB migration, for instance.
Also you can fire and forget tasks (my favorite) and don't have to keep your laptop running at night.
See also Cowork and other upcoming Anthropic features.
See also Show HN; this exact product is frequently shown there as a GitHub link.
The paradigm shift in AI means that what you are making is (1) filling a gap until the primaries implement it (most have it in their pipeline, if not shipped already), and (2) easy to replicate with said AI using my preferred tech stack.
Cowork does not seem to be focused on engineering, but we are fully expecting Anthropic to catch up in this category.
What Anthropic can't offer is to let you use Codex, or combine it with Claude Code. That is why we think non-AI-lab players have a say in this market.
To your last point, as always there is a buy-vs-build tradeoff, which ultimately comes down to focusing on your core business, which we think still remains important in the AI era.
My comment about Cowork is more about pointing out a different feature set that will cross over with Code. For example, they have the Task-related things as an affordance; Code has this coming.
I believe there is a difference between an open-source framework and a product. You would still have to manage and scale your infra, build the integration layer around it to make it accessible where your teams are, fix bugs, etc.
I am not saying that build is always the wrong choice, but the tradeoff did not disappear, imo.
I'm surprised how much you push back instead of digging in to understand more. I have heard mentor time is way down at YC since they stopped doing things that don't scale. You could be asking questions to better understand where you'd fit in with users and how to better position yourself. We are your market; how do we see the world now, post-AI?
24/7 running coding agents are pretty clearly the direction the industry is going now. I think we'll need either on-premises or cloud solutions, since obviously if you need an agent to run 24/7 then it can't live on your laptop.
Obviously cloud is better for making money, and some kind of VPC or local cloud solution is best for enterprise, but perhaps for individual devs, a self-hosted system on a home desktop computer running 24/7 (hybrid desktop / server) would be the best solution?
> 24/7 running coding agents are pretty clearly the direction the industry is going now.
This assertion needs some support for those of us that don't have a macro insight into the industry. Are you seeing this from within FAANG shops? As a solo developer? What? Honest question.
I'm speaking from my daily experience. Sometimes I don't want to close my laptop before going to bed because there are still 1-2 tasks ongoing on my AI kanban board, so I just leave my laptop open (locked but not suspended) so that the agents keep working for a while. I don't even have things all that automated.
I anticipate that once I have some more complex agentic scaffolds set up to do things like automatically explore promising directions for the project, then leaving the AI system on overnight becomes a necessity.
The core issue for me is, I don't want to trust someone else with my code, or run my stuff on their computers. I don't see serious enterprise organizations offloading something as critical to security outside their own network perimeter.
For a solo dev running one task at a time, a beefy desktop overnight is totally viable. We see a lot of this with the Mac Mini hype.
Cloud starts to matter when you want to (a) run a swarm of agents on multiple independent tasks in parallel, (b) share agents across a team, or (c) not worry about keeping a machine online.
I would point out that a beefy desktop is probably faster at compiling code than a typical cloud instance simply due to more CPU performance. So maybe up to 10-ish concurrent agents it's faster to use a local desktop than a cloud instance, and then you start to get into the territory where multiple agents are compiling code at the same time, and the cloud setup starts to win. (That's assuming the codebase takes a while to compile and pegs your CPU at 100% while doing so. If the codebase is faster to compile or uses fewer threads, then the breakeven agent count is even higher.)
Other than that, I agree with what you said. I don't know what the tradeoffs for local on-premises and cloud agents are in terms of other areas like convenience, but I do think that scalability in the cloud is a big advantage.
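That breakeven argument can be put as a toy back-of-the-envelope model (all numbers below are made-up assumptions for illustration, not measurements):

```python
# Assumptions: a desktop builds the project in 60 s with one build
# running; N concurrent builds contend for the same cores, so each
# takes roughly N * 60 s. Each cloud agent gets its own (slower)
# dedicated instance that builds in a flat 200 s regardless of N.
LOCAL_BASE_S = 60
CLOUD_BASE_S = 200

def local_build_seconds(concurrent_agents: int) -> int:
    # Crude contention model: builds peg every core and share the box.
    return LOCAL_BASE_S * concurrent_agents

breakeven = next(
    n for n in range(1, 1000) if local_build_seconds(n) > CLOUD_BASE_S
)
print(breakeven)  # 4: past ~3 concurrent builds, per-build time on
                  # the desktop exceeds the cloud's flat 200 s
```

With a faster or less parallel build, `CLOUD_BASE_S` effectively grows relative to contention, pushing the breakeven agent count higher, which matches the point above.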
Totally right on the compile time. CIs have the same bottleneck, and the ecosystem is working on fixing this (faster CPUs, better caching) in both coding agents and CI to improve overall velocity.
Jules is similar to Twill with the following differences:
- Twill is CLI-agnostic, meaning you can use Claude Code, Codex or Gemini. Jules only works with Gemini.
- We focus on the delegation experience: Twill has native integrations with your typical stack, like Slack or Linear. The PRs come back with proof of work, such as screenshots or videos.
Claude managed agents is a general-purpose hosted runtime for Claude, while Twill focuses on SWE tasks.
And so the SWE workflow is pre-built (research, planning, verification, PR, proof of work). Twill is also agnostic to the agent, so you can use Codex, for instance. Additionally, you have more flexibility on sandbox sizing with Twill.
Yes, this is the pass@k metric from code generation research. I found the relevant paper, "Evaluating Large Language Models Trained on Code" (Chen et al., 2021), which introduced the metric.
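For reference, the unbiased pass@k estimator from that paper (n samples per task, c of which pass) is 1 - C(n-c, k)/C(n, k), computed stably as a running product:

```python
def pass_at_k(n: int, c: int, k: int) -> float:
    """Unbiased pass@k from Chen et al. (2021): probability that at
    least one of k samples drawn from n total (c correct) passes."""
    if n - c < k:
        return 1.0  # too few failures to fill all k slots
    prob_all_fail = 1.0
    for i in range(n - c + 1, n + 1):
        prob_all_fail *= 1.0 - k / i
    return 1.0 - prob_all_fail

print(pass_at_k(10, 3, 5))  # ~0.917: 5 tries at a 30% per-sample rate
```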
On the Twill web app, you can run the same task across different agents and multiple attempts (each in its own sandbox). Then you pick the best result. This is super handy for UI work where you can open the live preview for each attempt and compare.
Next step for us is adding a final pass where an agent evaluates the results and combines the best parts into one PR.
I built an internal version of this for my workplace.
Something that will be very useful, but most likely harder for you, is code search: having a proper index over hundreds of code repos so the agent can find where code is called from, or work out what the user means when they use an acronym or a slightly incorrect name.
It's quite nice to use and I'm sure someone will make a strong commercial offering. Good luck
I agree and that is why I think monorepos are making a comeback.
That said, there are workarounds, like cloning all the repos and enabling LSP (coding CLIs added that feature), or using a dedicated solution for codebase indexing and adding a skill/MCP.
Super fast models spamming grep commands are also fun to watch!
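The "clone everything and let the model grep" workaround boils down to something like this (the root layout and extensions are illustrative; a real setup would use ctags, LSP, or a dedicated code-search index instead):

```python
import pathlib
import re

def find_references(root: str, symbol: str, exts=(".py", ".ts", ".go")):
    """Naive cross-repo search: scan every cloned repo under `root`
    for whole-word occurrences of `symbol`."""
    pattern = re.compile(rf"\b{re.escape(symbol)}\b")
    hits = []
    for path in pathlib.Path(root).rglob("*"):
        if path.suffix in exts and path.is_file():
            lines = path.read_text(errors="ignore").splitlines()
            for lineno, line in enumerate(lines, start=1):
                if pattern.search(line):
                    hits.append((str(path), lineno, line.strip()))
    return hits
```

This scales surprisingly far when the agent is cheap and fast, but it can't answer "what did the user mean by this acronym," which is where a proper semantic index earns its keep.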
We run a copy of this in the same VPC. Monorepos would definitely help, but that's not the structure we have. I didn't want to rely on API limits (or stability) at GitHub for such a core feature.
Using this we've had agents find dead APIs across multiple repos that can be cleaned up and the like. Very useful.
Similar, but reusing lab-native CLIs like Claude Code or Codex, which the labs perform RL on. And so in the long run, we believe this approach wins over custom harnesses.
We’re focused on SWE use cases. Code is nice because there’s already a built-in verification loop: diffs, tests, CI, review, rollback. But you do quickly get to a state where the agent needs to take a risky action (a DB migration, or an infra operation). And this is where the permission features from the agents are handy: allowlists, auto mode, etc. So you approve/reject only the high-risk actions.
And I think this risk model is valid for both technical and non-technical use cases
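A minimal sketch of that risk-gated model (the allowlist entries here are hypothetical; real agent CLIs expose this through their own permission configs):

```python
import shlex

# Hypothetical policy: exact-prefix allowlist runs unattended;
# everything else is held for human approval. Default-deny keeps risky
# actions (migrations, infra changes) behind an approve/reject step.
SAFE_PREFIXES = [
    ["git", "status"], ["git", "diff"], ["pytest"], ["npm", "test"],
]

def needs_approval(command: str) -> bool:
    """Return True if the command falls outside the allowlist."""
    argv = shlex.split(command)
    return not any(argv[: len(p)] == p for p in SAFE_PREFIXES)
```

The important design choice is the default-deny: the agent can iterate freely on the read-only/verification loop, while anything with side effects outside the sandbox surfaces to a human.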
[1] https://airut.org [2] https://haulos.com/blog/building-agents-over-email/
https://cursor.com/blog/agent-computer-use
Or the existing Claude Code Web?
it's a nonbinary decision now
Google has a free, open source take on what you are building, looks more mature as well
https://googlecloudplatform.github.io/scion/overview/
One question, do you have plans for any other forms of sandboxing that are a little more "lightweight"?
Also, how do you add more agent types? Do you support just ACP?
For the lightweight sandbox, can you give an example?
Currently we support the main coding CLIs; ACP support is not shipped yet.
Are there benchmarks out there that back this claim?
This is what enables Twill to self verify its work before opening a PR
Curious to know how you implemented it in house.