Don't waste your time and money on funding bug bounties or "getting audits done". Your staff will add another big security flaw just the next day, back to square one.
My original message was more positive, but after looking into more of the context, I am a bit more pessimistic.
Now I must admit I am a little concerned by the fact that the vulnerability reporters tried multiple times to contact you, to no avail. This is not a good look at all, and I hope you can fix it ASAP as you mention.
I respect dax from the days of the SST framework, but this is genuinely such a bad look, especially when the issue was reported on 2025-11-17 and met with multiple "no responses" after repeated attempts to contact the maintainers...
Sure, they reported the bug now, but who knows what might have already happened. OpenCode was the most famous open-source coding agent, so plenty of security researchers must have been watching it; from my understanding, I can see a genuine possibility that black-hat adversaries were already exploiting this in the wild.
I think this means we should probably run these agents in gVisor or with similarly proper sandboxing (see the sketch below).
Even right now, we don't know how many more such bugs persist that could lead to RCE.
Dax, this attention will have every adversary hunting for even more bugs and RCE vulnerabilities as we speak, so in my opinion you only have a very finite window. I hope things can move as fast as possible now to make OpenCode safer.
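Regarding the gVisor suggestion, a minimal sketch of what that could look like, assuming gVisor's runsc runtime is already installed and registered with Docker (the image and mounts here are only examples):

    # run the agent inside a gVisor-sandboxed container
    docker run --rm -it --runtime=runsc \
      -v "$PWD:/work" -w /work \
      docker.io/library/node:22 bash

The agent then only sees /work, behind gVisor's user-space kernel, rather than your whole home directory.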
the email they found was from a different repo and not monitored. this is ultimately our fault for not having a proper SECURITY.md on our main repository
the issue that was reported was fixed as soon as we heard about it - going through the process of learning about the CVE process, etc now and setting everything up correctly. we get 100s of issues reported to us daily across various mediums and we're figuring out how to manage this
i can't really say much beyond this is my own inexperience showing
I also just want to sympathize with the difficulty of spotting the real reports from the noise. For a time I helped manage a bug bounty program, and 95% of issues were long reports with plausible titles that ended up saying something like "if an attacker can access the user's device, they can access the user's device". Finding the genuine ones requires a lot of time and constant effort. Though you get a feel for it with experience.
edit: I agree with the original report that the CORS fix, while a huge improvement, is not sufficient since it doesn't protect from things like malicious code running locally or on the network.
edit2: Looks like you've already rolled out a password! Kudos.
I've been thinking about using LLMs to help triage security vulnerabilities.
If done in an auditably unlogged environment (with limited output to the company, just saying "escalate"), it might also encourage people to share vulns they are worried about putting online.
Thanks for providing additional context. I appreciate that you are admitting fault where it lies; it's human to make errors, and from your response I have full faith that OpenCode will learn from them.
I might try OpenCode once it gets patched, or after watching the community for a while. Wishing you the best of luck for a more secure future for opencode!
I learned this the hard way: if you send multiple emails with seemingly very important subjects and messages and get no reply at all, the receiver most likely hasn't received your email, rather than completely ghosting you.
Everyone should know this and at least try a different channel of communication before taking further action, especially when disclosing a vulnerability.
Fixed? You just changed it to be off by default, shifting the security burden onto your users. It's not fixed; it's buried, with minimal mitigation, and you give your users no indication that activating it will make their machine vulnerable. Shady.
I am also baffled at how long this vulnerability was left open, but I’m glad you’re at least making changes to hopefully avoid such mistakes in the future.
Just a thought: have you tried triaging these reported issues via LLMs, or constantly running an LLM to check the codebase for gaping security holes? Would that be in any way useful?
Anyway, thanks for your work on opencode and good luck.
I've been curious how this project will grow over time; it seems to have taken the lead as the first open-source terminal agent framework/runner, and it definitely seems to be growing faster than any organization would/could/should be able to manage.
It really seems like the main focus of the project should be on how to organize the work of the project, rather than on the specs/requirements/development of the codebase itself.
What are the general recommendations the team has been getting for how to manage the development velocity? And have you looked into various anarchist organizational principles?
Good luck, and thank you for eating the accountability sandwich and being up front about what you're doing. That's not always easy to do, and it's appreciated!
Something is seriously wrong when we say "hey, respect!" to a company that develops an unauthenticated RCE feature, one that should stand out glaringly [0] during any internal security analysis, in software they license in exchange for money [1], and then fumbles and drops the ball on security reports when someone does their due diligence for them.
If this company wants to earn any respect, they need at least to publish a post-mortem on how their software development practices allowed such a serious issue to ship.
This should come as a given, especially seeing that this company already works on software related to security (OpenAuth [2]).
It's called "the world wide web" and it works on the principle that a webpage served by computer A can contain links that point to other pages served by computer B.
Whether that principle should have been sustained in the special case of "B = localhost" is a valid question. I think the consensus from the past 40 years has been "yes", probably based on the amount of unknown failure possibilities if the default was reversed to "no".
OWASP A01 addresses this: violation of the principle of least privilege, commonly known as deny by default, where access should only be granted for particular capabilities, roles, or users, but is available to anyone.
Indeed, a deny-by-default policy results in unknown failure possibilities; that's inherent to safety.
Many people seem to be running OpenCode and similar tools on their laptops with basically no privilege separation, sandboxing, or fine-grained permission settings in the tool itself. This tendency is reflected in how many plugins are designed: the default assumption is that the tool runs unrestricted on the computer next to some kind of IDE, since many authentication callbacks go to some port on localhost, with a fallback of parsing the right parameter out of the callback URL. Also, for some reason these tools tend to be relative resource hogs even while waiting for a reply from a remote provider. I mean, I am glad they exist, but it all seems very rough around the edges compared to how much attention these tools get nowadays.
Please run at least a dev container or a VM for these tools. You can use RDP/VNC/Spice or even just the terminal with tmux to work within the confines of the container/machine. You can mirror some stuff into the container/machine with SSHFS, Samba/NFS, or 9p. You can use all the traditional tools, filesystems, and such for reliable snapshots. Push the results separately, or don't give the agent direct unrestricted git access.
It's not that hard. If you are super lazy, you can also pay ~$5/month for a VPS and run the workload there.
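As a concrete starting point, here is a minimal sketch of the container variant, assuming podman and mounting only the project directory (the image and flags are illustrative):

    podman run --rm -it \
      --userns=keep-id \
      -v "$PWD:/work:Z" -w /work \
      docker.io/library/debian:stable-slim bash
    # inside: install the agent and let it loose on /work only;
    # review and push from the host afterwards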
If you are using VSCode against WSL2 or Linux and you have installed Docker, managing devcontainers is very straightforward. What I usually do is to execute "Connect to host" or "Connect to WSL", then create the project directory and ask VSCode to "Add Dev Container Configuration File". Once the configuration file is created, VSCode itself will ask you if you want to start working inside the container. I'm impressed with the user experience of this feature, to be honest.
Working with devcontainers from the CLI wasn't very difficult [0], but I must confess that I only tested it once.
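In case it's useful, the CLI flow I tried looked roughly like this, using the reference `@devcontainers/cli` implementation from [0] (assuming Node is installed):

    npm install -g @devcontainers/cli
    # from a project containing .devcontainer/devcontainer.json:
    devcontainer up --workspace-folder .
    devcontainer exec --workspace-folder . bash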
I have a pretty non-standard setup, but with very standard tools. I didn't follow any specific guide. I have ZFS as the filesystem, and for each VM a ZVOL or dataset + raw image, with libvirt/KVM on top. This can be done fairly straightforwardly with e.g. Debian GNU/Linux. You can probably do something like it in WSL2 on Windows (although that doesn't really sandbox much), or with Docker/Podman, or with VirtualBox.
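A rough sketch of that setup; the pool and VM names are made up for illustration:

    # one zvol per VM, used as the raw disk
    zfs create -V 40G tank/vm-dev
    virt-install --name dev --memory 8192 --vcpus 4 \
      --disk path=/dev/zvol/tank/vm-dev \
      --cdrom debian-12-netinst.iso \
      --osinfo detect=on,require=off
    # cheap, reliable snapshot before letting an agent loose
    zfs snapshot tank/vm-dev@clean
    # roll back if the agent wrecks something
    zfs rollback tank/vm-dev@clean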
If you want a dedicated virtual host, Proxmox seems to be pretty easy to install even for relative newcomers and it has a GUI that's decent for new people and seasoned admins as well.
For the remote connection I just use SSH and tmux, so I can comfortably detach and reattach without killing the tool that's running inside the terminal on the remote machine.
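Concretely, that loop is just (host name is an example):

    ssh devbox
    tmux new -A -s agent   # attach to, or create, the "agent" session
    # run the tool inside the session; detach with Ctrl-b d
    # later, from anywhere:
    ssh devbox -t 'tmux attach -t agent'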
I hope this helps even though I didn't provide a step-by-step guide.
That's why you run with "dangerously allow all." What's the point of LLMs if I have to manually approve everything? IME you only get half-decent results if the agent can run tests, run builds, and iterate. I'm not going to read the wall of text it produces on every iteration; it's mostly convincing bullshit. I'll review the code it wrote once the tests pass, but I don't want to be "in the loop".
I really like fly.io's https://sprites.dev/, which is effectively sandboxes for AI agents. It feels really apt here (not sponsored, lmao, wish I was).
Oh, btw, if someone wants to run servers via QEMU, I highly recommend quickemu. It provides SSH access, SSHFS, VNC, Spice, and so on by default, with the ports bound to just your local device of course, and it also lets you install Debian or practically any distro (out of many, many) using quickget.
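For example (the release and port are whatever quickget/quickemu report on your machine):

    quickget debian 12            # quickget lists valid releases/editions if these are wrong
    quickemu --vm debian-12.conf  # boot the VM
    # quickemu prints the forwarded SSH port at startup, e.g.:
    ssh -p 22220 user@localhost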
I personally really like Zed with its SSH remote projects. I can always open terminals in it and use Claude Code or opencode or anything, and they provide AI as well (I don't use much AI this way; I make simple scripts for myself, so I just copy-paste for free from the websites), but I can recommend Zed for what it's worth.
A coworker raised an interesting point to me. The CORS fix removes exploitation by arbitrary websites (but obviously allows full access from the opencode domain), but let's take that piece out for a second...
What's the difference here between this and, for example, the Neovim headless server or the VSCode remote SSH daemon? All three listen on 127.0.0.1 and would grant execution access to any other process that could speak to them.
Is there a difference here? Is the choice of HTTP simply a bad one because of the potential browser exploitation, which can't exist for the others?
> Neovim’s server defaults to named pipes or domain sockets, which do not have this issue. The documentation states that the TCP option is insecure.
Good note on pipes / domain sockets, but it doesn't appear there's a "default", and the example in the docs even uses TCP, despite the warning below it.
(EDIT: I guess outside of headless mode it uses a named pipe?)
> VS Code’s ssh daemon is authenticated.
How is it authenticated? I went looking briefly but didn't turn up much; obviously there's the ssh auth itself but if you have access to the remote, is there an additional layer of auth stopping anyone from executing code via the daemon?
From the page you linked: Nvim creates a default RPC socket at startup, given by v:servername.
You can follow the links on v:servername to read more about the startup process and figure out what that is, but tl;dr, it's a named pipe unless you override it.
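To make the contrast concrete, a quick sketch (the socket path is arbitrary):

    # default style: a named pipe / unix socket, gated by file permissions
    nvim --headless --listen /tmp/nvim.sock &
    nvim --server /tmp/nvim.sock --remote-send ':echo "hi"<CR>'
    # the TCP variant shown in the docs: reachable by any local process
    nvim --headless --listen 127.0.0.1:6666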
If you have a localhost server that uses client input to execute code without authentication, that's a local code execution vulnerability at the very least. It becomes an RCE when you find a way to reach the local server over the wire, such as via a browser HTTP request.
I don't use the VSCode feature you mentioned, so I don't know how it is implemented, but one can guess that it was implemented with some authentication in mind.
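To illustrate the class of bug, the local half is literally one request. The path shape below comes from the `session/:id/shell` endpoint mentioned elsewhere in this thread; the port, ID, and payload are made up for illustration:

    # any local process (or, pre-fix, any web page) could do something like:
    curl -X POST http://127.0.0.1:4096/session/SOME_ID/shell \
      -H 'Content-Type: application/json' \
      -d '{"command": "id"}'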
WTF, not only did they make an unauthenticated RCE HTTP endpoint, they also helpfully added a CORS bypass for it... all in a CLI tool? That silently starts an HTTP server??
Seems that OpenCode is YC-backed as well [0] [1]. I would've thought YC would encourage better cybersecurity practices than OpenCode has demonstrated here.
This is pretty egregious. And setting aside the fact that the server is now disabled by default, once it's running it is still egregious:
> When server is enabled, any web page served from localhost/127.0.0.1 can execute code
> When server is enabled, any local process can execute code without authentication
> No indication when server is running (users may be unaware of exposure)
I'm sorry, this is horrible. I really want there to be a good, actually open, cross-provider agentic coding tool, but this seems to me to be abusive of people's trust in TUI apps; part of the reason we trust them is that they typically DON'T do stuff like this.
Huh, I thought opencode was a volunteer project but it looks like it's a business with major backing from major players. Was opencode always set up like this? I could have sworn there was some project with a better governance model, guess not.
If you aren't blocking your browser from letting sites call local services, you should:
> Network Boundary Shield
> The Network Boundary Shield (NBS) is a protection against attacks from an external network (the Internet) to an internal network - especially against a reconnaissance attack where a web browser is abused as a proxy.
> The main goal of NBS is to prevent attacks where a public website requests a resource from the internal network (e.g. the logo of the manufacturer of the local router); NBS will detect that a web page hosted on the public Internet is trying to connect to a local IP address. NBS only blocks HTTP requests from a web page hosted on a public IP address to a private network resource; the user can allow specific web pages to access local resources (e.g. when using Intranet services).
They keep adding features without maintaining the core. I stopped using it when they started selling plans. The main reason for OpenCode was to use multiple models, but it turns out context sharing across models is a pain and impractical right now. I went back to using Claude Code and Codex side by side.
Having said that, there is definitely a need for an open platform to utilize multiple vendors and models. I just don't think the big three (Anthropic, OAI, and Google) will cede that control with so much money on the line.
As someone who uses the two big C's, I can recommend ampcode[0] and Crush[1]+z.ai GLM as an addition.
Amp can do small utility scripts and changes for free (especially if you enable the ads) and Crush+GLM is pretty good at following plans done by Claude or Codex
An ad-based model, although it sucks, still feels like a more decent source of income than serving inference at a loss. Interesting.
I hate ad models, but I am pretty sure most code gets trained on by AI anyway, and the code we generate would probably not be a valuable signal (usually) to the ad company.
Interesting, what are your thoughts on it? Thanks for sharing this. Is the project profitable? I assume not; I'm not sure how much the advertising would bring in.
There's a tiny 2-line text ad above the prompt. I might have accidentally read it a few times, but meh. It's not like I look at the amp console that much anyway.
It seems to be about on par with Claude as a pair coder, and I think it's a lot less verbose and more concise in what it says, just sticking to the facts without any purple prose. It also seems to hook directly into ~/.claude/: just today it used a Claude-only skill to analyse my codebase (using the scripts provided by the skill).
They seem to not have a lot of real-world experience and/or throw caution to the wind and YOLO through security practices. I'd be wary of using any of their products.
Seems `session/:id/shell` was also `session/:id/bash` and originally `session/:id/command` in some commits.
Maybe I'm using GitHub code search wrong, but it appears this was never part of even a pull request. The practice of having someone just push to `dev` (the default branch), which then gets tagged, should perhaps also be revisited.
(Several more commits under `wip: bash` and `feat: bash commands`)
This is such an egregious lack of respect for users that you can't trust this organisation again, and the lack of responsiveness just signals that they don't consider it a problem. Users must signal to companies that this attitude is unacceptable by dumping them.
fwiw they should probably slow down a bit, even though they seem to be winning the race. they started selling their own subscription plan last week, and promptly committed all subscribers' emails to the public repo
> Hey - have some bad news.
> We accidentally committed your email to our repo as part of a script that was activating OpenCode Black.
> No other information was included, just the email on its own.
I liked aider initially, but I keep running into problems, as the project seems largely unmaintained. I wanted to install OpenCode yesterday, but this somewhat turns me off. Are there any good model-agnostic alternatives? I am somewhat shocked there aren't a lot of good open-source CLI LLM code assistants going around.
So did they fix it silently, without responding to the researcher, or did they fix the silent part, so that the user is now made aware when a website is trying to execute code on their machine?
It's under "Vendor Advisory", so I'm guessing it's that they fixed it, but never informed any OpenCode users that there was a massive security vulnerability.
Well, I feel like they will take security more seriously from here on out.
At least they didn't implode their communications like I've seen from some other companies.
To be really honest, when you bet on AI agents, I feel like you sometimes bet on the future of the product as well, which is built by the people, so you are basically betting on the people.
I'd much rather bet on/rely on people who communicate sensibly in troubled times like this than on those who sometimes implode (I mean no offense to CodeRabbit, but that's what comes to my head right now).
So moments like these basically become the litmus test of the products, imo, by showing how people communicate, etc.
I run mine on the public internet and it’s fine, because I put it behind auth, because it’s a tool to remotely execute code with no auth and also has a fully featured webshell.
To be clear, this is a vulnerability. Just the same as exposing unauthenticated telnet is a vulnerability. User education is always good, but at some point in the process of continuing to build user-friendly footguns we need to start blaming the users. “It is what it is”, Duh.
This “vulnerability” has been known by devs in my circle for a while, it’s literally the very first intuitive question most devs ask themselves when using opencode, and then put authentication on top.
Particularly in the AI space it’s going to be more and more common to see users punching above their weight with deployments. Let em learn. Let em grow. We’ll see this pain multiply in the future if these lessons aren’t learned early.
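For anyone wanting the lowest-effort version of "behind auth": arguably don't expose the port at all and tunnel it over SSH instead (the port and host name are examples):

    # forward the remote, loopback-only port to your machine
    ssh -N -L 4096:127.0.0.1:4096 my-server
    # then point clients at http://127.0.0.1:4096 locally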
Can you share what made this behavior obvious to you? E.g. when I first saw Open Code, it looked like yet another implementation of Claude Code, Codex-CLI, Gemini-CLI, Project Goose, etc. - all these are TUI apps for agentic coding. However, from these, only Open Code automatically started an unauthenticated web server when I simply started the TUI, so this came as a surprise to me.
I was investigating that for entirely unrelated reasons just yesterday and the answer so far seems to be "none". You can patch the server to serve the locally built frontend and it all works just fine.
On the one hand, 1800 open issues and 800 open PRs (most of them probably AI-generated slop) make it a bit understandable for the maintainers to be slow to reply. On the other hand, the vulnerability is so baffling that I'll make sure to stay as far away as possible from this project.
people run AI tools outside a sandbox? tf? the first thing I did with claude code was put it in a sandbox.
come on people, docker and podman exist, please use them - they isolate you not only from problems like this but from supply chain attacks as well.
it also gives you superior compatibility: any person working on your project will have all the tools available to compile it, since to build & run it you use a simple Containerfile.
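A minimal sketch of that workflow, with the package list as a placeholder for whatever your project actually needs:

    cat > Containerfile <<'EOF'
    FROM docker.io/library/debian:stable-slim
    RUN apt-get update && apt-get install -y --no-install-recommends \
        git ca-certificates build-essential \
     && rm -rf /var/lib/apt/lists/*
    WORKDIR /work
    EOF
    podman build -t project-dev .
    podman run --rm -it -v "$PWD:/work:Z" project-dev bash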
This doesn't actually seem that bad to me? Browsers don't let random pages on the internet hit localhost without prompting you anymore so it's not like a random website could RCE you unless you're running an old browser—and at that point that's the browser's fault for letting web pages out of the sandbox. You shouldn't have to protect localhost from getting hit with random public websites.
The rest is just code running as your user can talk to code running as your user. I don't really consider this to be a security boundary. If I can run arbitrary code by hitting a URL I accept that any program running as me can as well. Going above and beyond is praiseworthy (good for you turning on SELinux as an example) but I don't expect it by default.
> Browsers don't let random pages on the internet hit localhost without prompting you anymore
No, that's a Chrome-specific feature that Google added. It is not part of any standard, and does not exist in other browsers (e.g. Safari and Firefox).
> The rest is just code running as your user can talk to code running as your user
No, that assumes that there is only a single user on the machine, and there are either no forms of isolation or that all forms of isolation also use private network namespaces, which has not been how daemons are isolated in UNIX or by systemd. For example, if you were to ever run OpenCode as root, any local process can trivially gain root as well.
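The classic UNIX answer here is a filesystem-permissioned socket rather than a TCP port: loopback TCP is reachable by every local account, while a socket with 0600 permissions is not. A sketch, with illustrative paths and ports:

    # TCP on loopback: any local user can connect
    curl http://127.0.0.1:4096/
    # unix socket owned by you, mode 0600: other users get permission denied
    curl --unix-socket "$XDG_RUNTIME_DIR/tool.sock" http://localhost/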
we've done a poor job handling these security reports, usage has grown rapidly and we're overwhelmed with issues
we're meeting with some people this week to advise us on how to handle this better, get a bug bounty program funded and have some audits done
Spend that money on reorganizing your management and training your staff so that everyone in your company is on board with https://owasp.org/Top10/2025/A06_2025-Insecure_Design/ .
[0] https://en.wikipedia.org/wiki/Security.txt
Does that make sense from your experience?
[1] https://github.com/eb4890/echoresponse/blob/main/design.md
[0] https://owasp.org/Top10/2025/ - https://owasp.org/Top10/2025/A06_2025-Insecure_Design/ - https://owasp.org/Top10/2025/A01_2025-Broken_Access_Control/ - https://owasp.org/Top10/2025/A05_2025-Injection/
[1] https://opencode.ai/enterprise
[2] https://anoma.ly/
> Please run at least a dev-container or a VM for the tools.
I would like to know how to do this. Could you share your favorite how-to?
[0] https://containers.dev/supporting
> I would like to know how to do this. Could you share your favorite how-to?
See: https://www.docker.com/get-started/
EDIT:
Perhaps you are more interested in various sandboxing options. If so, the following may be of interest:
https://news.ycombinator.com/item?id=46595393
It's really intuitive and, for what it's worth, definitely worth a try: https://github.com/quickemu-project/quickemu
VS Code’s ssh daemon is authenticated.
https://neovim.io/doc/user/api.html#rpc-connecting
https://github.com/anomalyco/opencode/commit/7d2d87fa2c44e32...
[0]: https://www.ycombinator.com/companies/sst
[1]: https://anoma.ly/
I have no idea where you got your internal image of YC-backed companies from, but it needs massive adjusting.
[0] https://news.ycombinator.com/item?id=46555807
Reported 2025-11-17, and multiple "no responses" after repeated attempts to contact the maintainers... not a good look.
https://github.com/anomalyco/opencode/issues/6355#issuecomme...
everybody is vibecoding now, and dealing with massive security issues is bad vibes.
https://jshelter.org/nbs/
[0] https://ampcode.com/
[1] https://github.com/charmbracelet/crush
It does take a lot of discipline to review everything instead of pile on another feature, when it's so cheap to do.
afaict, for that project they never went through PCI compliance. See original thread for more information: https://news.ycombinator.com/item?id=40228751
It saw goproxy.cn and used goproxy.cn, looks linear to me.
https://github.com/anomalyco/opencode/commit/7505fa61b9caa17...
https://github.com/anomalyco/opencode/commit/93b71477e665600...
Apparently a group of devs forked it: https://github.com/dwash96/cecli
Haven't tried yet
Meanwhile, running opencode in a podman container seems to stop this particular, err, feature.
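Presumably that's because inside the container the server binds the container's loopback, not the host's. Something like the following keeps the TUI usable while containing the port (the image and install command are assumptions on my part):

    podman run --rm -it -v "$PWD:/work:Z" -w /work \
      docker.io/library/node:22 bash
    # inside the container (assuming the npm package name):
    npm install -g opencode-ai && opencode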
But this leaves a very bad taste.
Guess I will stick to aider and copy-pasting.
(rather outdated now: https://github.com/DeprecatedLuke/claude-loop)
which introduced so many bugs that people unsubscribed