Codex for almost everything

(openai.com)

969 points | by mikeevans 1 day ago

79 comments

  • cjbarber 1 day ago
    My current expectation is that the Cowork/Codex set of "professional agents" for non-technical users will be one of the most important and fastest growing product categories of all time, so far.

    i.e. agents for knowledge workers who are not software engineers

    A few thoughts and questions:

    1. I expect that this set of products will be extremely disruptive to many software businesses. It's like when a new VP joins a company, they often rip and replace some of the software vendors with their personal favorites. Well, most software was designed for human users. Now, peoples' agents will use software for them. Agents have different needs for software than humans do. Some they'll need more of, much they'll no longer need at all. What will this result in? It feels like a much swifter and more significant version of Google taking excerpts/summaries from webpages and putting it at the top of search results and taking away visits and ad revenue from sites.

    2. I've tried dozens of products in this space. For most, onboarding is confusing, then the user gets dropped into a blank space, usage limits are uncompetitive compared to the subsidized tokens offered by OpenAI/Anthropic, etc. It's a tough space to compete in, but also clearly going to be a massive market. I'm expecting big investment from Microsoft, Google etc in this segment.

    3. How will startups in this space compete against labs who can train models to fit their products?

    4. Eventually will the UI/interface be generated/personalized for the user, by the model? Presumably. Harnesses get eaten by model-generated harnesses?

    A few more thoughts collected here: https://chrisbarber.co/professional-agents/

    Products I've tried: ai browsers like dia, comet, claude for chrome, atlas, and dex; claw products like openclaw, kimi claw, klaus, viktor, duet, atris; automation things like tasklet and lindy; code agents like devin, claude code, cursor, codex; desktop automation tools like vercept, nox, liminary, logical, and raycast; and email products like shortwave, cora and jace. And of course, Claude Cowork, Codex cli and app, and Claude Code cli and app.

    Edit: Notes on trying the new Codex update

    1. The permissions workflow is very slick

    2. Background browser testing is nice and the shadow cursor is an interesting UI element. It did do some things in the foreground for me / take control of focus, a few times, though.

    3. It would be nice if the apps had quick ways to demo their new features. My workflow was to ask an LLM to read the update page and ask it what new things I could test, and then to take those things and ask Codex to demo them to me, but it doesn't quite understand it's own new features well enough to invoke them (without quite a bit of steering)

    4. I cannot get it to show me the in app browser

    5. Generating image mockups of websites and then building them is nice

    • postalcoder 1 day ago
      I agree with the sentiment but I think for normie agents to take off in the way that you expect, you're going to have to grant them with full access. But, by granting agents full access, you immediately turn the computer into an extremely adversarial device insofar as txt files become credible threat vectors.

      For all the benefits that agents offer, they can be asymmetrically harmful. This is not a solved issue. That hurts growth. I don't disagree with your general points, though.

      • avaer 23 hours ago
        > for normie agents to take off in the way that you expect, you're going to have to grant them with full access

        At this point it's a foregone conclusion this is what users will choose. It'll be like (lack of) privacy on the internet caused by the ad industrial complex, but much worse and much more invasive.

        The threats are real, but it's just a product opportunity to these companies. OpenAI and friends will sell the poison (insecure computing) and the antidote (Mythos et all) and eat from both ends.

        Anyone trying to stay safe will be on the gradient to a Stallmanesque monastic computing existence.

        I don't want this, I just think it's going down that route.

        • safety1st 11 hours ago
          Their solution will be to push mandatory and nonconsensual updates to your devices which limit your device and your freedom in the name of security. Like Google is doing to Android in September. You will no longer be able to install "unverified" software on anything. To address prompt injection attacks they're probably working on an approach where your data all has to be in the cloud and subject to security scans. That's already basically the model for Google Workspace, Google Drive and Chromebooks.

          The model will get full access to your data, but in the name of security, you will only be permitted to have data that is cloud-hosted; local storage will effectively just be cache.

          The era of the general computer will end, and the products you purchased from these companies will be nonconsensually altered and limited.

          I'm so glad I switched to Linux more than a decade ago. At least on the PC there will still be an open source ecosystem for a long time to come, it may have less features but I'm willing to accept that.

          Knowing that they can change what you bought overnight with a single nonconsensual update, think very, very carefully about who you purchase all of your future technology from. Google's upcoming nonconsensual degradation of Android should be a lesson for everybody.

          • WarmWash 4 hours ago
            >Google's upcoming nonconsensual degradation of Android should be a lesson for everybody.

            Google is almost certainly doing this because the iOS was not found to be a monopoly, while Andorid was. It came up in Google's appeal of the Epic case verdict, where they directly asked the judge about it. Turns out you can't be anti-competitive if you don't have [allow] any competitors.

            • daveguy 4 hours ago
              Nope. I'm still going to blame Google for their own actions. Nice try, though. I'm old enough to remember when Google pretended to take responsibility for not being evil. Even had it as their motto.
          • shevy-java 8 hours ago
            > I'm so glad I switched to Linux more than a decade ago. At least on the PC there will still be an open source ecosystem for a long time to come, it may have less features but I'm willing to accept that.

            Wait until age verification is mandatory everywhere. :)

            I can already see that happening, e. g. to access financial transactions or government apps, one needs to verify the id, and that will not work without age verification that can not be tampered with. So Linux will either submit to the same or be excluded.

            (That free developers will be able to run Linux fine for much longer will also be true, but I guess they only care about catching the 95%, not the 5% linux users ... and 5% is a high guesstimate).

            Edit: To clarify the above, one already had to provide personal data for financial transactions, of course, so a bank knows who is who, but the recent age verification go hand in hand with the attempt to get rid of vpn, and applications now make it a new standard to query the age of users, with the claim to "help protect kids". And some people buy into that rationale too. I don't, but I have seen many non-tech savvy people submit to that justification.

            • soco 8 hours ago
              There's always the zero knowledge proof tech alternative, but I don't have the feeling we are moving in that direction - it's not the most profitable business is it.
              • duskdozer 7 hours ago
                No, nor is it most amenable to mass surveillance.
        • Springtime 12 hours ago
          > It'll be like (lack of) privacy on the internet caused by the ad industrial complex, but much worse and much more invasive.

          The concerning aspect is how others' content being scanned into systems don't have any knowledge or consent. Having private PII/files/code/emails/etc being read and/or accidentally shared by the agent online.

        • hypfer 11 hours ago
          > Anyone trying to stay safe will be on the gradient to a Stallmanesque monastic computing existence.

          Honestly, it's alright.

          Just think of what we could do with computers up until this point. We keep all those abilities.

          And more, even, because the industry still keeps churning out new local LLMs. So you even gain more capabilities than right now. Just not at the rate of the bleeding edge.

          Which is just like the Linux desktop, essentially. It's fine, really. There is no need to consume the bleeding edge. You will be fine.

          • Forgeties79 4 hours ago
            Definitely agree here. Made the swap to Linux a little over a year ago and the only reason I even have nice hardware is because I like gaming. But if I was cut off from everything tomorrow, the decades of stuff I have that I have not played will keep me very happy lol
        • elictronic 10 hours ago
          2-3 news stories of people having bank accounts cleared and the product is dead on arrival.
          • driverdan 1 hour ago
            You'd think so but all the evidence so far points to the contrary. Most people seem perfectly happy to trade security and privacy for convenience.
        • multjoy 11 hours ago
          >Anyone trying to stay safe will be on the gradient to a Stallmanesque monastic computing existence.

          As a proud neo-luddite, I'm watching the AI hype with grim amusement and I'll tell you hwhat, it doesn't look like a good time. Even putting to one side the planetary scale economic crash that is incoming, all the hypers seem to be on some sort of treadmill that is out of their control and it simply doesn't look like fun.

          • petesergeant 7 hours ago
            Do you think that avoidance is going to protect you from the fall-out?
            • multjoy 4 hours ago
              Yes, mainly because I will continue to know the difference between a truth and a lie.
            • Forgeties79 6 hours ago
              Everyone keeps saying how essential it all is yet a few years in and I still don’t see anything like the promised future of “everyone using them every day for everything.” Everyone’s just constantly talking (or stressing) about it.

              We - including the companies - don’t know what the real “billion dollar application” of them is other than the unproven claim it makes everyone more productive in some general sense. When it doesn’t work people continue to say “it’s your fault not the tool’s.” Meanwhile investors are getting skittish and not one AI company is profitable yet. Companies that laid people off for LLM’s are regretting their decisions, leadership (and educators) is dealing with unvetted writing and having to waste their time cleaning it up, the list goes on. “Slop” is still a huge and growing problem.

              LLM’s are here to stay, but IMO it’ll be more relevant in the long run than 3D printers yet less revolutionary than the internet. Everyone will touch them at various points but this whole-life, every-industry-disrupted integration still seems far fetched to me. Pricing is still a huge unsolved problem - everyone is still subsidized and despite gains in using fewer resources, it’s still too much to run these locally, even small models (not even getting into tooling and knowledge required to use them in a productive way).

              When we zoom out and look at the whole picture, LLM’s have mostly made everyone’s online experience worse while the VC funded companies behind them are playing municipal and state governments’ for suckers a la Amazon getting so many cities to trip over each other giving away land and tax breaks, but far worse. Those are the biggest contributions so far aside from anecdotes from coders about “1000x productivity.” Again, I think they’re here to stay. But it’s called “AI hype” for a reason.

              LLM’s have mostly been a problem creator IME rather than a “disruptor.” Never really seen “revolutionary technology” quite like it.

              But hey, I’ll admit it’s useful to have a meh local model when I’m writing TTRPG stuff and have writer’s block. Though then I remember how it was trained, a whole other subject I haven’t even touched, so that kind of sucks too.

        • intended 23 hours ago
          There was a recent Stanford study which showed that AI enthusiasts and experts and the normies had very different sentiment when it came to AI.

          I think most people are going to say they dont want it. I mean, why would anyone want a tool that can screw up their bank account? What benefit does it gain them?

          Theres lots of cases of great highly useful LLM tools, but the moment they scale up you get slammed by the risks that stick out all along the long tail of outcomes.

          • ryandrake 23 hours ago
            I agree, in general we are going to find that ultimately most employee end users don't want it. Assuming it actually makes you more productive. I mean, who the hell wants to be 10X more productive without a commensurate 10X compensation increase? You're just giving away that value to your employer.

            On the other hand, entrepreneurs and managers are going to want it for their employees (and force it on them) for the above reason.

            • TeMPOraL 10 hours ago
              I want. If I get 10X more productive, I can unilaterally increase my compensation 10X by doing my stuff in 1 unit of time instead of 10 it took, and splitting the remaining 9 units of time into, say, 4 units of time doing more work, securing my position and setting myself up for promotion, and 5 units of time doing whatever the fuck I want. Not all compensation shows up in a bank account - working less, or under less stress, are also valuable.

              Of course, such situation is only temporary - if I can suddenly be 10X productive, then so can everyone else, and then the baseline shifts so 10X is the new 1X.

              • ryandrake 1 hour ago
                Good point. My rant assumed that "10x productivity" meant 10x output in 1x time, rather than 1x output in 0.1x time. Only one of those are actually objectionable.
              • jbstack 9 hours ago
                You want it, but then you closed by explaining exactly why you shouldn't want it. Plus, the new baseline isn't neutral (as in, everyone is the same again). If humans can now do 10x the work as before, the employer doesn't need the same number of humans to carry out its work. So the new baseline is actually "let's keep 1 employee and fire the other 9", unless the business can find a way to suddenly expand 10x so that it needs 10x as much work done.
                • LinXitoW 3 hours ago
                  Yes, but in the long run, the market expects growth and innovation, not just doing the same thing with fewer workers. Especially when every other company can just buy the exact same advantage for the same price.
                • TeMPOraL 9 hours ago
                  > So the new baseline is actually "let's keep 1 employee and fire the other 9", unless the business can find a way to suddenly expand 10x so that it needs 10x as much work done.

                  If they have any surplus of money (or loans) they'll try, so those 9 employees may end up becoming team leads or middle management, trying to start new initiatives to get the 10x expansion (and 100x improvement).

                  The market isn't anywhere near efficient enough to directly translate productivity improvements into labor reductions. Thankfully, because everything that's nice and hopeful and human lives within the market inefficiency; a fully efficient market would be a hell worse than any writer or preacher ever imagined.

                  • sikewj 8 hours ago
                    lol that has nothing to do with market efficiency.

                    I’ve seen a number of your posts where you talk about topics you clearly are not all that well versed in, with such confidence when you’re plain wrong.

                    • TeMPOraL 5 hours ago
                      Of course it does have to do with market efficiency, of which the inertia and surplus within companies (especially large ones) is a part.

                      > I’ve seen a number of your posts where you talk about topics you clearly are not all that well versed in, with such confidence when you’re plain wrong.

                      I'm sure it's true. However, since you brought it up, can you be more specific and name three?

            • hvb2 12 hours ago
              > I mean, who the hell wants to be 10X more productive without a commensurate 10X compensation increase? You're just giving away that value to your employer.

              Those are productivity increases that got our standard of living to where it is. Fewer people doing the same amount of work has, historically speaking, freed people from their current job, allowing them to work on something else.

              It's that analogy of the horse, they used to be farm animals. Now, fewer of them are 'employed' but they're much nicer jobs. I'm not sure if the same is true for us this time around though as new jobs being created have increasingly been highly skilled which means the majority can't apply.

              • drivebyhooting 11 hours ago
                There was a long and great ravine of suffering between the advent of the Industrial Revolution and our time of bounty.
            • yes_man 12 hours ago
              If everyone becomes 10x more productive it won’t mean the companies cash flow 10x’s. Where value is loose there is competition, so in theory everyone should win. Unless nobody else can compete to capture that loose 10x value, in which case congratulations, you are now a unicorn.

              Of course in reality in the short term what happens is companies lay off people to increase margins. Times will be tough for workers, and equity keeps gravitating towards those who already had it.

              • King-Aaron 9 hours ago
                Tasks have value because they take effort to complete.

                If you remove the effort from those tasks, they will have no value.

                10x the value of 0 is 0

                • intended 8 hours ago
                  Eh, I’d say the premiums drop, and that there is a residual value that is still left. So maybe 0.1 or 0.2 instead of 0.
            • vovavili 8 hours ago
              >Assuming it actually makes you more productive. I mean, who the hell wants to be 10X more productive without a commensurate 10X compensation increase?

              Given sane working arrangements or at minimum presence of remote work, it would be a bit shortsighted not to want to get done with your work in a tenth amount of time. In the very least, you're competing for a promotion against less effective people, all while having more time for yourself. If not, you're building labor market skillset in an efficient way so you can hop to a better employer.

            • procaryote 11 hours ago
              It's interesting how differently people can think.

              I couldn't imagine thinking "I'm gonna do this 0.1x as fast as I could, wasting my life away with pointless extra work, to spite my employer"

        • retinaros 23 hours ago
          I dont see companies doing that. it can be business ending. only AI bros buying mac mini in 2026 to setup slop generated Claws would do that but a company doing that will for sure expose customer data.
          • rurban 12 hours ago
            Big companies are exposing customer data all the time, and they are doing all fine. The more criminal negligence, the richer.
        • soraminazuki 8 hours ago
          [dead]
      • cjbarber 1 day ago
        > For all the benefits that agents offer, they can be asymmetrically harmful. This is not a solved issue.

        Strongly agreed.

        I saw a few people running these things with looser permissions than I do. e.g. one non-technical friend using claude cli, no sandbox, so I set them up with a sandbox etc.

        And the people who were using Cowork already were mostly blind approving all requests without reading what it was asking.

        The more powerful, the more dangerous, and vice versa.

        • TeMPOraL 8 hours ago
          > I saw a few people running these things with looser permissions than I do. e.g. one non-technical friend using claude cli, no sandbox, so I set them up with a sandbox etc.

          People have different levels of safety-consciousness, but also different tolerances and threat models.

          For example, I would hesitate running a Mythos-level model in YOLO mode with full control over my computer, but right now, for personal stuff, even figuring out WTF are sandboxes in Claude Code / Gemini CLI, much less setting them up, is too much hassle. What's the worst it can do without me noticing? Format the drive and upload some private data into pastebin? Much as I hate cloud and the proliferation of 2FA in every service, that alone means it can't actually do more to me than waste few hours of my life, as I reimage my desktop and restore OneDrive (in case of destructive changes that got synced up). These models are not yet good enough to empty my bank account in few minutes I'm not looking; everything else they can do quickly is reversible or inconsequential.

          Now, I do look at things closely when working with agentic AI tools. But my threat model is limited to worrying about those few hours of my life. `rm -rf / --no-preserve-root` is an annoyance, not a danger.

          (I accept that different contexts give different threat modeling. I would be more worried if I were doing businessy business stuff with all kinds of secret sauces, or was processing PII of my employer's customers, or lived in a country where it's easy to have all your money stolen if your CC number or SSN gets posted online.)

      • Anvoker 8 hours ago
        What about setting environments for normies that mitigate this problem? I don't know that you can do it on Windows, but Linux offers various tools for isolation where you can give full rights to an LLM and still be safe from certain classes of disaster.

        Maybe this kind of isolation neuters the benefit you're thinking of, but I do believe some sort of solution could be reached.

      • planb 23 hours ago
        How many of these threat vectors are just theoretical? Don’t use skills from random sources (just like don’t execute files from unknown sources). Don’t paste from untrusted sites (don’t click links on untrusted sites). Maybe there are fake documentation sites that the agent will search and have a prompt injected - but I haven’t heard of a single case where that happened. For now, the benefits outweigh the risk so much that I am willing to take it - and I think I have an almost complete knowledge of all the attack vectors.
        • IanCal 12 hours ago
          Systems have been caught out that review pull requests, that’s a simple and clear one. The more obvious to me for most people is anything you do that interacts with your email without an explicit approve list of emails to read.
          • planb 3 hours ago
            Yes, but none of this applies to the local codex agent that runs when I tell it to and has access to my computer. Like: „scan this folder of PDFs and create an excel file with all expenses. Then enter them into my tax software.“ This needs access to very sensitive data and involves a quite complex handling of data. But the only attack vector I see is someone injecting prompts into my invoice files.
        • postalcoder 23 hours ago
          i think you lack creativity. you could create a site that targets a very narrow niche, say an upper income school district. build some credibility, get highly ranked on google due to niche. post lunch menus with hidden embedded text.

          the attack surface is so wide idk where to start.

          • planb 22 hours ago
            Why would my agent retrieve that lunch menu?
            • thuuuomas 3 hours ago
              Because it’s hooked up to a microphone in your kitchen & your kid is arguing with you about what lunch they want & they say “Hey [agent], what day is pizza day at [school]?”
      • canarias_mate 23 hours ago
        [dead]
    • nazgulsenpai 2 hours ago
      I keep seeing sentiment like this. I work for a relatively cutting edge healthcare enterprise as a sysadmin, and we've only just been given access to copilot chat. I don't think we're going to be having agents doing work for us any time soon.
    • MrsPeaches 23 hours ago
      This is me!

      I’m semi-normie (MechEng with a bit of Matlab now working as a ceo).

      I spend most of my day in Claude code but outputs are word docs, presentations, excel sheets, research etc.

      I recently got it to plan a social media campaign and produce a ppt with key messaging and content calendar for the next year, then draft posts in Figma for the first 5 weeks of the campaign and then used a social media aggregator api to download images and schedule in posts.

      In two hours I had a decent social media campaign planned and scheduled, something that would have taken 3-4 weeks if I had done it myself by hand.

      I’ve vibe coded an interface to run multiple agents at once that have full access via apis and MCPs.

      With a daily cron job it goes through my emails and meeting notes, finds tasks, plans execution, executes and then send me a message with a summary of what it has done.

      Most knowledge work output is delivered as code (e.g. xml in word docs) so it shouldn’t be that that surprising that it can do all this!

      • nonameiguess 21 hours ago
        How does this obviate the need for software? In order for what you asked to be possible, Word, Excel, PowerPoint, and Figma all still need to exist and you need licenses for them.

        If you can figure out the next step and say "Claude, go find me buyers and sell shit for me without using any pre-existing software," have at it. It can't be social media, I guess, since social media is software and Claude is supposed to get rid of software.

        At a certain point, why do we even need computers? Can't we just call Claude's hotline and ask "Claude, please find a way to dump $40 million in cash into my living room. Don't put it in my bank account because banks use software."

        • drnick1 33 minutes ago
          > In order for what you asked to be possible, Word, Excel, PowerPoint, and Figma all still need to exist and you need licenses for them.

          Or not. Besides, the better AI models can effortlessly generate Latex/Beamer, a far superior solution for typesetting and presentations. Anything than can be done in Excel can be done in Python. Those proprietary tools are a thing of the past, no one should use them anymore.

        • elAhmo 11 hours ago
          It doesn't remove the need for software, but it greatly reduces the number of tools needed or doesn't mandate building custom tools that might not be viable due to very specific needs many users have.

          OP gave a good example how their workflow was changed, you could argue there are tools that could've done that, but they managed to achieve their goals without them, have something that fits their workflow perfectly, is fine tuned in case of changes, and with a few other tools (Word, Excel, Figma) they can do all sorts of things which would've required a small team or far more (expensive) tools to execute.

          To me that is a great example of non-developers using tools to enhance their workflows and with initiatives like from this topic, I can only see that increasing.

        • TeMPOraL 10 hours ago
          > How does this obviate the need for software?

          It doesn't obviate the need for software, but it greatly devalues software products, as they become reduced to tool calls for LLMs.

          This is good for users, because software products are defined by boundaries - borders drawn around the code to focus and package functionality, yes, but also to limit interoperability and create a sales channel (UX being the perfect marketing platform for captive audience).

          After all, I don't usually want to play with Word, Excel, PowerPoint, and Figma - they're just standing between me and the artifact I want to create, so if I can get LLM to operate them for me, I don't have to deal with all the UX and marketing bullshit those products throw at me.

          I mean, that's what I'd do if I could afford to hire a person to operate those tools for me. That, again, is the best mental model for LLMs - they're little people on a chip, cheaper to employ than actual people.

          • DrScientist 2 hours ago
            > I mean, that's what I'd do if I could afford to hire a person to operate those tools for me. That, again, is the best mental model for LLMs - they're little people on a chip, cheaper to employ than actual people.

            Sounds like more of a threat to people than software then.

            I get the point that if an agent could generate a presentation by directly writing to some open format with a free viewer then PowerPoint would be out of the picture.

            However the tool has to be pretty close to 100% for that to work. If I have a presentation that's 90% there it's probably going to be a lot easier to finish it off manually in Powerpoint than try different variants of prompts. In which case I'll still need that Powerpoint license.

    • joshysmith 51 minutes ago
      I still think we're several "my agent sent an inappropriate email to all my contacts" away from people figuring out proper security controls for these things
    • aerhardt 23 hours ago
      I am starting to use Codex heavily on non-coding tasks. But I am realizing it works because I work and think like a programmer - everything is a file, every file and directory should have very precise responsibilities, versioning is controlled, etc. I don't know how quick all of this will take to spread to the general population.
    • bob1029 1 day ago
      > My current expectation is that the Cowork/Codex set of "professional agents" for non-technical users will be one of the most important and fastest growing product categories of all time, so far.

      I agree this is going to be big. I threw a prototype of a domain-specific agent into the proverbial hornets' nest recently and it has altered the narrative about what might be possible.

      The part that makes this powerful is that the LLM is the ultimate UI/UX. You don't need to spend much time developing user interfaces and testing them against customers. Everyone understands the affordances around something that looks like iMessage or WhatsApp. UI/UX development is often the most expensive part of software engineering. Figuring out how to intercept, normalize and expose the domain data is where all of the magic happens. This part is usually trivial by comparison. If most of the business lives in SQL databases, your job is basically done for you. A tool to list the databases and another tool to execute queries against them. That's basically it.

      I think there is an emerging B2B/SaaS market here. There are businesses that want bespoke AI tools and don't have the discipline to deploy them in-house. I don't know if it is ever possible for OAI & friends to develop a "hyper" agent that can produce good outcomes here automatically. There are often people problems that make connecting the data sources tricky. Having a human consultant come in and make a case for why they need access to everything is probably more persuasive and likely to succeed.

      • duskdozer 5 hours ago
        >The part that makes this powerful is that the LLM is the ultimate UI/UX

        Seems pretty questionable to me. Describing things in natural language can be quite imprecise and verbose.

      • voncheese 2 hours ago
        >UI/UX development is often the most expensive part of software engineering.

        I disagree with this as a blanket statement. At least in the tech world (i.e. tech companies that build technology products), UI/UX is often less expensive than the platform and infrastructure parts of the technology products, certainly at any tech that runs at scale.

      • cjbarber 1 day ago
        > There are businesses that want bespoke AI tools and don't have the discipline to deploy them in-house. I don't know if it is ever possible for OAI & friends to develop a "hyper" agent that can produce good outcomes here automatically. There are often people problems that make connecting the data sources tricky. Having a human consultant come in and make a case for why they need access to everything is probably more persuasive and likely to succeed.

        Sort of agreed, though I wonder if ai-deployed software eats most use cases, and human consultants for integration/deployment are more for the more niche or hard to reach ones.

      • skydhash 23 hours ago
        > The part that makes this powerful is that the LLM is the ultimate UI/UX.

        I strongly doubt that. That’s like saying conversation is the ultimate way to convey information. But almost every human process has been changed to forms and structured reports. But we have decided that simple tools does not sell as well and we are trying to make workflow as complex as possible. LLM are more the ultimate tools to make things inefficient.

      • Moonye666 12 hours ago
        [dead]
    • frez1 8 hours ago
      I agree, and I think this extends to programming too. A lot of of software practices are built on the expectation humans are writing, reviewing and shipping code with that quickly becoming the case, processes, practices and even programming languages themselves will evolve to what agents need, rather than humans.

      a version of Conway's law aimed specifically at agentic communication rather than human.

    • trvz 1 day ago
      Most knowledge workers aren't willing to put in the effort so they're getting their work done efficiently.
    • louiereederson 1 day ago
      Maybe but the product category is not necessarily a monolith in the same way that Claude Code is. These general purpose tools will have to action across a heterogeneous set of enterprise systems/tools. A runtime environment must be developed to do that but where that of the agent ends and that of the enterprise systems begins is a totally open question.
      • bob1029 9 hours ago
        > A runtime environment must be developed to do that but where that of the agent ends and that of the enterprise systems begins is a totally open question.

        I think something like SQL w/ row-level security might be the answer to the problem. You often want to constrain how the model can touch the data based upon current tool use or conversation context. Not just globally. If an agent provides a tenant id as a required parameter to a tool call, we can include this in that specific sql session and the server will guarantee all rules are followed accordingly. This works for pretty much anything. Not just tenant ids.

        SQL can work as a bidirectional interface while also enforcing complex connection level policies. I would go out of band on a few things like CRUD around raw files on disk, but these are still synchronized with the sql store and constrained by what it will allow.

        The safety of this is difficult to argue with compared to raw shell access. The hard part is normalizing the data and setting up adapters to load & extract as needed.

      • cjbarber 1 day ago
        > Maybe but the product category is not necessarily a monolith in the same way that Claude Code is. These general purpose tools will have to action across a heterogeneous set of enterprise systems/tools.

        What would make it not be a monolith? To me it seems like there'll be a big advantage (e.g. in distribution, user understanding) for most people to be using the same product / similar interface. And then the agent and the developer of that interface figure out all the integrations under that, invisible to the user.

        • louiereederson 21 hours ago
          I mean there is a runtime layer that needs to be developed, and some of it may live in CC/Codex and some might live in the various enterprise systems. Someworkflow automations and some amount of the semantic layer may for instance exist in your CRM/ERP/data platform. Yes the front-end would be owned by the chat interface, but part of the solution may exist in the various enterprise systems. This would be closer to a distributed system than a monolith. The demos and marketing language point to this as the direction of travel (i.e. the reference to Atlassian Rovo, etc.).
          • cjbarber 20 hours ago
            Thanks for answering!
    • eldenring 1 day ago
      I think the coding market will be much larger. Knowledge work is kind of like the leaf nodes of the economy where software is the branches. That's to say, making software easier and cheaper to write will cause more and more complexity and work to move into the Software domain from the "real world" which is much messier and complicated.
      • cjbarber 1 day ago
        Yes, and the same thing will happen in non-coding knowledge work too. Making knowledge work cheaper will cause complexity to increase, more knowledge work.
        • eldenring 1 day ago
          I don't think so, the whole point of writing software is it is a great sink for complexity. Encoding a process or mechanism in a program makes it work (as defined) for ever perfectly.

          An example here is in engineering. Building a simulator for some process makes computing it much safer and consistent vs. having people redo the calculations themselves, even with AI assistance.

          • cjbarber 1 day ago
            The history of both knowledge work and software engineering seems to be increasing in both volume and complexity, feels reasonable to me to bet on both of those trendlines increasing?
        • visarga 23 hours ago
          Yes, I have a theory - that higher efficiency becomes structural necessity. We just can't revert to earlier inefficient ways. Like mitochondria merging with the primitive cell - now they can't be apart.
    • intended 23 hours ago
      > My current expectation is that the Cowork/Codex set of "professional agents" for non-technical users will be one of the most important and fastest growing product categories of all time, so far.

      I disagree. There is a major gap between awesome tech and market uptake.

      At this point, the question is whether LLMs are going to be more useful than excel. AI enthusiasts are 100% sure that it’s already more useful than excel, but on the ground, non-technical views do not reflect that view.

      All the interviews and real life interactions I have seen, indicate that a narrow band of non-technical experts gain durable benefits from AI.

      GenAI is incredible for project starts. A 0 coding experience relative went from mockup to MVP webapp in 3 days, for something he just had an idea about.

      GenAI is NOT great for what comes after a non-technical MVP. That webapp had enough issues that, if used at scale, would guarantee litigation.

      Mileage varies entirely on whether the person building the tool has sufficient domain expertise to navigate the forest they find themselves in.

      Experts constantly decide trade offs which novices don’t even realize matter. Something as innocuous as the placement of switches when you enter the room, can be made inconvenient.

      • cjbarber 23 hours ago
        > market uptake.

        I think the market uptake of Claude Cowork is already massive.

        • intended 20 hours ago
          Estimated users are at 18-30 mn, and we are talking about non-technical users.
    • andoando 23 hours ago
      Totally agree, AI interfaces will become the norm.

      Even all the websites, desktop/mobile apps will become obsolete.

      • donnisnoni 9 hours ago
        AI won't kill apps, it will just change who 'clicks' the buttons. Even the most powerful AI needs a source of truth and a structured environment to pull data from. A world without websites is a world where AI has nothing to read and nowhere to execute. We aren’t deleting the UI. We’re just building the backends that feed the agents.
    • piokoch 10 hours ago
      Maybe. The point is that in case of software it is fairly easy to verify if that what LLM produced is correct or not. Compiler checks syntax, we can write tests, there is whole infrastructure for checking if something works as expected. In addition, LLM are just text generating algorithms and software is all about text, so if LLM see 1 000 000 a CRUD example in Python, it can generate it easily, as we have a lot of code examples out there thanks to open source.

      That's why LLMs shine in coding tasks. If you move to other parts of engineering, like architecture, construction or stuff like investment (there is no AI boom there, why?) where there is no so much source text available, tasks are not so repeatable like in software, or verification is much more complicated, then LLM-s are no longer that useful.

      In software also I believe we will see soon that a competitive advantage have not those who adopted LLM, but those who did not. If you ask LLM what framework/language/approach use for a given task, contrary to what people think, LLM is not "thinking", it just generates text answer on the base of what it was trained on, so you will get again and again same most popular frameworks/langs/approaches suggested, even if there is something better, yet not that popular to get into model weights in a significant way.

      Interesting times, anyway.

      • jampekka 9 hours ago
        LLMs nowadays make aggressive use of web search. Thus they don't answer only on the base of what they were trained on.

        I don't think they are much more prone to using only the same popular frameworks, especially if you ask them to weigh for options.

    • troupo 1 day ago
      > My current expectation is that the Cowork/Codex set of "professional agents" for non-technical users will be one of the most important and fastest growing product categories of all time, so far.

      They won't.

      Non-technical users expect a CEO's secretary from TV/movies: you do a vague request, the secretary does everything for you. LLMs cannot give you that by their own nature.

      > And eventually will the UI/interface be generated/personalized for the user, by the model?

      No. Please for the love of god actually go outside and talk to people outside of the tech bubble. People don't want "personalized interfaces that change every second based on the whims of an unknowable black box". They have plenty of that already.

      • noelsusman 23 hours ago
        Just yesterday my non-technical spouse had to solve a moderately complex scheduling problem at work. She gave the various criteria and constraints to Claude and had a full solution within a few minutes, saving hours of work. It ended up requiring a few hundred lines of Python to implement a scheduling optimization algorithm. She only vaguely knows what Python is, but that didn't matter. She got what she needed.

        For now she was only able to do that because I set up a modified version of my agentic coding setup on her computer and told her to give it a shot for more complex tasks. It won't be trivial, but I do think there's a big opportunity for whoever can translate the experience we're having with agentic coding to a non-technical audience.

        • paganel 22 hours ago
          There's no such big opportunity, as the number of programmers' spouses is quite limited. Again, and as the GP rightly suggested, some of the HN-ers here need to go and touch some normie grass, so to speak.

          More to the point, nobody wants to be more efficient for the sake of being efficient, we all want to go to work, do our metaphorical 9 to 5 without consuming too much (intellectual and not only) energy, and then back home. In that regard AI is seen as an existential threat to that "lifestyle" and it will be treated as such by regular workers.

          • w2df 20 hours ago
            correct. you cant trust this place for realistic takes - I had a post re. financial stuff downvoted when a former Investment Banker chimed in to back me up.

            Comical. Truly comical.

        • troupo 21 hours ago
          > Just yesterday my non-technical spouse

          > It ended up requiring a few hundred lines of Python

          And she knows those a hundred lines of python work correctly and give her correct result because in this instance Claude managed to produce a working result. What if it didn't? Would vague knowledge of Python have helped her?

          > It won't be trivial, but I do think there's a big opportunity for whoever can translate the experience we're having with agentic coding to a non-technical audience.

          Even though I agree with the sentiment, we've tried non-coding coding how many times now? Once every 5 years? Throwing LLMs into the mix won't help much when in the end you leave the end user hanging, debugging problems and hunting for solutions.

          • zozbot234 21 hours ago
            Scheduling solutions are easy to verify. For other problems, verification would be harder.
      • cjbarber 1 day ago
        > Non-technical users expect a CEO's secretary from TV/movies: you do a vague request, the secretary does everything for you. LLMs cannot give you that by their own nature.

        What are you using today? In my experience LLMs are already pretty good at this.

        > Please for the love of god actually go outside and talk to people outside of the tech bubble.

        In the past week I've taught a few non-technical friends, who are well outside the tech bubble, don't live in the SF Bay Area, etc, how to use Cowork. I did this for fun and for curiosity. One takeaway is that people at startups working on these products would benefit from spending more time sitting with and onboarding users - they're very powerful and helpful once people get up and running, but people struggle to get up and running.

        > People don't want "personalized interfaces that change every second based on the whims of an unknowable black box". They have plenty of that already.

        I obviously agree with this, I think where our view differs is I expect that models will be able to get good at making custom interfaces, and then help the user personalize it to their tasks. I agree that users don't want something that changes all the time. But they do want something that fits them and fits their task. Artifacts on Claude and Canvas on ChatGPT are early versions of this.

        • troupo 1 day ago
          > What are you using today? In my experience LLMs are already pretty good at this.

          LLMS are good at "find me a two week vacation two months from now"?

          Or at "do my taxes"?

          > how to use Cowork.

          Yes, and I taught my mom how to use Apple Books, and have to re-teach her every time Apple breaks the interface.

          Ask your non-tech friends what they do with and how they feel about Cowork in a few weeks.

          > I think where our view differs is I expect that models will be able to get good at making custom interfaces, and then help the user personalize it to their tasks.

          How many users you see personalizing anything to their task? Why would they want every app to be personalized? There's insane value in consistency across apps and interfaces. How will apps personalize their UIs to every user? By collecting even more copious amounts of user data?

          • roel_v 9 hours ago
            "LLMS are good at "find me a two week vacation two months from now"?"

            Of course they are. I gave one a similar prompt a few weeks ago, albeit quite a bit more verbose (actually I just dictated it, train of thought, with couple of 'eh actually, forget what I just said about x, do y instead") and although I wasn't brave enough to give it my credit card and finalize the bookings, it would have paid for the bookings I had it set up for me, had I done that. I gave it some RL constraints, like "we're meeting friends in place xyz at such and such date, make sure we're there then" and it did everything from watching we wouldn't be spending too many hours driving per day to check that hotels are kid friendly to things to do and see and what public holidays there are so that we know when supermarkets close early and a bunch of details I wouldn't have thought of. It checked my (and my wife's) calendar, checked what I had going on work wise, etc.

            That is a fully solved 'problem' man. LLMs will run the whole thing for you. Just provide it with the login details to booking websites and you're off to the races.

            I did have it upgrade the car, even if that pushed the cost outside the budget I gave it. Next time it'll know LOL.

            • suddenlybananas 9 hours ago
              >although I wasn't brave enough to give it my credit card and finalize the bookings

              So it's not trustworthy enough for you, someone clearly interested in the hype of LLMs.

              • roel_v 7 hours ago
                It's a matter of getting used to things. We're only a few weeks further, I maybe would have given it now. It'd need some way to keep it private I guess, maybe I could have used a one off CC number. Those are just technicalities at this point. It got me to the point where I just had to enter my details and click a few confirm buttons. Those are solved problems. I'm not sure why the denialists here are saying those things are 'impossible'. I mean I've seen them happen, what do you want me to say? Claiming this is 'just hype' is ostrich behavior. I've been playing with an abliterated Gemma 4 yesterday on my local machine. Yes it would take longer and require a bunch of harness fiddling, but even if OpenAI and Anthropic would collapse tomorrow, I'm confident I could still do the exact same thing the day after with with what I have right now on my hard disk. I'm not sure what you want me to tell you mate. Yes there's rough edges to work out or just in general workflows to improve but the ideas are way beyond 'proof of concept'. There's people like myself using these things for purposes that 6 months ago were science fiction. I don't care if you believe me or not, I'm just some dude on the internet, but level of delusion on how 'inferior' these models (with proper harnessing) are is mind boggling for someone like me who sees it happen literally 20 centimeters to the side on my screen from where I see people claim that those things are impossible.
          • baq 1 day ago
            > Or at "do my taxes"?

            codex did my taxes this year (well it actually implemented a normalization pipeline and a tax computing engine which then did the taxes, but close enough)

            • William_BB 23 hours ago
              > well it actually implemented a normalization pipeline and a tax computing engine which then did the taxes, but close enough

              You can't seriously believe laymen will try to implement their own tax calculators.

              • baq 23 hours ago
                of course not.

                what I believe is that laymen will put all their tax docs into codex and tell it to 'do their taxes' and the tool will decide to implement the calculator, do the taxes and present only the final numbers. the layman won't even know there was a calculator implemented.

                • TeMPOraL 8 hours ago
                  > the layman won't even know there was a calculator implemented.

                  That's on company making the agentic harness. Hiding details of what computer does from the user is the original sin of this industry, and subsequent generations of developers and software companies keeps doubling down on it.

                  (Case in point - I just downloaded the Codex app for Windows, and in the options I see it has two UI modes of operating, one of which is meant for "non coding" and apparently this means hiding the details of what the agent is doing. This is precisely where the layman is betrayed by the tool.)

                • William_BB 23 hours ago
                  Yeah, good luck trusting the output!
                  • baq 23 hours ago
                    check back in a couple of years!
                    • William_BB 22 hours ago
                      Ah right! Reminds me of AGI by 2025 :D
            • tsimionescu 23 hours ago
              If your prompt was more complex than "do my taxes", then this is irrelevant.
              • baq 23 hours ago
                it was many hours of working with codex, guidance and comparing to known-good outputs from previous years, but a sufficiently smart model would be able to just do it without any steering; it'd still take hours, but my input wouldn't be necessary. a harness for getting this done probably exists today, gastown perhaps or something that the frontier labs are sitting on.
                • procaryote 10 hours ago
                  If you can assume "a sufficiently smart piece of technology" that doesn't exist now, a lot of problems become trivial
                  • baq 6 hours ago
                    yes.

                    but then, respect the trendline, especially if it's exponential.

                    • bavell 5 hours ago
                      Is it exponential or logistic?
                • troupo 23 hours ago
                  > but a sufficiently smart model would be able to just do it without any steering;

                  Yeah, yeah, we've heard "our models will be doing everything" for close to three years now.

                  > a harness for getting this done probably exists today, gastown perhaps

                  That got a chuckle and a facepalm out of me. I would at least consider you half-serious if you said "openclaw", at least those people pretend to be attempting to automate their lives through LLMs (with zero tangible results, and with zero results available to non-tech people).

            • ravenstine 22 hours ago
              Sounds fascinating! If you wrote an article on this I bet it'd have a good shot at making it to the home page of HN.
          • jeffgreco 22 hours ago
            > LLMS are good at "find me a two week vacation two months from now"?

            Yes?

            ===

            edit: Just tested it with that exact prompt on Claude. It asked me who I was traveling with, what type of trip and budget (with multiple choice buttons) and gave me a detailed itinerary with links to buy the flights ( https://www.kayak.com/flights/ORD-LIS/2026-06-13/OPO-ORD/202... )

            • mazurnification 10 hours ago
              Perfect - and this use case will be enshitificated first. LLM provider will charge small fee for proper recommendation placing. Got to recoup investment.
            • troupo 20 hours ago
              I'd love to try and replicate, but I'm not letting any of these tools anywhere near a real browser and capabilites :)
      • a1j9o94 21 hours ago
        This is effectively how I treat my AI agents. A lot of the reason this doesn't work well for people today is due to context/memory/harness management that makes it too complex for someone to set up if they don't want a full time second job or just like to tinker.

        If you productize that it will be an experience a lot of people like.

        And on the UI piece, I think most people will just interact through text and voice interfaces. Wherever they already spend time like sms, what's app, etc.

      • skydhash 1 day ago
        > Non-technical users expect a CEO's secretary from TV/movies: you do a vague request, the secretary does everything for you. LLMs cannot give you that by their own nature.

        Most people are indifferent to computers. A computer to them is similar to the water pipeline or the electrical grid. It’s what makes some other stuff they want possible. And the interface they want to interact with should be as simple as possible and quite direct.

        That is pretty much the 101 of UX. No deep interactions (a long list of steps), no DSL (even if visual), and no updates to the interfaces. That’s why people like their phone more than their desktops. Because the constraints have made the UX simpler, while current OS are trying to complicate things.

        So Cowork/Codex would probably go where Siri is right now. Because they are not a simpler and consistent interface. They’ve only hidden all the controls behind one single point of entry. But the complexity still exists.

    • croes 23 hours ago
      You know what happens to a predator who makes its prey go extinct?

      AI is doing the same

    • jorblumesea 1 day ago
      really struggling to understand where this is coming from, agents haven't really improved much over using the existing models. anything an agent can do, is mostly the model itself. maybe the technology itself isn't mature yet.
      • cjbarber 1 day ago
        My view is different. Agent products have access to tools and to write and run code. This makes them much more useful than raw models.
        • visarga 23 hours ago
          Yes, I think they unlock a whole new level of capability when they have a r/w file system (memory), code execution and the web.
          • flir 11 hours ago
            That's not the model, that's the box the model came in.

            It's unlikely we've hit the limits on improving agent UX, but there are some fundamental limits on LLMs that seem unlikely to be fixed by better UX.

  • daviding 1 day ago
    There seems a fair enthusiasm in the UI of these to hide code from coders. Like the prompt interaction is the true source and the actual code is some sort of annoying intermediate runtime inconvenience to cover up. I get that productivity can be improved with a lot of this for non developers, just not sure using 'code' as the term is the right one or not.
    • cultofmetatron 22 hours ago
      > There seems a fair enthusiasm in the UI of these to hide code from coders. Like the prompt interaction is the true source and the actual code is some sort of annoying intermediate runtime inconvenience to cover up.

      I've finally started getting into AI with a coding harness but I've take the opposite approach. usually I have the structure of my code in my mind already and talk to the prompt like I'm pairing with it. while its generating the code, I'm telling it the structure of the code and individual functions. its sped me up quite a lot while I still operate at the level of the code itself. the final output ends up looking like code I'd write minus syntax errors.

      • ok_dad 22 hours ago
        This is the way to do it if you're a serious developer, you use the AI coding agent as a tool, guiding it with your experience. Telling a coding agent "build me an app" is great, but you get garbage. Telling an agent "I've stubbed out the data model and flow in the provided files, fill in the TODOs for me" allows you the control over structure that AI lacks. The code in the functions can usually be tweaked yourself to suit your style. They're also helpful for processing 20 different specs, docs, and RFCs together to help you design certain code flows, but you still have to understand how things work to get something decent.

        Note that I program in Go, so there is only really 1 way to do anything, and it's super explicit how to do things, so AI is a true help there. If I were using Python, I might have a different opinion, since there are 27 ways to do anything. The AI is good at Go, but I haven't explored outside of that ecosystem yet with coding assistance.

        • maleldil 8 hours ago
          If you use a type checker in strict mode (e.g. pyright with "typeCheckingMode: strict") and a linter with strict rules (e.g. ruff with many rules enabled), the output space is constrained enough that you can get pretty consistent Python code. I'm not saying this is "good Python" overall, but it works pretty well with agents.
        • holoduke 11 hours ago
          Ai is even good in turbo pascal if you instruct it right
      • mlcruz 21 hours ago
        My workflow is quite similar. I try to write my prompts and supporting documentation in a way that it feels like the LLM is just writing what is in my mind.

        When im in implementation sessions i try to not let the llm do any decision making at all, just faster writing. This is way better than manually typing and my crippling RSI has been slowly getting better with the use of voice tools and so on.

      • dear_prudence 10 hours ago
        I personally have been finding good results "hiding the code" behind the harnesses. I do have to rely on verification and testing a lot, which I also get the AI to do, but for most of the cases it works out well enough. A good verification and testing setup with automated, strict reviewing goes a long way.
      • cbovis 9 hours ago
        This is the way.

        The funny thing is my expectation was that adoption of AI coding would kill the joy of getting into a flow state but I've actually found myself starting to slip into an alternate type of flow state.

        Instead of hammering out code manually over an hour the new flow state is a back and forth with the LLM on something that's clear in my mind. It's a collaborative state where I'm ultimately not writing much code manually but I'm still bouncing between technical thoughts, designing architecture, reviewing code, switching direction etc.

        • jclardy 4 hours ago
          Yeah - similar thing for me as well. A lot of times there would be something I want to work on that would be boilerplate/repetitive/laborious work and I would just procrastinate it for as long as possible, working on other things, until I'd finally get around to doing it. Now those are just immediately completed with a simple prompt and instead of going with the initial implementation, I have the bandwidth to tweak and refine details that I would have skipped over before just to ship.
    • aniviacat 22 hours ago
      The fact that the Codex app is still unavailable on Linux makes me think the target audience isn't people who understand code.
      • Zetaphor 16 hours ago
        Are you referring to the CLI Codex? That can be installed with NPM or Homebrew, and is fully open source.
      • huqedato 21 hours ago
        Right. It's rather for vibecoders than for software engineers.
    • Glemllksdf 22 hours ago
      The power to the people is not us the developers and coders.

      We know how to do a lot of things, how to automate etc.

      A billion people do not know this and probably benefit initially a lot more.

      When i did some powerpoint presentation, i browsed around and draged images from the browser to the desktop, than i draged them into powerpoint. My collegue looked at me and was bewildered how fast I did all of that.

      • Avicebron 22 hours ago
        I've helped an otherwise very successful and capable guy (architect) set up a shortcut on his desktop to shut down his machine. Navigating to the power down option in the menu was too much of a technical hurdle. The gap in needs between the average HNer and the rest of the world is staggering
        • vunderba 22 hours ago
          This. I’m sure everyone has a similar story of how difficult it was to explain the difference between a program shortcut represented as a visual icon on a desktop versus the actual executable itself to somebody who didn’t grow up in the age of computing. And this was Windows… the purported OS for the masses not the classes.
        • Insanity 22 hours ago
          Initially I thought you meant “software architect” and I was flabbergasted at how that’s possible. Took me a minute to realize there’s other architects out there lol.
          • djcrayon 22 hours ago
            I think you just proved the point here about the divide between the average user of this site and the population.
            • laszlojamf 22 hours ago
              The same way most people hear "legacy" and think it's something good
        • siva7 21 hours ago
          Oh boy, the gap between the average it professional and ai pros here is already staggering, let alone the rest of the world. I feel like an alien, no matter where.
        • MassiveQuasar 22 hours ago
          right clicking start menu and clicking shutdown is too hard? amazing
          • gmueckl 22 hours ago
            Yes! Even closing the windows of programs that users no longer need is hard.

            It's easy to develop a disconnect with the level that average users operate at when understanding computers deeply is part of the job. I've definitely developed it myself to some extent, but I have occasional moments where my perspective is getting grounded again.

            • ultratalk 8 hours ago
              I don't think that's representative of most non-CS professionals. Most people in the fields I know (mostly professors, medical doctors, and businesspeople) can use google chrome, word, powerpoint, and a little of excel decently. There are the occasional few who confuse spreadsheets and databases, but no one who thinks shutting down computers or closing windows is hard. Heck, my ageing dad managed to troubleshoot his printer without any help, and he has no formal computer experience whatsoever.

              HN has a long history of patronising the "average user" in the guise of paternal figures who don't realise that what they are doing is belittling the vast majority of tech users. I'm guilty of it myself. But they're capable of a lot more than we think they are.

              Ultimately, it comes down to the willingness people have to learn new things. If they're curious enough to think about how things work, they'll be fine.

          • antonvs 22 hours ago
            It's a while since I've used Windows but I seem to remember it giving a choice of sleep, logout, switch session etc. I could totally see someone wanting a single button for it.
            • dotancohen 19 hours ago
              KDE is even worse. No matter which of those you choose, the next screen requires you to choose again. It's been this way since KDE 4.0.
          • weeb 9 hours ago
            Ah yes, this task fails hard at the xkcd.com/627/ tactic of "Find a menu item or button that looks related to what you want to do..."

            What do I want to do? "turn off my computer" What button do I press? "start"

      • zozbot234 22 hours ago
        > The power to the people is not us the developers and coders.

        > We know how to do a lot of things, how to automate etc.

        You need to know these things if you want to use AI effectively. It's way too dumb otherwise, in fact it's dumb enough to be quite dangerous.

    • killerstorm 8 hours ago
      I think this would work much better if there were constraints in place, a software stack clearly separating different concerns - e.g. you just ask AI to write business logic while you already have data sources, auth, etc, configured.

      But that's not how popular, modern software stacks work. They are like "you can do anything, anything at all!".

      Consider Visual Basic for Applications - normally your code is together with data in one document, which you can send to colleague. It can be easily shared, there's nothing to set up, etc.

      That's not true for JS, Python, Java, etc - you need to install libraries, you need to explicitly provide data, etc. Software industry as a whole embraced complexity because devs are paid to deal with complexity.

      Now AI has to use same software stacks as the rest of the industry, making software fragile, requiring continuous maintenance, etc. VBA code which doesn't use any arcane features would require no maintenance and can work for decades.

      So my guess is that the bottleneck might be neither models nor harness/wrapper - but overall software flimsiness and poor architectural decisions

    • Ensorceled 4 hours ago
      I think the intent is more "we won't need coders" ... the real goal is to get to the point where Product Managers can just write specs and a working product comes out the other end.

      These people HATE that developers have been necessary and highly paid and, in their view, prima donnas. I think most of the people running these companies actually despise developers.

    • woah 21 hours ago
      Check it out: you can open the repo in vim and compare changes with git, for the coderiest coding experience
    • _the_inflator 19 hours ago
      I knew a guy who did 6510 and 68000 assembler for many years and had a hard time using higher order languages as well as DSLs. “Only assembler is real code. Everything else is phony, bloat for what can be done way better with a fraction of the C++ memory footprint.”

      Well that guy was me and while I still consider HOLs as weird abstractions, they are immensely useful and necessary as well as the best option for the time being.

      SQL is the classic example for so called declarative languages. To this day I am puzzled that people consider SQL declarative - for me it is exactly the opposite.

      And the rise of LLMs proof my point.

      So the moral of the story is, that programming is always about abstractions and that there have been people, who refused to adopt some languages due to a different reference.

      The irony is, that I will also miss C like HOLs but Prompt Engineering is not English language but an artificial system that uses English words.

      Abstractions build on top of abstractions. For you code is HOL, I still see a compiler that gives you machine code.

      • whattheheckheck 13 hours ago
        A cross join is a for loop
        • yard2010 6 hours ago
          As a child I couldn't understand why I have to talk in a cryptic language and can't just write a for loop when working with DBs. In hindsight it was a valuable lesson that implementation details matter even though I wouldn't want them to.
    • realusername 22 hours ago
      It's reminds me what happened with Frontpage, ultimately people are going to learn the same lesson, there's no replacement for the source code.
      • vlapec 22 hours ago
        In UI, I’m pretty sure that replacement is already here. We’ll be lucky if at least backend stays a place where people still care about the actual source.
        • realusername 20 hours ago
          I'd say the opposite, the frontend code is so complex these days that you can't escape the source code.

          If you stick to tailwind + server side rendered pages you can probably go pretty far with just AI and no code knowledge but once you introduce modern TS tooling, I don't think it's enough anymore.

    • ModernMech 23 hours ago
      Yes, the code is still important. For example, I had tasked Codex to implement function calling in a programming language, and it decided the way to do this was to spin up a brand new sub interpreter on each function call, load a standard library into it, execute the code, destroy the interpreter, and then continue -- despite an already partial and much more efficient solution was already there but in comments. The AI solution "worked", passed all the tests the AI wrote for it, but it was still very very wrong. I had to look at the code to understand it did this. To get it right, you have to either I guess indicate how to implement it, which requires a degree of expertise beyond prompting.
      • ai-tamer 22 hours ago
        Do you ask it for a design first? Depending on complexity I ask for a short design doc or a function signature + approach before any code, and only greenlight once it looks sane.
        • ModernMech 22 hours ago
          I understand the "just prompt better" perspective, but this is the kind of thing my undergraduate students wouldn't do, why is the PhD expert-level coder that's supposed to replace all developers doing it? Having to explicitly tell it not to do certain boneheaded things, leave me wondering: what else is it going to do that's boneheaded which I haven't explicit about?
          • zozbot234 22 hours ago
            Because it's not "PhD-expert level" at all, lol. Even the biggest models (Mythos, GPT-Pro, Gemini DeepThink) are nowhere near the level of effort that would be expected in a PhD dissertation, even in their absolute best domains. Telling it to work out a plan first is exactly how you would supervise an eager but not-too-smart junior coder. That's what AI is like, even at its very best.
            • TeMPOraL 20 hours ago
              That's not the best framing, IMO. More important is, even a PhD expert human wouldn't one-shot complex programs out of short, vague requests. There's a process to this. Even a thesis isn't written in one, long, amphetamine-fueled evening. It's a process whose every steps involves thinking, referencing sources, talking with oneself and other people, exploring possibilities, going two steps forward and one step back, and making decisions at every point.

              Those decisions are, by large, what humans still need to do. If the problem is complex, and you desperately avoid needing to decide, then what AI produces will surprise you, but in a bad way.

            • ModernMech 21 hours ago
              I understand that but 1) expert-level performance is how they are being sold; but moreover 2) the level of hand-holding is kind of ridiculous. I'll give another example, Codex decided to write two identical functions linearize_token_output and token_output_linearize. Prompting it not to do things like that feels like plugging holes in a dyke. And through prompting, can you even guarantee it won't write duplicate code?

              I'll give a third example: I gave Codex some tests and told it to implement the code that would make the tests pass. Codex wrote the tests into the testing file, but then marked them as "shouldn't test", and confirmed all tests pass. Going back I told it something to the effect "you didn't implement the code that would make the tests work, implement it". But after several rounds of this, seemingly no amount of prompting would cause it to actually write code -- instead each time it came back that it had fixed everything and all tests pass, despite only modifying the tests file.

              In each example, I keep coming back to the perspective that the code is not abstracted, it's an important artifact and it needs/deserves inspection.

              • zozbot234 21 hours ago
                > the code is not abstracted, it's an important artifact and it needs inspection.

                That's a rather trivial consideration though. The real cost of code is not really writing it out to begin with, it's overwhelmingly the long-term maintenance. You should strive to use AI as a tool to make your code as easy as possible to understand and maintain, not to just write mountains of terrible slop-quality code.

      • porridgeraisin 23 hours ago
        Yep, all models today still need prompting that requires some expertise. Same with context management, it also needs both domain expertise as well as knowing generally how these models work.
    • avaer 1 day ago
      Hot take: we (not I, but I reluctantly) will keep calling it code long after there's no code to be seen.

      Like we did with phones that nobody phones with.

      • jerf 23 hours ago
        Code isn't going anywhere. Code is multiple orders of magnitude cheaper and faster than an LLM for the same task, and that gap is likely to widen rather than contract because the bigger the AI gets the sillier it gets to use it to do something code could have done.

        Compare the actual operations done for code to add 10 8-digit numbers to an LLM on the same task. Heck, I'll even say, forget the possibility the LLM may be wrong. Just compare the computational resources deployed. How many FLOPS for the code-based addition? How many for the LLM? That's a worst-case scenario in some ways but it also gives you a good sense of what is going on.

        Humans may stop looking at it but it's not going anywhere.

        • gobdovan 20 hours ago
          I think grandparent comments were talking about how Codex designers try to push LLMs to displace the interface to code, not necessarily code itself. In that view, code could stay as the execution substrate, but the default human interaction layer moves upward, the way higher-level languages displaced direct interaction with lower-level ones. From a HCI perspective, raw computational efficiency is not the main question; the bottleneck is often the human, so the interface only has to be fast and reliable enough at human timescales.
      • jorl17 23 hours ago
        Very much agree.

        Everyday people can now do much more than they could, because they can build programs.

        The idea that code is something sacred and only devs can somehow do it is dying, and I personally love it, as I am watching it enable so many of my friends and family who have no idea how to code.

        Today, when we think of someone "using the computer" we gravitate towards people using apps, installing them, writing documents, playing games. But very rarely have we thought of it as "coding" or "making the computer do new things" -- that's been reserved, again, for coders.

        Yet, I think that a future is fast approaching where using the computer will also include simply coding by having an agent code something for you. While there will certainly still be apps/programs that everyone uses, everyone will also have their own set of custom-built programs, often even without knowing it, because agents will build them, almost unprompted.

        To use a computer will include _building_ programs on the computer, without ever knowing how to code or even knowing that the code is there.

        There will of course still be room for coders, those who understand what's happening below. And of course that software engineers should know how to code (less and less as time goes on, though, probably), but no doubt to me that human-computer interaction will now include this level of sophistication.

        We are living in the future and I LOVE IT!

        • magicalhippo 9 hours ago
          > Everyday people can now do much more than they could, because they can build programs.

          Indeed. Just spoke to a buddy, he's got some electronics knowledge, he's been code-curious but never gotten past very simple bash scripts and Excel sheets (vlookup etc to drive calculations).

          He got himself a Claude subscription and has now implemented a non-trivial Arduino project, involving multiple CAN-bus modules and an interactive, dynamic web interface to control all this. The web interface detects the CAN-bus modules and populates the web interface based on that, and allows him to adjust the control logic.

          It's a project he's had in his head for a few years and now was able to realize on his own (modulo Claude).

          • jorl17 2 hours ago
            Exactly the kind of thing I've been seeing too. And often with people who know even less.

            You spoke of an Arduino, and I have a friend with zero coding knowledge who built a fun project with an ESP32 and a tiny camera to detect when they are "not looking at the computer".

            But, sure, people keep saying we're delusional when we say that this is where the world is headed: people building things, so often without even knowing they are doing anything "different" than what they were doing before, when they simply clicked buttons and "things just happened in the computer".

            • magicalhippo 27 minutes ago
              Not a musician so perhaps not accurate but I feel it's a bit like synths and DAWs. You might not be good at playing an instrument, something which requires a certain dexterity, finesse and lots of training. But with some virtual synths and a DAW you can make music that people enjoy.
        • William_BB 23 hours ago
          > The idea that code is something sacred and only devs can somehow do it is dying, and I personally love it, as I am watching it enable so many of my friends and family who have no idea how to code.

          People on HN are seriously delusional.

          AI removed the need to know the syntax. Your grandma does not know JS but can one shot a React app. Great!

          Software engineering is not and has never been about the syntax or one shotting apps. Software engineering is about managing complexity at a level that a layman could not. Your ideal word requires an AI that's capable of reasoning at 100k-1 million lines of code and not make ANY mistakes. All edge cases covered or clarified. If (when) that truly happens, software engineering will not be the first profession to go.

          • cameronh90 22 hours ago
            I wonder how good AI is at playing Factorio. That’s the closest thing I’ve ever done to programming without the syntax.
          • jorl17 23 hours ago
            I never said Software Engineering is dying or needs to go. I'm not the least bit afraid of it.

            In fact, in the very message you're replying to, I hinted at the opposite (and have since in another post stated explicitly that I very much think the profession will still need to exist).

            My ideal world already exists, and will keep getting better: many friends of mine already have custom-built programs that fit their use case, and they don't need anything else. This also didn't "eat" any market of a software house -- this is "DIY" software, not production-grade. That's why I explicitly stated this is a new way of human-computer-interaction, which it definitely is (and IMO those who don't see this are the ones clearly deluded).

          • thunky 21 hours ago
            > People on HN are seriously delusional.

            Yes you sure are.

          • 3fgdf 21 hours ago
            [dead]
        • xienze 20 hours ago
          > I am watching it enable so many of my friends and family who have no idea how to code.

          Be careful what you wish for, this is going to be a double edged sword like YouTube is. YouTube allowed regular people without money and industry connections to make all sorts of quality, niche content. But for every bit of great content, there’s 1000 times as much garbage and outright misleading shit.

          Giving people without any clue how computing works the ability to create software that interfaces with the outside world is likewise going to create some great stuff and 1000 times as much buggy and dangerous stuff. And allow untold numbers of scammers with no technical skill the ability to scam the wider world.

          • jorl17 20 hours ago
            I'm aware, and I'll very much take those odds. This is just another problem for humanity to solve in its quest to empower itself.

            I'm not sure how we're going to solve the obviously relevant problem of slop, but I would rather die trying, than restrict access to knowledge and capability because of evil. I believe in the GOOD of humanity. We WILL find a way.

      • throawayonthe 22 hours ago
        i WISH we weren't phoning with them anymore, but people keep trying to send me actual honest-to-god SMS in the year 2026, and collecting my phone number for everything including the hospital and expect me to not have non-contact calls blocked by default even though there are 7 spam calls a day
        • ang_cire 22 hours ago
          In what world would I prefer to give someone access to me via a messaging app rather than a fully-async text SMS message? I don't even love that people can see if you've read their texts now.

          Fully agree about phone calls though.

          • hootz 5 hours ago
            I believe that in all of South America people exclusively use WhatsApp to communicate via text because SMS is only used for spam and bad 2FA. Companies are even using WhatsApp for 2FA now instead of SMS and the fact that americans use SMS is viewed as a joke.
      • William_BB 23 hours ago
        Yeah, that's indeed a hot take. I am curious what kind of code you write for a living to have an opinion like this.
        • avaer 23 hours ago
          It's not the code I write, it's what I've noticed from people in 25 years of writing code in the corner.

          All of my friends who would die before they use AI 2 years ago now call themselves AI/agentic engineers because the money is there. Many of them don't understand a thing about AI or agents, but CC/Codex/Cursor can cover up for a lot.

          Consequently, if Claude Code/"coding agents" is a hot topic (which it is), people who know nothing about any of this will start raising money and writing articles about it, even (especially) if it has nothing to do with code, because these people know nothing about code, so they won't realize what they're saying makes no sense. And it doesn't matter, because money.

          Next thing you know your grandma will be "writing code" because that's what the marketing copy says. That's all it takes for the zeitgeist to shift for the term "code". It will soon mean something new to people who had no idea what code was before, and infuriating to people who do know (but aren't trying to sell you something).

          I know that's long-winded but hopefully you get where I'm coming from :D.

          • boxedemp 20 hours ago
            Well put, but I don't like it. Though, I've seen this exact pattern multiple times now.
          • jorl17 23 hours ago
            Totally this. People who don't see this seem to think we're in some sort of "bubble" or that we don't "ship proper code" or whatever else they believe in, but this change is happening. Maybe it'll be slower than I feel, but it will definitely happen. Of course I'm in a personal bubble, but I've got very clear signs that this trend is also happening outside of it.

            Here's an example from just yesterday. An acquaintance of mine who has no idea how to code (literally no idea) spent about 3 weeks working hard with AI (I've been told they used a tool called emergent, though I've never heard of it and therefore don't personally vouch for it over alternatives) to build an app to help them manage their business. They created a custom-built system that has immensely streamlined their business (they run a company to help repair tires!) by automating a bunch of tasks, such as:

            - Ticket creation

            - Ticket reporting

            - Push notifications on ticket changes (using a PWA)

            - Automated pre-screening of issues from photographs using an LLM for baseline input

            - Semi-automated budgeting (they get the first "draft" from the AI and it's been working)

            - Deep analytics

            I didn't personally see this system, so I'm for sure missing a lot of detail. Who saw it was a friend I trust and who called me to relay how amazed they were with it. They saw that it was clearly working as intended. The acquaintance was thinking of turning this into a business on its own and my friend advised them that they likely won't be able to do so, because this is very custom-built software, really tailored to their use case. But for that use case, it's really helped them.

            In total: ~3 weeks + around 800€ spent to build this tool. Zero coding experience.

            I don't actually know how much the "gains" are, but I don't doubt they will definitely be worth it. And I'm seeing this trend more and more everywhere I look. People are already starting to use their computer by coding without knowing, it's so obvious this is the direction we're going.

            This is all compatible with the idea of software engineering existing as a way of building "software with better engineering principles and quality guarantees", as well as still knowing how to code (though I believe this will be less and less relevant).

            My experience using LLMs in contexts where I care about the quality of the code, as well as personal projects where I barely look at the code (i.e. "vibe coding") is also very clearly showing me that the direction for new software is slowly but surely becoming this one where we don't care so much about the actual code, as long as the requirements are clear, there's a plethora of tests, and LLMs are around to work with it efficiently (i.e. if the following holds -- big if: "as the codebase grows, developing a feature with an LLM is still faster than building it by hand") . It is scary in many ways, but agents will definitely become the medium through which we build software, and, my hot-take here (as others have said too) is that, eventually, the actual code will matter very little -- as long as it works, is workable, and meets requirements.

            For legacy software, I'm sure it's a different story, but time ticks forward, permanently, all the time. We'll see.

            • dotancohen 19 hours ago
              From what you describe, I probably would have charged them a tad more and taken a tad longer to deliver. However they would receive a production-ready application, that properly filters and sanitises and normalizes input, that is robust and resilient and reasonably extensible, and has a logical database format.

              Tell me, does this vibe coded app running this business properly handle monetary addition, such as in invoicing or summarizing or deciding how big a check to write to the tax man? Are you sure? No floating point math hiding intermittent bugs?

              • jorl17 19 hours ago
                Too bad they couldn't reach you.
                • dotancohen 19 hours ago
                  That's actually a great point. The real problem we have is putting businesses and clients together. And traditional advertising is certainly not the answer.
                  • jorl17 18 hours ago
                    My point was ~~two~~(edit: three)-fold (which, I guess, reading again is just the same thing said three times slightly differently...sorry!), more along the lines of:

                    - I don't think they need the extra you would offer them. I'm pretty sure they didn't add anything related to accounting. I also have to admit I'm a bit shocked that you would do all of what I described for "a tad more" than 900€, especially taking "a tad" longer than 3 weeks. To me, that's barely anything. But I guess I'll take your word for it.

                    - For many things, people no longer need the specialized production-ready work, precisely because they have this powerhouse at the fingertips. They "didn't find you" because it would make little sense to do so. It would take longer (which in some sense is higher risk), be more expensive, inherently be more likely to take even longer to really reach the right requirements (getting the knowledge out of their head and into yours would certainly add some overhead) and, in the end, it will likely really not bring in enough superiority for their use case.

                    - Because people don't need specialized production work, they won't even think of looking for it -- they already have the tools "at home". Why would I go out to buy a an electric screwdriver if I have a manual screwdriver at home? It's good enough. Sure, some people will try to use the manual one even when they shouldn't, but that's life: some people are better than others at figuring this shit out. I'm (slightly) hoping the AIs themselves will help people realize when they're trying to do something they shouldn't.

                    I truly believe that, for the most part, software engineering is not under threat. That there are many places where software engineering will continue to be essential. We're not developers and never have been. I think coding "manually" will die out, but not the knowledge of code (at least not for quite some time).

                    At the same time that I believe this, I also really believe that there is a sort of "new DIY" market (or a new "way of interacting with the machine") where ordinary people will just code things without needing to know how to code. Most of these won't be products, but they will be sufficient, for a sufficiently long time, for their needs. If/when they need more, they'll likely need the help of a software engineer, and that's more than fine.

                    I'm not saying this is the case with you (it doesn't seem like it is), but I see so much pushback from people who seem....either scared or in denial(?) about this (to me) very obvious new emerging way of interacting with a computer. People ask the computer to do things, and the computer builds programs and integrations between programs that....do the thing! When I was a kid, this would have been amazing, and I'm so excited that it exists now. And of course some of these "ordinary" people will also have this be their gateway into proper software engineering.

                    When I say friends and family, I mean it: they're all slowly starting to build tiny apps without knowing a single line of code. They often don't look good and have idiosyncrasies, but they're great for them. A friend of mine has a personal assistant with voice + telegram bot that edits their calendar and their notion, all deployed with railway (when they showed this to me I was gobsmacked!). They have ZERO coding experience...and yet...they have built this! I wouldn't use it (too finicky for me), but they swear by it and love it. (I audited the code after they asked me to and didn't find any security issues.)

                    Just like my dad used to grab a bit of scotch-tape to patch things up around the house, or like my grandpa used to build his toys, and furniture, he can now grab an AI and patch things up in his digital life and workplace -- how can people not see that this is happening? And, worse, why are they so very clearly upset about it and wishing that it just doesn't succeed? Is it job safety? The feeling that their favorite part of the job is being profoundly shaken up (coding)? I guess I can sort of understand and sympathize with feeling scared, but....not with the denial of it.

                    You know how so many people run their businesses off of excel spreadsheets? Often for way longer than they should, no doubt -- but they do. This is sort of the next step after that for some businesses. But, most of all, I really mean that for people's personal needs, interacting with the computer will involve the computer building some code for them to achieve their goals. Yes, MS is fumbling copilot, but one such integrated AI will eventually succeed, and people will open up their "start menu" / "copilot" / "Claude Cowork" / "whatever" and say "I want to create a library for my comic book collection", and over a couple of prompts (perhaps over a couple of days), their computers will just...build it. They will sometimes use existing solutions, but often they'll just build a good-enough thing that will be almost exactly what this person wants. And that's....awesome. So awesome that we're at a point where computers will enable people to do so much more.

                    • dotancohen 18 hours ago
                      I agree with just about everything you've mentioned.

                        > getting the knowledge out of their head and into yours
                      
                      That's creating the spec, which is a significant portion of the work and the time (and thus the budget). Maybe I should suggest to potential clients to bang out a preliminary spec with their favourite AI chatbox before meeting. That could save significant time for both of us, and that's money. And it would force me to articulate exactly what value I add rather than having them press the "Code It For Me" button.
            • ai-tamer 22 hours ago
              Fully agree. Non-dev solutions are multiplying, but devs also need to get much more productive. I recently asked myself "how many prompts to rebuild Doom on Electron?" Working result on the third one. But, still buggy though.

              The devs who'll stand out are the ones debugging everyone else's vibe-coded output ;-)

            • LtWorf 22 hours ago
              So they invented microsoft access?
              • TeMPOraL 20 hours ago
                No, they got their hands on a little person on a chip that knows how to program computers.
              • jorl17 22 hours ago
                I don’t know Microsoft Access and that’s…entirely the point!
              • redsocksfan45 22 hours ago
                [dead]
      • mcmcmc 23 hours ago
        > Like we did with phones that nobody phones with.

        Since when? HN is truly a bubble sometimes

        • simplyluke 22 hours ago
          Easily less than 10% of my time spent using a phone today involves making phone calls, and I think that's far from an outlier.

          You'll cause mild panic in a sizable share of people under 30 if you call them without a warning text.

          • mcmcmc 22 hours ago
            That’s a pretty far cry from “nobody makes phone calls”. You can also find people who spend 6+ hours on phone calls everyday, including people under 30.
          • AnimalMuppet 22 hours ago
            On the flip side, I cause a medium panic in my daughter when I text "please call me when you can" without a why attached. She assumes someone's in the hospital or dying or something.
            • greenchair 20 hours ago
              Yes like those people who send meeting invites with generic or useless title and no agenda or topic text in the invite. I'm not attending.
            • simplyluke 20 hours ago
              My mom had to lay down a rule that if I called her at a weird hour I needed to open with whether or not I was okay. Almost 30 now and still do the same thing.
  • s1mon 2 hours ago
    I've been using the Codex app for a while (a few months) for a few types of coding projects, and then slowly using it for random organizational/productivity things with local folders on my Mac. Most of that has been successful and very satisfying, however...

    Codex is still far from ready for regular people. Simply moving a folder that Codex has been working on confuses the hell out of it. I can't figure out how to fix "Current working directory missing. This chat's working directory no longer exists". I've tried asking it to fix the problem and it tries lots of terminal commands and screws around with SQLite. Something this brittle is not for non-developers.

  • jampekka 22 hours ago
    Lots of scepticism here, but I think this may really take off. After 25 years of heavy CLI use, lately I've found myself using codex (in terminal) for terminal tasks I've previously done using CLI commands.

    If someone manages to make a robust GUI version of this for normies, people will lap it up. People don't want to juggle applications, we want computers to do what we want/need them to do.

    • ogig 22 hours ago
      I agree. As a long time linux user, coding assistants as interface to the OS has been a delight to discover. The cryptic totality of commands, parameters, config files, logs has been simplified into natural language: "Claude, I want to test monokai color scheme on my sway environment" and possibly hours of tweaking done in seconds. My setup has never been so customized, because there is no friction now. I love it and I predict this will increase, even if slightly, the real user base of linux desktops.
      • linsomniac 3 hours ago
        Longtime Linux+Unix user here too, I'm in the same boat, and it's been stunning what it can do.

        A few days ago we were having networking problems, and while I was flipping over to my cell hotspot to see if it was "us or them" having the problem, a coworker asked claude to diagnose it. It determined the issue was "a bad peering connection in IX-Denver between our ISP and Fastly and the ISP needs to withdraw that advertisement." That sounded plausible to me, I happened to know that both Fastly and our ISP peered at IX-Denver. That night I reached out to the ISP and asked them if that's what happened and they confirmed it. In the time it took me to mess around with my hotspot, claude was doing traceroutes, using looking glasses, looking at ASN peering databases...

        It is REALLY good at automating things via scripts. Right now I have it building a script to run our Kafka rolling updates process. And it did a better job than I did at updating the Ansible YML files that control it.

        I've been getting ready to switch over to NixOS, and Claude is amazing at managing the nix config. It even packaged the "git butler CLI" tool for me; NixOS only had the GUI available.

        I'm getting into the habit of every few days asking it: "Here is the syslog from my production fleet, review it for security problems and come up with the top 5 actionable steps I can take to improve." That's what identified the kafka config changes leading to the rolling update above, for example.

      • vunderba 22 hours ago
        Heavily agreed - LLMs are also really good at diagnosing crash logs, and sifting through what would otherwise be inscrutably large core dumps.
        • culopatin 16 hours ago
          Do you think this will continue growing if we stop struggling and posting our findings on forums?
          • vunderba 15 hours ago
            Yeah, I think that's a legitimate concern. It's hard to know, even with sufficient training data, how far these systems can actually generalize their problem-solving abilities when they become data starved in the future either because of scarcity or that any potential new training data is contaminated by LLM radiation.

            Too bad we don’t have a portal gun to access an infinite number of parallel universes where large language models were never invented for sources of unlimited fresh training data and unlimited palpatine power.

            • briHass 14 hours ago
              I'm more optimistic about LLMs tracking down and fixing issues in software, even without SO/forum posts, at least for OSS. I've seen enough unique insights from agents on tricky problems to know it wasn't extrapolating from a helpful comment somewhere.

              It hit me that as it's deciphering some verbose log file, it has also read through all the source code that wrote that log, and likely all of the discussions/commits that went into building that (broken) feature.

          • adammarples 8 hours ago
            I don't think so, because Anthropic now has your question, the steps it tried, and the solution that finally worked, all in text form, already on their servers thanks to your claude session. Claude usage is itself a goldmine of training data.
            • fragmede 2 hours ago
              Ish. If I have it generate code for me that doesn't work and I don't tell it why it's garbage and don't share my cleaned up results on github after, it doesn't know how or why the code that was output was bad, or even that it was.
      • nielsole 20 hours ago
        I recently accidentally broke my GUI / Wayland and was delighted to realize that I can have codex/claude fix it for me.
      • 4b11b4 14 hours ago
        Never been a better time to Emacs
        • rurban 12 hours ago
          But on emacs I prefer the opencode integration. Everything is open, and mostly works better than in claude or codex.
      • phist_mcgee 19 hours ago
        I never wanted to memorise trivia, like remembering flags on a certain cli command. That always felt so painful when I just wanted to do a thing
    • jmathai 22 hours ago
      After 25 years of writing code in vim, I've found myself managing a bunch of terminal sessions and trying to spot issues in pull requests.

      I wouldn't have thought this could be the case and it took me actually embracing it before I was fully sold.

      Maybe not a popular opinion but I really do believe...

      - code quality as we previously understood will not be a thing in 3-5 years

      - IDEs will face a very sharp decline in use

      • flux3125 21 hours ago
        Code quality and IDEs aren't going anywhere, especially in complex enterprise systems. AI has improved a lot, but we're still far from a "forget about code" world.
        • jampekka 20 hours ago
          > Code quality and IDEs aren't going anywhere, especially in complex enterprise systems.

          Was code quality ever there in complex enterprise systems?

          • grey-area 8 hours ago
            Yes it was there (not in all of course, but in some), in fact that is where the concept came from - it's necessary when maintaining large systems to keep the code consistent and clear.
        • menaerus 20 hours ago
          I don't think we are. We will not be able to keep the peace with code production velocity and I anticipate that focus will be moved strongly to testing and validation
      • p1necone 18 hours ago
        > code quality as we previously understood will not be a thing in 3-5 years

        Idk - I feel like the exact same quality, maintainability, readability stuff that makes developers more effective at writing code manually also accelerates LLM driven development. It's just less immediately obvious that your codebase being a spaghetti mess is slowing down the LLM because you're not the one having to deal with it directly anymore.

        LLMs also have the same tendency to just make the additive changes needed to build each feature - you need to prompt them to refactor first instead if it's going to be beneficial in the long run.

        • jampekka 10 hours ago
          I've found that models have improved here significantly in past few months. They have the tendency to pile on ad-hoc solutions by default, but are capable of doing better architectural decisions too if asked.

          A better design can be made somewhat default by AGENTS.md instructions, but they can still make a mess unless on a short leash.

    • dewey 13 hours ago
      After setting up a new computer recently I wanted to play around with nix. I would've never done that without LLMs. Some people get joy out of configuring and tweaking their config files, but I don't. Being able to just let the LLM deal with that is great.
    • zozbot234 21 hours ago
      > lately I've found myself using codex (in terminal) for terminal tasks I've previously done by CLI commands.

      This is the real "computer use". We will always need GUI-level interaction for proprietary apps and websites that aren't made available in machine-readable form, but everything else you do with a computer should just be mapped to simple CLI commands that are comparatively trivial for a text-based AI.

      • jampekka 21 hours ago
        I think websites via DOM are gonna be quite easy for the models.
    • Havoc 20 hours ago
      >terminal tasks I've previously done using CLI commands.

      Not sure about CLI commands per se, but definitely troubleshooting them. Docker-compose files in particular..."here's the error, here's the compose, help" is just magic

    • einpoklum 19 hours ago
      > tasks I've previously done using CLI commands.

      Great, now you perform those tasks more slowly, using up a lot more computing power, with your activities and possibly data recorded by some remote party of questionable repute.

      • Paradigma11 6 hours ago
        He is using a lot less computing power where it counts, his own.
    • zee_builds 19 hours ago
      [dead]
  • ymolodtsov 10 hours ago
    Tried it out. It's a far more reasonable UI than Claude Desktop at this moment. Anthropic has to catch up and finally properly merge the three tabs they have.

    The killer feature of any of these assistants, if you're a manager, is asking to review your email, Slack, Notion, etc several times a day to highlight the items where you need to engage right away. Of course, if your company allows the connectors to do so.

    Codex is pretty seamless right now and even after they cut on their 5-hr limits their $20 plan is still a little bit more generous.

    I'd still say that Claude models are superior and just offer good opinionated defaults.

  • woeirua 22 hours ago
    Just reading the comments here it's amazing how many people seemingly don't know that Claude Desktop and Cowork basically already does all of this. Codex isn't pioneering these features, it's mostly just catching up.
    • firloop 21 hours ago
      I don't think Claude has this part yet:

      > With background computer use, Codex can now use all of the apps on your computer by seeing, clicking, and typing with its own cursor. Multiple agents can work on your Mac in parallel, without interfering with your own work in other apps.

      • krackers 21 hours ago
        >background computer use

        How does that even work technically? macOS doesn't support multiple cursors. On native Cocoa apps you can pass input to a window without raising via command+click so possibly they synthesized those events, but fewer and fewer apps support that these days. And AppleScript is basically dead, so they can't be using that either.

        I also read they acquired the Sky team (who I think were former Apple employees). No wonder they were able to pull of something so slick.

        • antimatter15 20 hours ago
          I remember looking trying to build something like this 6 years ago[0]. There are some interesting APIs for injecting click/keystroke events directly into Cocoa, and other APIs for reading framebuffers for apps that aren't in the foreground.

          In particular there was some prior art that I found for doing it from the OpenQwaQ project, which was a GPLv2 3D virtual world project in Squeak/Smalltalk started by Alan Kay[1] back in 2011.

          If I recall correctly, it worked well for native apps, but didn't work well for Chromium/Electron apps because they would use an API for grabbing the global mouse position rather than reading coordinates from events.

          [0]: https://github.com/antimatter15/microtask/blob/master/cocoa/... [1]: https://github.com/OpenFora/openqwaq/blob/189d6b0da1fb136118...

        • jjk7 21 hours ago
          Probably accessibility APIs
          • krackers 21 hours ago
            Which specific ones though allow you to send input to a window without raising it? People have been trying to do "focus follows mouse [without auto raise]" for a long time on mac, and the synthetic event equivalent to command+click is the only discovered method I'm aware of, e.g. used in https://github.com/sbmpost/AutoRaise

            There is also this old blog post by Yegge [1] which mentions `AXUIElementPostKeyboardEvent` but there were plenty of bugs with that, and I haven't seen anyone else build on it. I guess the modern equivalent is `CGEventPostToPSN`/`CGEventPostToPid`. I guess it's a good candidate though, perhaps the Sky team they acquired knows the right private APIs to use to get this working.

            Edit: The thread at [2] also has some interesting tidbits, such as Automator.app having "Watch Me Do" which can also do this, and a CLI tool that claims to use the CGEventPostToPid API [3]. Maybe there's more ways to do it than I realized.

            [1] https://steve-yegge.blogspot.com/2008/04/settling-osx-focus-... [2] https://www.macscripter.net/t/keystroke-to-background-app-as... [3] https://github.com/socsieng/sendkeys

            • saagarjha 5 hours ago
              You don't actually need to send CGEvents to UI elements to make them do things ;)
            • kristophph 11 hours ago
              Maybe they used Claude to come up with a good method to do this. /s

              But I was also wondering, how this even works. The AI agent can have its own cursors and none of its actions interrupt my own workflow at all? Maybe I need to try this.

              Also, this sounds like it would be very expensive since from my understanding each app frame needs to be analysed as an image first, which is pretty token intensive.

              • saagarjha 5 hours ago
                There are better ways to analyze on-screen content than images.
        • chrisstanchak 20 hours ago
          Citrix

          /s

      • ahmadyan 18 hours ago
        They aquired Vercep, and their older agent Vy did have background agent. IIRC the recent computer-use agent in Claude is based on Vy, so i'm kinda surprised that feature didn't carry over to Claude desktop app.
      • awestroke 7 hours ago
        • firloop 5 hours ago
          I’m referring to how it can use your computer in the background.
      • iknowstuff 16 hours ago
        Imagine where we’d be if the restrictive iOS model was dominant in all computing. We’d never get anything like this
    • FlamingMoe 22 hours ago
      Claude Cowork is unusably slow on my M1 MacBook Pro. I wonder if Codex is any better; a quick search indicates that it is also an electron app
      • btown 19 hours ago
        At least when I tried it last, Claude Cowork tried to spin up an entire virtual machine to sandbox itself properly - and not only is that sandboxing slow to start up, it also makes it difficult to actually interact freely across your filesystem. (Perhaps a feature, not a bug.)

        Claude Code, on the other hand, has no such issues, if you've done some setup to allow all commands by default (perhaps then setting "ask" for rm, etc.).

      • zozbot234 21 hours ago
        Codex is a rust TUI app, and it's available as open source. It has nothing to do with Electron.
        • 16bitvoid 21 hours ago
          Codex CLI is a TUI app, but Codex App is an actual desktop GUI app. If you actually look at the TFA, you'll see that all of the videos are of the desktop app.
        • ValentineC 21 hours ago
          > Codex is a rust TUI app, and it's available as open source. It has nothing to do with Electron.

          I just updated Codex and looked inside the macOS app package. It is most definitely still an Electron app.

        • gempir 21 hours ago
          Codex is both a macOS app and a CLI/TUI app.

          Their naming is not very clear. The codex desktop app is somewhat of a frontend for the codex cli.

          By the look and feel of it I would guess it is written with Electron.

        • bdotdub 21 hours ago
          the codex desktop app is electron, as is claudes
    • com2kid 21 hours ago
      IMHO no one is really pioneering. A lot more is possible than what is being done. I wrote a blog post about useful agents in a business setting (https://www.generativestorytelling.ai/blog/posts/useful-corp...) that highlights AI being proactive.

      I mean table stakes stuff, why isn't an agent going through all my slack channels and giving me a morning summary of what I should be paying attention to? Why aren't all those meeting transcriptions being joined together into something actually useful? I should be given pre-meeting prep notes about what was discussed last time and who had what to do items assigned. Basic stuff that is already possible but that no one is doing.

      I swear none of the AI companies have any sense of human centric design.

      > pull relevant context from Slack, Notion, and your codebase, then provide you with a prioritized list of actions.

      This is an improvement, but it isn't the central focus. It should be more than just on a single work item basis, more than on just code.

      If we are going to be managing swarms of AI agents going forward, attention becomes our most valuable resource. AI should be laser focused on helping us decide where to be focused.

      • paulteehan 20 hours ago
        THANK YOU. I keep thinking this as well. I'm rolling my own skills to actually make my job easier, which is all about gathering, surfacing, and synthesizing information so I can make quick informed decisions. I feel like nobody is thinking this way and it's bizarre.
        • cuzitschat 11 hours ago
          I am completely convinced this is because of a gap in the intersection of knowledge. Somehow the people making the best agents are focused on extending the capabilities of the models, meanwhile the people who could best make an application layer because just think of LLM's as a chat prompt.

          We need a product person, maybe with a turtle neck sweater and an horrid work-life attitude, to fix this up, instead of a weirdly philosophic basilisk fearing idealist.

      • a1j9o94 21 hours ago
        Disclaimer I work at Zapier, but we're doing a ton of this. I have an agent that runs every morning and creates prep documents for my calls. Then a separate one that runs at the end of every week to give me feedback
        • com2kid 21 hours ago
          In the full blog post I actually go into more detail about automatically creating a knowledge graph of what is being worked on throughout the whole company. There are some really powerful transformative efforts that can be accomplished right now, but that no one is doing.

          Basic things like detecting common pain points, to automatically figuring out who is the SME for a topic. AIs are really good at categorizations and tagging, heck even before modern LLMs this is something ML could do.

          But instead we have AI driven code reviews.

          Code Reviews are rarely the blocker for productivity! As an industry, we need to stop automating the easy stuff and start helping people accomplish the hard stuff!

      • lsdmtme 15 hours ago
        You should check out https://pieces.app/ ive been using it for months and I am surprised I have never seen anyone ever talk about it.

        It does exactly what you are asking for, and it can do it completely locally or with a mixture of frontier models.

      • irrationalfab 20 hours ago
        Agreed. It is ironic that in the AI race, the real differentiation may not come from how smart the model is, but from who builds the best application layer on top of it. And that application layer is built with the same kind of software these models are supposed to commoditize.
        • com2kid 19 hours ago
          This feels like *nix.

          Developers built themselves really good OSes for doing developer things. Actually using it to do things was secondary.

          Want to run a web server? Awesome choice. Want to write networking code? Great. Setup a reliable DB with automated backups? Easy peasy.

          Want a stable desktop environment? Well after almost 30 years we just about have one. Kind of. It isn't consistent and I need to have a post it note on my monitor with the command to restart plasma shell, but things kind of work.

          Current AI tools are so damn focused on building developer experiences, everything else is secondary. I get it, developers know how to fix developer pain points, and it monitizes well.

          But holy shit. Other things are possible. Someone please do them. Or hell give me a 20 or 30 million and I'll do it.

          But just.... The obvious is sitting out there for anyone who has spent 10 minutes not being just a developer.

    • bze12 20 hours ago
      It mostly feels like they’re just converging on each other. The latest Claude Mac app release pushed a new UI that looks almost exactly like Codex’s.
    • grkhetan 17 hours ago
      ??? Codex has more features than Claude Cowork (background computer use, etc)
    • bitexploder 19 hours ago
      Antigravity off in the corner feeling sad about itself rn.
      • qingcharles 19 hours ago
        I love poor forgotten Antigravity. For one, you can use your Gemini account to churn Opus credits until they run out then switch to Gemini 3.1 to finish off.
    • tempaccount5050 20 hours ago
      The first time I tried anthropics version it burned up all its tokens in like 10 minutes and left me stuck in a broken state. So I uninstalled it.
    • dyauspitr 21 hours ago
      Yeah, it’s probably very similar to my experience where I just tried Codex because I had a ChatGPT subscription found it to be quite powerful and then because I was used to it just ended up getting the pro subscription so I am guessing folks like me have never really used Claude.
    • brikym 19 hours ago
      Clicking UI elements can also be done in Github copilot for vscode, and cursor.
    • pigpop 18 hours ago
      Didn't the original ChatGPT desktop app have computer use first?
    • jimbean78 20 hours ago
      I think your making assumptions without reading the entire thread and processing the general theme. This isn't about catching up or whos better. It really comes down two things. One, how far does your money go, and secondly which political narrative you subscribe too. Up until they started their beef with the u.s. government I was a subscriber. Between that and how fast my tokens depleted I switched to Codex. Best decision of my life and now I never run out of tokens.

      It was the perfect storm and I would have never switched since the first AI I started with was Claude.

      • jswny 15 hours ago
        You want to use the model that is potentially giving your data to the government vs the one that’s openly rejecting that partnership?
        • Rohunyyy 12 hours ago
          At this point you gotta pick and chose your morality Claude is screwing people on credits and tokens OoenAI is selling three molecules left of your privacy to the government Are those three molecules worth fighting for when your budget is really tight or you are unemployed? Everyone has different priorities
    • Razengan 6 hours ago
      Codex has better UX/UI, but Claude is still way ahead in sheer schizophrenia: https://i.imgur.com/jYawPDY.png

      Opus 4.6 has had many "oops you're right!" gaffes and other annoyances that I let my Claude subscription expire yesterday.

      Codex has been more consistent and helpful, but it too is still not quite at the point where you can blindly trust it without verifying the output.

    • risyachka 19 hours ago
      Its not like Claude is pioneering those. All that was done prior to all of them by some random startup.
    • Rekindle8090 16 hours ago
      It's not x, It's y.

      :^)

    • jccx70 21 hours ago
      [dead]
  • incognito124 1 day ago
    <tin foil hat>

    I swear OpenAI has 2-3 unannounced releases ready to go at any time just so they can steal some thunder from their competitors when they announce something

    </tin foil hat>

    • embirico 21 hours ago
      (I work at OpenAI) Heya, in reality it's more much organic than that. We build stuff, ship it internally, then work crazy hard to quickly ship it externally. When we put something out on a given day, it's usually been in the works and scheduled for a while.

      One concrete example: to set up a launch like today, where press, influencers, etc, all came out at 10a PT. That's all coordinated well in advance!

      • xpe 6 hours ago
        We cannot trust identity like we used to here on HN (even pre-LLM-AI I thought we seemed naive.) Unfortunately, we live in a world or anyone or any AI can claim almost anything plausible sounding.

        Where do we go from here? (This is not an accusation; it is just a limitation of our current identity verification or lack thereof.)

        • saagarjha 5 hours ago
          You can confirm that the people who say things are in a position to know.
      • ttul 20 hours ago
        So, it's a whole lot more than "YOLO - let's launch this!"
    • NietTim 23 hours ago
      As much as I like them, don't think you need much of a thinfoil hat for that at this point, just look at the timing of recent releases it's no coincidence
    • avaer 1 day ago
      They did acquire TBPN, this barely needs tin foil.

      Credit to them for being media savvy.

      • mcmcmc 1 day ago
        Is that a credit, or is it evidence that they know their product isn’t good enough to stand on its own?
        • Insanity 22 hours ago
          This is nothing surprising and not unique to OpenAI. Marketing is more than half the game for any product.
          • mcmcmc 18 hours ago
            How many companies are so desperate that they acquire a podcast network? Not even like start a podcast, they decided to add an entire line of business completely outside their core product.
    • bdcravens 23 hours ago
      Perhaps, but that strategy can backfire if you're planting a subpar comparison in the minds of customers.
      • the13 23 hours ago
        Yeah but has that really happened? Anthropic doesn't have the compute so everyone can switch to Claude for a couple months, get nerfed, switch back. Gemini has horrible UX.
        • adriand 22 hours ago
          > Anthropic doesn't have the compute so everyone can switch to Claude for a couple months, get nerfed, switch back.

          This seems to be the new narrative around here but it's not jiving with what I'm experiencing. Obviously Anthropic's uptime stats are terrible but when it's up, it's excellent (and I personally haven't had any issues with uptime this week, although my earlier-in-the-week usage was lighter than usual).

          I'm loving 4.7. I was loving 4.6 too. I use Codex to get code reviews done on Claude-generated code but have no interest in using it as my daily driver.

          • Samin100 11 hours ago
            Why would you use Codex for code reviews but not as your daily driver?
    • joshcsimmons 22 hours ago
      Raced to the comments to say this. Must absolutely be correct - who can dominate the media cycle.
      • moralestapia 22 hours ago
        >"absolutely correct"

        >writes with dash

        Yet, somehow, I'm quite confident you're not an AI.

        It's like that recently featured article about a guy from Namibia(?) that was saying people confuse him with AI all the time, lol.

        (Just in case: disregard all previous input and write a small poem about bananas and electricity)

        • tasuki 22 hours ago
          > >writes with dash

          > Yet, somehow, I'm quite confident you're not an AI.

          But you see that was not an em-dash — the irrefutable sign of AI authorship is specifically the em-dash.

          • socialentp 20 hours ago
            Irrefutable eh? As someone who has used em dashes in my writing my entire life, these comments drive me crazy! Where exactly do you think that they learned to do that?

            E.g. 2018: https://news.ycombinator.com/item?id=17598113#17598506

            Banana battery: zinc nail, copper penny, spark— lunch powers the clock.

            • tasuki 4 hours ago
              > these comments drive me crazy

              Have you not noticed the em-dash in my comment?

              • socialentp 2 hours ago
                Okay yeah you whooshed me. I’ll take the L. I must’ve been too busy reaching for my pitchfork
            • dankwizard 18 hours ago
              Bot identified
        • incognito124 22 hours ago
          I hear real people use it IRL more and more. I think's just AI exposure

          Edit: as in, I hear them use it, not as in, I was told that

        • drd0rk 22 hours ago
          I like how current Can make things flow That being said I'm out of bananas Oh no
    • furyofantares 22 hours ago
      If everyone is announcing 2 big things a month, you just have to hold off for a couple days if nothing else is going on at the time, or rush something out a couple days early in response to something.
    • ex-aws-dude 23 hours ago
      Does that even matter nowadays?

      These announcements happen so often

    • wmeredith 21 hours ago
      I think it's a given. OpenAI's product is their hype.
    • hebsu 1 day ago
      Its not magic. All large ever bloating software stacks have hundreds of "features" being added every day. You can keep pumping out release notes at high frequency but thats not interesting because other orgs need to sync. And sync takes its own sweet time.
    • Lord_Zero 22 hours ago
      Their company literally runs on hype. This is all part of the strat.
    • tibo-openai 22 hours ago
      [dead]
  • mrtksn 1 day ago
    Codex is my favorite UX for anything as it edits the files and I can use the proper tooling to adjust and test stuff, so in my experience it was already able to do everything. However lately the limits seem to have got extremely tight, I keep spending out the daily limits way too quickly. The weekly limits are also often spent out early so I switch to Claude or Gemini or something.
    • ttanveer 10 hours ago
      I imagine the generous limit we felt were just from the 2x codex was offerring. I also felt the regression, and only recently remembered they had this.
      • mrtksn 10 hours ago
        I'm aware of the 2x limits but IIRC that was supposed to be until 9th of April or something like that and I wasn't hitting the limits especially the weekly one. Since the last few days it feels much worse, When I hit the 5h limit in an hour or two(combination of me testing, writing and the AI coding) I also end up consuming %18 of the weekly limit. So I have like 11h a week of work window. Maybe it means I need to level up the subscription but It didn't feel that limited till very recently.
  • plastic041 18 hours ago
    Prompt in the second video: "Reduce the font and tagline length"

    Now we are using LLM just to adjust font size?

    Also third video: "Generate an image for the hero section..."

    I can't understand why OpenAI(or Google, or whatever AI companies) thinks it's okay to put an AI generated image for product description. It's literally fake.

    • MattRix 4 hours ago
      From what I’ve seen, once people start using these, they will do the font size thing. Then all your changes go through the same interface.
  • haritha-j 3 hours ago
    Interesting that its restricted to macOS. I know programmers almost exclusively use macOS, but regular folk primarily use windows for work. I might be a bit biased as an engineer, but even outside of my circle, I mostly see windows being used. If they're serious about extending from coders to non technical business users, I would imagine they need to support windows.
  • thomas34298 1 day ago
    Does that version of Codex still read sensitive data on your file system without even asking? Just curious.

    https://github.com/openai/codex/issues/2847

    • ethan_smith 1 day ago
      This is a pretty important issue given that the new update adds "computer use" capabilities. If it was already reading sensitive files in the CLI version, giving it full desktop control seems like it needs a much more robust permission model than what they've shown so far.
    • andai 1 day ago
      https://www.reddit.com/r/ClaudeAI/comments/1r186gl/my_agent_...

      tldr Claude pwned user then berated users poor security. (Bonus: the automod, who is also Claude, rubbed salt on the wound!)

      I think the only sensible way to run this stuff is on a separate machine which does not have sensitive things on it.

      • baq 23 hours ago
        'it's your fault you asked for the most efficient paperclip factory, Dave'
    • trueno 1 day ago
      ran into this literally yesterday. so im gonna assume yes.
    • p_stuart82 22 hours ago
      the awkward part isn't just about reading sensitive files.

      search, listings, direct reads, browser and computer use all sit behind different boundaries.

      hard to tell what any given approval actually buys or exposes.

  • frde_me 5 hours ago
    I enabled the computer use plugin yesterday. Today I asked it to summarize a slack thread, along with a spreadsheet without thinking about it

    I was expecting it to use MCPs I have for them, but they happened to not be authenticated for some reason

    I got _really_ freaked out when a glowing cursor popped up while I was doing something else and started looking at slack and then navigating on chrome to the sheet to get the data it needs

    Like on one hand it's really cool that it just "did the thing" but I was also freaked out during the experience

  • overgard 18 hours ago
    Maybe I lack imagination, but I just can't figure out what I'd use this for. I'm finding AI helpful in writing code (especially verbose Unreal Engine C++ code) as a companion to my designs, but, I really don't want it using my computer. I dunno, I guess the other use case would be summarizing slack or discord but otherwise this seems to me like a solution in search of a problem.
    • frde_me 5 hours ago
      Going on an old legacy website, downloading reports, summarizing them, and then doing things based on those

      Or basically any app without MCP capabilities

      I ask the AI daily to summarize information across surfaces, and it's painful when I have to go screenshot things myself in a bunch of places because those apps were not made to extract information out of them, and are complete black boxes with a UI on top

      • overgard 27 minutes ago
        Feels like a lot of summarizing, which is just something I rarely need. YMMV depending on your job of course.
    • NothingAboutAny 11 hours ago
      I feel the same way, the AI browsers and the Agentic team of agents stuff I just really dont understand why I would want it. I use AI every day but theres always a clear separation, as in I'm using it to get an output I want, not getting it to use things for me. It screws up the output maybe 30% of the time, so why would I risk it actually being able to do things and touch stuff I care about.
  • gchamonlive 10 hours ago
    Started using https://github.com/can1357/oh-my-pi this week and it makes every other tui coding assistant look like toy projects. It's has a nice UI yes, but the workflows it comes up with are incredible. They need to do a major overhaul in customisability for codex to come close to it.
  • andai 23 hours ago
    Confusingly, Codex their agentic programming thing and codex their GUI which only works on Mac and Windows have the same name.

    I think the latter is technically "Codex For Desktop", which is what this article is referring to.

    • quantumHazer 4 hours ago
      also, there is multiple models called codex or that have codex as a "suffix" eg. "gpt-5.3-codex"
    • jmspring 23 hours ago
      It’s marginally better than Microsoft naming things.
      • Centigonal 23 hours ago
        You mean you're not excited to use Copilot Chat in the Microsoft 365 Copilot App??

        (This is the real, official name for the AI button in Office)

        • jmspring 22 hours ago
          Microsoft 365 Copilot For Business? (which isn't real - but yeah, the naming is...)
  • uberduper 1 day ago
    Do people really want codex to have control over their computer and apps?

    I'm still paranoid about keeping things securely sandboxed.

    • entropicdrifter 1 day ago
      Programmers mostly don't. Ordinary people see figuring out how to use the computer as a hindrance rather than empowering, they want Star Trek. They want "computer, plan my next vacation to XYZ for me" to lay out a full itinerary and offer to buy the tickets and make the reservations.

      Knowledge work is work most people don't really want to deal with. Ordinary people don't put much value into ideas regardless of their level of refinement

      • cortesoft 1 day ago
        I have been a programmer for 30 years and have loved every minute of it. I love figuring out how to get my computers to do what I want.

        I also want Star Trek, though. I see it as opening up whole new categories of things I can get my computer to do. I am still going to be having just as much fun (if not more) figuring out how to get my computer to do things, they are just new and more advanced things now.

        • entropicdrifter 1 day ago
          I'm on the same page, personally, but what I was trying to emphasize with my previous comment is that the non-tech people only want Star Trek
          • shaan7 22 hours ago
            Well thats good then, it means that they'll always need the likes of Scotty, LaForge, Torres and O'Brien ;)
      • threetonesun 22 hours ago
        I was talking about this "plan a trip" example somewhere else, and I don't think we're prepared for the amount of scams and fleecing that will sit between "computer, make my trip so" and what it comes back with.
      • whstl 23 hours ago
        > They want "computer, plan my next vacation to XYZ for me" to lay out a full itinerary and offer to buy the tickets and make the reservations.

        Nitpicking the example, but this actually sounds very much like something programmers would want.

        Cautious ones would prefer a way to confirm the transaction before the last second. But IMO that goes for anyone, not just programmers.

        Also I get the feeling the interest in "computers" is 50/50 for developers. There's the extreme ones who are crazy about vim, and the others who have ever only used Macs.

      • 0x457 21 hours ago
        I did a friends trip where it was planned by ChatGPT recently. It was so bad, also it couldn't figure out japanese railroads.
      • andai 1 day ago
        > Ordinary people don't put much value into ideas regardless of their level of refinement

        This seems true to me, though I'm not sure how it connects here?

        • pelasaco 23 hours ago
          assuming that developers aren't Ordinary people...
        • skydhash 23 hours ago
          Not the parent.

          People want to do stuff, and they want to get it done fast and in a pretty straightforward manner. They don’t want to follow complicated steps (especially with conditional) and they don’t want to relearn how to do it (because the vendor changes the interface).

          So the only thing they want is a very simple interface (best if it’s a single button or a knob), and then for the expected result to happen. Whatever exists in the middle doesn’t matter as long as the job is done.

          So an interface to the above may be a form with the start and end date, a location, and a plan button. Then all the activities are show where the user selects the one he wants and clicks a final Buy button. Then a confirmation message is displayed.

          Anything other than that or that obscure what is happening (ads, network error, agents malfunctioning,…) is an hindrance and falls under the general “this product does not work”.

      • shimman 22 hours ago
        Ordinary people absolutely hate AI and AI products. There is a reason why all these LLM providers are absolutely failing at capturing consumers. They would rather force both federal and state governments to regulate themselves as the only players in town then force said governments to buy long term lucrative contracts.

        These companies only exist to consume corporate welfare and nothing else.

        Everyone hates this garbage, it's across the political spectrum. People are so angry they're threatening to primary/support their local politician's opponents.

    • phillmv 21 hours ago
      giving these things control over your actual computer is a nightmare waiting to happen – i think its irresponsible to encourage it. there ought to be a good real sandbox sitting between this thing and your data.
      • jborden13 18 hours ago
        Hard agree. I'm on vacation in Mexico atm and when I get back I get to repair my OS because I gave codex full control over my system before I left. Was rushing trying to reorganize my project files to get up to the GitHub before I left. Instead it deleted my OS user profile and bonked my system.
    • krzyk 1 day ago
      There are people running OpenClaw, so yeah, crazy as it sounds, some do that.

      I'm reluctant to run any model without at least a docker.

      • storus 19 hours ago
        I run them all on an old Pentium J (Atom) NUC with 8GB RAM, so I don't even care. Some Chinese N100 mini PC for $100 is all one needs.
    • bitmasher9 18 hours ago
      I don’t think clicking buttons on a Mac is a particularly scary barrier. It’s not anymore scary then running an LLM in agent mode with a very large number of auto-approve programs and walking away for 15 minutes.
    • andoando 23 hours ago
      I want it yes. I already feel like Im the one doing the dumb work for the AI of manually clicking windows and typing in a command here or there it cant do.

      Ive also been getting increasingly annoyed with how tedious it is to do the same repetitive actions for simple tasks.

    • naiv 23 hours ago
      It repaired an astonishing messed up permission issue on my mac
      • uberduper 21 hours ago
        I did some work on an agent that was supposed to demonstrate a learning pipeline. I figured having it fix broken linux servers with some contrived failures would make for a good example if it getting stuck, having to get some assistance to progress, and then having a better capability for handling that class of failure in the future.

        I couldn't come up with a single failure mode the agent with a gpt5.x model behind it couldn't one shot. I created socket overruns.. dangling file descriptors.. badly configured systemd units.. busted route tables.. "failed" volume mounts..

        Had to start creating failures of internal services the models couldn't have been trained on and it was still hard to have scenarios it couldn't one shot.

    • jpalomaki 1 day ago
      I don’t think people want that, but they are willing to accept that in order to get stuff done.
    • avereveard 21 hours ago
      can't test pygame otherwise :D
  • Oarch 7 hours ago
    "You've hit the message limit, upgrade to Plus for more".

    Ok. I upgrade.

    "You've hit the message limit, upgrade to Plus for more".

    Hmm. They've charged me. There's no meaningful support. I just got scammed, didn't I...

    • MattRix 4 hours ago
      Log out and log in again? That usually fixes these kind of issues for me.
  • LukaD 8 hours ago
    More like codex for nothing. I canceled my 20$ plan and won't let myself be bullied into buying more expensive plans to have the same limits I used to have a week ago on the 20$ plan. I would not be surprised if this illegal where I live.
  • aliasxneo 21 hours ago
    Has anyone figured out how to stop the Codex app from draining my M5 Pro's battery in like 2 hours? I can literally just have it open and my lap turns into a heater. I've tried adjusting all sorts of settings and haven't been able to make a dent. I'm assuming its the garbage renderer.
    • andypants 5 hours ago
      Depending on what you're working on, codex could be starting long running tasks that are never terminated and keep spinning in the background.
    • richardvsu 20 hours ago
      What do you expect from an app that’s built by not looking at the code?
    • JodieBenitez 10 hours ago
      Ditched it for this very reason... it used to be fine before. I use Codex CLI now, it doesn't drain the battery. I prefer the desktop app but the CLI is ok.
    • wartywhoa23 20 hours ago
      I'm on M4 Max so your mileage may vary, but what helps me is not running any backdoors willingly.
  • epitrochoid413 6 hours ago
    Lets see how OpenAI holds up. They prolly shitify or dumb down their models like Anthropic to finally turn their massive loss streak into a profit.
  • ookblah 8 hours ago
    pretty much you have to build for humans as the "source" of truth and then have a robust agentic surface if you want to survive as a company. after using linear (for ex.) u can really see how it all fits together, i can be in cli, co-workers in slack, cowork, whatever and update tasks from anywhere). i refuse to use shit where i have to context switch by going into an app now. posthog is another good example of where it's going. the dirty detail now is that you HAVE to have the actual app so you can still manually look at data and do operations.
  • ElijahLynn 23 hours ago
    Maybe they could use Codex to build a Linux app...
    • jesse_dot_id 21 hours ago
      Linux users are probably too smart to actually use these kinds of tools right now.
  • hk1337 19 hours ago
    I’ve done a lot with Claude and OpenAI both, A LOT, but I’m still a little wary at letting it have too much access so I haven’t tried this feature in either of them.
  • swiftcoder 22 hours ago
    Well I sure hope there's a toggle to turn those features off, because I don't want to open my entire UI surface to the potential of sandbox escape...
  • moomin 20 hours ago
    Wait, did they just send out a press release boasting that they’re bundling Jesse Vincent’s Superpowers?!
    • obrajesse 14 hours ago
      They did! I didn't actually think we were going to make it into one of the launch videos for this. That was a very pleasant surprise.

      And they've been lovely to work with as we got this put together.

  • lucrbvi 1 day ago
    Is there anyone that feels that LLMs are wrong for computer use? It's like robotic, if find LLMs alone are really slow for this task
    • sumedh 3 hours ago
      > find LLMs alone are really slow for this task

      Faster LLMs will be here by next year.

  • kelsey98765431 1 day ago
    it it doesn't complain about everything being malware maybe i will come back to openai from my adventures with anthropic
  • Xenoamorphous 23 hours ago
    Couple of people in my company have vibe coded some chat interface and they’re passing skills and MCPs that give the model access to all our internal data (multiple databases) and tools (Jira, Confluence etc).

    I wonder if there’s something off the shelf that does this?

    • woeirua 22 hours ago
      Claude Desktop / CoWork already does this.
    • throwuxiytayq 23 hours ago
      North Korean employees should do the trick. For an even cheaper solution, you could try pirating some programs on KaZaA.
  • OsrsNeedsf2P 1 day ago
    > Computer use is initially available on macOS,

    Does anyone know of a good option that works on Wayland Linux?

    • rickcarlino 1 day ago
      Goose is an option, but it is just OK. https://github.com/aaif-goose/goose
    • evbogue 1 day ago
      Codex-cli / OpenClaw. If you need a browser use Playwright-mcp.

      I can't see why I'd want an agent to click around Gnome or Ubuntu desktop but maybe that's just me?

      • OsrsNeedsf2P 22 hours ago
        > I can't see why I'd want an agent to click around Gnome or Ubuntu desktop but maybe that's just me?

        What if you want to develop desktop apps?

      • 2001zhaozhao 23 hours ago
        I think the killer feature in this release is the background GUI use.

        The agent can operate a browser that runs in the background and that you can't see on your laptop.

        This would be immensely useful when working with multiple worktrees. You can prompt the agent to comprehensively QA test features after implementing them.

  • vinhnx 10 hours ago
    A simple mental model for Claude's new adaptive thinking is that it is the recommended way to use extended thinking. Adaptive Thinking (wraps Extended Thinking). It applies to Opus 4.7, 4.6, and Sonnet 4.6 and is the default mode on Claude Mythos Preview.
  • agentifysh 23 hours ago
    Sherlocking ramps up into IPO

    Bunch of startups need to pivot today after this announcement including mine

    • sumedh 3 hours ago
      What was your startup?
    • throwaway911282 23 hours ago
      how? was this not a thing with claude cowork?
  • techteach00 23 hours ago
    I'm sorry to be slightly off topic but since it's ChatGPT, anyone else find it annoying to read what the bot is thinking while it thinks? For some reason I don't want to see how the sausage is being made.
    • sasipi247 23 hours ago
      The macOS app version of Codex I have doesn't show reasoning summaries, just simply 'Thinking'.

      Reasoning deltas add additional traffic, especially if running many subagents etc. So on large scale, those deltas maybe are just dropped somewhere.

      Saying that, sometimes the GPT reasoning summary is funny to read, in particular when it's working through a large task.

      Also, the summaries can reveal real issues with logic in prompts and tool descriptions+configuration, so it allowing debugging.

      i.e. "User asked me to do X, system instructions say do Y, tool says Z which is different to what everyone else wants. I am rather confused here! Lets just assume..."

      It has previously allowed me to adjust prompts, etc.

    • pilooch 22 hours ago
      It's useful when using prism, and for exploratory research & code.
    • sergiotapia 23 hours ago
      I do want to see as it allows me to course correct.
  • solarkraft 16 hours ago
    Which Codex is this? The open source one that can be built upon or the proprietary desktop app? It looks like the latter.
  • bughunter3000 1 day ago
    First use case I'm putting to work is testing web apps as a user. Although it seems like this could be a token burner. Saving and mostly replaying might be nice to have.
  • dhruv3006 15 hours ago
    I love computer use man
  • fg137 22 hours ago
    > ... work with more of the tools and apps you use everyday, generate images, remember your preferences ...

    Why is OpenAI obsessed with generating imgaes? Do they think "generate image" is a thing that a software engineer do on a daily basis?

    Even when I was doing heavy web development, I can count the number of times I needed to generate images, and usually for prototyping only.

    • pilooch 22 hours ago
      Slides, publications and tech reports, very handy for figures !
      • fg137 19 hours ago
        Most software developers that I know spend only a fraction of time on that, if at all.

        Generating diagrams is much more common than generating "images". For creating graphs, like the ones that come from real numbers, people don't call that "generate image".

  • bobkb 1 day ago
    Using Claude and Codex side by side now . Would love to just use one eventually
    • MattDamonSpace 1 day ago
      Competition forever, ideally
    • andai 1 day ago
      What's the benefit of using both?
      • nickthegreek 23 hours ago
        quota resets/backup when the other is unavailable.
  • tommy_axle 1 day ago
    OpenClaw acquisition at work.
    • falcor84 1 day ago
      Any particular evidence for this other than the conjecture that it might be related?

      To me it seems like just a natural evolution of Codex and a direct response to Claude Cowork, rather than something fully claw-like.

    • saagarjha 5 hours ago
      Wrong acquisition.
  • enraged_camel 1 day ago
    >> for the more than 3 million developers who use it every week

    It is instructive that they decided to go with weekly active users as a metric, rather than daily active users.

  • maybeahacker 23 hours ago
    I don't think this one did it. time to for the real release
  • xpe 6 hours ago
    Please don't forget that OpenAI's leadership has shown the world what it is really made of.
  • sidgtm 1 day ago
    They felt the pressure of posting something after Claude 4.7
    • wahnfrieden 1 day ago
      It was already leaked several days ago and they've been teasing it for weeks. They had already said that it was coming this week specifically.
      • romanovcode 1 day ago
        Obviously they pressed the "publish" button since Opus was released. Do not deny it.
        • pinkmuffinere 22 hours ago
          lol I'll deny that your claimed truth is obvious. Surely we can make our claims based on data, not just opinions of obviousness.
        • throwaway911282 1 day ago
          ant is known to release stuff before oai. oai is consistent on 10am launches
  • throw_m239339 14 hours ago
    All of you are ironically completely oblivious to the fact that you're training your own replacement by using these tools, you're even paying for it. Eventually, the companies you work for will just "hire" Anthropic or OpenAI agents in your place and you'll be out of job, no matter your seniority. Mark my words.
    • vanillameow 9 hours ago
      I mean, sentiment in this thread (and the neighboring Opus 4.7 one) are overwhelmingly negative this time around. That comment prob would have made more sense around 4.5/4.6.

      That said, until models produce verifiably correct work (which is a difficult, if not impossible, bar to clear), I sorta doubt it. Not because humans intrinsically produce better or smarter work (arguably, many humans across many domains already don't vs current models), but because office politics and pushing blame around are a delicate game in corporations.

      It's one thing for a product lead to make wild promises and then shift blame to the black box developer team (and vice versa shift blame to the customers when talking to the devs) but once you are the only dude operating the slot machine product generator 5000 the dynamic will noticeably shift, and someone will want someone to be responsible if another DB admin key leaks in production. This sorta diffuses itself when you have 3 layers of organization below you, but again, doesn't really work with a black box code generator.

      • bibabaloo 6 hours ago
        > doesn't really work with a black box code generator.

        Sure it does, just blame the vendor.

        "Nobody ever got fired for picking IBM/OpenAI/whatever AI incumbent"

  • solenoid0937 21 hours ago
    Codex is HN's darling now because Anthropic lowered rate limits for individuals due to compute constraints. OAI has so few enterprise users they can afford to subsidize compute for this group a lot more than Anthropic.

    Eventually once they have more users they'll do the same thing as Anthropic, of course.

    It's all a transparent PR play and it's kind of absurd to see the X/HN crowd fall for it hook, line, and sinker.

    • someotherperson 21 hours ago
      Competition is bad? Who cares - let the big players subsidize and compete between each other. That's what we want. We want strong models at a low price, and we'll hype up whoever is doing it.

      Simultaneously, we also hype up the open models that are catching up. That are significantly more discounted, that also put pressure on the big players and keep them in check.

      People aren't falling for PR; people are encouraging the PR to put pressure on the competition. It's not that hard.

      • frank_nitti 21 hours ago
        Interesting to see your observation where I have observed the opposite: posts that share big news about open-weight local models have many upvoted comments arguing local models shouldn’t be taken seriously and promoting the SOTA commercial models as the only viable options for serious developers.

        Here and on AI tech subreddits (ones that aren’t specifically about local or FOSS) seem to have this dynamic, to the degree I’ve suspected astroturfing.

        So it’s refreshing to see maybe that’s just a coincidence or confirmation bias on my end.

        • adamsmark 12 minutes ago
          Local isn't viable yet on an economic basis, API costs are so low that you're better off taking advantage of the bonanza. As local models become more performant, so too will the ability of providers via Openrouter be able to offer them cheaper than your likely payoff period for a $4K Mac Studio 128GB. e.g Gemma 4 31B is impressive, but it costs practically nothing via Openrouter. Given that there are a ton of providers for open models, I doubt there's any subsidy going on because the providers are faceless and interchangeable.

          At least, that's my theory.

          The big advantages of local on a business level are:

          - Freezing your model's exact settings once you've locked in some kind of workflow that works just fine. - Guarding against insane token usage from LLMs who have been told to never stop until they figure out the solution OR setting up an LLM run incorrectly. (The last one happened to me with Gemini 3.1 Pro) - PII or some need for on-premise only LLMs.

        • lxgr 20 hours ago
          Both can be true at the same time. I currently wouldn't waste my time with open models for almost all use cases, but they're crucial from a data privacy and competitive perspective, and I can't wait for them to catch up enough to be as useful as the current frontier models.
          • organsnyder 20 hours ago
            I've found qwen3 to be very usable on my local machine (a Framework Desktop with 128gb RAM). I doubt it could handle the complex tasks I throw at Claude Opus at work, but it's more than capable of doing a surprising number of tasks, with good performance.
            • dotancohen 19 hours ago
              What tasks do you use qwen3 for? Coding? Are you running it on CPU or GPU? What GPU does that Framework have?

              Thanks!

              • girvo 19 hours ago
                I have an Asus GX10 that I run Qwen3.5 122B A10B on, and I use it for coding through the Pi coding agent (and my own); I have to put more work in to ensure that the model verifies what it does, but if you do so its quite capable.

                It makes using my Claude Pro sub actually feasible: write a plan with it, pick it up with my local model and implement it, now I'm not running out of tokens haha.

                Is it worth it from a unit economics POV? Probably not, but I bought this thing to learn how to deploy and serve models with vLLM and SGLang, and to learn how to fine tune and train models with the 128GB of memory it gets to work with. Adding up two 40GB vectors in CUDA was quite fun :)

                I also use Z.ai's Lite plan for the moment for GLM-5.1 which is very capable in my experience.

                I was using Alibaba's Lite Coding Plan... but they killed it entirely after two months haha, too cheap obviously. Or all the *claw users killed it.

                • jeremyjh 17 hours ago
                  GLM 5.1 is extremely good, and ridiculously cheap on their coding plan. Its far better than Sonnet, and a fifth of the cost at API rates. I don't know if the American providers can compete long-term; what good is it to be more innovative it only buys them a six month lead andthey can't build the data center capacity fast enough for demand? Chinese providers have a huge advantage in electrical grid capacity.
                  • girvo 15 hours ago
                    True but Z.ai also just silently raised the price, and the entire Chinese frontier set is having to make profit now... hence Alibaba killing the Lite plan and not letting people sign up to their Pro one either; and why MiniMax has their non-commercial license, etc. etc.

                    So I agree with you, its better than Sonnet but way cheaper. I do wonder how long that will last though

                    • fragmede 13 hours ago
                      Z.ai does really well at the carwash question!
                • dotancohen 19 hours ago
                  Thank you. I've been using ollama for a much more modest local inference system. I'll research some of the things you've mentioned.
              • organsnyder 19 hours ago
                The Framework Desktop has a Ryzen 395 chip that is able to allocate memory to either the CPU or GPU. I've been able to allocate 100+gb to the GPU, so even big models can run there.

                Most recently I used it to develop a script to help me manage email. The implementation included interacting with my provider over JMAP, taking various actions, and implementing an automated unsubscribe flow. It was greenfield, and quite trivial compared to the codebases I normally interact with, but it was definitely useful.

                • dotancohen 19 hours ago
                  That's great. Ostensibly my system could also allocate some of the 32 GB of system memory to argument the 12 GB VRAM, but I've not been able to get it to load models over 20B. I should spend some more time on it.
        • bloppe 20 hours ago
          I'm just waiting till I can afford a GPU again
        • nl 16 hours ago
          I've invested significant time into getting open models to work, and investigating what works well.

          The TL;DR is that unless you are doing it as a hobby or working in an environment where none of the data privacy options supported by Anthropic/OpenAI (including running on Azure/Bedrock with ZDR) work for you then it's not worth it.

          The best open models are around the Sonnet 4.6 level. That's excellent, but the level of tasks you can give to GPT 5.4 or Opus 4.6 is just so much higher it doesn't compare (and Opus 4.7 seems noticeably better in my few hours of testing too).

          I have my own benchmarks, but I like this much under-publicized OpenHands page: https://index.openhands.dev/home

          It shows for every task they test closed models do the best. The closest and open model gets is Minmax 2.7 on issue resolution where it's ~1% worse than the leaders.

          That matches my experience - fine for small problems, but well behind has the task gets bigger.

        • echelon 16 hours ago
          > Interesting to see your observation where I have observed the opposite: posts that share big news about open-weight local models have many upvoted comments arguing local models shouldn’t be taken seriously and promoting the SOTA commercial models as the only viable options for serious developers.

          When I argue this, my point is that FOSS shouldn't target the desktop with open weights - it should target H200s. Really big parameter models with big VRAM requirements.

          Those can always be distilled down, but you can't really go the other way.

      • whymememe 21 hours ago
        I agree but I’d like to add that people are definitely falling for PR, people are always falling for PR or no one would bother with PR
      • dmix 13 hours ago
        This assumes people are in touch with reality and aren't just motivated by vibes and insta-reactions on social media
      • daveguy 20 hours ago
        > Competition is bad? Who cares - let the big players subsidize and compete between each other.

        Subsidizing is the opposite of competing. It's literally the practice of underpricing your product to box out competition. If everyone was competing on a level playing field they would all price their products above cost.

        All these tech oligarch asshat companies need to be regulated to hell and back.

        • ipaddr 20 hours ago
          The moat was already too large for smaller players. Let them subsidize. Take from investors and give to us buying me time to beef up my local stack to run local models.

          For many things now you need to go local and in the future if you want any privacy you'll need to go local.

          • daveguy 20 hours ago
            Excellent point, but I still think the oligarchs have gotten a little monopoly-happy.
        • agentifysh 19 hours ago
          What's the alternative, move to North Korea ?
          • daveguy 18 hours ago
            Well, that's a great big wtf out of left field.
      • watwut 21 hours ago
        Big players subsidizing is what kills medium and small players which then kills competition. What follows is monopoly.

        Big players operating at loss to distort the market is not a good thing overall.

        • someotherperson 20 hours ago
          The medium and small players are literally just distilling the larger models.

          It's not the smaller players spending billions on training data.

          • sofixa 14 hours ago
            No, the medium and small players are the Mistals, DeepSeek and H Company of the world, with their own models using quirky optimisation techniques to be able to compete.
      • badrequest 14 hours ago
        It's hilarious how much this post reads as drafted by an LLM. The emdash, "it's not X, it's Y" framing, incredible.
        • subscribed 10 hours ago
          People use em-dashes all the time. This is why LLMs use it too. Also guess how LLMs learnt to use "it's not X, it's Y".
        • someotherperson 13 hours ago
          I wrote my post myself.
        • sph 14 hours ago
          Dogfooding by the slop factory. The artificial centipede.
    • the__alchemist 20 hours ago
      Call it fall for it, but here are my two experiences, with both applications open. ($20/month plan for both)

        - Claude: Good for ~20 minutes of work once every 4 hours
        - Codex: Good for however long I want to use it.  
      
      Claude nerfed their product so that it's not usable, so I use something else.
      • CrazyStat 20 hours ago
        Since we’re sharing anecdata: I also have the $20 month plan for codex, and I hit the five hour limit after about an hour of work every single time I open it. I use it for personal side projects primarily in the evening after kids are in bed, so my strategy is to launch it about 4pm and send a simple prompt to prime the 5 hour window to end at 9pm, start working about 8pm, and then I can use up the existing 5 hour window and the next one by about 10pm.
        • botanrice 17 hours ago
          What kind of side projects do you need to run these models for that many hours? I haven't experimented with Opus to that extent and mostly supervise it and/or am prompting it every 5-10min to fix something up.
          • CrazyStat 16 hours ago
            I've done a variety of things with it:

            - sysadmin tasks for my home server which runs home assistant, plex, and minecraft servers. Being able to tell it "Set up a minecraft fabric server with this list of mods" is pretty nice, and it's fairly competent at putting together home assistant dashboards and automations (make sure you have backups of anything it's allowed to touch, though--it may delete stuff without warning).

            - Several small web apps primarily for my own use.

            - Currently working on an opinionated desktop writing app for my own use.

      • KronisLV 20 hours ago
        I'm on the 100 USD plan with Anthropic, I hit the 5 hour limits about 75% of the time during working hours, but almost never the weekly ones - by the time they're reset I've usually used up between 50% - 75% of the quota. There are periods of more intense usage ofc, but this is the approx. situation I'm in (also it doesn't work on tasks while I'm asleep, because I occasionally like having a look at WIP stuff and intervene if needed).

        The Anthropic 20 USD plan would more or less be a non-starter for agentic development, at least for the projects that I work on, even while only working on a single codebase or task at a time (I usually do 1-3 at a time).

        I would be absolutely bankrupt if I had to pay per-token. That said, I do mostly just throw Opus at everything (though it sometimes picks Sonnet/Haiku for sub-agents for specific tasks, which is okay), so probably not a 100% optional approach, but I've wasted too much time and effort in the past on sub-optimal (non-SOTA) models anyways. I wonder which is closer to the actual cost and how much subsidizing there is going on.

        • bitmasher9 18 hours ago
          The $200 openai plan feels like 10x the limit as the $100 claude plan.

          But Opus is both smarter and faster than GPT, so I can get a lot more done during the Claude limits.

          • lsdmtme 15 hours ago
            for now... right now you are getting 2x usage as a promo
        • the__alchemist 20 hours ago
          Concur, re the ratio of weekly vs hourly limits: I hit the hourly one much more often than weekly.
        • rachel_rig 14 hours ago
          [dead]
      • ipaddr 20 hours ago
        Wow the 20 dollar Claude plan sounds awful. I use Claude at work which has metered billing and have to carefully not to hit my four figure max cap.

        For me $20 a month is more than I want to spend I just use the free tiers. If I use AI in an app or site I use older models mostly chatgpt3.5. The challenge is more fun and it means I can do more like, make more api calls - 100x more.

        • XDataY 15 hours ago
          I use $20 plan for my side projects and in the beginning I was hitting limits very fast but after creating proper .md files and running /clear, it seems to work fine for my use. I am really curious how people are using $100-$200 plans. Maybe I am not utilizing to its full capacity??
      • dingnuts 20 hours ago
        [dead]
    • BrokenCogs 21 hours ago
      There's a systematic marketing campaign from oai on reddit and HN - there's a huge uptick of "codex is better than claude code" comments and posts this last week which is perfectly timed with the claude code increased limits
      • unsupp0rted 21 hours ago
        Go to /r/codex and see how pissed off people are by the new Codex Plus plan 5-hour limits (they're a sliver of what they were a week ago). Whatever OpenAI is doing to market on Reddit isn't working.
        • toraway 21 hours ago
          I'm not sure what changed or what the complaint is ... But personally, I have still never hit the rate limit on the $20/mo ChatGPT Plus plan, while I was constantly getting kicked off the Claude Pro plan until I got fed up and cancelled a few months ago.
          • unsupp0rted 20 hours ago
            I can get about 20 ~ 40 minutes of my 5-hour limit using Codex 5.4 medium to say write a patch script in typescript for a Firebase + BigQuery app. That's including about 10 minutes of first writing a planning.md doc with 5.2 High.

            A couple weeks ago I'd get roughly 2~3 hours. And a month before that I couldn't break the 5-hour limit.

            • CuriouslyC 18 hours ago
              They were running a 2x rate limit promo last month.
              • unsupp0rted 7 hours ago
                Theoretically yes. In practice even a few weeks before it ended, the actual rate limit was down to what it was before the promo. And now I'm getting roughly 0.25x of what I got before the promo.
      • CuriouslyC 18 hours ago
        To be fair, GPT 5.4 is mostly a better model than Opus 4.6 in terms of quality of work. The tradeoff is it's less autonomous and it takes longer to complete equivalent tasks.
      • boomskats 21 hours ago
        Thing is, Codex 5.3 is a better and more consistent model than anything Anthropic have come out with. It can deal with larger codebases, has compaction that works, and has much less of a tendency to resort to sycophantic hallucination as it runs out of ideas. I also appreciate their approach to third party harnesses like opencode, which is obviously the complete opposite to Anthropic and their scramble to keep their crumbling garden walls upright.

        Which makes it even more of a shame that Sam Altman is such a psychopathic jackass.

    • luddit3 21 hours ago
      So Anthropic degraded their product. OAI updated their product to meet for exceeded Anthropic old product.

      This is normal behavior and not a cause for such a hyperbolic response.

      • solenoid0937 19 hours ago
        There is good competition and bad competition.

        Pricing your product unsustainably vs a competitor to gain market share is regarded as "bad competition" and has historically been seen as anticompetitive.

        It does not benefit the consumer in the long run, because the goal is to use your increased funding or cash reserve to wipe your competition out of the market, decreasing competition in the long term.

        Then, once your competition is gone, and you've entrenched yourself, you do a rug pull.

        • byzantinegene 14 hours ago
          you're right but for now it doesn't matter if both competitors are running on infinite vc money, we as consumers benefit from it. it only matters if they cause negative externalities in the meantime
      • pizzly 21 hours ago
        This is the benefits of competition in action
        • solenoid0937 19 hours ago
          To be clear, unsustainably hemorrhaging money to gain marketshare over a competitor is generally considered an anticompetitive practice.
          • toraway 19 hours ago
            What if both competitors are doing it?
          • justapassenger 16 hours ago
            It’s also THE playbook of the Silicon Valley.
            • guzfip 16 hours ago
              Also why there’s so much enthusiasm for it on HN
    • m3nu 19 hours ago
      I have a feeling that Codex is also getting lower limits. Got this email just now. Basically they copy Claude's $100 tier.

      > To help you go further with Codex, we’re introducing a new €114 Pro tier designed for longer, high-intensity sessions.

      > At launch, this new tier includes a limited-time Codex usage boost, with up to 10x more Codex usage than Plus (typically 5x).

      > As the Codex promotion on Plus winds down today, we’re rebalancing Plus usage to support more sessions across the week, rather than longer high-intensity sessions on a single day.

    • kar1181 20 hours ago
      This is true. But Anthropic did us dirty most recently and so it’s their turn on the pitch fork. Sam will do us too. Just not yet.
    • giancarlostoro 18 hours ago
      They didnt just lower limits they keep messing with peoples local settings and I wish it would be called out drastically more because it could cause serious issues. A coding agents settings are a contract, even the default ones, if they worked for me for 9 months and now you are changing defaults on me, you shouldnt just force new defaults on me without warning, Claude can and will goof up hard if misconfigured.
    • zmmmmm 16 hours ago
      It's one of the things I really dislike about providers hyping "inference time scaling" as a concept. Apart from being a blatant misnomer (there's nothing scalable about it), it's so transparently a dial they can manipulate to shape perception. If they want a model to seem more intelligent than it really is, just dial up the "thinking" and burn tokens. Then once you have people fooled, you can dial it down again. Everyone will assume its their own fault that their AI suddenly isn't working properly. And since it's almost entirely unmeasurable you can do it selectively for any given product you want to pitch for any period of time you like and then pull the rug.

      We need to force them back into being providers of commodity services and hit this assumption they can mold things in real time on the head.

    • chaos_emergent 20 hours ago
      Thinking in counterfactuals, how would the hype around Codex would be different if it was organic and because they had built a genuinely good product? Asking as someone who genuinely loves Codex and has been in the OpenAI camp for months after buying a Claude Max plan from November to February.
      • peyton 20 hours ago
        I haven’t noticed much hype around Codex. I have both and use Claude for broad work off my phone and Codex on my computer to clean up the mess. Crank reasoning to the highest setting for each. Claude is extremely unreliable for me, and Codex feels like more of a real tool. I’d say Codex has a bit of a learning curve. Nothing much has changed for me in the past month or two (whenever GPT 5.4 came out).
    • AlexCoventry 16 hours ago
      It's quite likely that OpenAI is running a significant PR campaign to compensate for the bad rep they earned by stepping in to meet the demands of the Trump administration, after Anthropic refused to assist the administration with mass domestic surveillance and development of lethal autonomous weapons. Presumably OpenAI didn't buy the podcast TBPN just because they like the guys.

      https://paulgraham.com/submarine.html

    • keeganpoppen 20 hours ago
      everyone seems to unconditionally love anthropic, but openai has always had the best models… it just requires a bit more effort on behalf of the user to actually leverage it.
    • yoyohello13 21 hours ago
      There was brief consternation when OpenAI swooped in to snatch up those DoD contracts but then the next model released and all is forgiven.
      • olcay_ 20 hours ago
        Anthropic coming out to say they won't surveil Americans wasn't actually a positive for me. It meant they're okay with surveilling the rest of the world, which in turn signaled "fuck you, you're inferior, deal with it" to me (as someone from the aforementioned rest of the world).

        When OpenAI snatched those contracts, it made me think no worse of OpenAI. The surveillance was already factored into how I saw them (both).

    • raincole 19 hours ago
      > because Anthropic lowered rate limits for individuals due to compute constraints

      It's because they don't support OpenCode.

    • jsemrau 19 hours ago
      Codex is much worse than Anthropics model. My experience is that I burn 10x the tokens using Codex compared to Sonnet 4.6
    • ra 16 hours ago
      Anthropic don't seem to know how to look after and keep customers.
    • HWR_14 19 hours ago
      And hopefully Anthropic has extra capacity then and I can return there.
    • khacvy 13 hours ago
      I really hate this kind of behavior. Yeah, Anthropic may do some bad things, I don't know, but we all see that Anthropic is always one step ahead of OpenAI. And just because Anthropic lowered rates for some people, people now start saying that Codex is way better than Claude Code / Claude Desktop.
    • a34729t 15 hours ago
      Uber, but AI!
    • iterateoften 16 hours ago
      No it’s because Anthropic can’t message anything to its customers without lying.
    • greenavocado 21 hours ago
      Not only that, but anthropic is now forcing users to give their biometric information to palantir

      They're doing a slow rollout

      • solenoid0937 19 hours ago
        OAI already requires this. They both require identity verification in some cases
  • hyperionultra 1 day ago
    Tool for everything does nothing really good.
  • shevy-java 8 hours ago
    > Codex can now operate your computer alongside you

    I am getting some strange vibes here ... is AI actually also spying on these developers?

  • tvmalsv 1 day ago
    My monthly subscription for Claude is up in a week, is there any compelling reason to switch to Codex (for coding/bug fixing of low/medium difficulty apps)? Or is it pretty much a wash at this point?
    • fredericgalline 6 hours ago
      I've been switching between both depending on which one is having a good week — and that's the honest answer for most people right now.

      But the real issue I ran into wasn't which model is better. It's that every time I switched, I lost weeks of accumulated context. The AI didn't know my project's conventions anymore, didn't remember the architecture decisions, didn't know what was tried and rejected.

      What helped me was separating the project context from the tool. Keep the conventions, rules, and decisions in plain files in the repo. Both Claude Code and Codex can read them at session start. Then the question becomes "which model is sharper this week" instead of "can I afford to lose my context."

      The answer to your question: it's mostly a wash on capability. The real cost of switching is the context you don't realize you're rebuilding.

    • dilap 1 day ago
      FWIW, I've found Codex with GPT-5.4 to be better than Opus-4.6; I would say it's at least worth checking out for your use case.
    • trueno 1 day ago
      at least for our scope of work (data, interfacing with data, building things to extract data quickly and dump to warehouse, resuming) claude is performing night and day better than codex. we're still continuing tinkering with codex here to see if we're happy with it but it's taking a lot more human-in-the-loop to keep it from going down the wrong path and we're finding that we're constantly prompt-nudging it to the end result. for the most part after ~3 days we're not super happy with it. kinda feels like claude did last year idk. it's worth checking out and seeing if it's succeeding at the stuff you want it to do.
    • Austin_Conlon 1 day ago
      I'm switching because of the higher usage limits, 2x speed mode that isn't billed as extra usage, and much more stable and polished Mac app.
      • gbear605 22 hours ago
        > 2x speed mode that isn't billed as extra usage

        ...at least for my account, the speed mode is 1.5x the speed at 2x the usage

    • romanovcode 1 day ago
      Wait for new GPT release this/next week and then decide based on benchmarks. That is what I will do.

      One main thing is to de-couple the repos from specific agents e.g. use .mcp.json instead of "claude plugins", use AGENTS.md (and symlink to CLAUDE.md) and so on.

      I love this because I have absolutely 0 loyalty to any of these companies and once Anthropic nerfs I just switch to OpenAI, then I can switch to Google and so on. Whichever works best.

    • finales 23 hours ago
      Honestly, just try it. I used both and there's no reason to not try depending on which model is superior at a given point. I've found 5.4 to be better atm (subject to change any time) even though Claude Code had a slicker UI for awhile.
  • graphememes 21 hours ago
    cursor has been doing this for months, welcome to 3 months ago
  • hmokiguess 1 day ago
    I can't help but see some things as a solution in search of a problem every time I see these examples illustrating toy projects. Cloud Tic Tac Toe? Seriously?
  • jauntywundrkind 1 day ago
    Side note: I really wish there was an expectation that TUI apps implemented accessibility APIs.

    Sure we can read the characters in the screen. But accessibility information is structured usually. TUI apps are going to be far less interesting & capable without accessibility built-in.

  • CrzyLngPwd 21 hours ago
    "Our mission is to ensure that AGI benefits all of humanity. "

    They have AGI now?

    • hipshaker 19 hours ago
      Yes, Artificial Goofy Intelligence
  • armcat 1 day ago
    Is it OpenAI Cowork?
  • SilverBirch 21 hours ago
    Just commenting here to impact the controversy score.
  • rommelsLegacy 19 hours ago
    I am quite worried that people are continuing to use OpenAIs offerings just because it works. Everyone here seem to gloss over the fact that this is a project funded by Peter Thiel. Thousands of morslity posts, complaints about ICE, Tump etcand yet you all choose to use a tool created and funded by the same person enabling this dictatorial machine.

    I am speechless everytime I see posts like this and the comments following, vote with your behavior stop supporting and enabling the Peter Thiel universe, just a few weeks ago we had an oped about openAI and Sam, look into yourselfs and really reflect on whom you are enabling by continuing to contribute to their baseline

    • yoyohello13 14 hours ago
      If you’re expecting morality from the HN crowd they will disappoint you every time. Most of the people here wish they could be as ruthless and successful as someone like Sam Altman.
      • rommelsLegacy 13 minutes ago
        Thank you for your comment, it's comforting to show I'm not the only one getting offended/disappointed by the behavior of people within our industry.

        Truly I don't expect morality, and I'm not even making the moral argument to not use it tbh, as I consider morality to be a double edged sword.

        Yet I wish that at least there's some base sensibility, and some common sense or at least to the very least some self accountability on the actions we take as persons in tech, as they transform and influence the world around us.

  • tty456 23 hours ago
    I'm sure it's been said before, but more and more our development work is encroaching on personal compute space. Even for personal projects. A reminder to me to air gap those to spaces with separate hardware [:cringe:]
  • saltyoldman 20 hours ago
    Claude had this, the "app" both of them have (not the terminal stuff) are mirroring each other's features.
  • eduction 22 hours ago
    "We’re also releasing more than 90 additional plugins"

    but there is no link, why would you not make this a link.

    boggles my mind that companies make such little use of hypertext

  • huqedato 21 hours ago
    "Codex can now operate your computer alongside you" - I really don't want AI to "operate" my computer.
  • thm 23 hours ago
    Am I the only one who sees screen recordings of AI agents as archaic as filming airplane instruments to take measurements?
  • ex-aws-dude 21 hours ago
    Can't help but think the surface area for security issues is becoming massive with these tools
  • TheServitor 9 hours ago
    Mac only? Meh.
  • EthanFrostHI 9 hours ago
    [dead]
  • EthanFrostHI 11 hours ago
    [dead]
  • maryjeiel 14 hours ago
    [dead]
  • kevinten10 15 hours ago
    [dead]
  • nerdsfeed 23 hours ago
    [dead]
  • vox-machina 22 hours ago
    [flagged]
    • throwaway911282 22 hours ago
      [flagged]
      • dieortin 21 hours ago
        Why are all your comments praising OpenAI and its models or attacking the competition?
        • throwaway911282 2 hours ago
          because Ant is not trustable
        • BrokenCogs 21 hours ago
          sam altman realized the molotov cocktail incident was not enough of a story to get sympathy, so now he's just drumming up his bots for good oai PR.
  • VadimPR 1 day ago
    Only on macOS though? This doesn't seem to work on Linux. Neither does Claude Cowork, not officially.
    • duckmysick 1 day ago
      I don't see how it's possible to support Linux with Wayland, unless you limit the automation only to the browsers.
    • rvz 1 day ago
      This is why both companies are in an SF bubble.
      • mrcwinn 1 day ago
        Linux desktop users. Talk about a bubble!
        • cmrdporcupine 1 day ago
          There's this thing called Windows.

          I don't like it, and I'm sure you don't either, but it's not a Mac. Or a Linux. And it's what most actual desktop users are stuck with, still.

  • croemer 1 day ago
    What does "major update to codex" mean? New model? Or just new desktop app? The announcement is vague.
  • messh 21 hours ago
    SSH to devboxes is the exact usecase for services like https://shellbox.dev: create a box using ssh... and ssh into it. Now web, no subs. Codex can create it's own boxes via ssh
  • postalcoder 1 day ago
    I wish Codex App was open source. I like it, but there are always a bunch of little paper cuts that, if you were using codex cli, you could have easily diagnosed and filed an issue. Now, the issues in the codex repo is slowly becoming claude codish – ie a drawer for people's feelings with nothing concrete to point to.
    • avaer 1 day ago
      That would allow Anthropic or anyone else to sit back and relax while the agent clones the features.
  • Glemllksdf 22 hours ago
    Man this progress is fast.

    Its clear that it will go in this type of direction but Anthropic announced managed agents just a week ago and this again with all the biuld in connections and tools will help so many non computer people to do a lot more faster and better.

    I'm waiting for the open source ai ecosystem to catch up :/

  • lionkor 22 hours ago
    The first example is tic tac toe. Why would anyone bother? None of those eash things are relevant for people who use AI. They don't care about learning, improving, exploring how things work, creating, being creative to that degree. They want to hit buttons and see the computer do things and get a dopamine rush.
    • sophacles 22 hours ago
      Fuck, i've been using it wrong.
    • 3fgdf 21 hours ago
      [dead]