OpenClaw isn't fooling me. I remember MS-DOS

(flyingpenguin.com)

142 points | by feigewalnuss 5 hours ago

28 comments

  • piker 4 hours ago
    Is anyone finding value in these things other than VCs and thought leaders looking for clicks and “picks and shovels” folks? I just personally have zero interest in letting an AI into my comms and see no value there whatsoever. Probably negative.
    • bitmasher9 50 minutes ago
      Yep, I’m seeing real value. I use them for tasks that an assistant might have done in the past. It’s much cheaper than hiring a human, and setup is much faster than finding a good assistant. I’m honestly considering giving it access to accounts with payment information so it can book flights and hotels for me.

      You can ask it questions like “what classes does my gym offer between 6-8pm today” and just get a good answer instead of wasting time finding their schedule. You can tell it to check your favorite band’s website every day to see if they announce any shows in your city. You can tell it to read your emails and automatically add important information to your calendar.

      This isn’t the space where I get the most value from AI, but it’s nice to have a hyper-connected agent that can quickly take care of smaller, more personal tasks.

      • piker 32 minutes ago
        No offense but all of those are near zero value except entertainment to the orchestrator. That’s without understanding the failure rate and modes. It’s telling that you haven’t yet given it your credit card.
    • TheDong 4 hours ago
      I find some value as kinda a better alexa.

      I have it hooked up to my smart home stuff, like my speaker and smart lights and TV, and I've given it various skills to talk to those things.

      I can message it "Play my X playlist" or "Give me the gorillaz song I was listening to yesterday"

      I can also message it "Download Titanic to my jellyfin server and queue it up", and it'll go straight to the pirate bay.

      It having a browser and the ability to run CLI tools, and also understanding English well enough to know that "Give me some Beatles" means to use its audio skill, means it's a vastly better Alexa

      It only costs me like $180 a month in API credits (now that they banned using the max plan), so seems okay still.

      • swiftcoder 4 hours ago
        > It only costs me like $180 a month in API credits (now that they banned using the max plan), so seems okay still.

        I have a hard time imagining how much better Alexa would have to be for me to spend $180/month on it...

        • TheDong 2 hours ago
          I mean, I'm getting $180/mo worth of fun out of playing with it and figuring out what it can do, so it's worth it.

          Like, no one bats an eye at all the people paying $100/mo for Hulu + Live TV, or paying $350/mo for virtual pixels in candy crush / pokemon go / whatever, and I'm having at least that much fun in playing with openclaw.

          • hunter-gatherer 1 hour ago
            Everyone in my circle would seriously bat an eye at all those numbers. Congrats on making it to the upper class.
          • whilenot-dev 1 hour ago
            Just for reference: I pay 8€ for mobile, 40€ for internet and some occasional 5€ for VPNs each month. That's all the digital service subscriptions I'll need to have fun.
          • pydry 1 hour ago
            I think quite a lot of people would bat an eyelid at those things.

            If any of my friends admitted to spending $350/mo on candy crush I'd think that they'd badly need help with a gambling problem.

        • miroljub 3 hours ago
          Just to clarify to people focusing on the $180/month price tag.

          OpenClaw is not a CC-only product. You can configure it to use any API endpoint.

          Paying $180/month to Anthropic is a personal choice, not a requirement to use OpenClaw.

          • ThunderSizzle 2 hours ago
            So that leads to a question: is there a physical box I could buy and amortize over 5-7 years to come in at half the API cost?

            In other words, assuming no price increase, 7 years of that pricing is $15k. Is there hardware I could buy for $7k or less that would be able to replace those API calls or alternative subscriptions entirely?

            I've personally been trying to determine if I should buy a new graphics card for my aging desktop(s), since their current cards can't really handle LLMs.

            • ekidd 2 hours ago
              You can't realistically replace a frontier coding model on any local hardware that costs less than a nice house, and even then it's not going to be quite as good.

              But if you don't need frontier coding abilities, there are several nice models that you can run on a video card with 24GB to 32GB of VRAM. (So a 5090 or a used 3090.) Try Gemma4 and Qwen3.5 with 4-bit quantization from Unsloth, and look at models in the 20B to 35B range. You can try before you buy if you drop $20 on OpenRouter. I have a setup like this that I built for $2500 last year, before things got expensive, and it's a nice little "home lab."

              If you want to go bigger than this, you're looking at an RTX 6000 card, or a Mac Studio with 128GB to 512GB of RAM. These are outside your budget. Or you could look at Mac Minis, the DGX Spark, or Strix Halo. These let you run bigger models, but much more slowly, mostly.

            • TheDong 2 hours ago
              You can buy a roughly $40k gpu (the h100) which will cost $100/mo in electricity on top of that to get about 30-80% the performance of OpenAI or Anthropic frontier models, depending what you're doing.

              Over 5 years, that works out to ~$45k vs ~$10k, and during that time it's possible that better open models will become available, making the GPU a better deal, but it's far more likely that the VC-fueled companies advance quicker (since that's been the trend so far).

              In other words, the local economics do not work out well at a personal scale at all unless you're _really_ maxing out the GPU at close to 50% literally 24/7, and you're okay accepting worse results.

              As long as proprietary models advance as quickly as they are, I think it makes no sense to try and run em locally. You could buy an H100, and suddenly a new model that's too large to run on it could be the state of the art, and suddenly the resale value plummets and it's useless compared to using this new model via APIs or via buying a new $90k GPU with twice the memory or whatever.

              • vrganj 2 hours ago
                This feels like it should be state infrastructure, the way roads, railroads and the postal system are.
                • dsr_ 1 hour ago
                  This feels like a market which hasn't settled into long-term profitability and is being subsidized by investors.
                • TheDong 1 hour ago
                  Note that the (edit: US) postal system is a for-profit system.

                  Given the trends of the capitalist US government, which constantly cedes more and more power to the private sector, especially google and apple, I assume we'll end up with a state-run model infrastructure as soon as we replace the government with Google, at which point Gemini simply becomes state infrastructure.

                  • fineIllregister 1 hour ago
                    > Note that the (edit: US) postal system is a for-profit system.

                    That's not correct. If USPS makes more revenue than their expenses for a year, they can't pay it out as profits to anyone.

                    It's true that USPS is intended to be self-funded, covering its costs through postage and services sold, not tax revenue. That doesn't mean there's profit anywhere.

                  • vrganj 1 hour ago
                    > Note that the postal system is a for-profit system.

                    That depends on the country in question :-)

            • wasfgwp 59 minutes ago
              You can use models several times cheaper than Claude as well; it's not like you need anything big to handle all the use cases listed above
              • swiftcoder 54 minutes ago
                Yeah, something like MiniMax m2.7 should be perfectly capable for this sort of thing, and is 10-20x cheaper
            • rcxdude 2 hours ago
              For something the size of Claude, probably not. But for smaller models, maybe (though they also are much cheaper to buy tokens for)
        • vovavili 3 hours ago
          I do see how a very busy businessman or a venture capitalist would gladly pay $180/month to offload chores and mundane work from his schedule. That comes down to $6/day, which probably matches his daily coffee budget.
          • ThunderSizzle 2 hours ago
            Chores, yes. If there was a $180/month service where ALL my family’s chores could be accomplished, I’d consider it.

            That means picking up and cleaning the house after 3 kids and a dog. Grocery shopping. Dishes. Laundry. Chores.

            Tech crap? Nope.

            • vovavili 2 hours ago
              I would imagine that the list of digital chores of a very busy businessman is a bit more extensive. Even in your list, grocery shopping is something that becomes digital once you're high enough in income.
              • StilesCrisis 2 hours ago
                My grocery store has offered a pick-up or delivery option ever since COVID. Pick-up actually cost nothing extra. It's been years since we used it so I can't say definitively that it's still free, but the downside wasn't cost: it was the ability to pick the best item. If you let the store choose, you'll get the saddest looking produce every time, and the meat that's set to expire tomorrow.
      • retired 4 hours ago
        > It only costs me like $180 a month in API credits

        In The Netherlands you can get a live-in au-pair from the Philippines for less than that. She will happily play your Beatles song, download the Titanic movie for you, find your Gorillaz song and even cook and take care of your children.

        It's horrible that we have such human exploitation in 2026, but it does put into perspective how much those credits are if you can get a real-life person doing those tasks for less.

        • quietbritishjim 4 hours ago
          I'm surprised to read that. Here in the UK, having a live-in au pair doesn't excuse you from paying the minimum wage for all the hours that they're working (approx $2300/month for a 35 hour week). You can deduct an amount to account for the fact that you're providing accommodation but it's strictly limited (approx $400/month).
          • swiftcoder 4 hours ago
            The Netherlands has a weird and exploitative setup where you can classify your au pair as a "cultural exchange", and then pay them literal peanuts (room and board plus a token amount of "pocket money")
            • __alexs 3 hours ago
              Another weird cultural quirk of the Dutch that will hopefully go the way of Zwarte Piet one day.
          • retired 4 hours ago
            From what I can see online, the average compensation that an au-pair in The Netherlands receives is 300 euro per month, with living expenses being covered by the family. There is no minimum wage requirement for au-pairs like in the UK or the US.
            • spockz 2 hours ago
              The added cost of having an additional person to provide room and food for way exceeds that €300/month. Especially, when taking into consideration that you might have to extend/renovate the house to lodge another person. Adding an extra bedroom and possibly bathroom is not cheap.
              • jjcob 2 hours ago
                Even if you assume the cost of lodging was 1000€ (which it isn't) then the au-pair would still be significantly underpaid.

                A normal full time employee costs at least 2000€ a month (salary, tax, pension plan, health insurance, etc). If you are paying less than that you are definitely exploiting them.

            • aianus 3 hours ago
              A semi-skilled English-speaking customer service agent in PH makes less than $700 a month to put this into perspective.

              Working abroad is a totally reasonable proposition compared to working in the Philippines.

            • throwthrowuknow 2 hours ago
              So in reality you’re paying for their food, electricity and heat, letting them stay in a room for free, and allowing them the use of the other facilities in your home, and on top of that you’re giving them a spending allowance of 300 euro.
              • swiftcoder 56 minutes ago
                The marginal cost of food/electricity/bed for adding one additional person to a family is drastically less than those things would cost for a person living alone. Whichever way you slice this, the employer is making out like a bandit under this scheme.
              • balamatom 1 hour ago
                In fact, you could do this for a homeless person today, in any city on the globe! And never even ask them to do anything for you!
          • redsocksfan45 2 hours ago
            [dead]
        • kombine 4 hours ago
          We shouldn't have to "import" people from poorer countries to do the mundane tasks we got too lazy to do ourselves.
          • grosswait 2 hours ago
            The concept of having this kind of help is totally foreign to me, but with one exception, every family I’ve encountered that had an au pair has been two very busy, high-earning parents, neither of them lazy. I think you could argue that perhaps priorities have been misplaced, but not that they’re lazy.
        • vovavili 2 hours ago
          Machines don't get tired, don't have to sleep, don't face principal-agent problems and can accumulate Skill.md instructions for decades without getting replaced. I definitely see the potential of something like OpenClaw for those who can afford it.
        • DrewADesign 4 hours ago
          Surely that’s subsidized?

          A lot of people in the Silicon Valley area spend that much ($6/day) on coffee. What they don’t realize is how out of touch they are in thinking that makes sense for the rest of the fucking world. $180/mo is about 5% of the median US per capita income. It’s not going to pick your kids up from school, do your taxes, fix your car, or do the dishes. It’s going to download movies and call restaurants and play music. It’s a hobby: a high-touch leisure assistant that costs a lot of money.

          • wasfgwp 53 minutes ago
            Realistically you certainly don’t need Anthropic’s models for those things and can get something for a fraction of the price on OpenRouter/etc.
          • duskdozer 3 hours ago
            They aren't selling it to the median US earner. They're selling it (and trying to generate FOMO) to the out of touch people so that it becomes so entrenched that the median earner will be forced to use it in some capacity through their interaction with businesses, schools, the government, etc.
        • cameronh90 2 hours ago
          You're paying the au pair partly in accommodation, food, bills and a visa. The visa isn't coming out of your bank account, but it's definitely part of the incentive, so you could see it as a government subsidy.

          For comparison, a full time "virtual assistant" with fluent English from the Philippines costs upwards of $700/month nowadays.

        • CalRobert 3 hours ago
          How is that remotely possible without committing enormous violations of labor law?
        • throwatdem12311 1 hour ago
          Framed this way, “replacing” this kind of human exploitation is definitely a good thing for humanity. If someone doing a job is practically a slave, then replacing them with an electron-to-token converter is a good thing.

          The number one goal of AI should be to eliminate human exploitation. We want robots mining the minerals we use for our phones, not children. We should strive to free all of humanity from dangerous labour and the need for such jobs to exist.

          If Elon Musk wants Optimus robots to help colonize Mars shouldn’t he be trying to create robots that can mine cobalt or similar minerals from dangerous mines and such?

          • esseph 29 minutes ago
            > The number one goal of AI should be to eliminate human exploitation.

            I have some bad news.

        • _zoltan_ 3 hours ago
          I doubt this is true in .nl. 180 a month is low for a live-in au-pair.
        • huflungdung 3 hours ago
          > In The Netherlands you can get a live-in au-pair from the Philippines for less than that.

          And you see nothing wrong with that?

      • tikotus 4 hours ago
        I don't want to be judgemental, but I do find it funny that you're paying $180 for this convenience, and use it to pirate movies.
        • llmocallm 3 hours ago
          Then allow me to be judgemental in your stead. I've done a similar setup as the above and completely locally. I dunno how they're paying so much, but that's ridiculously overpriced.
          • TheDong 1 hour ago
            All the other models performed much worse for the skills I'm using. I tried gpt-5.1 (and then 5.4 again recently), and also tried pointing it at OpenRouter and using a few of the cheaper models, and all of them added too much friction for me.

            Be judgemental all you want, but I feel like I'm paying for less friction, and also more security since my experiments also showed claude to be the least vulnerable to prompt injection attempts.

            • wasfgwp 50 minutes ago
              > models performed much worse for the skills I'm using

              Hard to believe unless you are doing something much more complex than the things you listed

        • TeMPOraL 4 hours ago
          It's not the only thing they're doing with it. I mean, the logic is sound: $180 goes into automating a bunch of manual processes in personal life, one of which is getting movies, which in some cases involves going out on the high seas.
        • LeCompteSftware 4 hours ago
          Let's also point out that the $180 is going to a hideously evil AI company which pirated millions of books and movies.
      • puelocesar 4 hours ago
        180 grand a month for PA is a lot of money. But I guess each person has its own priority. I mean, I can pay a very fancy gym with that price instead of the shitty popular one I go, which would probably improve my well being much more than asking to play Gorillaz
        • quietbritishjim 4 hours ago
          "a grand" means a thousand (dollars or pounds or whatever). $180k / month really would be a lot of money. I'd be your PA for that!
      • bluedel 4 hours ago
        Am I right to be a little concerned by the phrase "it'll go straight to the pirate bay"?

        Not to be a narc or anything, but is OpenClaw liable to just perform illegal acts on your behalf just because it seemed like that's what you meant for it to do?

        • jappgar 2 hours ago
          Seems like the only people using pirate bay in 2026 are "privacy obsessed" rich middle-aged guys.

          I think they do it mostly to feel young and edgy.

        • esseph 27 minutes ago
          > Not to be a narc or anything, but is OpenClaw liable to just perform illegal acts on your behalf just because it seemed like that's what you meant for it to do?

          There's at least a couple of dozen instances right now, somewhere, getting very close to designing boutique chemical weapons.

      • Hendrikto 3 hours ago
        180$/month to queue playlists does not “seem okay” at all. We must be living in different worlds.
      • jappgar 2 hours ago
        You're spending 180 a month on tokens and still refusing to buy media like Titanic?
        • TheDong 1 hour ago
          If you've figured out how to pirate Anthropic's models and enough GPUs to run it for less than my API costs, I'm all ears
      • philipallstar 1 hour ago
        > "Download Titanic to my jellyfin server and queue it up", and it'll go straight to the pirate bay

        You could build up a legitimate collection for much less than $180/mo.

      • tempaccount5050 1 hour ago
        Using OpenClaw for that is nuts. Claude or GPT could just one shot an app for you that does all that and uses 0 tokens once you've built it.
      • coldtea 1 hour ago
        Regarding Alexa, none of those use cases sound useful enough to justify an ever-present listening device at home, except if one is bedbound or something.
      • qsera 3 hours ago
        I have almost the same thing using a network-connected Raspberry Pi and no AI.
      • bigger_fish 3 hours ago
        [dead]
    • vbezhenar 3 hours ago
      Many wealthy people use human assistants to offload mundane work.

      This is a cheap replacement for ordinary people.

      It's going to be big. But probably it's best to wait for Google and Apple to step up their assistants.

      • piker 3 hours ago
        Yes, and that's because the workflow of those people generally requires managing a crazy, dynamic schedule including travel, meetings, comms, etc. Those folks need real humans with long-term memories and incentives to establish trust for managing these high-stakes engagements. Their human assistants might find these things useful, but there's zero chance Bill Gates is having an AI schedule his travel plans or draft his text messages.

        OTOH, this isn't an issue for "ordinary people". They go to work, school, children's sports events, etc. If they had an assistant for free, most of them would probably find it difficult to generate enough volume to establish the muscle memory of using them. In my own professional life, this occurred with junior lawyers and legal assistants--the juniors just never found them useful because they didn't need them even though they were available. Even the partners ended up consolidating around sharing a few of them for the same reason.

        Down in this thread someone mentions it being an advanced Alexa, which seems apt. Yes, a party novelty, but not useful enough to be top of mind in the everyday workflow.

        • Terr_ 2 hours ago
          Side rant: A disproportionate amount of AI assistant marketing involves scenarios that look middle class, but actually require customers wealthy enough to risk money on errors. Like buying the wrong thing, or even buying the right thing at the wrong price.
        • nainachirps_ 3 hours ago
          I am ordinary people. I have ADHD. I have been dying for assistance in scheduling and planning. Am not employed enough to afford hiring a human yet. Am hopeful these will reach maturity for me to be able to host one on my own device. Or find a private provider with a good security model and careful data handling.
          • user_7832 2 hours ago
            Not +1, but +100 to your comment (fellow ADHD'er here). Even a virtual friend who'd help me stay on track would be excellent, and if I had a physical human assistant... that would legitimately make many aspects of my life much better. (Simple example: I could ask them to nag me to exercise.)
        • vbezhenar 3 hours ago
          Going to the shop and buying groceries is not hard work. But I don't do that since delivery became available. I'm lazy and delivery is free. Same for ordinary people's needs. It's not a big deal to manage my life, but if I can avoid doing that for free, that's probably what I'll do. For $200? Not sure. For $20? Absolutely. So the question is already about price.
          • spockz 2 hours ago
            Off-Topic: Are you sure delivery is free? When comparing prices online vs my local supermarket of the same brand, online prices trend higher. Locally the store also has more products on sale than available online. Only recently online shopping has become slightly cheaper because they now have “bulk” deals for 5-20% discount.
      • andai 2 hours ago
        I'm not sure how solvable it is. It only takes one screw up to ruin the reputation, and a screw up is basically guaranteed.

        The tech has existed for a while but nobody sane wants to be the one who takes responsibility for shipping a version of this thing that's supposed to be actually solid.

        Issues I saw with OpenClaw:

        - reliability (mostly due to context mgmt), esp. memory, consistency. Probably solvable eventually

        - costs, partly solvable with context mgmt, but the way people were using it was "run in the background and do work for me constantly" so it's basically maxing out your Claude sub (or paying hundreds a day), the economics don't work

        - you basically had to use Claude to get decent results, hence the costs (this is better now and will improve with time)

        - the "my AI agent runs in a sandboxed docker container but I gave it my Gmail password" situation... (The solution is don't do that, lol)

        See also simonw's "lethal trifecta":

        >private data, untrusted content, and external communication

        https://simonwillison.net/2025/Jun/16/the-lethal-trifecta/

        The trifecta (prompt injection) is sorta-kinda solved by the latest models from what I understood. (But maybe Pliny the liberator has a different opinion!)

      • torginus 1 hour ago
        My 2 cents is that so far LLMs have had a bad track record in replacing people in jobs where simple software logic and flowcharts wouldn't do the job.
      • eloisant 3 hours ago
        $180 a month is huge for "ordinary people".

        So I guess that leaves the in-between people who don't care about spending $180 every month but don't have any personal staff yet or even access to concierge services.

      • lionkor 2 hours ago
        Those human assistants can be held accountable.
        • sekh60 1 hour ago
          I deleted your calendar, I'm sorry.
      • LeCompteSftware 2 hours ago
        The problem is that if you're wealthy enough to hire someone to do your errands, those errands likely aren't very mundane - the exception is a socialite giving their friend a low-effort job, but executive assistants are paid well because their jobs are cognitively demanding.

        OTOH a lower-middle-class Joe like me really does have a lot of mundane social/professional errands, which existing software has handled just fine for decades. I suppose on the margins AI might free up 5 minutes here or there around calendar invites / etc, but at the cost of rolling snake eyes and wasting 30 minutes cleaning up mistakes. Even if it never made mistakes, I just don't see the "personal assistant" use case really taking off. And it's not how people use LLMs recreationally.

        Really not trying to say that LLM personal assistants are "useless" for most people. But I don't think they'll be "big," for the same reason that Siri and Alexa were overhyped. It's not from lack of capability; the vision is more ho-hum than tech folks seem to realize.

      • kqp 1 hour ago
        [dead]
    • ZeroGravitas 4 hours ago
      I see the appeal, but I also see the risks.

      If you ignore the risks I don't see why it's hard to see value.

      The AI can read all your email, that's useful. It can delete them to free up space after deciding they are useless. It can push to GitHub. The more of your private info and passwords you give it the more useful it becomes.

      That's all great, until it isn't.

      Putting firewalls in place is probably possible and obviously desirable but is a bit of a hassle and will probably reduce the usefulness to some degree, so people won't. We'll all collectively touch the stove and find out that it is hot.

      • theshrike79 1 hour ago
        Just limit the tooling. There's no reason for the AI to be able to delete emails for example.

        I built a fastmail CLI tool for my *claw and it can only read mails, that's it. I might give it the ability to archive and label later on, with a separate log of actions so I can undo any operation it did easily.

        For mail it's pretty decent at going "hey, there's a sale on $thing at $store", but that's about it.
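
        A minimal sketch of the kind of capability limiting I mean (all names and details here are illustrative, not my actual tool): every agent request goes through an allowlist that only knows read operations, and each call is appended to an audit log before anything happens.

```python
# Hypothetical sketch: a capability-limited mail tool for an agent.
# Only operations in ALLOWED ever reach the mail backend; everything
# else is refused. Every request is logged first so it can be audited.
import json
import time

ALLOWED = {"list", "read", "search"}  # deliberately no delete/send/archive

def run_mail_command(op, args, backend, log_path="actions.log"):
    """Gate an agent-requested mail operation behind the allowlist."""
    with open(log_path, "a") as log:
        log.write(json.dumps({"ts": time.time(), "op": op, "args": args}) + "\n")
    if op not in ALLOWED:
        return {"ok": False, "error": f"operation '{op}' is not permitted"}
    # backend maps operation names to callables (e.g. thin IMAP/JMAP wrappers)
    return {"ok": True, "result": backend[op](**args)}
```

        The point is that delete/send capabilities simply don't exist on the agent's side of the boundary, so no amount of prompt injection can reach them.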

    • littlecranky67 2 hours ago
      I can see value in a smarter email-inbox sorting algorithm - but only because all major players (except Google, which I don't trust with my mails) have abandoned Bayesian email filtering with training. This was standard in 2005 in such basic clients as the Opera browser, but somehow we lost this technology along the way.
      • easygenes 2 hours ago
        I was an original Thunderbird pre-1.0 user (from 2003) and, prior to that, Netscape Mail, and am quite certain it has had Bayesian spam filtering all this time, at least since the late ‘90s. That was a headline feature in the early days. My first email account used POP3 through a shared web host for my own domain in that era.

        Edit: Yes it’s still there https://support.mozilla.org/en-US/kb/thunderbird-and-junk-sp...

      • Terr_ 2 hours ago
        I can't recall the name, but I vaguely remember a Bayesian spam filter for arbitrary POP3 accounts in the 2000s that had a local web frontend, and how excited I was at its effectiveness.

        I believe that the shift from "my one computer" to multiple clients (computer + phone + webmail) probably has something to do with it. Even with IMAP sharing state, you still don't have a great way to see and control the filtering, except by moving things in/out of spam folders.

    • jstummbillig 3 hours ago
      > letting an AI into my comms

      Idk, it's strange for me to think of it that way. It's tech. If it does something useful, that's cool.

      Data protection is always a consideration. I just don't consider an LLM to be a special case or a person, the same way that I don't have strong feelings about "AI" being applied in Google search since forever. I don't have special feelings or get embarrassed by the thought of an LLM touching my mails.

      Right now, for me, agentic coding is great. I have a hard time seeing a future where the benefits that we experience there will not be more broadly shared. Explorations in that direction are how we get there.

      • piker 2 hours ago
        My issues aren’t really with privacy so much as what the failure modes look like, and, more fundamentally, with becoming a passenger to my own life.
      • rowanG077 2 hours ago
      The problem for me is not the LLM reading it. The problem is that the company behind it can most likely recover the sessions. That is a problem since they could share it with whomever they want. Even if they are fully incorruptible, it's also not uncommon that such companies simply get hacked and all this data ends up on the open market.
    • stingraycharles 2 hours ago
      This is being asked on pretty much every Openclaw thread, and the use cases brought up seem roughly similar: digital assistant.

      It of course depends heavily on your work, but my work is 50% communication / overseeing, and I simply lose track of everything.

      I don’t give it any credentials of any sort, but I run data pipelines on an hourly basis that ingest into the agent’s workspace.

    • pizza234 2 hours ago
      > Is anyone finding value in these things other than VCs and thought leaders looking for clicks and “picks and shovels” folks?

      Mostly (but of course, not exclusively), porn for the techies. Receiving a phone notification every time a PR is opened on a project of yours? Exciting or sad, depends on one's outlook on life.

      • moffkalast 2 hours ago
        I thought emails from github already did that?
        • mgkimsal 2 hours ago
          I think the more useful part is the part that checks a ticket, fixes a bug, then opens the PR automatically. Whether you get an email or a phone text or a call from a voice agent is... somewhat secondary, imo.
    • _pdp_ 4 hours ago
      There is value but it is hard to discover and extract outside of a few known areas - like coding, etc.
      • piker 4 hours ago
        Yes, I can see the (potential) value in working with agents in software development. The "claw" movement I understood to suggest value in less constrained access to my inbox, personal messages, calendar, etc., like some sort of PA. It's hard to quantify how much damage a bad PA can do to someone's personal and professional life, so if my understanding is correct, this seems like a dead end.
        • _pdp_ 3 hours ago
          I posted this comment in another thread so reposting it here because it seems to be on topic.

          ---

          IMHO, the biggest problem with OpenClaw and other AI agents is that the use-cases are still being discovered. We have deployed several hundred of these to customers, and I think this challenge comes from the fact that AI agents are largely perceived as workflow automation tools, so when it comes to business processes they are seen as a replacement for more established frameworks.

          They can automate but they are not reliable. I think of them as work and process augmentation tools but this is not how most customers think in my experience.

          However, here are several legitimate use-cases that we run internally and that I can freely discuss.

          There is an experimental single-server dev infrastructure we are working on that is slightly flaky. We deployed a lightweight agent in Go (a single 6MB binary) that connects to our customer-facing API (we have our own agentic platform), where the real agent sits and can be reconfigured. The agent monitors the server for various health issues: anything from stalled VMs to unexpected errors. We use Firecracker VMs in a very particular way, and we don't yet know the scope of the system. When such situations are detected, the agent automatically corrects the problems. It keeps a log of what it did in a reusable space (a resource type we have) under a folder called learnings. We use these files to correct the core issues when we have the time to work on the code.
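
          A minimal sketch of that detect-fix-log pattern (function names and structure are hypothetical; the poster's actual agent is a Go binary talking to a remote platform) might look like:

```python
# Hypothetical sketch of the "detect, fix, log a learning" loop described
# above. Not the poster's actual agent; all names are illustrative.

def heal(check_health, remediations, learnings):
    """Run one monitoring pass.

    check_health()  -> list of detected issue names (e.g. "stalled-vm")
    remediations    -> dict mapping an issue name to a fix function
    learnings       -> list the agent appends to, for later human review
    """
    for issue in check_health():
        fix = remediations.get(issue)
        if fix is None:
            # Nothing automated yet; still record it for the humans.
            learnings.append({"issue": issue, "action": "none-known"})
            continue
        fix()
        learnings.append({"issue": issue, "action": fix.__name__})
    return learnings
```

          The `learnings` list plays the role of the agent's learnings folder: a trail of automated fixes that humans later mine to correct root causes in the real code.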

          We have an AI agent called Studio Bot. It lives in Slack and wakes up multiple times a day. It analyses our current marketing efforts and, if it finds something useful, creates the graphics and posts to be sent out to several of our social media channels. A member of staff reviews these suggestions; most of the time they need to follow up with subsequent requests to change things before finally pushing the changes to Buffer. I also use the agent to generate branded cover images for LinkedIn, X and Reddit articles in various aspect ratios. It is a very useful tool that produces graphics with our brand colours and aesthetics, but it is not perfect.

          We have a customer support agent that monitors how well we handle support requests in Zendesk. It does not automatically engage with customers. What it does is supervise the backlog of support tickets and chase the team when we fall behind, which happens.

          We have quite a few more scattered in various places. Some of them are even public.

          In my mind, the trick is to think of AI agents as augmentation tools. In other words, instead of asking how I can take myself out of the equation, the better question is how I can improve the situation. Sometimes just providing more contextually relevant information is more than enough. Sometimes you need a simple helper that owns a certain part of the business.

          I hope this helps.

          • bsenftner 1 hour ago
            Great information post. Don't let the AI fad bois' downvotes lead you to think this is not a very worthwhile contribution.
          • esseph 15 minutes ago
            > They can automate but they are not reliable.

            This is why I won't use them for anything externally facing or with high or even moderate damage potential.

            Which basically means they don't get used at all.

    • coldtea 1 hour ago
      Newb techies love it.
    • pjmlp 3 hours ago
      Same here, I care to the extent I am obligated to, and staying relevant for finding a job.
    • andai 3 hours ago
      It's pretty much just Claude Code, except hooked up to your Telegram / WhatsApp / iMessage.

      I don't know why they don't make an official integration for it. Probably cause they're already out of GPUs lol

    • mark_l_watson 1 hour ago
      I ran OpenClaw in a container, on a VPS without connection to messaging systems, so perhaps that is why I didn't get value.

      Similarly, I have been using Hermes Agent also inside a container, and on a VPS with only access to a local directory in the VPS with a dozen active projects on GitHub. I don't give it access to my GitHub credentials, but allow it to work in whatever branch is checked out.

      This setup is fabulously productive. I use it about every other day to perform some meaningful task for me. It is inexpensive also. A task might take 20 minutes and cost $0.25 in GLP-5.1 API costs.

      So TLDR: out of the box, I use Hermes at least one hour a week and find it to be a wonderful tool.

    • onchainintel 4 hours ago
      It all depends on what you do, aka your use case. If you're in the content creation business, which is part of my responsibilities, then yes, it has been massively helpful. For other roles I can see absolutely no use case or benefit. Context matters, like with everything.
    • cl0ckt0wer 1 hour ago
      Mostly it's fun. It'll do some light infra management for me too.
    • mathgladiator 4 hours ago
      Agent environments like OpenClaw are in the toy phase, and OpenClaw is teaching people how to build things with agents in a toy-like and unreliable way. I used my understanding of OpenClaw to build scalable + secure + auditable agent infrastructure in my platform such that I can build products that other people can use.
      • bayindirh 4 hours ago
        We had better agent infrastructures (namely JADE) back in the day. I worked with them, and now these things look like flimsy 50¢ plastic toys to me, too.
    • rimliu 3 hours ago
      I am also surprised by the number of people willing to outsource their lives.
    • dankobgd 3 hours ago
      no, it's only for scammers
    • iugtmkbdfil834 4 hours ago
      Eh, buddy says he uses them for his network and, apparently, some light IT maintenance for his family members. So far it seems to be working for him. I am not that brave.
    • surgical_fire 2 hours ago
      No.

      But I am someone who, for example, dislikes home automation. You know that thing where you ask Alexa to open your curtains? I think that is cringe af.

      Maybe there's an overlap with the crowd that likes that.

  • stared 3 hours ago
    I don’t get this OpenClaw hype.

    When people vibe-code, usually the goal is to do something.

    When I hear people using OpenClaw, usually the goal seems to be… using OpenClaw. At the cost of a Mac Mini, safety (deleting emails or so), and security (LiteLLM attack).

    • eloisant 2 hours ago
      The idea is to get a virtual personal assistant. Like Siri or Gemini but with access to all of your accounts, computers, etc. (Well whatever you give it access to). Like having a butler with access to your laptop.

      From what I understand, the main appeal isn't the end result but the hobby of building that AI personal assistant.

      • valeena 1 hour ago
        With a goal like this I could, at least on paper, find it useful... But I'm curious to see if this goal is really achievable, or if it easily is
        • Gareth321 47 minutes ago
          That is my goal and I invested a few dozen hours into the endeavour. My honest review is:

          1. Something like OpenClaw will change the world.

          2. OpenClaw is not yet ready.

          The heart of OpenClaw (and the promise) is the autonomy. We can already do a lot with the paid harnesses offered by OpenAI and Anthropic, so the secret sauce here is agents doing stuff for us without us having to babysit them or even ask them.

          The problem is that OpenClaw does this in an extremely rudimentary way: with "heartbeats." These are basically cron jobs that execute every five minutes. Each heartbeat executes a list of tasks, which in turn execute other tasks. The architecture is extremely inefficient, heavy in LLM compute, and prone to failure. I could enumerate the thousand ways it can and will fail, but it's not important. So the autonomy part of the autonomous assistant works very badly. Many people end up with a series of prescriptive cron jobs and mistakenly call that OpenClaw.
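
          As a rough illustration of that heartbeat pattern (the names and structure here are mine, not OpenClaw's actual code), the cost problem is visible even in miniature, because every tick re-sends the agent's entire accumulated state:

```python
# Illustrative sketch of a cron-style "heartbeat" agent loop. NOT
# OpenClaw's actual implementation. The point: there is no retrieval
# step, so every tick ships the agent's full state to the LLM.

def build_heartbeat_prompt(memories, tasks):
    """Assemble the prompt sent on every heartbeat tick."""
    lines = ["You are an autonomous assistant. Current state:"]
    lines += [f"MEMORY: {m}" for m in memories]  # all memories, every time
    lines += [f"TASK: {t}" for t in tasks]       # all tasks, every time
    lines.append("Decide which tasks need action right now.")
    return "\n".join(lines)

def heartbeat_tick(llm_call, memories, tasks):
    """One five-minute tick: full state in, one decision out."""
    return llm_call(build_heartbeat_prompt(memories, tasks))
```

          Because the prompt grows linearly with stored memories, even a trivial tick pays for the whole history, which is the "fails under its own weight" dynamic.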

          Compounding this is memory, which is extremely primitive. Unfortunately, even the most advanced RAG solutions out there are poor. LLMs are powerful because of the calculated weights over parametric knowledge; referring to non-parametric knowledge is incredibly inefficient. The difference between a wheelchair and a rocket ship. This compounds over time. Each time OpenClaw needs to "think" about anything, it preloads a huge amount of "memories" into the query: everything from your personal details to the architecture to the specific task. Something as simple as "what time is it" can chew through tens of thousands of tokens. Now consider what happens over time as the agent learns more and more about you. Does that all get included in every single query? It eventually fails under its own weight.

          There is no elegant solution to this. You can "compress" previous knowledge, but that is very lossy, and LLMs do a terrible job of intelligently retaining the right stuff. RAG solutions are testing intelligent routing; one method is an agentic memory feedback loop that seeks out knowledge which might exist. The problem is that this is circular and mathematically impossible. Does the LLM always attempt to search every memory file in the hope that one of the .md files contains something useful? That is hopelessly slow. Does it try to infer based on weekly/monthly summaries? That has proven extremely error-prone.

          At this point I think this will be first solved by OpenAI and/or Anthropic. They'll create a clean vectorised memory solution (likely a light LLM which can train itself in the background on a schedule) and a sustainable heartbeat cadence packaged into their existing apps. Anthropic is clearly taking cues from OpenClaw right now. In a couple of years we might have a competent open source agent solution. By then we might also have decent local LLMs to give us some privacy, because sending all my most intimate info to OpenAI doesn't feel great.

    • d0gsg0w00f 2 hours ago
      I have OC on a VPS. So far it's a way for me to play with non-Claude models and try to bring OC under control. So far I'm about $200 all in, and OC is still not under control. Every few weeks it goes on an ACP bender and blows my credits in hidden sub-agents for no damn reason. I'm determined to break this horse, though; it's like a fun video game with a glitchy end boss.
      • valeena 1 hour ago
        How long have you been using it for it to have consumed $200? That sounds like a lot to me (still a student), but it doesn't seem to be the same for you.
    • Someone 2 hours ago
      In the early 1980s, what did people use home computers such as Ataris and Commodore 64s for? Mostly playing games; nerds also used their computer with the goal seeming to be… using their computer.

      It wasn’t (only) that, though; they also learned, so that, when people could afford to buy computers that were really useful, there were people who could write useful programs, administer them, etc.

      Same thing with 3D printers a decade or so ago. What did people use them for? Mostly tinkering with hard- and software for days to finally get them to print some teapot or rabbit they didn’t need or another 3D printer.

      This _may_ be similar, with OpenClaw-like setups eventually getting really useful and safe enough for mere mortals.

      But yes, the risks are way larger than in those cases.

      Also, I think there are safer ways to gain the necessary expertise.

    • Havoc 1 hour ago
      It’s basically a reimagined n8n-like low-code platform with LLM magic. Digital glue.

      That’s why there isn’t a coherent use story: like glue, the answer is whatever the user needs to glue together or get done.

    • SlinkyOnStairs 3 hours ago
      The main "sales pitch" appears to be "You can have the computer do things for you without having to learn how to use a computer" (at the cost of now having to learn how to use a massively overcomplicated and fundamentally unreliable system; It's just an illusion of ease of use.)

      The linked article compares it to MS-DOS's security, but the comparison works on another level as well: I remember MS-DOS. When the very idea of the home/office computer was new. When regular people learned how to use these computers.

      All this pretension that computers are "hard to use", that LLMs are making the impossible possible, it's all ahistoric nonsense. "It would've taken me months!" no, you would've just had to spend a day or two learning the basics of python.

      • stared 2 hours ago
        I was one of those using MS-DOS (I still remember the blue of Norton Commander). I didn't understand people mocking it later, as it just worked. Enough to run Prince of Persia, Doom, and the like. Or edit text files. (In my defence, I was only ~7 years old back then.)
    • leonidasrup 3 hours ago
      OpenClaw, the ultimate arbitrary code execution
      • classified 1 hour ago
        Didn't you always want to let everyone else do remote code execution on your computer?
    • thenthenthen 3 hours ago
      To me openclaw sounds like a software clickfarm?
  • repelsteeltje 4 hours ago
    One could argue that the discussion is once again about tech debt.

    Both OpenClaw and MS-DOS gained a lot of traction by taking shortcuts, ignoring decades of lessons learned, and delivering now what might otherwise have been ready next year. MS-DOS (or its QDOS predecessor) was meant to run on "cheap" microcomputer hardware and appeal to tinkerers. OpenClaw is meant to appeal to YOLO/FOMO sentiments.

    And of course, neither will be able to evolve to fit its eventual real-world context. But for some time (much longer than intended), that's where each will sit.

    • Schlagbohrer 3 hours ago
      It worked to launch the creator into a gig at OpenAI.

      Similar YOLO attitude to OpenAI's launch of modern LLMs while Google was still worrying about all the legal and safety implications. The free market does not often reward conservative responsible thinking. That's where government regulation comes in.

      • classified 1 hour ago
        > It worked to launch the creator into a gig at OpenAI.

        True, but it doesn't scale. No amount of YOLO will let anyone else repeat that feat.

    • TeMPOraL 4 hours ago
      OpenClaw was an inevitability. An obvious idea that predates LLMs. It took this long for models and pricing to catch up. As much as I dislike this term, if there's one clear example of "Product Model Fit", it's OpenClaw - well, except that arguably what made it truly possible was subscription pricing introduced with Claude Code; before, people were extremely conservative with tokens.

      But the point is, OpenClaw is just the first that got lucky and went viral. If not for it, something equivalent would have. Much like LangChain in the early LLM days.

    • leonidasrup 3 hours ago
      OpenClaw, the ultimate example of Facebook's motto "Move Fast and Break Things"
    • Earw0rm 2 hours ago
      MSDOS and similar single-user OS were not originally designed for networked computers with persistent storage. Different set of constraints.
  • nryoo 3 hours ago
    $180/month to control your lights and music. A Raspberry Pi + Home Assistant does this for $0/month and doesn't exfiltrate your home network topology to a third-party API. The value proposition only makes sense if your time is worth more than your privacy.
    • eloisius 1 hour ago
      The comparison to smart home gadgetry seems apt to me. I actually want to hack on something LLM agent-related to practice what is clearly a marketable skill, but I can't find anything I'd actually want it to do for me in my real life, other than maybe sort my emails for me, but there's no way I'm going to pipe every one of my emails to an LLM company.

      I remember circa 2015 all my nerdy colleagues were going wild with home automation stuff, and I felt like I wanted to play with it too at first. But then I started to observe that these guys weren't spending less time than me turning on their lights. They were spending way more time than me, in fact, tinkering with their thermostats and curtains. I'm perfectly happy hitting a light switch when I walk in the door.

      I can't envision one of these Telegram bots reliably completing tasks for me. Maybe the closest would be something I've seen in this thread: downloading torrents and putting them in Jellyfin for me. But really, I don't hate curating my own media collection.

      • slfnflctd 20 minutes ago
        > my nerdy colleagues were going wild with home automation stuff [...] I wanted to play with it too [...] these guys weren't spending less time than me turning on their lights

        Yep. The IoT home automation stuff is still less performant than much older, wired solutions where whole systems were designed at once in a set-and-forget mode and didn't have weird sync issues or delays. I remember seeing the 'home of the future' exhibit at Epcot like 20+ years ago and these IoT setups are often still a total joke in comparison because of all the protocol issues and fiddling with various interfaces needed.

        Just like how the analog wired POTS phone systems were more performant in many ways than pretty much any IP based voice setup.

        I simply got tired of messing with stuff that kept breaking in unexpected ways. It wasn't saving time; it was adding a lot of totally unnecessary stress and actually taking time away from me, for little more than an occasional spark of novelty. Being able to use voice accurately and repeatably for simple task requests is probably the only standout advancement.

        My 'nerdy colleagues' and myself can get a lot of enjoyment out of tinkering with this new agentic hotness. However, very few of us I think are really getting something that's actually saving us time in the long run (at least in our personal lives), and it's going to take a while to figure out what's actually realistically reproducible toward that end at a reasonable cost.

    • UqWBcuFx6NV4r 3 hours ago
      This comparison is dishonest, and you know that it is. This is coming from someone that uses Home Assistant and wouldn’t touch OpenClaw with a 10 foot pole. If I had a horse in this race it’d be your horse, but to pretend that these achieve the same goals is just… not in the spirit of an actual discussion.
      • everforward 1 hour ago
        I have the voice assistant on Mike hooked up to Claude and it does most of the things I’d want OpenClaw to do.

        I’m not generally interested in having it read my email or calendar. I have a digital calendar in the kitchen, and I rarely get important email. I do really enjoy being able to control my house by voice in natural language. I had it set all my lights to Easter colors a while back in a single instruction.

      • albatrosstrophy 3 hours ago
        Kindly elaborate? Coming from someone who still uses AI mainly to draft emails and raspberry Pi as sandboxed automation project.
  • pantulis 2 hours ago
    This weekend I installed Hermes on my computer. My M4 Max Studio started spinning its fans as if it wanted to fly, so I went with some cloud-hosted models. The thing works as advertised, but token consumption is through the roof. Of course, YMMV depending on the LLM you choose.

    But my main takeaway is that, from a security standpoint, this is a ticking bomb. Even under Docker, for these things to be useful there is no getting around giving them credentials and permissions, which are stored on your computer where the agent can access them. So, for the time being, I see Telegram, my computer, the LLM router (OpenRouter), and the LLM server as potential attack/exfiltration surfaces. Add to that uncontrolled skills/agents of unknown origin. And to top it off, don't forget that the agent itself can malfunction and, say, remove all your email inboxes by mistake.

    Fascinating technology but lacking maturity. One can clearly see why OpenAI hired Clawdbot's creator. The company that manages to build an enterprise-ready platform around this wins the game.

    • mentalgear 2 hours ago
      > One can clearly see why OpenAI hired Clawdbot

      Hype, mainly: buying hype before their IPO. The project is open source and the thinking behind it is not difficult; if they truly wanted, they could have done it long ago, even without the guy. It was a pure hype 'acquisition' of a project that became popular with amateur programmers who got into it through vibe-coding and are unaware of the consequences and security exposure they subject themselves to.

      • bitmasher9 44 minutes ago
        This is the Siri-brained explanation: the Apple AI assistant has been stagnant for 10 years, therefore assistants as a whole cannot be good.

        This is so clearly the next step from Siri to Alexa to {OpenClaw-like technology}: an interface to technology that loads of people find value in every day, and that loads of people complain doesn't have enough capabilities.

    • azmz 1 hour ago
      The credentials-on-device thing is the real blocker for a lot of people. I built atmita.com going the other way: cloud-hosted so nothing lives on your box, OAuth handled on the server, and a safe mode where destructive actions wait for phone approval before they fire. Not based on OpenClaw, built from scratch, so the Docker/token-exfil surface isn't part of the stack.
  • saidnooneever 2 hours ago
    DOS didn't have certain protections because the hardware it targeted did not have them. UNIX on the same machines also had no such protections: on the 8086 there were no CPU rings, no virtual memory, and no other features to help.

    Memory isolation is enforced by the MMU. It is not software.

    Maybe you were thinking of Linux, which came later and landed in a soft 32-bit x86 bed with CPU rings and page tables/virtual memory ("protected mode", named for that reason...).

    That being said, OpenClaw is criminally bad, but as such, fits well in our current AI/LLM ecosystem.

    • TacticalCoder 2 hours ago
      > DOS didn't have certain protections because the hardware it targeted did not have those protections. For UNIX on the same machines, they also had no such protections. On 8086 there were no CPU rings, no virtual memory and no other features to help there.

      Those arrived with the 386 (286? I don't remember, but the 386 for sure), and DOS was well alive late into the 386 and even the 486 days.

      > For UNIX on the same machines, they also had no such protections.

      I was already running Linux on my 486 before Windows 95 arrived. Linux and DOS. One had those protections, the other didn't.

  • teach 4 hours ago
    This isn't especially related to the article, but when I was at university my first assembler class taught Motorola 680x0 assembly. I didn't own a computer (most people didn't), but my dorm had a single Mac you could sign up to use, so I did some assignments on that.

    Problem is, I was just learning, and the Mac was running System 7. Which, like MS-DOS, lacked memory protection.

    So, one backwards test at the end of your loop and you could -- quite easily -- just overwrite system memory with whatever bytes you like.

    I must have hard-locked that computer half a dozen times. Power cycle. Wait for it to slowly reboot off the external 20MB SCSI HDD.

    Eventually I took to just printing out the code and tracing through it instead of bothering to run it. Once I could get through the code without any obvious mistakes I'd hazard a "real" execution.

    To this day, automatic memory management still feels a little luxurious.

  • electroglyph 4 hours ago
  • nopurpose 4 hours ago
    I agree that sandboxing the whole agent is inadequate: I am fine sharing my GitHub creds with the gh CLI, but not with npm. More granular sandboxing and permissions are what I'd like to see, and this project seems interesting enough to warrant a closer look.

    I am not interested in the "claw" workflow, but if I can use it for a safer "code" environment, it is a win for me.

    • mkesper 3 hours ago
      When the agent uses your GH credentials to nuke all your projects or put out a lot of crap, this separation will not save you.
      • nopurpose 3 hours ago
        Whitelisting `gh` args should solve that. Even opencode's primitive permission system allows it.
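
        The whitelist idea is easy to sketch. The allowed set below is an example policy, not opencode's actual permission config:

```python
# Tool-level gating sketch: allow only read-ish `gh` subcommands
# through; refuse everything else before it ever runs. The policy is
# an example, not any real tool's configuration.

import subprocess

ALLOWED_GH = {
    ("pr", "list"), ("pr", "view"), ("pr", "diff"),
    ("issue", "list"), ("issue", "view"),
}

def gh_allowed(args):
    """True if the gh invocation matches the whitelist."""
    return tuple(args[:2]) in ALLOWED_GH

def run_gh(args):
    """Run gh only when the subcommand pair is whitelisted."""
    if not gh_allowed(args):
        raise PermissionError(f"refused: gh {' '.join(args)}")
    return subprocess.run(["gh", *args], capture_output=True, text=True)
```

        If the agent gets `run_gh` as its only path to GitHub, something like `gh repo delete` is refused at the tool layer even though the credentials are present on the machine.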
  • ymolodtsov 2 hours ago
    I run OpenClaw on a $4 VPS with read-only access to most of the accounts. Just this morning I asked it to confirm how exactly our company is paying for a particular service and whether we ever switched to the vendor directly. In about 30s it found all the necessary emails and provided me with a timeline.

    It's like your actual assistant. Most of this can now be done inside ChatGPT/Claude/Codex; their only remaining problem for certain agentic things is being able to run them remotely. You can set up Telegram with Claude Code, but it's somehow even more complicated than OpenClaw.

  • Schlagbohrer 3 hours ago
    Why am I totally unable to understand this post? I have been a long-time computer user, but this has way too much jargon for me.
    • wccrawford 3 hours ago
      There's a difference between using a thing and understanding how it works. A lot of this references things that only hardware and software creators are going to understand, and only if they're deep enough into their craft.

      "Interrupts", for example, are an old concept that is rarely talked about anymore until you get into low-level programming. At a high level, you don't even think about them, let alone talk about them.

      • khalic 2 hours ago
        cries in rust interrupts
  • falense 4 hours ago
    Very cool project! I am working on something similar myself, which I call TriOnyx. It's based on Simon Willison's lethal trifecta. You get a star from me :D

    https://www.tri-onyx.com/

  • the__alchemist 1 hour ago
    The analogy the author draws highlights the multi-purpose nature of these machines, which I believe persists to this day and is why some people have a hard time adopting Linux (or why UAC was controversial in an older Windows version): the conflation of personal computers with multi-user IT systems or servers. The Wal-Mart IT story used to make the analogy is in the latter category. My dad typing up documents for work, or me playing The Lost Mind of Dr. Brain and Mario Teaches Typing, had different security requirements.
  • Havoc 1 hour ago
    That’s a great deal of technical isolation, but it does little to address the real problem. If the agent has access to both your info (email, files, etc.) and reads things on, say, the open internet, then it’s vulnerable to prompt injection and data exfiltration.

    And if you remove either the access to the data or the access to the internet, you kill a good chunk of the usefulness.

  • jimmypk 1 hour ago
    The thread went straight to cost/ROI, but the article's actual argument is about security architecture: 'sandbox around the whole agent' vs. 'enforce at the tool layer.' The pieces of OpenClaw/NemoClaw's setup (binding Ollama to 0.0.0.0 across a network namespace, pairing through the chat channel, approving connections at the netns boundary) are each workarounds for a foundation that didn't separate concerns early. The Unix principle wasn't 'wrap your DOS program in a safer shell'; it was address-space and identity separation built in from below. Whether local inference is worth $180/mo is a separate question from whether the permission model belongs at the network boundary or at the tool-dispatch layer.
  • raincole 1 hour ago
    And MS-DOS was a massive success. Even 'massive' is such an understatement and English probably needs to invent a new word for that level of world-changing business.

    So yeah, perhaps it isn't fooling the author, but it doesn't matter for the other billions of people.

  • LudwigNagasena 3 hours ago
    And I remember OSes today, 1 year ago, 5 years ago, 10 years ago, etc. Security was always a problem. People blindly delegate admin privileges to scripts and programs from the internet all the time. It’s hard to make something secure and usable at the same time. It’s not like agent harnesses suddenly broke all adopted best practices around software and sandboxing.

    I remember Apple introducing sandboxing for Mac apps and extending the deadlines because no one was implementing it. AFAIK, many developers still don't release their apps there simply because of how limiting it is.

    Ironically, the author suggests to install his software by curl’ing it and piping it straight into sh.

  • tomasol 2 hours ago
    I believe the codegen must be separated from the runtime. Every time you ask AI for a new task, it must be deployed as a separate app with the least amount of privileges possible, potentially with manual approvals as the app is executing. So essentially you need a workflow engine.
  • sriku 3 hours ago
    "Fast" is not always a virtue and "efficiency" is not always the only consideration.
  • trilogic 4 hours ago
    Great article. I've been skeptical of it since the beginning, with these Python "CLI" agents. I've been looking for a local, AI-driven agentic GUI that offers real privacy but couldn't find one anywhere. What we call a real local CLI agent pipeline, AI-driven with the llama.cpp engine, is finally done: just pure bash and C++, model isolated, no HTTP, no Python, no API, no proprietary models. There is a native version (in C++) and a community version in Electron. Is Electron good enough to protect users while wrapping all the rest? This is exciting.
  • classified 1 hour ago
    The value proposition seems clear: OpenClaw lets you speedrun the Why of application security and sandboxing from first principles. Start with putting all of your money and your valuables in a box without a lock stored in a public place. If you learn something from that, you may proceed with the next step.
  • pointlessone 4 hours ago
    Wow. Much security.

    I too remember DOS. Data and code finely blended and perfectly mixed in the same universally accessible block of memory. Oh, wait… single context. nvm

  • tnelsond4 2 hours ago
    I think we should be giving AI access to something like TempleOS, where there are no permissions, everything runs unrestricted, and you can rewrite the OS while it's running.
  • npodbielski 1 hour ago
    It does not look like it supports streaming of responses from the LLM into the channel. Big issue for local inference.
  • nurettin 1 hour ago
    It wasn't entirely DOS's fault. DOS was a relic from the end of the single-process, single-user era. Corporate took that and bent it to their use instead of settling for something more complex and harder, which would have required an entire department to maintain.

    *Claw is more like Windows 98. Everyone knows it is broken; nobody really cares. And you are almost certainly going to be cryptolocked (or worse) because of it. It isn't a matter of if, but when.

  • TacticalCoder 2 hours ago
    > curl-pipe-sh as well. The installer verifies the release signature with ssh-keygen against an embedded key, fail-closed on every failure path. The installer’s own SHA is pinned in the README for readers who want to check the script before piping.

    Packages shipping as part of Linux distros are signed. Official Emacs packages (but not installed by the default Emacs install) are all signed too.

    I thankfully see some projects released, outside of distros, that are signed by the author's private key. Some of these keys I have saved (and archived) since years.

    I've got my own OCI containers automatically verifying signed hashes from known author's past public keys (i.e. I don't necessarily blindly trust a brand new signature key as I trust one I know the author has been using since 10 years).

    Adding SHA hashes pinning to "curl into bash" is a first step but it's not sufficient.

    Software shipped properly isn't just pinning hashes into shell scripts that are then served from pwned Vercel sites, because the attacker can "pin" anything he wants on a pwned JavaScript site.
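
    That distinction fits in a few lines (a toy sketch, with a SHA-256 pin standing in for a full signature scheme; what matters is where the trusted reference came from):

```python
# Toy sketch of the provenance argument, with SHA-256 standing in for
# real signature verification. The check is the same either way; only
# the origin of the trusted reference differs.

import hashlib

def sha256_hex(data: bytes) -> str:
    return hashlib.sha256(data).hexdigest()

def verify_against_site_pin(payload: bytes, pin_from_same_site: str) -> bool:
    # Worthless against a compromised origin: whoever serves the
    # payload also serves the "pin", so the two always match.
    return sha256_hex(payload) == pin_from_same_site

def verify_against_local_pin(payload: bytes, pin_archived_long_ago: str) -> bool:
    # Meaningful: the reference arrived out-of-band, long before this
    # download. The analogue of a long-held, airgapped signing key.
    return sha256_hex(payload) == pin_archived_long_ago
```

    An attacker who owns the site passes the first check trivially by serving a matching pin next to the malicious payload; only the second check, anchored outside the delivery channel, catches the swap.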

    Proper software releases are signed. And not "signed" by the 'S' in HTTPS, as in "that Vercel-compromised HTTPS site is safe because there's an 'S' in HTTPS".

    Is it hard to understand that signing a hash (which you can then pin) with a private key that lives on an airgapped computer is harder to compromise than an online server?

    We see major hacks nearly daily now. The cluestick is hammering your head, constantly.

    When shall the clue eventually hit the curl-basher?

    Oh wait, I know, I know: "It's not convenient" and "Buuuuut HTTPS is just as safe as a 10 years old private key that has never left an airgapped computer".

    Here, a fucking cluestick for the leftpad'ers:

    https://wiki.debian.org/Keysigning

    (btw Debian signs the hash of testing release with GPG keys that haven't changed in years and, yes, I do religiously verify them)

  • 2muchcoffeeman 3 hours ago
    [dead]
  • maxbeech 5 hours ago
    [dead]