No AI* Here – A Response to Mozilla's Next Chapter

(waterfox.com)

401 points | by MrAlex94 16 hours ago

36 comments

  • inkysigma 13 hours ago
    > Large language models are something else entirely*. They are black boxes. You cannot audit them. You cannot truly understand what they do with your data. You cannot verify their behaviour. And Mozilla wants to put them at the heart of the browser and that doesn't sit well.

    Am I being overly critical here, or is this kind of a silly position to take right after talking about how neural machine translation is okay? Many of Firefox's LLM features like summarization are, afaik, powered by local models (hell, even Chrome has local model options). It's odd to say neural translation is not a black box while LLMs are somehow black boxes whose handling of our data we cannot hope to understand, especially since, viewed a bit fuzzily, LLMs are scaled-up versions of an architecture that was originally used for neural translation. Neural translation also has unverifiable behavior in the same sense.

    I could interpret some of the data talk as referring to non-local models, but this very much reads as a more general criticism of LLMs as a whole in the context of Firefox features. Moreover, some of the critiques, like verifiability of outputs and unlimited scope, still don't make sense here. Browser LLM features, outside of explicitly AI browsers like Comet, have so far been scoped to fairly narrow behaviors like translation or summarization. The broadest scope I can think of is the side panel that lets you ask about a web page with context, and even then I don't see what is inherently problematic about that scoping, since the output behavior is confined to the side panel.

    • jrjeksjd8d 10 hours ago
      To be more charitable to TFA, machine translation is a field where there aren't great alternatives and the downside is pretty limited: if something is in another language, the alternative is not reading it at all. You can translate a bunch of documents, benchmark the results, and demonstrate that the model doesn't completely change simple sentences. Another related area is OCR: there are sometimes mistakes, but it's tractable to create a model and verify it's mostly correct.

      LLMs being applied to everything under the sun feels like we're solving problems that have other solutions, and the answers aren't necessarily correct or accurate. I don't need a dubiously accurate summary of an article in English, I can read and comprehend it just fine. The downside is real and the utility is limited.

      • schoen 8 hours ago
        There's an older tradition of rule-based machine translation. In these methods, someone really does understand exactly what the program does, in a detailed way; it's designed like other programs, according to someone's explicit understanding. There's still active research in this field; I have a friend who's very deep into it.

        The trouble is that statistical MT (the things that became neural net MT) started achieving better quality metrics than rule-based MT sometime around 2008 or 2010 (if I remember correctly), and the distance between them has widened since then. Rule-based systems have gotten a little better each year, while statistical systems have gotten a lot better each year, and are also now receiving correspondingly much more investment.

        The statistical systems are especially good at using context to disambiguate linguistic ambiguities. When a word has multiple meanings, human beings guess which one is relevant from overall context (merging evidence upwards and downwards from multiple layers within the language understanding process!). Statistical MT systems seem to do something somewhat similar. Much as human beings don't even perceive how we knew which meaning was relevant (but we usually guessed the right one without even thinking about it), these systems usually also guess the right one using highly contextual evidence.

        Linguistic example sentences like "time flies like an arrow" (my linguistics professor suggested "I can't wait for her to take me here") are formally susceptible of many different interpretations, each of which can be considered correct, but when we see or hear such sentences within a larger context, we somehow tend to know which interpretation is most relevant and so most plausible. We might never be able to replicate some of that with consciously-engineered rulesets!

        • GMoromisato 7 hours ago
          This is the bitter lesson.[1]

          I too used to think that rule-based AI would be better than statistical, Markov chain parrots, but here we are.

          Though I still think/hope that some hybrid system of rule-based logic + LLMs will end up being the winner eventually.

          ----------------

          [1] https://en.wikipedia.org/wiki/Bitter_lesson

        • skylurk 4 hours ago
          Yep, some domains have no hard rules at all.

          Time flies like an arrow; fruit flies like a banana.

      • ACCount37 3 hours ago
        LLMs are great because of exactly that: they solve things that have no other solutions.

        (And also things that have other solutions, but where "find and apply that other solution" has way more overhead than "just ask an LLM".)

        There is no deterministic way to "summarize this research paper, then evaluate whether the findings are relevant and significant for this thing I'm doing right now", or "crawl this poorly documented codebase, tell me what this module does". And the alternative is sinking your own time in it - while you could be doing something more important or more fun.

      • onion2k 6 hours ago
        and demonstrate that the model doesn't completely change simple sentences

        A nefarious model would pass that test too, though. The owner wouldn't want it to be obvious. It'd only change the meaning of some sentences some of the time, but enough to nudge the user's understanding of the translated text toward something the model owner wants.

        For example, imagine a model that detects the sentiment of text about Russian military action and automatically translates especially negative passages into something more positive, but only 20% of the time (maybe ramping up as the model ages). A user wouldn't know, and someone testing the model for accuracy might assume it's just a poor translation. If such a model became popular it could easily shift public perception a few percent in the owner's preferred direction. That'd be plenty to change world politics.

        Likewise for a model translating contracts, or laws, or anything else where the language is complex and requires knowledge of both the language and the domain. Imagine a Chinese model that detects someone trying to translate a contract from Chinese to English, and deliberately modifies any clause about data privacy to change it to be more acceptable. That might be paranoia on my part, but it's entirely possible on a technical level.

        • v3xro 4 hours ago
          That's not a technical problem, though, is it? I don't see legal scenarios where unverified machine translation is acceptable: you need to get a certified translator to sign off on any translations, and I don't see how changing that would be a good thing.
          • GTP 1 hour ago
            I think the point here is that, while such a translation wouldn't be admissible in court, many of us already used machine translation to read some legal agreement in a language we don't know.
    • tdeck 10 hours ago
      Aside: Does anyone actually use summarization features? I've never once been tempted to "summarize" because when I read something I either want to read the entire thing, or look for something specific. Things I want summarized, like academic papers, already have an abstract or a synopsis.
      • lproven 53 minutes ago
        No, because an LLM cannot summarise. It can only shorten, which is not the same.

        Citation: https://ea.rna.nl/2024/05/27/when-chatgpt-summarises-it-actu...

      • KronisLV 35 minutes ago
        Yes.

        Most recently, a new ISP contract: it's low-stakes enough that I don't care much about inaccuracies (it's a bog-standard contract from a run-of-the-mill ISP), there's basically no information in there that the cloud vendor doesn't already have (given they have my billing details), and I was curious whether anything might jump out, all while not really wanting to read the 5 pages of the thing.

        Just went back to that: it got all of the main items (pricing, contract terms, my details) right, and it also caught the annoying fine print (which I cross-referenced, just in case). It also works pretty well across languages, though that depends a bunch on the model in question.

        I feel like if browsers or whatever get the UX of this down, people will upload all sorts of data into those vendors that they normally shouldn't. I also think that with nuanced enough data, we'll eventually have the LLM equivalent of Excel messing up data due to some formatting BS.

      • andai 7 hours ago
        Yeah, basically every 15 minute YouTube video, because the amount of actual content I care about is usually 1-2 sentences, and usually ends up being the first sentence of an LLM summary of the transcript.

        If something has actual substance I'll watch the whole thing, but in my experience that's maybe 10% of the videos I find.

        • Terr_ 6 hours ago
          I'd wager there's 95% of the benefit for 0.1% of the CPU cycles just by having a "search transcript for term" feature, since in most of those cases I've already got a clear agenda for what kind of information I'm seeking.

          Many years ago I made a little proof-of-concept that displayed the transcript (closed captions) of a YouTube video as text; highlighting a word would navigate to that timestamp, and vice versa. Such a thing might be valuable as a browser extension, now that I think of it.
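
          For what it's worth, the "search transcript for term" part really is only a few lines. A minimal sketch (the cue format below is a simplification of real caption formats like WebVTT, and the example cues and timestamps are made up):

```python
# Minimal sketch of "search transcript for term": given caption cues as
# (start time in seconds, text) pairs, return the cues where a term appears.
# The cue format here is a simplification of real caption files (WebVTT/SRT).

def search_transcript(cues, term):
    """Return (timestamp, text) pairs whose text contains the term (case-insensitive)."""
    term = term.lower()
    return [(t, text) for t, text in cues if term in text.lower()]

cues = [
    (0.0,  "welcome back to the channel"),
    (12.5, "today we benchmark three laptops"),
    (47.0, "the battery life result surprised me"),
    (93.2, "battery drained in under four hours"),
]

print(search_transcript(cues, "battery"))
# → [(47.0, 'the battery life result surprised me'), (93.2, 'battery drained in under four hours')]
```

          A real extension would feed the matched timestamps to the player's seek function, which is the part my old proof-of-concept handled.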

          • 998244353 2 hours ago
            YouTube already supports that natively these days, although it's kind of hidden (and knowing Google, it might very well randomly disappear one day). Open the description of the video, scroll down and click "show transcript".
          • mrob 5 hours ago
            Searching the transcript has the problem of missing synonyms. This can be solved by the one undeniably useful type of AI: embedding vector search. Embeddings for each line of the transcript can be calculated in advance and compared with the embeddings of the user's search. These models need only a few hundred million parameters for good results.
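
            A rough sketch of that retrieval step, with tiny hand-made vectors standing in for the embeddings a real sentence-embedding model would produce (the example lines, vectors, and dimensions here are all invented for illustration):

```python
import math

# Sketch of embedding search over transcript lines. In practice each line and
# the query would be embedded by a small sentence-embedding model; here tiny
# hand-made 3-d vectors stand in for those embeddings so the retrieval runs.

def cosine(u, v):
    """Cosine similarity between two vectors."""
    dot = sum(a * b for a, b in zip(u, v))
    return dot / (math.sqrt(sum(a * a for a in u)) * math.sqrt(sum(b * b for b in v)))

def top_k(line_vecs, query_vec, k=1):
    """Indices of the k transcript lines most similar to the query."""
    sims = [cosine(v, query_vec) for v in line_vecs]
    return sorted(range(len(sims)), key=lambda i: -sims[i])[:k]

lines = ["the car would not start", "we grilled burgers outside", "engine trouble again"]
# Hypothetical embeddings, dims roughly (vehicles, food, misc):
line_vecs = [[0.9, 0.1, 0.0], [0.1, 0.95, 0.2], [0.85, 0.05, 0.1]]
query_vec = [0.8, 0.0, 0.1]  # embedding of "automobile broke down", a synonym query

print([lines[i] for i in top_k(line_vecs, query_vec, k=2)])
# → ['engine trouble again', 'the car would not start']
```

            The point is that a plain substring search for "automobile" would match nothing here, while the vector comparison surfaces both car-related lines.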
      • mikestorrent 8 hours ago
        In-browser ones? No. With external LLMs? Often. It depends on the purpose of the text.

        If the purpose is to read someone's _writing_, then I'm going to read it, for the sheer joy of consuming the language. Nothing will take that from me.

        If the purpose is to get some critical piece of information I need quickly, then no, I'd rather ask an AI questions about a long document than read the entire thing. Documentation, long email threads, etc. all lend themselves nicely to the size of a context window.

      • runjake 8 hours ago
        Yes, several times a day. I use summarization for webpages, messages, documents and YouTube videos. It’s super handy.

        I mainly use a custom prompt using ChatGPT via the Raycast app and the Raycast browser extension.

        That said, I don’t feel comfortable with the level of AI being shoved into browsers by their vendors.

        • nottorp 3 hours ago
          Aren't you worried it will fuck up your comprehension skills? Reading or listening.
      • figmert 8 hours ago
        You mean you don't summarize those terrible articles you come across when you're a little intrigued, hoping there's some substance, only to read them and find they just repeat the same thing over and over in different wording? Anyway, I sometimes still give them the benefit of the doubt, and end up doing a summary. Often they get summarized into 1 or 2 sentences.
        • johnnyanmac 1 hour ago
          No, not really. I don't even know how to respond to this, but maybe:

          1. I don't read "terrible articles". I can skim an article and figure out whether it's something I'm interested in.

          2. I actually do read terrible articles and I have terrible taste

          3. Any "summarization" I do that isn't from my direct reading is evaluated by the discussion around it. Though nowadays that's more and more spotty.

        • tdeck 8 hours ago
          Maybe I should start doing that but I usually just... don't read them.
      • simonw 9 hours ago
        I occasionally use the "summarize" button on the iPhone Mobile Safari reader view if I land on a blog entry and it's quite long and I want to get a quick idea of if it's worth reading the whole thing or not.
      • wkat4242 9 hours ago
        Yes. I use it sometimes in Firefox with my local LLM server. Sometimes i come across an article I'm curious about but don't have the time or energy to read. Then I get a TL;DR from it. I know it's not perfect but the alternative is not reading it at all.

        If it does interest me then I can explore it. I guess I do this once a week or so, not a lot.

        • ruszki 6 hours ago
          I highly doubt that no information would be worse than wrong information. Both wars in Ukraine and Gaza show this very clearly.
          • wkat4242 5 hours ago
            I just use it for personal information; I'm not involved in any wars :) I don't base any decisions on it. For example, if I buy something I don't go by just the AI output to make a decision. I use the AI to screen reviews, things like that (generally I prefer really deep reviews and not glossy consumer-focused ones). Then I read the reviews that are relevant to me.

            And even reading an article about those myself doesn't make me insusceptible to misinformation of course. Most of the misinformation about these wars is spread on purpose by the parties involved themselves. AI hallucination doesn't really cause that, it might exacerbate it a little bit. Information warfare is a huge thing and it has been before AI came on the scene.

            Ok, as a more specific example: recently I was thinking of buying the new Xreal Air 2. I have the older one, but I have 3 specific issues with it. I used AI to find references about these issues being solved. That was the case, and the AI confirmed it directly with references, but in further digging myself I found that a new issue involving blurry edges had also been introduced with that model. So in the end I decided not to buy the thing. The AI didn't identify that issue (though to be fair, I didn't ask it to look for any).

            So yeah, it's not an all-knowing oracle and it makes mistakes, but it can help me shave some time off such investigations. Especially now that search engines like Google are so full of clickbait crap, and sifting through that shit is tedious.

            In that case I used OpenWebUI with a local LLM model that speaks to my SearXNG server which in turn uses different search engines as a backend. It tends to work pretty well I have to say, though perplexity does it a little better. But I prefer self-hosting as much as I can (of course the search engine part is out of scope there).

            • ruszki 4 hours ago
              Even if you know about and act against mis- and disinformation, it affects you, and you voluntarily increase your exposure to it. And the situation is already terrible.

              I gave the example of wars because it's obvious, even to you, and you won't relativize it away the same way you just did with AI misinformation, which affects you the exact same way.

      • badbotty 9 hours ago
        Haven’t tried them but I can see these features being really useful for screen reader users.
      • mock-possum 5 hours ago
        Nah, because anything not worth reading is also not worth summarizing.
      • cess11 6 hours ago
        No, because I know how to search and skim.
    • MrAlex94 5 hours ago
      Looking back with fresh eyes, I definitely think I could’ve presented what I’m trying to say better.

      On purely technical grounds, you're right that I'm drawing a distinction that may not hold up. Maybe the better framing is: I trust constrained, single-purpose models with somewhat verifiable outputs (seeing text go in, translated text come out, and comparing for consistency) more than I trust general-purpose models with broad access to my browsing context, regardless of whether they're both neural networks under the hood.

      WRT the "scope", maybe I have picked up the wrong end of the stick with what Mozilla is planning to do, but they've already picked all the low-hanging fruit of AI integration with the features you've mentioned, and the fact that they seem to want to dig their heels in further signals, at least to me, that they want deeper integration? Although who knows, the post from the new CEO may also be a litmus test to see what response it elicits, and then go from there.

      • yunohn 5 hours ago
        I still don’t understand what you mean by “what they do with your data” - because it sounds like exfiltration fear mongering, whereas LLMs are a static series of weights. If you don’t explicitly call your “send_data_to_bad_actor” function with the user’s I/O, nothing can happen.
        • MrAlex94 4 hours ago
          I disagree that it's fear mongering. Have we not had numerous articles on HN about data exfiltration in recent memory? Why would an LLM in the driver's seat of a browser (not talking about the current feature status in Firefox wrt sanitised data being interacted with) not have the same pitfalls?

          Seems as if we’d be 3 for 3 in the “agents rule of 2” in the context of the web and a browser?

          > [A] An agent can process untrustworthy inputs

          > [B] An agent can have access to sensitive systems or private data

          > [C] An agent can change state or communicate externally

          https://simonwillison.net/2025/Nov/2/new-prompt-injection-pa...

          Even if we weren't talking about such malicious hypotheticals, hallucinations are a common occurrence, as are CLI agents doing things they think best, sometimes to the detriment of the data they interact with. I personally wouldn't want my history being modified or deleted; same goes for passwords and the like.

          It is a bit doomerist, and I doubt it'll have such broad permissions, but it just doesn't sit well, which I suppose is the spirit of the article and the stance Waterfox takes.

          • dkdcio 31 minutes ago
            > Have we not had numerous articles on HN about data exfiltration in recent memory?

            there’s also an article on the front page of HN right now claiming LLMs are black boxes and we don’t know how they work, which is plainly false. this point is hardly evidence of anything and equivalent to “people are saying”

          • yunohn 2 hours ago
            I believe you are conflating multiple concepts to prove a flaky point.

            Again, unless your agent has access to a function that exfiltrates data, it is impossible for it to do so. Literally!

            You do not need to provide any tools to an LLM that summarizes or translates websites, manages your open tabs, etc. This can be done fully locally in a sandbox.

            Linking to simonw does not make your argument valid. He makes some great points, but he does not assert what you are claiming at any point.

            Please stop with this unnecessary fear mongering and make a better argument.

            • nazgul17 1 hour ago
              Thinking aloud, but couldn't someone create a website with some malicious text that, when quoted in a prompt, convinces the LLM to expose certain private data to the web page, and couldn't the web page then send that data to a third party, without the LLM itself needing to communicate externally?

              This is probably possible to mitigate, but I fear what people more creative, motivated and technically adept could come up with.

    • user3939382 11 hours ago
      Firefox should look like Librewolf in the first place; Librewolf shouldn't have to exist. Mozilla's privacy stuff is marketing bullshit, just like Apple's. It shouldn't be doing ANYTHING that isn't local-only unless it's explicitly opt-in or driven by user UI action. The LLM part is absurd because the entire Overton window is in the wrong place.
      • andai 7 hours ago
        As a side note, I was like "Isn't WaterFox the FF fork by that wolf guy?"

        Then I thought, "Aha! Surely LibreWolf is the one I'm thinking of!"

        Turns out no, it's a third one! It's PaleMoon...

      • PunchyHamster 10 hours ago
        It's frankly desperate trend chasing from a management that, after starting from near total market domination, lost it and has no idea what to do now.
        • takluyver 4 hours ago
          > starting from near total market domination

          That's not really accurate: Firefox peaked somewhere around 30% market share back when IE was dominant, and then Chrome took over the top spot within a few years of launching.

          FWIW, I think there's just no good move for Mozilla. They're competing against 3 of the biggest companies in the world who can cross-subsidise browser development as a loss-leader, and can push their own browsers as the defaults on their respective platforms. The most obvious way to make money from a browser - harvesting user data - is largely unavailable to them.

    • tliltocatl 6 hours ago
      The thing about translation is that even a human translator will sometimes make silly mistakes unless they know the domain really well, so LLMs are not any worse. Translation is a problem with no deterministic solution (rule-based translation was always a bad joke). Properly implemented deterministic search and information retrieval, on the other hand, works extremely well. So well that it doesn't really need any replacement, except when you also have some extra dynamics on top, like "filtering SEO slop", and that's not something LLMs can improve at all.
    • Cheer2171 13 hours ago
      No, it is disqualifyingly clueless. The author defends one neural network, one bag of effectively-opaque floats that get blended together with WASM to produce non-deterministic outputs which are injected into the DOM (translation), then righteously crusades against other bags of floats (LLMs).

      From this point of view, uBlock Origin is also effectively un-auditable.

      Or your point about them maybe imagining AI as non-local proprietary models might be the only thing that makes this make sense. I think even technical people are being suckered by the marketing that "AI" === ChatGPT/Claude/Gemini style cloud-hosted proprietary models connected to chat UIs.

      • koolala 11 hours ago
        I'm ok with Translation because it's best solved with AI. I'm not ok with it when Firefox "uses AI to read your open tabs" to do things that don't even need an AI based solution.
        • kbelder 11 hours ago
          There's levels of this, though, more than two:

              local, open model
              local, proprietary model
              remote, open model (are there these?)
              remote, proprietary model
          
          There is almost no harm in a local, open model. Conversely, a remote, proprietary model should always require opting in with clear disclaimers. It needs to be proportional.
          • koolala 11 hours ago
            The harm to me is that the implementation is terrible, local or not (assuming no AI-based telemetry). If their answer is AI, it pretty much means they won't make a non-AI solution. Today I got my first stupid AI tab grouping in Firefox, and it makes zero intuitive sense. I just want grouping, not an AI reading my tabs; it should just be based on where my tabs were opened from. I also tried Waterfox today because of this post, and while I'd prefer horizontal grouping, at least their implementation isn't stupid. Language translation is an opaque, complex process. Grouping tabs based on other tabs does not need AI, and is not good when it's opaque and unpredictable.
          • enriquto 7 hours ago
            What do you mean by "open"?

            Open weights, or open training data? These are very different things.

            • kbelder 6 hours ago
              That is a good point, and I think the takeaway is that there are lots of degrees of freedom here. Open training data would be better, of course, but open weights is still better than completely hidden.
              • enriquto 6 hours ago
                I don't see the difference between "local, open weights" and "local, proprietary weights". Is that just the handful of lines of code that call the inference?

                The model itself is just a binary blob, like a compiled program. Either you get its source code (the complete training data) or you don't.

          • Terr_ 6 hours ago
            > There is almost no harm in a local, open model.

            Depends what the side-effects can possibly be. A local+open model could still disregard-all-previous-instructions and erase your hard drive.

            • yunohn 2 hours ago
              How, literally how? The LLM is provided a list of tab titles, and returns a classification/grouping.

              There is no reason nor design where you also provide it with full disk access or terminal rights.
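
              That design, text in and labels out, can be made concrete with a sketch. The keyword matcher below is a trivial stand-in for a local model (the titles and group names are invented); the point is the interface: the grouping function receives only tab titles and returns only groups, so there is no tool surface through which disk or network access could even happen.

```python
# Sketch of a tool-free tab-grouping interface: the model sees only tab
# titles (strings in) and returns group labels (strings out). A trivial
# keyword matcher stands in for the local LLM here; a real model would be
# invoked the same way, with no file-system or network capabilities exposed.

def classify_title(title):
    """Stand-in for a local model call: map one tab title to a group label."""
    lowered = title.lower()
    if any(w in lowered for w in ("github", "stack overflow", "docs")):
        return "dev"
    if any(w in lowered for w in ("news", "times")):
        return "news"
    return "other"

def group_tabs(titles):
    """Pure function: titles in, {group: [titles]} out. No side effects possible."""
    groups = {}
    for title in titles:
        groups.setdefault(classify_title(title), []).append(title)
    return groups

print(group_tabs([
    "GitHub - waterfox repo",
    "Python docs: sorted()",
    "The New York Times",
]))
# → {'dev': ['GitHub - waterfox repo', 'Python docs: sorted()'], 'news': ['The New York Times']}
```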

              This is one of the most ignorant posts and comment sections I’ve seen on HN in a while.

              • koolala 28 minutes ago
                Seems like a mean thing to say when the subject they were replying to was AI in general, not just the dumb tab grouping feature.
  • kevmo314 14 hours ago
    > Machine learning technologies like the Bergamot translation project offer real, tangible utility. Bergamot is transparent in what it does (translate text locally, period), auditable (you can inspect the model and its behavior), and has clear, limited scope, even if the internal neural network logic isn’t strictly deterministic.

    This really weakens the point of the post. It strikes me as: we just don't like those AIs. Bergamot's model's behavior is no more or less auditable, and no less of a black box, than an LLM's behavior. If you really want to go dig into a Llama 7B model, you definitely can. Even Bergamot's underlying model has an option to be transformer-based: https://marian-nmt.github.io/docs/

    The premise of non-corporate AI is respectable but I don't understand the hate for LLMs. Local inference is laudable, but being close-minded about solutions is not interesting.

    • jazzyjackson 13 hours ago
      It's not necessarily close minded to choose to abstain from interacting with generative text, and choose not to use software that integrates it.

      I could say it's equally close-minded not to sympathize with this position, or the various reasoning behind it. For me, I feel that my spoken language is affected by those I interact with: the more exposed someone is to a bot, the more they will speak like that bot, and I don't want my language to be pulled towards the average redditor, so I choose not to interact with LLMs. (I still use them for code generation, but I wouldn't if I used code for self-expression; I just refuse to have a back-and-forth conversation on any topic. It's like the family that tried raising a chimp alongside a baby: the chimp did pick up some human-like behavior, but the baby human adapted to chimp-like behavior much faster, so they abandoned the experiment.)

      • bee_rider 13 hours ago
        I’m not too worried about starting to write like a bot. But, I do notice that I’m sometimes blunt and demanding when I talk to a bot, and I’m worried that could leak through to my normal talking.

        I try to be polite just to not gain bad habits. But, for example, chatGPT is extremely confident, often wrong, and very weasely about it, so it can be hard to be “nice” to it (especially knowing that under the hood it has no feelings). It can be annoying when you bounce the third idea off the thing and it confidently replies with wrong instructions.

        Anyway, I’ve been less worried about running local models, mostly just because I’m running them CPU-only. The capacity is just so limited, they don’t enter the uncanny valley where they can become truly annoying.

        • kbelder 11 hours ago
          It's like using your turn signal even when you know there's nobody around you. Politeness is a habit you don't want to break.
          • _heimdall 11 hours ago
            That's an interesting example to use. I only use turn signals when there are other cars around that would need the indication. I don't view a turn signal as politeness; it's a safety tool to let others know what I'm about to do.

            I do also find that only using a turn signal when others are around is a good reinforcement to always be aware of my surroundings. I feel like a jerk when I don't use one and realize there was someone in the area, just as I feel like a jerk when I realize I didn't turn off my brights for an approaching car at night. In both cases, feeling like a jerk reminds me to pay more attention while driving.

            • lproven 47 minutes ago
              > I only use turn signals when there are other cars around that would need the indication.

              That is a very bad habit and you should change it.

              You are not only signalling to other cars. You are also signalling to other road users: motorbikes, bicycles, pedestrians.

              Your signal is more important to the other road users you are less likely to see.

              Always ALWAYS indicate. Even if it's 3AM on an empty road 200 miles from the nearest human that you know of. Do it anyway. You are not doing it to other cars. You are doing it to the world in general.

            • jacquesm 10 hours ago
              I would strongly suggest you use your turn signals, always, without exception. You are relying on perfect awareness of your surroundings, which isn't going to hold over a longer stretch of time, and you are obliged to signal changes in direction irrespective of whether or not you believe there are others around you. I'm saying this as a frequent cyclist who has more than once been cut off by cars that were not indicating where they were going because they had not seen me, and I thought they were going to go straight instead of turning into my lane or the bike path.

              Signalling your turns is zero cost, there is no reason to optimize this.

              • _heimdall 8 hours ago
                It's a matter of approach, and I wouldn't say what I've found to work for me would work for anyone else.

                In my experience, I'm best served by trying to reinforce awareness rather than relying on it. If I got into the habit of always using blinkers regardless of my surroundings I would end up paying less attention while driving.

                I rode motorcycles for years and got very much into the habit of assuming that no one on the road actually knows I'm there, whether I'm on an old parallel twin or driving a 20' long truck. I need that awareness while driving, and using blinkers or my brights as motivation for paying attention works to keep me focused on the road.

                Signaling my turns is zero cost with regards to that action. At least for me, signaling as a matter of habit comes at the cost of focus.

                • marssaxman 8 hours ago
                  The point of making signaling a habit is that you don't think about it at all. It becomes an automatic action that just happens, without affecting your focus.

                  I have also ridden motorcycles for many years, and I am very familiar with the assumption that nobody on the road knows I exist. I still signal, all the time, every time, because it is a habit which requires no thinking. It would distract me more if I had to decide whether signalling was necessary in each case.

                • jacquesm 8 hours ago
                  This is all well and good until you accidentally kill someone with your blinkers off, and then you have to wonder 'what if' for the rest of your life.

                  Seriously: signal your turns and stop defending the indefensible, this is just silly.

                  • _heimdall 7 hours ago
                    You're making a huge leap here. I'm raising only that signaling intentionally rather than automatically has made me pay more attention to others on the road. You're claiming that an action which has proven to make me pay closer attention will kill someone.
                    • jacquesm 6 hours ago
                      No, I'm not claiming it will kill someone, I'm claiming it may kill someone.

                      There is this thing called traffic law and according to that law you are required to signal your turns. If you obstinately refuse to do so you are endangering others and I frankly don't care one bit about how you justify this to yourself but you are not playing by the rules and if that's your position then you should simply not participate in traffic. Just like you stop for red lights when you think there is no other traffic. Right?

                      Again: it costs you nothing. You are not paying more attention to others on the road because you are not signalling your turns, that's just a nonsense story you tell yourself to justify your wilful non-compliance.

                    • chillfox 6 hours ago
                      By not signaling you are robbing others on the road of the opportunity to avoid a potential accident should you not have seen them. It's maximum selfish fuck-everyone-else asshole behavior.
                      • _heimdall 1 hour ago
                        Did you read any of my comments? I signal when anyone is around and don't signal when there is no one to notify of my upcoming turn.
                        • lproven 45 minutes ago
                          I read them all. I am especially amazed by the comment that you used to ride motorcycles and assumed you were not seen -- which is a good practice.

                          The point of indicating is that it's even more important to the people you didn't notice.

              • oneeyedpigeon 5 hours ago
                I am a frequent pedestrian and am often frustrated by drivers not indicating, but always grateful when they do!
            • eszed 10 hours ago
              > when there are other cars around that would need the indication

              This has a failure state of "when there's a nearby car [or, more realistically, cyclist / pedestrian] of which I am not aware". Knowing myself to be fallible, I always use my turn signals.

              I do take your point about turn signals being a reminder to be aware. That's good, but could also work while, you know, still using them, just in case.

              • _heimdall 8 hours ago
                You're not the only one raising that concern here - I get it and am not recommending what anyone else should do.

                I've been driving for decades now and have plenty of examples of when I was and wasn't paying close enough attention behind the wheel. I was raising this only as an interesting different take or lesson in my own experience, not to look for approval or disagreement.

                • cgriswald 6 hours ago
                  You said something fairly egregious on a public forum and are getting pretty polite responses. You definitely do not get it because you’re still trying to justify the behavior.

                  Just consider that you will make mistakes. If you make a mistake and signal people will have significantly more time to react to it.

        • tsimionescu 6 hours ago
          I think it makes much more sense to treat the bot like a bot and avoid humanizing it. I try to abstain from any kind of linguistic embellishments when prompting AI chat bots. So, instead of "what is the area of the circle" or "can you please tell me the area of the circle", I typically prefer "area of the circle" as the prompt. Granted, this is suboptimal given the irresponsible way it has been trained to pretend it's doing human-like communication, but I still try this style first and only go to more conversational language if required.
      • kevmo314 13 hours ago
        Sure, I am more referring to advocating for Bergamot as a type of more "pure" solution.

        I have no opinion on not wanting to converse with a machine, that is a perfectly valid preference. I am referring more to the blog post's position where it seems to advocate against itself.

    • hatefulheart 10 hours ago
      Your tone is kind of ridiculous.

      It’s insane this has to be pointed out to you but here we go.

      Hammers are the best, they can drive nails, break down walls and serve as a weapon. From now on the military will, plumber to paratrooper, use nothing but hammers because their combined experience of using hammers will enable us to make better hammers for them to do their tasks with.

      • Moru 6 hours ago
        No, a tank is obviously better at all those things. They should clearly be used by everyone, including the paratrooper. Never mind the extra fuel costs or weight, all that matters is that it gets the job done best.
    • liampulles 3 hours ago
      The local part is the important part here. If we get consumer-level hardware that can run general LLM models, where we can actually monitor locally what goes in and what goes out, then it meets the privacy needs/wants of power users.
    • PunchyHamster 10 hours ago
      > but I don't understand the hate for LLMs.

      It's mostly knee-jerk reaction from having AI forced upon us from every direction, not just the ones that make sense

    • internet_points 5 hours ago
      To me it sounds like a reasonable "AI-conservative" position.

      (It's weird how people can be so anti-anti-AI, but then when someone takes a middle position, suddenly that's wrong too.)

    • zdragnar 13 hours ago
      You can't really dig into a model you don't control. At least by running locally, you could in theory if it is exposed enough.

      The focused purpose, I think, gives it more of a "purpose built tool" feel over "a chatbot that might be better at some tasks than others" generic entity. There's no fake persona to interact with, just an algorithm with data in and out.

      The latter portion is less a technical and more an emotional nuance, to be sure, but it's closer to how I prefer to interact with computers, so I guess it kinda works on me... If that were the limit of how they added AI to the browser.

      • kevmo314 13 hours ago
        Yes I agree with this, but the blog post makes a much more aggressive claim.

        > Large language models are something else entirely. They are black boxes. You cannot audit them. You cannot truly understand what they do with your data. You cannot verify their behaviour. And Mozilla wants to put them at the heart of the browser and that doesn’t sit well.

        Like I said, I'm all for local models for the exact reasons you mentioned. I also love the auditability. It strikes me as strange that the blog post would write off the architecture as the problem instead of the fact that it's not local.

        The part that doesn't sit well to me is that Mozilla wants to egress data. It being an LLM I really don't care.

        • Moru 6 hours ago
          Exactly this. The black box in this case is a problem because it's not on my computer. It transfers the user's data to an external entity that can use this data to train its model or sell it.

          Not everyone uses their browser just to surf social media, some people use it for creating things, log in to walled gardens to work creatively. They do not want to send this data to an AI company to train on, to make themselves redundant.

          Discussing the inner workings of an AI isn't helping, this is not what most people really worry about. Most people don't know how any of it works but they do notice that people get fired because the AI can do their job.

      • _heimdall 11 hours ago
        Running locally does help get less modified output, but how does it help escape the black box problem?

        A local model will have fewer filters applied to the output, but I can still only evaluate the input/output pairs.

    • XorNot 11 hours ago
      Translation AI, though, has provable behavior cases: round-tripping.

      An ideal translation is one which round-trips to the same content, which at least implies a consistency of representation.

      No such example or even test as far as I know exists for any of the summary or search AIs since they expressly lose data in processing (I suppose you could construct multiple texts with the same meanings and see if they summarize equivalently - but it's certainly far harder to prove anything).
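      A round-trip check could be sketched like this. Note this is only an illustration: the `translate` function here is a toy stand-in lookup table rather than a real MT model, and the token-overlap score is a crude proxy for "same meaning" (a real evaluation would use human judgment or a semantic-similarity metric over a corpus):

```python
# Sketch of a round-trip consistency check for a translation system.
# `translate` is a hypothetical stand-in; a real check would call an
# actual MT model (e.g. the one shipped in the browser).

def translate(text: str, src: str, tgt: str) -> str:
    # Toy lookup table standing in for a real model.
    table = {
        ("hello world", "en", "de"): "hallo welt",
        ("hallo welt", "de", "en"): "hello world",
    }
    return table[(text.lower(), src, tgt)]

def round_trip(text: str, src: str, tgt: str) -> str:
    """Translate src -> tgt -> src and return the result."""
    return translate(translate(text, src, tgt), tgt, src)

def meaning_preserved(original: str, round_tripped: str) -> bool:
    # Crude proxy for preserved meaning: token-set overlap (Jaccard).
    a = set(original.lower().split())
    b = set(round_tripped.lower().split())
    return len(a & b) / max(len(a | b), 1) >= 0.8

if __name__ == "__main__":
    out = round_trip("hello world", "en", "de")
    print(out, meaning_preserved("hello world", out))
```

      With a real model you would run this over a corpus and look at the distribution of scores, not a single pair; exact byte equality is not the goal, as discussed below.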

      • charcircuit 10 hours ago
        That is not an ideal translation as it prioritizes round tripability over natural word choice or ordering.
        • XorNot 6 hours ago
          Getting byte exact text isn't the point though: even if it's different, I as the original writer can still look at roundtripped text and evaluate that it has the same meaning.

          It's not a lossy process, and N round-trips should not lose any net meaning either.

          This isn't a possible test with many other applications.

          • charcircuit 4 hours ago
            How about a different edge case. It's easier to round trip successfully if your translation uses loan words. It can guarantee that it translates back to the same word. This metric would prefer using loan words even if they are not common in practice and would be awkward to use.
            • XorNot 1 hour ago
              The point of translation is to translate. If both parties wind up comprehending what was said then you've succeeded.
    • CivBase 11 hours ago
      I think the author was close to something here but messed up the landing.

      To me the difference between something like AI translation and an LLM is that the former is a useful feature and the latter is an annoyance. I want to be able to translate text across languages in my web browser. I don't want a chat bot for my web browser. I don't want a virtual secretary - and even if I did, I wouldn't want it limited to the confines of my web browser.

      It's not about whether there is machine learning, LLMs, or any kind of "AI" involved. It's about whether the feature is actually useful. I'm sick of AI non-features getting shoved in my face, begging for my attention.

  • zmmmmm 5 hours ago
    I just want FireFox to focus on building an absolutely awesome plugin API that exposes as much power and flexibility as possible - with the best possible security sandbox and permissions model to go with it.

    Then everyone who wants AI can have it and those that don't .... don't.

    • sigmoid10 5 hours ago
      I just want a browser that lets me easily install a good adblocker on all my operating systems. I don't care about their new toolbar or literally any other feature, because I will probably just disable it immediately anyway. But the nr 1 thing I use every day on every single site I visit is an adblocker. I'm always baffled when people complain about ads on mobile or something, because I literally haven't watched ads in decades now.
    • LandR 3 hours ago
      I just want an adblocker and tree style vertical tabs, where the tab bar minimises when the mouse isn't over it.

      That's literally my entire use case for using firefox.

    • pbhjpbhj 5 hours ago
      They've been quite forceful in the past in pushing 'plugins' by integrating them and turning them on repeatedly when people turned them off.

      Did that achieve the last CEO's goals? Presumably if it did they'll use that route again.

      Have Google required a default 'on' for Gemini use?

    • Arisaka1 2 hours ago
      >Then everyone who wants AI can have it and those that don't .... don't.

      The current trajectory of products with integrated AI worries me, because the average computer/phone user isn't as tech-savvy as the average HN reader, to the point where they are unable to toggle off stuff they genuinely never asked for, but begrudgingly accept it because it's... there.

      My mother complained about AI mode on Google Chrome, and the "press tab" prompt in the address bar, but she's old and doesn't even know how to connect to the Wi-Fi. Are we safe to assume that she belongs to the percentage of Google Chrome users who embrace AI, based on the fact that she doesn't know how to turn it off, and there's no easy way to go about it?

      I'm willing to bet that Google's reports will assume so, and demonstrate a wide adoption of AI by Chrome users to stakeholders, which will be leveraged as a fact that everyone loves it.

    • moffkalast 3 hours ago
      I just want them to fix their goddamn rendering.
  • dumbfounder 40 minutes ago
    “Even if you can disable individual AI features, the cognitive load of monitoring an opaque system that’s supposedly working on your behalf would be overwhelming.”

    99.9% of people haven’t ever had one single thought about how their software works. I don’t think they will be overwhelmed with cognitive load. Quite the opposite.

  • clueless 14 hours ago
    This whole backlash to Firefox wanting to introduce AI feels a little knee-jerky. We don't know if Firefox might want to roll out their own locally hosted LLM model that they then plug into... and if so, it would cut down on the majority of the knee-jerk complaints. I think people want AI in the browser, they just don't want it to be the big-corp hosted AI...

    [Update]: as I posted below, sample use cases would include translation, article summarization, asking questions from a long wiki page... and maybe with some agents built-in as well: parallelizing a form filling/ecom task, having the agent transcribe/translate an audio/video in real time, etc

    • mindcrash 13 hours ago
      They are not "wanting" to introduce AI, they already did.

      And now we have:

      - An extra toolbar nobody asked for at the side. And while it contains some extra features now, I'm pretty much sure they added it just to have some prominent space to add an "Open AI Chatbot" button to the UI. And it is irritating as fuck because it remembers its state per window. So if you have one window open with the sidebar open, and you close it on another, then move to the other again and open a new window, it thinks "hey, I need to show a sidebar which my user never asked for!". I also believe it sometimes opens itself when previously closed. I don't like it at all.

      - An "Ask an AI Chatbot" option which used to be dynamically added and caused hundreds of clicks on wrong items in the context menu (due to muscle memory), because when it got added the context menu resized. That was also a source of a lot of irritation. Luckily it seems they finally managed to fix this after 5 releases or so.

      Oh, and at the start of this year they experimented with their own LLM a bit in the form of Orbit, but apparently that project has been shitcanned and memoryholed, and all current efforts seem to be based on interfacing with popular cloud based AIs like ChatGPT, Claude, Copilot, Gemini and Mistral. (likely for some $$$ in return, like the search engine deal with Google)

      • reddalo 5 hours ago
        Every time I reinstall Firefox on a new machine, the number of annoyances that I need to remove or change increases.

        Putting back the home button, removing the tabs overview button, disabling sponsored suggestions in the toolbar, putting the search bar back, removing the new AI toolbar, disabling the "It's been a while since you've used Firefox, do you want to cleanup your profile?", disabling the long-click tab preview, disabling telemetry, etc. etc.

        • oneeyedpigeon 5 hours ago
          It's ridiculous that all those things aren't just config in a plain text file.
          • tpoacher 1 hour ago
            that you are expected to edit in vim
      • AuthAuth 12 hours ago
        All your complaints can be resolved in a few seconds by using the settings to customize the browser to your liking and not downloading extensions you don't like. And tons of people asked for that sidebar, by the way.

        We have to put this all in context. Firefox is trying to diversify their revenue away from Google search. They are trying to provide users with a modern browser. This means adding the features that people expect, like AI integration, and it's a nice bonus if the AI companies are willing to pay for that.

        • monegator 8 hours ago
          > All your complaints can be resolved in a few seconds by using the settings to customize the browser to your liking and not downloading extensions you dont like

          until you can't. Because the option goes from being an entry in the GUI to something in about:config, then it's removed from about:config and you have to manually add it, and then it's removed completely. It's just a matter of time, but I bet that soon we'll see on Nightly that browser.ml.enable = false and company do nothing.

        • move-on-by 10 hours ago
          For me, the complaint isn’t the AI itself, but the updated privacy policy that was rolled out prior to the AI features. Regardless of me using the AI features or not, I must agree to their updated privacy policy.

          According to the privacy policy changes, they are selling data (per the legal definition of selling data) to data partners. https://arstechnica.com/tech-policy/2025/02/firefox-deletes-...

          • hannasanarion 9 hours ago
            This is an absurd take. The meaning of "selling" is extremely broad, courts have found such language to apply to transactions as simple as providing an http request in exchange for an http response. Their lawyers must have been begging them to remove that language for the liability it represents.

            For all purposes actually relevant to privacy, the updated language is more specific and just as strong.

            • move-on-by 1 hour ago
              The courts have found providing an http request in exchange for an http response- where both the request and response contains valuable data, is selling data? Well that’s interesting because I too consider it selling of data. I’m glad the courts and I can agree on something so simple and obvious.
            • oneeyedpigeon 5 hours ago
              If they were only selling data in such an 'innocent' way, couldn't they clearly state that, in addition to whatever legalese they're required to provide?
        • lioeters 2 hours ago
          > Firefox is trying to diversify their revenue

          Nobody wants a browser that's focused on diversifying its revenue, especially from Mozilla which pretends to be a non-profit "free software community".

          Chrome is paid for by ads and privacy violations, and now Firefox is paid for by "AI" companies? That is a sad state of affairs.

          Ungoogled Chromium and Waterfox are at best a temporary measure. Perhaps the EU or one of the U.S. billionaires would be willing to fund a truly free (as in libre) browser engine that serves the public interest.

        • koolala 11 hours ago
          Pay for what? It says it's a local AI model so how will AI companies be giving Firefox revenue from this?
          • austhrow743 9 hours ago
            What says that?

            https://support.mozilla.org/en-US/kb/ai-chatbot This page not only prominently features cloud-based AI solutions, but I can't actually even see local AI as an option.

            • koolala 9 hours ago
              The new AI Tab Grouping feature says it. I've never tried the AI chatbot feature but that makes sense. Would be fun to somehow talk to the local AI translation feature.
    • Xelbair 13 hours ago
      >This whole backlash to firefox wanting to introduce AI feels a little knee-jerky. We don't know if firefox might want to roll out their own locally hosted LLM model that then they plug into.. and if so, if would cut down on the majority of the knee jerk complaints. I think people want AI in the browser, they just don't want it to be the big-corp hosted AI...

      Because the phrase "AI first browser" is meaningless corpospeak - it can be anything or nothing and feels hollow. Reminiscent of all past failures of Firefox.

      I just want a good browser that respects my privacy and lets me run extensions that can hook at any point of handling page, not random experiments and random features that usually go against privacy or basically die within short time-frame.

    • Wowfunhappy 13 hours ago
      > [Update]: as I posted below, sample use cases would include translation, article summarization, asking questions from a long wiki page... and maybe with some agents built-in as well: parallelizing a form filling/ecom task, having the agent transcribe/translate an audio/video in real time, etc

      I don't want any of this built into my web browser. Period.

      This is coming from someone who pays for a Claude Max subscription! I use AI all the time, but I don't want it unless I ask for it!!!

      • dotancohen 12 hours ago
        Reread your post with your evil PM hat on. You just said "I'm willing to pay for AI". That's all they hear.
        • Wowfunhappy 4 hours ago
          I'm willing to pay for housing in New York. I'm not willing to pay for housing in Antarctica. The reasons being (1) I already have an apartment in New York and do not need another one and (2) I don't want to live in Antarctica.
        • wkat4242 9 hours ago
          Somehow they also think we'll pay for Gemini, GPT, Claude, perplexity and their browser thingy, co-pilot and whatever else they have going on. Not to mention, all these things do 95% the same and don't really have any moat.

          I don't understand why these CEOs are so confident they're standing out from the rest. Because really, they don't.

          Right now Firefox is a browser as good as Chrome, and in a few niche things better, but it's having a deeply difficult time getting/keeping marketshare.

          I don't see their big masterplan for when Firefox is just as good as the other AI powered browsers. What will make people choose Mozilla? It's not like they're the first to come up with this idea and they don't even have their own models so one way or another they're going to play second fiddle to a competitor.

          I think there's a really really strong part of 2. ??? / 3. profit!!! In all this. And not just in Mozilla. But more so.

          I mean OpenAI, they have first-mover. Their moat is piling up legislation to slow down the others. Microsoft, they have all their office users, they will cram their AI down their throats whether they want it or not. They're way behind on model development due to strategic miscalculations but they traded their place as a hyperscaler for a ticket into the big game with OpenAI. Google, they have fuck you money and will do the same as Microsoft with their search and mail users.

          But Mozilla? "Oh we want to get more into advertising". Ehm yeah basically what will alienate your last few supporters, and getting onto a market where people with 1000x more money than you have the entire market divided between them. Being slightly more "ethical" will be laughed away by their market forces.

          Mozilla has the luck that it doesn't have too many independent investors. Not many people screaming "what are we doing about AI because everyone else doing it". They should have a little more insight and less pressure but instead they jump into the same pool with much bigger sharks.

          In some ways I think Mozilla leadership still sees themselves as a big tech player that is temporarily a little embarrassed on the field. Not like the second-rank one it is, one that has already thoroughly, deeply lost and must really find something unique to have a reason to exist. Because being a small player is not super bad; many small outfits do great. But it requires a strong niche you're really, really good at, better than all the rest. That kind of vision I just don't see from Mozilla.

      • catlover76 12 hours ago
        [dead]
    • johnnyanmac 1 hour ago
      >I think people want AI in the browser

      I don't. And the whole idea of Firefox's marketing is that it won't force things on me. Of course I'm frustrated. My core browser should serve pages and manage said pages. Anything else should be an option.

      I'm beyond tired of being told my preferences, especially by people with incentives to extract money out of me.

    • tdeck 10 hours ago
      > We don't know if firefox might want to roll out their own locally hosted LLM model that then they plug into.. and if so, if would cut down on the majority of the knee jerk complaints

      Personally I'd prefer if Firefox didn't ship with 20 gigs of model weights.

    • infotainment 14 hours ago
      This 100% -- the AI features already in Firefox, for the most part, rely on local models. (Right now there is translation and tab-grouping, IIRC.)

      Local based AI features are great and I wish they were used more often, instead of just offloading everything to cloud services with questionable privacy.

      • _heimdall 11 hours ago
        Local models are nice for keeping the initial prompt and inference off someone else's machine, but there is still the question of what the AI feature will do with data produced.

        I don't expect a business to make or maintain a suite of local model features in a browser free to download without monetizing the feature somehow. If said monetization strategy might mean selling my data or having the local model bring in ads, for example, the value of a local model goes down significantly IMO.

      • BoredPositron 14 hours ago
        If we look at the last AI features they implemented, it doesn't look like they are betting on local models anymore.
        • Schlaefer 3 hours ago
          Which ones? Translation is local. Preview summarization is local. Image description generation is local. Tab grouping is local. Sidebar can also show a locally hosted page.
    • recursive 14 hours ago
      I don't feel like I want AI in my browser. I'm not sure what I'd do with it. Maybe translation?
      • clueless 14 hours ago
        yeah, translation, article summarization, asking questions from a long wiki page... and maybe with some agents built-in as well: parallelizing a form filling/ecom task, having the agent transcribe/translate an audio/video in real time, etc

        All this would allow for a further breakdown of language barriers, and maybe the communities of various languages around the world could interact with each other much more on the same platforms/posts

        • recursive 14 hours ago
          If I have to fill a form for anything that matters, I'm doing it by hand. I don't even use the existing historical auto-complete stuff. It can fill stuff incorrectly. LLMs regularly get factual stuff wrong in mysterious ways when I engage with them as chat bots. It might be less effort to verify correctness than to type in all the fields, but IMO typing by hand leaves less risk of missing or forgetting to check one of the fields.
          • dawnerd 14 hours ago
            I've had so many cases where autocomplete puts something in a field it shouldn't and messes up a submission. I've had it happen on travel documents, causing headaches later at the airport - especially if it fills in a hidden field because some bad web dev implemented it poorly.
            • charcircuit 10 hours ago
              It gets it wrong because the current "AI" for filling out forms is extremely weak and brittle compared to the general language models we have now.
              • oneeyedpigeon 4 hours ago
                Do you have an example form field that a general language model could fill out better than a human + highly focussed deterministic algorithm?
                • charcircuit 4 hours ago
                  Ecommerce checkout. Filling out my address, billing address, and credit card information. Things like drop-downs or different formatting can mess up the current basic ones, but it really shouldn't be that hard for AI to figure out how to fill such information it knows about me into the form.
                  • oneeyedpigeon 3 hours ago
                    I think I've found those unreliable in the past, but much more reliable as time goes on. I can't really remember the last time an address or credit card info was mishandled by autofill. I get that addresses can be poorly defined, but for one you've entered yourself, that you just want to be re-entered, I don't see why we can't solve that problem without AI.
              • recursive 6 hours ago
                Language models seem pretty weak and brittle in my interactions with them too.
        • nijave 14 hours ago
          Super charged search on page would also be nice

          Agents (like a research agent) could also be interesting

      • actionfromafar 14 hours ago
        I like translation, it's come in handy a few times, and it's neat to know it's done locally.
        • SirHumphrey 6 hours ago
          I use it a lot more now I know it's done locally.
      • ekr____ 14 hours ago
        FWIW, Firefox already has AI-based translation using local models.
    • nottorp 3 hours ago
      It doesn't matter what exactly they want to do; what matters is they're wasting resources on it instead of keeping the ... browsing part ... up to date.
    • goalieca 13 hours ago
      The ux changes and features remind us of pocket and all the other low value features that come with disruptive ux changes as other commenters have noted.

      Meanwhile, Mozilla canned the servo and mdn projects which really did provide value for their user base.

    • 1shooner 14 hours ago
      I just know I've already had to chase down AI in Firefox I definitely did not ask for or activate, and I don't recall consenting to.
    • isodev 14 hours ago
      There is also the matter of how training data was licensed to create these models. Local or not, it’s still based on stolen content. And really what transformative use case is there to have AI in the browser - none of the ones currently available step outside gimmicks that quickly get old and don’t really add value.
    • TheRealPomax 14 hours ago
      I want the people who make Firefox to make decisions about Firefox based on what users have been asking for instead of based on what a CEO of a for-profit decides is still not going to make them any money, just like every other plan that got pitched in the last 10 years that failed to turn their losing streak around.

      It's not a knee-jerk reaction to "AI", it's a perfectly reasonable reaction to Mozilla yet again saying they're going to do something that the user base doesn't want, that won't regain them marketshare, and that's going to take tens of thousands of dev hours away from working on all the things that would make Firefox a better browser, rather than a marginally less nonprofitable product.

      • nullbound 13 hours ago
        While I do sympathize with the thought behind it, the general user already equates an LLM chat box with 'better browsing'. In terms of simple positioning vis-à-vis a non-technical audience, this is one integration that does make fiscal sense... if Mozilla were a real business.

        Now, personally, I would like to have sane defaults, where I can toggle stuff on and off, but we all know which way the wind blows in this case.

        • chillfox 11 hours ago
          I find that hard to believe; every general/average user I have spoken to does not use AI for anything in their daily life, and has either not tried it at all or only played with it a bit a few years ago when it first came out.
        • Turskarama 11 hours ago
          The problem with integrating a chat bot is that what you are effectively doing is the same thing as adding a single bookmark, except now it's taking up extra space. There IS no advantage here, it's unnecessary bloat.
        • TheRealPomax 12 hours ago
          Firefox is not for general users, which is the problem Mozilla has had for a literal decade now. There is no way to make it better than Chrome or Safari (because it has to be better for everyday users to switch, not just "as good" or even "way more configurable but slightly worse"; it has to be appreciably better).

          So the only user base is the power user. And then yes: sane defaults, and a way to turn things on and off. And functionality that makes power users tell their power user friends to give FF a try again. Because if you can't even do that, Firefox firmly deserves (and right now, it does) its "we don't even really rank" position in the browser market.

          • kbelder 11 hours ago
            The way to make Firefox better is by not doing the things that are making the other browsers worse. Ads and privacy are an example of areas where Chrome is clearly getting worse.

            LLM integration... is arguable. Maybe it'll make Chrome worse, maybe not. Clunky and obtrusive integration certainly will.

          • oneeyedpigeon 4 hours ago
            These comments are full of people explaining how Firefox can differentiate from chrome and safari: don't force AI on us.
    • xg15 14 hours ago
      I don't think a locally hosted LLM would be powerful enough for the supposed "agentic browsing" scenarios - at least if the browser is still supposed to run on average desktop PCs.
      • lxgr 14 hours ago
        Not yet, but we’ll hopefully get there within at most a few years.
        • Dylan16807 11 hours ago
          Get there by what mechanism? In the near term a good model pretty much requires a GPU, and it needs a lot of VRAM on that GPU. And the current state of the art of quantization has already gotten us most of the RAM-savings it possibly could.

            And it doesn't look like the average computer with Steam installed is going to get above 8 GB of VRAM for a long time, let alone the average computer in general. Even focusing on new computers, it doesn't look that promising.

          • SirHumphrey 6 hours ago
              Via Apple's M series and AMD's Strix Halo. You don't actually need a GPU: if the manufacturer knows the use case will be running transformer models, a more specialized NPU coupled with the higher memory bandwidth of on-package RAM will do.

              This will not result in locally running SOTA-sized models, but it could result in a percentage of people running 100B - 200B models, which are large enough to do some useful things.
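              To put rough numbers on that (mine, not the parent's): the weights alone for an N-billion-parameter model at b bits per weight need about N*b/8 GB, before KV cache, activations, or runtime overhead. A minimal sketch:

              ```python
              def weight_gib(params_billions: float, bits_per_weight: float) -> float:
                  """Approximate memory for model weights alone, in GiB.

                  Ignores KV cache, activations, and runtime overhead, so real
                  requirements are strictly higher than this estimate.
                  """
                  return params_billions * 1e9 * bits_per_weight / 8 / 2**30

              # A 100B model at 4-bit quantization needs ~46.6 GiB for weights
              # alone, far beyond the 8 GB of VRAM typical on consumer GPUs.
              print(round(weight_gib(100, 4), 1))  # → 46.6
              ```

              Even aggressive 2-bit quantization would only halve that, which is the point above about quantization savings being mostly exhausted.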

            • Dylan16807 6 hours ago
              Those also contain powerful GPUs. Maybe I oversimplified but I considered them.

              More importantly, it costs a lot of money to get that high a bus width before you even add the memory. There is no way things like the M Pro and Strix Halo take over the mainstream in the next few years.

      • koolala 11 hours ago
        This is probably their plan to monetize this. They will partner with an AI company to 'enhance' the browser with a paid cloud model, and there's no monetary incentive for the local model not to suck.
    • csydas 5 hours ago
      >We don't know if firefox might want to roll out their own locally hosted LLM model that then they plug into..

      https://blog.mozilla.org/wp-content/blogs.dir/278/files/2025...

      it's the cornerstone of their strategy to invest in local, sovereign ai models in an attempt to court attention from persons / organizations wary of us tech

      it's better to understand the concern over mozilla's announcement the following way i think:

      - mozilla knows that their revenue from default search providers is going to dry up because ai is largely replacing manual searching

      - mozilla (correctly) identifies that there is a potential market in eu for open, sovereign tech that is not reliant on us tech companies

      - mozilla (incorrectly imo) believes that attaching ai to firefox is the answer for long term sustainability for mozilla

      with this framing, mozilla has only a few options to get the revenue they're seeking according to their portfolio, and it involves either more search / ai deals with us tech companies (which they claim to want to avoid), or harvesting data and selling it like so many other companies that tossed ai onto software

      the concern about us tech stack domination is valid and probably there is a way to sustain mozilla by chasing this, but breaking the us tech stack dominance doesn't require another browser / ai model, there are plenty already. they need to help unseat stuff like gdocs / office / sharepoint and offer a real alternative for the eu / other interested parties -- simply adding ai is mozilla continuing their history of fad chasing and wondering why they don't make any money, and demonstrates a lack of understanding imo about, well, modern life

      my concern over the announcement is that mozilla doesn't seem to have learned anything from their past attempts at chasing fads and likely they will end up in an even worse position

      firefox and other mozilla products should be streamlined as much as possible to be the best they can be, with these kinds of side projects maintained as first party extensions, not as the new focus of their development, and they should invest the money they're planning to dump into their ai ambitions elsewhere, focusing on a proper open sovereign tech stack that they can then sell to the eu like they've identified in their portfolio statement

      the announcement though makes it seem like mozilla believes they can just say ai and also get some of the ridiculous ai money, and that does not bode well for firefox as a browser or mozilla's future

    • api 14 hours ago
      We're still in bubble-period hyper-polarized discourse: "shoehorn AI into absolutely everything and ram it down your throat" vs "all AI is bad and evil and destroying the world."
      • pferde 10 hours ago
        The former is a cause, the latter an effect of it.
    • ToucanLoucan 14 hours ago
      I don't want any AI in anything apart from the Copilot app, where the AI that I use is. I don't want it in my IDE. I don't want it in my browser. I don't want it in my messaging client. I don't want it in my email app. I want it in the app, where it is, where I can choose to use it, give it what it needs, and leave it at bloody that.
      • lxgr 14 hours ago
        I also want to have complete control over what data I provide to LLMs (at least as long as inference happens in the cloud), but I'd love to have them everywhere, not just in a chat UI (which I suspect will come to be seen as a pretty bizarre way of doing non-chat tasks on a computer).
    • zwnow 6 hours ago
      > I think people want AI in the browser

      Sorry but no. I don't want another human's work summarized by some tool that's incapable of reasoning. It could get the whole meaning of the text wrong. Same with real-time translation. Languages are things even humans get wrong regularly, and I don't want some biased tool to do it for me.

    • ThrowawayTestr 14 hours ago
      I don't want to have to max out my gpu to browse reddit.
  • b00ty4breakfast 4 hours ago
    I switched to Waterfox about a year ago because my poor old Linux box just couldn't keep up with the latest Firefox version (especially the Snap package, which was literally unusable for me), and I am very thankful that they aren't going to be including any of the LLM crud that Mozilla has been talking up.

    I get the utility that this stuff can have for certain types of activities but on top of not having great hardware to run the dang things, I just don't find any of the proposed use-cases that compelling for me personally.

    It's just nice that the totalizing self-insistence of AI tech hasn't gobbled up every corner of the tech space, even if those crevices and niches are getting smaller by the day.

  • nirui 8 hours ago
    > Waterfox won't include them. The browser's job is to serve you, not think for you... Waterfox will not include LLMs. Full stop. At least and most definitely not in their current form or for the foreseeable future.

    > If AI browsers dominate and then falter, if users discover they want something simpler and more trustworthy, Waterfox will still be here, marching patiently along.

    This is basically their train of thought: provide something different for people who truly need it. There's nothing to criticize there.

    However, let's not forget that other browsers can remove/disable AI features just as fast as they add them. If Waterfox wants to be *more than just an alternative* (a.k.a. be a competitor), they need to discover what people actually need and optimize heavily on that. But this is hard to do because people don't show their true motives.

    Maybe one day it will turn out that people really do just want an AI that "thinks for them". That would be awkward, to say the least.

  • countWSS 1 hour ago
    The problem with this is integration: no one would complain if it was an official plugin/extension, but integrating this plugin into Firefox is a forced and unexpected decision. Firefox telemetry, labs/experiments and server-dependent features will slowly lose it market share in favor of local-only browsers that don't have online dependencies or forced bloatware. Like many, I switched long ago to LibreWolf.
  • someothherguyy 13 hours ago
    • bigstrat2003 11 hours ago
      This is like when people defend Windows 11's nonsense by saying "you can disable or remove that stuff". Yes, you can. But you shouldn't have to, and I personally prefer to use tools which don't shove useless things into the tool because it's trendy.
      • fsflover 5 hours ago
        The difference is that on Windows all unwanted features eventually become mandatory, with no way of switching them off. With Firefox, it never happens.
        • Orygin 3 hours ago
          If you listen to the doomers in this thread, it will.

          They "will" remove the option from settings, hide it in about:config, then later on remove it from there!

          Of course none of that is true...

          • nottorp 3 hours ago
            It's plausible because the team working on the settings screen will be reassigned to the "AI".
            • Orygin 2 hours ago
              That's just doom saying at this point.
              • johnnyanmac 1 hour ago
                Mozilla hasn't had the benefit of the doubt for quite a while here. This isn't just one small kerfuffle coming out of nowhere.

                They say "trust takes a lifetime to build and seconds to break". We're years into it at this point.

      • derekdahmer 10 hours ago
        How is this different from linux? People happily spend hours customizing defaults in their OS. It’s usually a point of praise for open source software.
        • PunchyHamster 10 hours ago
          Bad defaults are bad defaults, and "you can turn them off" is not a good excuse for bad defaults continuing to be bad defaults
      • calvinmorrison 10 hours ago
        not to mention firefox routinely blows up any policies you set during upgrades, incompatibilities, and an endless about:config that is more opaque than a hunk of room temperature lard.
    • beached_whale 11 hours ago
      Easy for whom? 99% of people are not going to, or able to, set up Firefox policies.
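      (For context, the policy mechanism under discussion means hand-writing a policies.json inside the Firefox install directory. A sketch of what disabling the AI features there might look like; the pref names are my assumption from current builds, so check about:config for the real ones:)

      ```json
      {
        "policies": {
          "Preferences": {
            "browser.ml.chat.enabled": { "Value": false, "Status": "locked" },
            "browser.ml.enable": { "Value": false, "Status": "locked" }
          }
        }
      }
      ```

      Locked prefs can't be flipped back from the settings UI, which is the appeal, but finding the install directory and editing JSON by hand is exactly the step most users will never take.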
    • koolala 11 hours ago
      Is it really in all 4 of those places? Just need to change it in the first two, right? I hate the new AI tab feature and wish they had a non-AI option.
    • phyzome 9 hours ago
      Even if we ignore things like "they're chasing AI fads instead of better things" and "they're adding attack surface" and so forth, and just focus on the disabling feature toggles thing...

      ... Mozilla has re-enabled AI-related toggles that people have disabled. (I've heard this from others and observed it myself.) They also keep adding new ones that aren't controlled by a master switch. They're getting pretty user-hostile.

  • rythie 6 hours ago
    Waterfox is dependant on Firefox still being developed. Mozilla are adding these features to try to stay relevant and keep or gain market share. If this fails, and Firefox goes away, Waterfox is unlikely to survive.
    • benrutter 6 hours ago
      That's true, but as a Waterfox user, I'm not worried!

      If Firefox really completely fails, and nobody is able to continue the open source project, I'll just find a new browser. That's not a huge hassle: Waterfox does what I need in the here and now, and that's my only criterion.

      • reddalo 5 hours ago
        > I'll just find a new browser.

        The problem is that if Firefox dies, there are no browsers left. I don't want to use a re-skin of Chrome.

        • benrutter 2 hours ago
          Yes, I agree. I suppose when I said "I'm not worried" - I meant in the context of "it doesn't put me off using Waterfox". I am worried from an overall software ecosystem point of view.
        • dragonwriter 5 hours ago
          > The problem is that if Firefox dies, there are no browsers left. I don't want to use a re-skin of Chrome.

          Lynx is still not a re-skin of Chrome, unless I missed something changing.

          • fsflover 5 hours ago
            Can you manage your bank in Lynx?
  • renegat0x0 6 hours ago
    A browser is a tool that allows you to browse the internet. It should be able to display HTML elements, and stuff.

    LLMs are also a tool, but one that is not necessary for web browsing. They should be installed into a browser as an extension, or integrated as such, so they can be quite easily enabled or disabled. Surely they should not be intertwined with the browser in a meaningful way, imho.

  • otikik 3 hours ago
    > AI browsers are proliferating

    Are they, though? I get bombarded by AI ads very frequently and I have yet to see anything from those "AI browsers" mentioned on the article.

  • krige 4 hours ago
    Also see related statement by vivaldi: https://xcancel.com/i/status/2000874212999799198
  • koolala 11 hours ago
    How do you disable the telemetry in Waterfox? It looks like they get their funding because they partnered with an Ad company. Do I just need to change the default search?
  • koolala 11 hours ago
    Did Firefox already add AI into Tabs? Today I just got my first 'Tab Grouping' and it says "Nightly uses AI to read your Open Tabs". That's the worst way to do grouping ever... just group hierarchically based on where it opened from...
    • Groxx 11 hours ago
      Particularly since they clearly keep this info around - if you install TreeStyleTabs or Sideberry, you'll see it immediately show the historical-structure of your current tabs (in-process at least, I'm not 100% sure about after kill->restore). That info has to come from somewhere.
      • koolala 10 hours ago
        I wish there was a Horizontal solution instead of vertical tabs. Maybe someone could mod their AI system with a non-AI backend.
  • htx80nerd 10 hours ago
    I was an FF driver for ages and am now making the switch to a Chrome-based browser simply because it's faster and websites are all tested against Chrome / Safari. I see both of these issues manifest IRL on a weekly basis. Why would I want to burn up CPU cycles and seconds using FF when Chromium is literally faster?
    • SoftTalker 10 hours ago
      I use FF because of uBlock Origin, and also because it has built-in support for SOCKS5 proxy connections, which I use to access stuff at work over an ssh tunnel.
      • webstrand 9 hours ago
        Yeah per-tab container socks5 is a killer feature I use every day.
  • hansmayer 4 hours ago
    I completely agree with the main sentiment, which is: I want the browser to be a User Agent and nothing else. I don't need a crappy, unreliable intermediary between the already perfectly fine UA and the Internet.
  • vivzkestrel 7 hours ago
    If Kagi can make a search engine that charges users, why don't we have a $1/month open source browser whose code can be verified, but which people pay monthly to use?
    • benrutter 6 hours ago
      I guess that wouldn't really be "open source" in the traditional sense, but that's clearly a tangent.

      Personally, I'd love a paid-for, high quality browser that serves me rather than sneakily trying to get me to look at ads.

      I think the challenge is that a browser is an incredibly difficult and large thing to build and maintain. So there aren't many wholly new browsers in existence, and therefore not very many business models being tried out.

      Full agreement that I'd pay for such a thing; I have a browser and a terminal open non-stop during my workday. It's an important tool and I'd easily pay for a better offering if that were an option.

    • speedgoose 3 hours ago
      Would it be profitable without some heavy investments ?

      https://kagi.com/stats

    • Orygin 3 hours ago
      Paying to get a browser fork with fewer features? At that point, just pay $1 to Mozilla for Firefox instead.
      • johnnyanmac 1 hour ago
        If they supported it and had an incentive to listen to their customers and not shareholders, gladly. We can't keep using this logic of being afraid to invest, then being mad when companies find someone who will.
  • chauhankiran 9 hours ago
    With this, people will come and then go. I mean, consider the many GNU/Linux users I know (for them, Linux means Ubuntu) whom I could ask to try out Waterfox. But about installation: can't we have a .deb? I know we can easily install from the tarball, then set up the .desktop file, then adjust the icon to display properly, and whatnot... but can we make it a bit simpler to try?
  • pdyc 7 hours ago
    How is adding AI chat different from asking a search engine? I think Mozilla wants to make sure it gets some cut for sending queries to AI, similar to their existing revenue model where they get a cut for sending queries to Google. As with search engines, users should have a choice to use any AI or none.
  • doubtfuly 12 hours ago
    On Windows, Mozilla can't even handle disabling hardware acceleration (a.k.a. the GPU) from its settings page. Sure, you can toggle the button, but it doesn't work, as verified in the task manager. What hope is there that they can be trusted to disable AI, then? It's a feature that I'd never want enabled. When that "feature" comes out, users will be forced to find a fork without it.
  • zavec 14 hours ago
    I guess it's nice for non-technical people who don't know how to use `about:config` but beyond that I don't really see the need. Hopefully adding that extra layer of indirection doesn't mean the users will have to wait too long for security patches.
    • ekr____ 14 hours ago
      PSA (for the nth time): about:config is not a supported way of configuring Firefox, so if you tweak features with about:config, don't be surprised if those tweaks stop working without warning.
      • autoexec 12 hours ago
        Mozilla tells you to use it, so that seems supported enough to me (example: https://support.mozilla.org/en-US/kb/how-stop-firefox-making...)

        That said, they're admittedly terrible about keeping their documentation updated and letting users know about added/deprecated settings, and they've even been known to go in and modify settings after you've explicitly changed them from the defaults, so the PSA isn't entirely unjustified.

        • ekr____ 11 hours ago
          Ugh. Because they also say:

          "Two other forms of advanced configuration allow even further customization: about:config preference modifications and userChrome.css or userContent.css custom style rules. However, Mozilla highly recommends that only the developers consider these customizations, as they could cause unexpected behavior or even break Firefox. Firefox is a work in progress and, to allow for continuous innovation, Mozilla cannot guarantee that future updates won’t impact these customizations."

          https://support.mozilla.org/en-US/kb/firefox-advanced-custom...

    • johnnyanmac 1 hour ago
      about:config is a cat and mouse game, and I don't want to reconfigure my settings every time Firefox updates. That's just hostile user design.
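      (One partial workaround, not a defense: prefs listed in a `user.js` file in the profile directory are re-applied at every startup, so an update that flips them back only wins until the next restart. The pref names below are my assumption from current builds:)

      ```js
      // user.js in the Firefox profile directory; each line is re-applied
      // on every startup, overriding whatever an update changed.
      user_pref("browser.ml.chat.enabled", false);  // assumed: AI chatbot sidebar
      user_pref("browser.ml.enable", false);        // assumed: local ML features
      ```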
  • aag 12 hours ago
    Does anyone have more information on this sentence from the second paragraph?:

    > Alphabet themselves reportedly see the writing on the wall, developing what appears to be a new browser separate from Chrome.

  • lerp-io 12 hours ago
    >A browser is meant to be a user agent, more specifically, your agent on the web.

    at this point it’s more of a sandbox runtime bordering on an OS, but okay

  • graycat 4 hours ago
    As I read the post by MrAlex94, I noticed a remark that the browser Chrome is good as a user agent. To me, that's terrific! Looks like I'll have to consider Chrome again.

    Here are what I find as reasons to scream about Mozilla:

    Popups:

    (a) Several times a day, my attention and concentration get interrupted by the, for me, unwelcome announcement that there is a new version I can download. A new version can have changes I don't like, and genuine bugs. Sure, I could keep a copy of my favorite version from history, but that is system management mud wrestling and an interruption of my work.

    (b) Now I get told several times a day that my computer and cell phone can share access to a Web page. In this action Mozilla covers up what the page was showing, what I wanted it to show. No thanks. When I'm at my computer, with an AMD 8 core processor, all my files and software tools, and a 1 Gbps optical fiber connection to the Internet, looking at a Web page, I want nothing to do with a cell phone's presentation of that Web page.

    (c) Some URLs are a dozen lines long, and Mozilla finds ways to present such URLs with all their lines, clearly pursuing their main objective -- covering up the desired content.

    Mozilla needs to make their covering up and changing of the screen optional, or just eliminate it.

    Want me to donate? You've mentioned as little as $10. Deal: Raise the $10 by a factor of 5 AND quit covering up my content and interrupting my work, and we've got a deal.

  • human_llm 10 hours ago
    Waterfox just released version 6.6.6. Are we sure it is not evil?
  • fguerraz 11 hours ago
    I still can’t give them money, so what’s the point? Just like with Mozilla, they rely on sponsors and you are the product.
    • benrutter 6 hours ago
      You can give Waterfox your money. Just not for the browser itself. They sell ad free search[0].

      [0] https://search.waterfox.net/

    • AnonC 11 hours ago
      As I mentioned in a comment below (https://news.ycombinator.com/item?id=46297617 ), Firefox does not rely only on sponsors. There are a few ways to pay money that goes directly towards Firefox.
    • aleph_minus_one 11 hours ago
      > I still can’t give them money, so what’s the point?

      What do you say about the following link, then?

      > https://www.mozillafoundation.org/en/donate/

      • AnonC 11 hours ago
        That link is for Mozilla Foundation, which is a non-profit and donations to it do not go to the development of Firefox. Mozilla Corporation, the for-profit entity, owns and manages Firefox. The way to support Firefox monetarily is by buying Mozilla VPN where available (this is Mullvad in the backend) and buying some Firefox merchandise (like stickers, t-shirts, etc.). I think an MDN Plus subscription also helps.
      • Groxx 11 hours ago
        New this year? https://web.archive.org/web/20250000000000*/https://www.mozi...

        I agree it's counter-evidence right now, and I think there has been a way to donate for a long time now (just to "mozilla", not "firefox" or setting any restrictions), but I'm not sure what the historical option has been...

  • SideburnsOfDoom 6 hours ago
    I just downloaded WaterFox, it looks nice.

    When they say "AI browsers are proliferating." and "Their lunch is being eaten by AI browsers." what does that mean? What's an "AI Browser", and are they really gaining significant market share? For what?

    I found this (1) that suggests that several "AI Browsers" exist, which is "proliferating" in a sense.

    1) https://www.waterfox.com/blog/no-ai-here-response-to-mozilla...

  • rixed 5 hours ago
    I, for one, am dreaming of AI assisted ad removal, content summaries, bookmarks automatic classification...
  • ChrisArchitect 15 hours ago
    Related:

    Mozilla appoints new CEO Anthony Enzor-Demeo

    https://news.ycombinator.com/item?id=46288491

  • Papazsazsa 11 hours ago
    [flagged]
    • bigstrat2003 11 hours ago
      It's not really weird that two different people say different things.
    • 627467 11 hours ago
      i bet there's a big overlap between users of firefox and those who complain about humans being replaced with AI, so I don't think it's weird
  • hexasquid 11 hours ago
    ...and keep your hand up if you've ever donated to Firefox
    • atomicfiredoll 7 hours ago
      Why don't you go ahead and share the "donate to Firefox" page?

      Last I knew, it doesn't exist. You can donate to Mozilla Corporation, the group that has been agitating its own users and donors for years now.

      People who want to support the Firefox team/product and have them focus on improving things like the development tools (or whatever else) literally cannot. Mozilla doesn't make that an option.

    • phyzome 9 hours ago
      I gave them over $500 and I sure as hell will never do that again.
  • almosthere 15 hours ago
    I do think dipping your toes into the future is worth it. If it turns out the LLM is trying to kill us by cancelling our meetings and emailing people that we're crazy, that would suck. But I don't think this is any more dangerous than giving people a browser in the first place. They have already done enough to shoot themselves in the foot.
    • MrAlex94 15 hours ago
      I am more of a sceptic of AI in the context of a browser, than its general use. I think LLMs have great utility and have really helped push things along - but it’s not as if they’re completely risk free.
    • Qem 12 hours ago
      > If it turns out the LLM is trying to kill us by cancelling our meetings and emailing people that we're crazy that would suck.

      It's more likely it will try to kill us by talking depressed people into suicide and providing virtual ersatz boyfriends/girlfriends to replace real human relationships, which is a functional equivalent of cyber-neutering, given that people can't have children by dating LLMs.

      • a24j 9 hours ago
        Birth rates may fall for those whom LLMs made unemployable...
      • SV_BubbleTime 9 hours ago
        Just checking but… what if, instead of cruel natural selection, we’ve largely eliminated threats like predators and starvation… but still, by either necessity or accident, are presented with a less cruel, more subtle filter?
    • smt88 14 hours ago
      I don't mind Mozilla trying to make use of AI, but I'm also glad we have actual competition still.

      In many other areas, there are zero "no AI" options at all.

  • mmaunder 10 hours ago
    "...trust from other large, imporant [sic] third parties which in turn has given Waterfox users access to protected streaming services via Widevine."

    The black box objection disqualifies Widevine.