9 comments

  • dvt 2 hours ago
    I genuinely feel disrespected if AI is used to write an article and it's not disclosed in the first paragraph. It's not really that big of a deal, tbh, it's like saying "I took a picture, I didn't paint it."

    Which is fine, but please disclose it. Otherwise, like in this case, I'm going to assume the author is a moron that can't write for shit who thinks their readers are morons that can't read for shit.

    • onoesworkacct 1 hour ago
      Agree. Also because of the way AI writes, it takes SO LONG to read through it (they're trained on blogspam where the page tells you the author's life story as well as the bloody history of bread before telling you how to bake it)
      • az09mugen 21 minutes ago
        That's why in cases like this I usually ask another AI to make me a short summary with the main points. I wish the human behind the looong article would just publish a short summary directly instead.
  • geerlingguy 4 hours ago
    Every week, I get another email asking if I'd review some random AI box from a random company—I don't know if this Pocket Lab was offered for review at some point, but it sounds very similar to others:

      - Commodity Arm SoC (or sometimes N100 or N150 x86)
      - 8/16/32 GB of LPDDR5x RAM
      - 'NPU' (usually unspecified) with ambiguous 'TOPS' number (like 20, 40, 80)
    
    Usually specifics aren't provided, and TOPS is never defined in a technically useful way. The few times it is, it's from more established companies (e.g. Asus or Raspberry Pi integrating a well-known NPU chip into one of their products).
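
    To illustrate why a bare "TOPS" number is ambiguous: it depends entirely on the precision being quoted and the clock assumed. A rough sketch with made-up numbers (the MAC count and clock here are hypothetical, not from any vendor):

```python
# Illustrative only: why "TOPS" is meaningless without a precision.
# Peak TOPS = MAC units * 2 ops per MAC (multiply + add) * clock / 1e12
mac_units = 4096     # hypothetical INT8 MAC array size
clock_hz = 1.2e9     # hypothetical NPU clock

peak_int8_tops = mac_units * 2 * clock_hz / 1e12

# Vendors usually quote INT8 (or INT4, which doubles the number again),
# while FP16 throughput is often half the INT8 figure on the same array.
peak_fp16_tops = peak_int8_tops / 2

print(f"INT8: {peak_int8_tops:.1f} TOPS, FP16: {peak_fp16_tops:.1f} TOPS")
```

    So the same silicon can honestly be marketed as "~10 TOPS" or "~5 TOPS" depending on which precision the marketing team picks, which is why an unqualified number tells you almost nothing.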

    It's worse at this point than the peak of the crypto boom, when I was getting emails touting the next chain-of-proof software, or ledger-this/ledger-that. Now that there are a few actual use cases for this hardware, it requires more nuance to separate the wheat from the chaff.

    As for me, I typically spend weeks with any hardware I _do_ review, running as many models and test runs as I can (and documenting everything on GitHub, in depth, with scripts so other people can verify). Most reviewers (like those at the publications named in this post) either don't have the time or, sadly, the understanding to test these devices in a meaningful way.

    Therefore, random blog posts (which are getting harder and harder to find, amidst the AI-laden first 2-4 pages of DuckDuckGo and Google results) are the best source of information. Or sometimes a post on Mastodon, which is never easy to find since search isn't a thing there.

    Edit: Ah, they did reach out around CES time. Funny seeing their pitch deck including a note on Dr. Miles Mi, with a row of logos on that page including Apple, MIT, Berkeley, DJI, VIVO, Tuya, and a few others, as if they were using this project or something?

  • shrikaranhanda 22 minutes ago
    Tinycorp (George Hotz's company) just issued a cease and desist

    https://news.ycombinator.com/item?id=47486257

  • lurkshark 1 hour ago
    Someone posted this analysis to their Kickstarter comments (they dodged)

    https://www.kickstarter.com/projects/tiinyai/tiiny-ai-pocket...

  • fwipsy 5 hours ago
    tl;dr: it's actually a CIX P1 + 32 GB (similar to the Orange Pi 6) plus a "160 TOPS" NPU accelerator with 48 GB, attached via NVMe. Models will either have to fit in one pool or deal with shuttling data over M.2; the company has some optimizations around this, but it's still a serious limitation.

    There you go, two sentences without burying the lede.

    Is it maybe competitive on value anyway, though? Even considering only the accelerator, 48 GB + 160 TOPS seems comparable to some Strix Halo mini PCs with 64 GB - lower memory bandwidth but a few hundred dollars cheaper. If they sold just the accelerator card for $800 or so, that would be potentially very interesting.
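
    A back-of-the-envelope sketch of why the M.2 link is the bottleneck when a model spills across the two pools (the bandwidth figures below are rough assumptions for illustration, not measured numbers for this device):

```python
# Rough estimate: cost of shuttling weights between memory pools over M.2.
# Assumptions (illustrative): PCIe 3.0 x4 over M.2 ~ 3.5 GB/s usable,
# vs local LPDDR5X memory bandwidth ~ 60 GB/s.
m2_bw_gbs = 3.5
local_bw_gbs = 60.0

# Suppose 10 GB of weights don't fit in one pool and must cross the link:
spill_gb = 10.0
t_m2 = spill_gb / m2_bw_gbs        # seconds per pass over the M.2 link
t_local = spill_gb / local_bw_gbs  # seconds if the data had stayed local

print(f"M.2 shuttle: {t_m2:.2f} s vs local: {t_local:.2f} s per pass")
```

    On these assumed numbers the link is roughly 17x slower than local memory, so anything that has to cross it every token will dominate inference time, which is presumably what their software optimizations try to avoid.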

  • gnabgib 5 hours ago
    Previously (23 points, 6 days ago, 6 comments) https://news.ycombinator.com/item?id=47395786

    Including questions of LLM origin. Seems like the OP might have submitted that one (47431685) although there's another copy now (beyond this SCP entry from 3 days ago)

  • smartbit 3 days ago
    Great research and write-up, maybe a bit too elaborate.

    It will be interesting to see whether a public outcry happens once these boxes start arriving for the people who funded the Kickstarter.

    • buildbot 6 hours ago
      It’s LLM slop and very shallow, in my opinion.
      • shotnothing 5 hours ago
        What's shallow about the research? It all seems to check out.
      • Karuma 4 hours ago
        Thank you.

        Every time I complain about this kind of useless AI slop I get downvoted to hell and get dozens of comments saying "it doesn't look AI at all", so I don't even bother anymore. It's incredibly sad, I expected much more from this community... But it looks like it'll soon be dead like the rest of the internet.

      • qwe----3 6 hours ago
        The blogpost?
        • pushfoo 6 hours ago
          ctrl-f for "This isn't" and note how many instances of this pattern there are:

          > This isn't X. It's Y.

          • rkagerer 2 hours ago
            I don't see a single occurrence in the article of the word "isn't".
            • sodality2 2 hours ago
              > That means the lock-in isn’t just product strategy. It’s also architecture.

              > And that omission isn’t some harmless simplification. It’s the entire trick.

              It isn't just once. It's—twice. ;)

              • rkagerer 1 hour ago
                Oof, thanks! (I'm going to blame it on my Android Chrome "find in page" tool not working as expected, and I apologize)
              • kgeist 2 hours ago
                Also stuff like this:

                >That’s not exotic. That’s just model parallelism with extra suffering.

                >That’s not product magic. That’s a checkbox.

                What really triggers my internal AI slop detector is this:

                >Their renders. Their prototype shots. Their exploded views. Their spec sheet.

                >Nobody asked what silicon was inside. Nobody asked how 120B on LPDDR5X was supposed to work. Nobody spent

                >No cloud. No GPU. No subscriptions.

                >wrong class of chip, wrong power envelope, wrong everything

                >The visual geometry matches. The licensing model matches. The China-based semiconductor ecosystem match

                >Real researchers. Real papers. Real contributions.

                LLMs love to overuse this pattern.
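
                A quick sketch of how one might count these contrast-pattern tics mechanically rather than by ctrl-f (the regexes are my own rough approximations of the patterns quoted above, not exhaustive detectors):

```python
import re

# Rough detectors for the "That's not X. That's Y." / "isn't X. It's Y."
# / "No X. No Y. No Z." tics discussed above; approximate on purpose.
PATTERNS = [
    re.compile(r"\bisn[’']t [^.]+\. It[’']s ", re.I),
    re.compile(r"\bThat[’']s not [^.]+\. That[’']s ", re.I),
    re.compile(r"\bNo \w+\. No \w+\. No \w+", re.I),
]

def tic_count(text: str) -> int:
    """Count occurrences of the contrast-pattern tics in a text."""
    return sum(len(p.findall(text)) for p in PATTERNS)

sample = ("That’s not exotic. That’s just model parallelism. "
          "No cloud. No GPU. No subscriptions.")
print(tic_count(sample))  # 2: one "That's not" tic, one "No X. No Y. No Z."
```

                A handful of hits in a long article is normal rhetoric; a dozen in a single post is the kind of density that trips people's slop detectors.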

                • kgeist 1 hour ago
                  This also smells of an autoregressive model trying to make the point that TiinyAI simply forked another repo and claimed it as their own invention, before realizing mid-paragraph that it's by the same people:

                  >So no, TiinyAI did not “launch” PowerInfer. SJTU researchers did.

                  >TiinyAI’s GitHub repo is a fork of the original PowerInfer repository. At least one of the original academic authors appears tied to the code history. So there is clearly some real overlap between the research world and the product world.

  • VladVladikoff 3 hours ago
    >No cloud. No GPU. No subscriptions. Private, offline, always on.

    Flagged.

  • neuroelectron 4 hours ago
    I bet they picked the name to be confused with tinygrad