I genuinely feel disrespected if AI is used to write an article and it's not disclosed in the first paragraph. It's not really that big of a deal, tbh; it's like saying "I took a picture, I didn't paint it."
Which is fine, but please disclose it. Otherwise, like in this case, I'm going to assume the author is a moron who can't write for shit and thinks their readers are morons who can't read for shit.
Agree. Also because of the way AI writes, it takes SO LONG to read through it (they're trained on blogspam where the page tells you the author's life story as well as the bloody history of bread before telling you how to bake it)
That's why in cases like this I usually ask another AI to give me a short summary with the main points. I wish the human behind the looong article would just publish a short summary directly instead.
Every week, I get another email asking if I'd review some random AI box from a random company—I don't know if this Pocket Lab was offered for review at some point, but it sounds very similar to others:
- Commodity Arm SoC (or sometimes N100 or N150 x86)
- 8/16/32 GB of LPDDR5x RAM
- 'NPU' (usually unspecified) with ambiguous 'TOPS' number (like 20, 40, 80)
Usually specifics aren't provided, and TOPS is never defined in a technically useful way. The few times it is, it's from more established companies (e.g. Asus or Raspberry Pi integrating a well-known NPU chip into one of their products).
It's worse at this point than the peak of the crypto boom, when I was getting emails touting the next chain-of-proof software, or ledger-this/ledger-that. Now that there are a few actual use cases for this hardware, it requires more nuance to separate the wheat from the chaff.
And for me, I spend weeks, typically, with any hardware I _do_ review, running as many models and test runs as I can (and documenting everything on GitHub, in depth, with scripts so other people can verify). Most reviewers (like those with publications named in this post) either don't have the time, or sadly, the understanding, to test these devices in a meaningful way.
Therefore, random blog posts (which are getting harder and harder to find, amidst the AI-laden first 2-4 pages of DuckDuckGo and Google results) are the best source of information. Or sometimes a post on Mastodon, which is never easy to find since search isn't a thing there.
Edit: Ah, they did reach out around CES time. Funny seeing their pitch deck including a note on Dr. Miles Mi, with a row of logos on that page including Apple, MIT, Berkeley, DJI, VIVO, Tuya, and a few others, as if they were using this project or something?
TL;DR: it's actually a CIX P1 + 32 GB (similar to the Orange Pi 6) plus a "160 TOPS" NPU accelerator with 48 GB, attached via NVMe. Models will either have to fit in one pool or deal with shuttling data over M.2; the company has some optimizations around this, but it's still a serious limitation.
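The shuttling limitation is easy to quantify with a back-of-envelope sketch. The bandwidth and model-size numbers below are illustrative assumptions (roughly PCIe 3.0 x4 for the M.2 link, a generic LPDDR5x figure), not anything from the product's spec sheet:

```python
# Back-of-envelope: why shuttling weights over M.2 hurts token generation.
# All numbers are assumed/illustrative, not from TiinyAI's spec sheet.
NVME_LINK_GBPS = 3.5    # ~PCIe 3.0 x4 effective bandwidth, GB/s
LPDDR5X_GBPS = 100.0    # rough on-package memory bandwidth, GB/s

def per_token_seconds(weights_gb: float, bandwidth_gbps: float) -> float:
    """Memory-bound lower bound: decoding touches every weight once per token."""
    return weights_gb / bandwidth_gbps

weights_gb = 40.0  # e.g. a ~70B-param model at ~4-bit quantization

local = per_token_seconds(weights_gb, LPDDR5X_GBPS)
shuttled = per_token_seconds(weights_gb, NVME_LINK_GBPS)
print(f"weights in one pool: ~{1 / local:.1f} tok/s")
print(f"streamed over M.2:   ~{1 / shuttled:.2f} tok/s")
```

Under these assumptions the M.2 path is roughly 30x slower, which is why "fit in one pool" is the operative constraint regardless of the TOPS number.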
There you go, two sentences without burying the lede.
Is it maybe competitive value anyways, though? Even if you only count the accelerator, 48 GB + 160 TOPS seems comparable to some Strix Halo mini PCs with 64 GB: lower memory bandwidth, but a few hundred dollars cheaper. If they sold just the accelerator card for $800 or something, that would be potentially very interesting.
As SBC hardware goes, the CIX P1 is actually pretty respectable. It uses modern-ish ARM cores when everyone else is using something from 10 years ago. So performance is pretty good:
Including questions of LLM origin. Seems like the OP might have submitted that one (47431685) although there's another copy now (beyond this SCP entry from 3 days ago)
Every time I complain about this kind of useless AI slop I get downvoted to hell and get dozens of comments saying "it doesn't look AI at all", so I don't even bother anymore. It's incredibly sad, I expected much more from this community... But it looks like it'll soon be dead like the rest of the internet.
This also smells of an autoregressive model trying to make the point that TiinyAI simply forked another repo and claimed it as their own invention, before realizing mid-paragraph it's by the same people:
>So no, TiinyAI did not “launch” PowerInfer. SJTU researchers did.
>TiinyAI’s GitHub repo is a fork of the original PowerInfer repository. At least one of the original academic authors appears tied to the code history. So there is clearly some real overlap between the research world and the product world.
https://news.ycombinator.com/item?id=47486257
https://www.kickstarter.com/projects/tiinyai/tiiny-ai-pocket...
https://wp.pureprogrammer.org/2025/12/20/comparison-of-orang...
But yeah, this should have been priced at about 2x a maxed-out Raspberry Pi 5.
It will be interesting to see whether there's a public outcry once these boxes start arriving for the people who funded the Kickstarter.
> This isn't X. It's Y.
> And that omission isn’t some harmless simplification. It’s the entire trick.
It isn't just once. It's—twice. ;)
>That’s not exotic. That’s just model parallelism with extra suffering.
>That’s not product magic. That’s a checkbox.
What really triggers my internal AI slop detector is this:
>Their renders. Their prototype shots. Their exploded views. Their spec sheet.
>Nobody asked what silicon was inside. Nobody asked how 120B on LPDDR5X was supposed to work. Nobody spent
>No cloud. No GPU. No subscriptions.
>wrong class of chip, wrong power envelope, wrong everything
>The visual geometry matches. The licensing model matches. The China-based semiconductor ecosystem match
>Real researchers. Real papers. Real contributions.
LLMs love to overuse this pattern.
Flagged.