We've Been Conned: The Truth about Big LLM

(dolthub.com)

6 points | by midzer 287 days ago

2 comments

  • joegibbs 287 days ago
    It could be $98/hour, but you're splitting that up among multiple users. You don't run the instance for an entire hour; you run it for a few seconds, 20-50 times in the hour. If you had Claude spitting out tokens for an hour straight, you'd run up a crazy bill.

    It would be uneconomical to run Llama 3 14B on a bunch of A100s unless you're actually going to use all that throughput. You can run Llama 3 8B locally with no problem at all on regular consumer hardware, and with good speeds.
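    The amortization argument above can be made concrete with rough numbers. A minimal sketch: the $98/hour rate and the 20-50 requests per hour come from the comment; the per-request duration is an illustrative assumption.

    ```python
    # Rough sketch of the cost-sharing argument: an instance billed by
    # the hour serves many users, each occupying it only for seconds.
    # All numbers besides the $98/hour rate and the 20-50 request range
    # are illustrative assumptions.

    HOURLY_RATE = 98.0         # $/hour for the GPU instance (from the comment)
    SECONDS_PER_REQUEST = 5    # assumed length of one generation burst
    REQUESTS_PER_USER = 35     # midpoint of the comment's 20-50 per hour

    busy_seconds_per_user = SECONDS_PER_REQUEST * REQUESTS_PER_USER
    cost_per_user = HOURLY_RATE * busy_seconds_per_user / 3600

    # An hour of nonstop generation, by contrast, costs the full rate.
    print(f"per-user share:      ${cost_per_user:.2f}/hour")
    print(f"nonstop generation:  ${HOURLY_RATE:.2f}/hour")
    ```

    Under these assumptions each user occupies the instance for about three minutes per hour, so their share is a few dollars rather than the full $98, which is the core of the comment's point.
    
    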

  • Hackbraten 286 days ago
    I know it's not the point of the article, but anyway: why does the author even let their IDE suggest auto-completions while they're editing natural-language text?

    If they hate it so much, why don't they turn it off once and for all?