GPT‑5.3‑Codex‑Spark

(openai.com)

870 points | by meetpateltech 1 day ago

77 comments

  • ElijahLynn 23 hours ago
    Wow, I wish we could post pictures to HN. That chip is HUGE!!!!

    The WSE-3 is the largest AI chip ever built, measuring 46,255 mm² and containing 4 trillion transistors. It delivers 125 petaflops of AI compute through 900,000 AI-optimized cores — 19× more transistors and 28× more compute than the NVIDIA B200.

    From https://www.cerebras.ai/chip:

    https://cdn.sanity.io/images/e4qjo92p/production/78c94c67be9...

    https://cdn.sanity.io/images/e4qjo92p/production/f552d23b565...

    • dotancohen 16 hours ago

        > 46,255 mm²
      
      To be clear: that's the thousandths separator, not the Nordic decimal. It's the size of a cat, not the size of a thumbnail.
      • ash_091 13 hours ago
        *thousands, not thousandths, right?

        The correct number is forty-six thousand, two hundred and fifty-five square mm.

        • tshaddox 8 hours ago
          Whoa, I just realized that the comma is the thousands separator when writing out the number in English words.
        • c0balt 10 hours ago
          or about 0.23 furlongs
          • fifilura 10 hours ago
            More like one thousandth of a percent of an acre.

            (wow, that is not much?)

      • shwetanshu21 16 hours ago
        Thanks, I was actually wondering how someone would even manage to make a chip that big.
        • jmalicki 7 hours ago
          It's a whole wafer. Basically all chips are made on wafers that big, but normally a wafer holds a lot of different chips: you cut it up into individual dies and throw the bad ones away.

          Cerebras instead has ways of mapping the defects and routing around them so they don't affect the rest of the wafer.

      • Sharparam 9 hours ago
        This is why space is the only acceptable thousands/grouping separator (a non-breaking space when possible). Avoids any confusion.
        • xyproto 9 hours ago
          Space is also confusing! Then it looks like two separate numbers.

          Underscore (_) is already used as a decimal separator in programming languages and Mathematics should just adopt it, IMO.

          • awakeasleep 8 hours ago
            A competing format to simplify things?
          • wvbdmp 8 hours ago
            You mean thousands separator, yes? Agreed, it’s annoying when languages don’t have this feature.
          • wongarsu 6 hours ago
            A thin non-breaking space also clears up the confusion without the visual clutter of the underscore. It's just inconvenient to use.
        • fcanesin 9 hours ago
          46_255
        • mulmen 3 hours ago
          The problem is our primitive text representation online. The formatting should be localized but there’s not a number type I can easily insert inline in a text box.
    • codyb 18 hours ago
      Wow, I'm staggered, thanks for sharing

      I was under the impression that chips at the top of the line often fail to come out perfectly to spec, and those with, say, a core that was a bit under spec or a missing core would be downclocked or otherwise binned and sold as the next chip down the line.

      Is that not a thing anymore? Or would a chip like this maybe be so specialized that you'd use, say, an earlier generation's transistor width and thus have more certainty of a successful run?

      Or does a chip this size just naturally land somewhere around 900,000 cores, and that's not always the exact count?

      20 kW! Wow! 900,000 cores. 125 petaflops of compute. Very neat.

      • fulafel 16 hours ago
        Designing to tolerate the defects is well trodden territory. You just expect some rate of defects and have a way of disabling failing blocks.
        • DeathArrow 15 hours ago
          So you shoot for 10% more cores and disable failing cores?
          • joha4270 15 hours ago
            More or less, yes. Of course, defects are not evenly distributed, so you get a lot of chips with different grades of brokenness. Normally the more broken chips get sold off as lower-tier products. A six-core CPU is probably an eight-core with two broken cores.

            Though in this case, it seems [1] that Cerebras just has so many small cores that they can expect a fairly consistent level of broken cores and route around them.

            [1]: https://www.cerebras.ai/blog/100x-defect-tolerance-how-cereb...

          • jychang 15 hours ago
            Well, it's more like they have 900,000 cores on a WSE and disable whatever ones that don't work.

            Seriously, that's literally just what they do.

      • graboy 14 hours ago
        IIRC, a lot of design went into making it so that you can disable parts of this chip selectively.
    • vagrantstreet 14 hours ago
      Why is the CEO some shady guy, though? https://daloopa.com/blog/analyst-pov/cerebras-ipo-red-flags-...

      "AI" always has some sleazy person behind it for some reason

      • amelius 12 hours ago
        You need the sleazy person because you need a shit-ton of money.
        • vagrantstreet 5 hours ago
          I have "shady CEO that doesn't give back to his country or community" fatigue
    • elorant 21 hours ago
      There have been discussions about this chip here in the past. Maybe not that particular one, but previous versions of it. The whole server, if I remember correctly, eats some 20 kW of power.
      • zozbot234 16 hours ago
        A first-gen Oxide Computer rack puts out max 15 kW of power, and they manage to do that with air cooling. The liquid-cooled AI racks being used today for training and inference workloads almost certainly have far higher power output than that.

        (Bringing liquid cooling to the racks likely has to be one of the biggest challenges with this whole new HPC/AI datacenter infrastructure, so the fact that an aircooled rack can just sit in mostly any ordinary facility is a non-trivial advantage.)

        • mlyle 7 hours ago
          > The liquid-cooled AI racks being used today for training and inference workloads almost certainly have far higher power output than that.

          75kW is a sane "default baseline" and you can find plenty of deployments at 130kW.

          There's talk of pushing to 240kW and beyond...

        • c0balt 10 hours ago
          > Bringing liquid cooling to the racks likely has to be one of the biggest challenges with this whole new HPC/AI

          Are you sure about that? HPC has had full rack liquid cooling for a long time now.

          The primary challenge with the current generation is the unusual increase in power density per rack. This necessitates capacity upgrades; notably, getting 10-20 kW of heat away from a few Us is generally tough, but if you can do it, it increases density.

          • jmalicki 7 hours ago
            HPC is also not a normal data center, but it usually doesn't have the scale of hyperscaler AI data centers either.
        • elcritch 14 hours ago
          Well for some. Google has been using liquid cooling to racks for decades.
      • dyauspitr 17 hours ago
        That’s wild. That’s like running 15 indoor heaters at the same time.
      • neya 20 hours ago
        20KW? Wow. That's a lot of power. Is that figure per hour?
        • fodkodrasz 16 hours ago
          What do you mean by "per hour"?

          Watt is a measure of power, that is a rate: Joule/second, [energy/time]

          > The watt (symbol: W) is the unit of power or radiant flux in the International System of Units (SI), equal to 1 joule per second or 1 kg⋅m²⋅s⁻³.[1][2][3] It is used to quantify the rate of energy transfer.

          https://en.wikipedia.org/wiki/Watt

        • ai-christianson 20 hours ago
          If you run it for an hour, yes.
          • taneq 9 hours ago
            Ah yes, like those EV chargers that are rated at X kWh/hour.
            • t0mas88 5 hours ago
              You would hope that an EV charger reporting x kWh/hour considers the charge curve when charging for an hour. Then it would make sense to report that instead of the peak kW rating. But the reality is that they just report the peak kW rating as the "kWh/hour" :-(
          • neya 20 hours ago
            I asked because that's the average power consumption of an average household in the US per day. So, if that figure is per hour, that's equivalent to one household worth of power consumption per hour...which is a lot.
            • mediaman 18 hours ago
              Others clarified the kW versus kWh, but to re-visit the comparison to a household:

              One household uses about 30 kWh per day.

              20 kW * 24 = 480 kWh per day for the server.

              So you're looking at one server (if parent's 20kW number is accurate - I see other sources saying even 25kW) consuming 16 households worth of energy.

              For comparison, a hair dryer draws around 1.5 kW of power, which is just below the rating for most US home electrical circuits. This is something like 13 hair dryers going at full blast.
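
              A quick back-of-the-envelope check of that comparison in Python (a sketch; the 20 kW draw, 30 kWh/day household figure, and 1.5 kW hair dryer are the approximations quoted above, not measured values):

                # Rough check of the server-vs-household comparison above.
                server_power_kw = 20          # claimed continuous draw of one Cerebras system
                household_kwh_per_day = 30    # rough average US household electricity use
                hair_dryer_kw = 1.5           # typical hair dryer on full blast

                server_kwh_per_day = server_power_kw * 24                  # 480 kWh/day
                households = server_kwh_per_day / household_kwh_per_day    # ~16 households
                hair_dryers = server_power_kw / hair_dryer_kw              # ~13 dryers

                print(f"{server_kwh_per_day:.0f} kWh/day ~= {households:.0f} households "
                      f"or {hair_dryers:.0f} hair dryers running non-stop")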

              • mycall 7 hours ago
                At least with GPT-5.3-Codex-Spark, I gather most of the AI inference isn't rendering cat videos but mostly useful work.. so I don't feel tooo bad about 16 households worth of energy.
                • wongarsu 6 hours ago
                  To be fair, this is 16 households of electrical energy. The average household uses about as much energy in the form of natural gas (or butane or fuel oil, depending on what they use) as it does electricity, and then roughly as much again in gasoline. So really it's more like 5 households of total energy. And that's just direct energy use, not accounting for all the products, including food, consumed in the average household.
              • seaal 17 hours ago
                Which honestly doesn't sound that bad given how many users one server is able to serve.
            • jodrellblank 19 hours ago
              Consumption of a house per day is measured in kilowatt-hours (an amount of energy, like litres of water), not kilowatts (a rate of energy flow, like 1 litre per second of water).

              1 Watt = 1 Joule per second.

              • neya 19 hours ago
                Thanks!
            • cortesoft 19 hours ago
              I think you are confusing KW (kilowatt) with KWH (kilowatt hour).

              A KW is a unit of power while a KWH is a unit of energy. Power is a measure of energy transferred in an amount of time, which is why you rate an electronic device’s energy usage using power; it consumes energy over time.

              In terms of paying for electricity, you care about the total energy consumed, which is why your electric bill is denominated in KWH, which is the amount of energy used if you use one kilowatt of power for one hour.

              • neya 19 hours ago
                You're right, I absolutely was mixing them both. Thanks for clarifying!
              • lm28469 9 hours ago
                Acktshually "kW" and "kWh" to be precise
        • ipython 20 hours ago
          It’s 20kW for as long as you can afford the power bill
        • ddalex 16 hours ago
          20 kWh per hour
    • hugh-avherald 21 hours ago
      Maybe I'm silly, but why is this relevant to GPT-5.3-Codex-Spark?
      • tonyarkles 21 hours ago
        It’s the chip they’re apparently running the model on.

        > Codex-Spark runs on Cerebras’ Wafer Scale Engine 3 (opens in a new window)—a purpose-built AI accelerator for high-speed inference giving Codex a latency-first serving tier. We partnered with Cerebras to add this low-latency path to the same production serving stack as the rest of our fleet, so it works seamlessly across Codex and sets us up to support future models.

        https://www.cerebras.ai/chip

      • thunderbird120 21 hours ago
        That's what it's running on. It's optimized for very high throughput using Cerebras' hardware which is uniquely capable of running LLMs at very, very high speeds.
    • lanthissa 11 hours ago
      For Cerebras, can we even call them chips? You're no longer breaking up the wafer; we should call them slabs.
      • amelius 11 hours ago
        They're still slices of a silicon ingot.

        Just like potato chips are slices from a potato.

      • colordrops 9 hours ago
        Macrochips
    • kumarvvr 21 hours ago
      Is this actually more beneficial than, say, having a bunch of smaller ones communicating on a bus? Apart from space constraints, that is.
      • zamadatix 21 hours ago
        It's a single wafer, not a single compute core. A familiar equivalent might be putting 192 cores in a single Epyc CPU (or, to be more technically accurate, the group of cores in a single CCD) rather than trying to interconnect 192 separate single-core CPUs externally with each other.
      • santaboom 21 hours ago
        Yes, bandwidth within a chip is much higher than on a bus.
    • larodi 15 hours ago
      Is all of it one chip? Seems like a wafer with several at least?
      • txyx303 15 hours ago
        Those are scribe lines, where you would usually cut out the individual chips, which is why it resembles multiple chips. However, they work with TSMC to etch across them.
    • DeathArrow 15 hours ago
      >Wow, I wish we could post pictures to HN. That chip is HUGE!!!!

      Using a wafer-sized chip doesn't sound great from a cost perspective when compared to using many smaller chips for inference. Yield will be much lower and prices higher.

      Nevertheless, the actual price might not be very high if Cerebras doesn't apply an Nvidia level tax.

      • energy123 15 hours ago
        > Yield will be much lower and prices higher.

        That's an intentional trade-off in the name of latency. We're going to see a further bifurcation in inference use-cases in the next 12 months. I'm expecting this distinction to become prominent:

        (A) Massively parallel (optimize for token/$)

        (B) Serial low latency (optimize for token/s).

        Users will switch between A and B depending on need.

        An example of (A):

        - "Search this 1M line codebase for DRY violations subject to $spec."

        Examples of (B):

        - "Diagnose this one specific bug."

        - "Apply this diff".

        (B) is used in funnels to unblock (A). (A) is optimized for cost and bandwidth, (B) is optimized for latency.

      • magicalhippo 15 hours ago
        As I understand it the chip consists of a huge number of processing units, with a mesh network between them so to speak, and they can tolerate disabling a number of units by routing around them.

        Speed will suffer, but it's not like a stuck pixel on an 8k display rendering the whole panel useless (to consumers).

      • Heathcorp 15 hours ago
        Cerebras addresses this in a blog post: https://www.cerebras.ai/blog/100x-defect-tolerance-how-cereb...

        Basically they use very small cores compared to competitors, so faults only affect small areas.

    • kreelman 22 hours ago
      Wooshka.

      I hope they've got good heat sinks... and I hope they've plugged into renewable energy feeds...

      • thrance 21 hours ago
        Fresh water and gas turbines, I'm afraid...
      • King-Aaron 22 hours ago
        Nope! It's gas turbines
        • atonse 7 hours ago
          For now. And also largely because it's easier to get that up and running than the alternative.

          Eventually, as we ramp up domestic solar production (and maybe even drop the solar tariffs for a while?), the numbers will make them switch to renewable energy.

        • quaintdev 19 hours ago
          [flagged]
    • odiroot 8 hours ago
      I can imagine how terribly bad their yield must be. One little mistake and the whole "chip" is a goner.
      • tshaddox 8 hours ago
        They have a blog post called "100x Defect Tolerance: How Cerebras Solved the Yield Problem":

        https://www.cerebras.ai/blog/100x-defect-tolerance-how-cereb...

        • svnt 7 hours ago
          One thing is really bothering me: they show these tiny cores, but in the wafer comparison image, they chose to show their full chip as a square bounded by the circle of the wafer. Even the prior GPU arch tiled into the arcs of the circle. What gives?
  • beklein 1 day ago
    I love this! I use coding agents to generate web-based slide decks where “master slides” are just components, and we already have rules + assets to enforce corporate identity. With content + prompts, it’s straightforward to generate a clean, predefined presentation.

    What I’d really want on top is an “improv mode”: during the talk, I can branch off based on audience questions or small wording changes, and the system proposes (say) 3 candidate next slides in real time. I pick one, present it, then smoothly merge back into the main deck.

    Example: if I mention a recent news article / study / paper, it automatically generates a slide that includes a screenshot + a QR code link to the source, then routes me back to the original storyline. With realtime voice + realtime code generation, this could turn the boring old presenter view into something genuinely useful.
    • sva_ 1 day ago
      I love the probabilistic nature of this. Presentations could be anywhere from extremely impressive to hilariously embarrassing.
      • clickety_clack 1 day ago
        It would be so cool if it generated live in the presentation and adjusted live as you spoke, so you’d have to react to whatever popped on screen!
        • crystal_revenge 1 day ago
          There was a pre-LLM version of this called "battledecks" or "PowerPoint Karaoke"[0] where a presenter is given a deck of slides they've never seen and have to present on it. With a group of good public speakers it can be loads of fun (and really impressive the degree that some people can pull it off!)

          0. https://en.wikipedia.org/wiki/PowerPoint_karaoke

          • bsharper 1 day ago
            There is a Jackbox game called "Talking Points" that's like this: the players come up with random ideas for presentations, your "assistant" (one of the other players) picks what's on each slide while you present: https://www.youtube.com/watch?v=gKnprQpQONw
          • alfiedotwtf 13 hours ago
            If you like this, search on YouTube for "Harry Mack". Mindblowing
          • hluska 21 hours ago
            That is very cool. Thanks for posting this - I think I’m going to put on a PowerPoint karaoke night. This will rule! :)
        • Etheryte 1 day ago
          Some consulting firms do this, one guy is giving the presentation live while others are in the next meeting room still banging out the slides.
        • dotancohen 16 hours ago
          That would make a great Wii game.

          PowerPoint Hero.

        • onionisafruit 1 day ago
          Every presentation becomes improv
          • deepGem 1 day ago
            Isn't that such a great outcome? No more robotic presentations. The best part is that you can now practice improv in the comfort of your home.
            • mbreese 1 day ago
              And this product will work great for any industry... can I get a suggestion for an industry from the crowd?

              Audience: Transportation... Education... Insurance...

              Speaker: Great! I heard "Healthcare".

              Right... as we can see from this slide, this product fits the "Healthcare" industry great because of ...

              • lelandfe 1 day ago
                Caro’s first LBJ biography tells of how the future president became a congressman in Texas in his 20s, by carting around a “claque” of his friends to various stump speeches and having them ask him softball questions and applauding loudly after

                Well, hey, who needs friends?

          • DonHopkins 1 day ago
            I had a butterfly take over my live DreamScape slide show demo at the 1995 WWDC.

            https://youtu.be/5NytloOy7WM?t=321

        • nikcub 23 hours ago
          and with neuralink it would generate slides of the audience naked
      • codyb 18 hours ago
        I guess you could have two people per presentation: one person who confirms whether to slide in the generated slide or maybe regenerate it. And then of course, eventually that's just an agent.
    • m_mueller 1 day ago
      You're describing almost verbatim what we're building at Octigen [1]! Happy to provide a demo and/or give you free access to our alpha version already online.

      [1] https://octigen.com

      • cebert 20 hours ago
        Claude Code is pretty good at making slides already. What’s your differentiator?
        • m_mueller 17 hours ago
          * ability to work with your own PowerPoint native templates; none of the AI slide makers I've seen have any competency in doing that.

          * ability to integrate your corporate data.

          * repeatable workflows for better control over how your decks look.

    • jorgenveisdal 1 day ago
      As an associate professor who spends a ridiculous amount of time preparing for lectures, I would love to try this in one of my courses
      • cebert 20 hours ago
        Try Claude Code too. It’s surprisingly good at this.
    • deepGem 1 day ago
      I built something similar at a hackathon, a dynamic teleprompter that adjusts the speed of tele-prompting based on speaker tonality and spoken wpm. I can see extending the same to an improv mode. This is a super cool idea.
    • esafak 1 day ago
      Can you show one?
      • beklein 1 day ago
        The end result would be a normal PPT presentation. Check https://sli.dev as an easy start: ask Codex/Claude/... to generate the slides using that framework with data from something.md. The interesting part here is generating these otherwise boring slide decks not with PowerPoint itself but with AI coding agents plus master slides and AGENTS.md context. I’ll be showing this to a small group (normally members only) at IPAI in Heilbronn, Germany on 03/03. If you’re in the area and would like to join, feel free to send me a message and I will squeeze you in.
    • orochimaaru 1 day ago
      How do you handle the diagrams?
      • beklein 1 day ago
        In my AGENTS.md file I have a _rule_ that tells the model to use Apache ECharts; the data comes from the prompt and normally .csv/.json files. A prompt would be like: "After slide 3 add a new content slide that shows a bar chart with data from @data/somefile.csv" ... works great, and these charts can even be interactive.
        • orochimaaru 1 day ago
          What about other ad hoc diagrams like systems architecture, roadmaps, mind maps, etc.?

          These are the bane of any staff engineer's life - lol. Because the people above need to see the plan in art form.

          So I'm seriously interested in how I can make that easier.

          • mcamac 1 day ago
            You could try something like mermaid (or ASCII) -> nano banana. You can also go the other way and turn images into embedded diagrams (which can be interactive depending on how you're sharing the presentation)
          • jmalicki 7 hours ago
            I have it do svgs.

            The layout isn't always great on first shot, but you iterate on that.

            They can also natively generate e.g. GitHub markdown Mermaid diagrams (GitHub markdown has a lot of extensions like that).

          • beklein 1 day ago
            Not my normal use-case, but you can always fall back and ask the AI coding agent to generate the diagram as SVG. For blocky but more complex content like your examples it will work well, and it's still 100% text based, so the AI coding agent (or you, manually) can fix/adjust any issues. An image generation skill is a valid fallback, but in my opinion it's hard to change details (JSON-style image creation prompts are possible but hard to do right) and you won't see changes nicely in the git history.

            In your use case you can ask the AI coding agent to run a script.js to get the newest dates for the project from a page/API, then it should only update the dates in the roadmap.svg file on slide x with the new data. This way you will automagically have the newest numbers and can track everything within git in one prompt. Save this as a rule in AGENTS.md and run it every month to update your slides with one prompt.
          • sleazebreeze 1 day ago
            Claude code can output Excalidraw format files which can be imported directly into the webapp. You can MCP it too if you want.
    • turnsout 1 day ago
      I love the idea of a living slide deck. This feels like a product that needs to exist!
  • postalcoder 1 day ago
    First thoughts using gpt-5.3-codex-spark in Codex CLI:

    Blazing fast but it definitely has a small model feel.

    It's tearing up bluey bench (my personal agent speed benchmark), which is a file system benchmark where I have the agent generate transcripts for untitled episodes of a season of bluey, perform a web search to find the episode descriptions, and then match the transcripts against the descriptions to generate file names and metadata for each episode.

    Downsides:

    - It has to be prompted to do actions in my media library AGENTS.md that the larger models adhere to without additional prompting.

    - It's less careful with how it handles context which means that its actions are less context efficient. Combine that with the smaller context window and I'm seeing frequent compactions.

      Bluey Bench* (minus transcription time):
    
      Codex CLI
      gpt-5.3-codex-spark low        20s
      gpt-5.3-codex-spark medium     41s
      gpt-5.3-codex-spark xhigh   1m 09s (1 compaction)
    
      gpt-5.3-codex low           1m 04s
      gpt-5.3-codex medium        1m 50s
    
      gpt-5.2 low                 3m 04s
      gpt-5.2 medium              5m 20s
    
      Claude Code
      opus-4.6 (no thinking)      1m 04s
    
      Antigravity
      gemini-3-flash              1m 40s
      gemini-3-pro low            3m 39s
    
      *Season 2, 52 episodes
    • HumanOstrich 16 hours ago
      Yea it's been butchering relatively easy to moderate tasks for me even with reasoning set to high. I am hoping it's just tuning that needs to be done since they've had to port it to a novel architecture.

      If instead the model is performing worse due to how much they had to shrink it just so it will fit on Cerebras hardware, then we might be in for a long wait for the next gen of ginormous chips.

      • postalcoder 15 hours ago
        Agree w/ you on the model's tendency to butcher things. Performance wise, this almost feels like the GPT-OSS model.

        I need to incorporate "risk of major failure" into bluey bench. Spark is a dangerous model. It doesn't strongly internalize the consequences of the commands that it runs, even on xhigh. As a result I'm observing a high tendency to run destructive commands.

        For instance, I asked it to assign random numbers to the filenames of the videos in my folder to run the benchmark. It accidentally deleted the files on most of the runs. The funniest part about it is that it comes back to you within a few seconds and says something like "Whoops, I have to keep it real, I just deleted the files in your folder."

        • HumanOstrich 15 hours ago
          Ouch, at least it fesses up. I ran into problems with it first refusing to use git "because of system-level rules in the session". Then later it randomly amended a commit and force pushed it because it made a dumb mistake. I guess it was embarrassed.
      • jychang 14 hours ago
        > If instead the model is performing worse due to how much they had to shrink it just so it will fit on Cerebras hardware

        They really should have just named it "gpt-5.3-codex-mini" (served by Cerebras). It would have made it clear what this model really is.

        • HumanOstrich 14 hours ago
          Not if you're suggesting that "(served by Cerebras)" should be part of the name. They're partnering with Cerebras and providing a layer of value. Also, OpenAI is "serving" you the model.

          We don't know how they integrate with Cerebras hardware, but typically you'd pay a few million dollars to get the hardware in your own datacenter. So no, "served by Cerebras" is confusing and misleading.

          Also "mini" is confusing because it's not analagous to gpt-5.1-codex vs gpt-5.1-codex-mini. Gpt-5.3-codex-spark is a unique, _experimental_ offering that doesn't fit the existing naming suffixes.

          I don't understand what's wrong with "spark". It's friendly and evokes a sense of something novel, which is perfect.

          If you want to know more about the model, read the first paragraph of the article. That information doesn't need to be hardcoded into the model name indefinitely. I don't see any "gpt-5.3-codex-nvidia" models.

          • jychang 13 hours ago
            Uh, that paragraph translated from "marketing bullshit" into "engineer" would be "we distilled the big gpt-5.3-codex model into a smaller size that fits on the 44GB of SRAM of a Cerebras WSE-3 multiplied by whatever tensor parallel or layer parallel grouping they're doing".

            (Cerebras runs llama-3.3 70b on 4 WSE-3 units with layer parallelism, for example).

            That's basically exactly what gpt-5.3-codex-mini would be.

            > Also "mini" is confusing because it's not analagous to gpt-5.1-codex vs gpt-5.1-codex-mini.

            So perhaps OpenAI intentionally picked the model's layer param count, MoE expert size, etc to fit onto the Cerebras machines. That's like saying "the DVD producer optimized this movie for you" (they just cropped and compressed it down to 4.7GB so it would fit on a DVD). Maybe the typical mini model is 100gb, and they made it 99gb instead or something like that. It's still analogous to gpt-5.3-codex-mini.

            I'm underselling it a little bit, because it takes a bit more work than that to get models to run on Cerebras hardware (because they're so weird and un-GPU-like), but honestly if Cerebras can get Llama 3.1 405b or GLM 4.7 running on their own chips, it's not that much harder to have Cerebras get gpt-5.3-codex-mini running.

            • HumanOstrich 13 hours ago
              Uh, the combined offering (smaller model + ~800 tps on cerebras) is nothing like the previous mini offerings, and you're hallucinating details about their process of creating it.

              Read more about how Cerebras hardware handles clustering. The limit is not 44 GB or 500GB. Each CS-3 has 1,200 TB of MemoryX, supporting up to ~24T parameter models. And up to 2,048 can be clustered.

              • jychang 12 hours ago
                Yeah, it's pretty clear you're loud mouthed and don't know anything about distilling ML models or anything Cerebras. Distilling ML models into smaller mini versions is basic stuff. How do you think Qwen 3 235b and Qwen 3 30b were made? Or GLM 4.5 355b vs GLM 4.5 Air 105b? Or Meta Llama 4 Maverick and Scout? And everyone knows that the reason Cerebras never served Deepseek R1 or Kimi K2 or any other model bigger than ~500B is because their chips don't have enough memory. People have been begging Cerebras to serve Deepseek forever now, and they never actually managed to do it.

                Cerebras doesn't run inference from MemoryX, the same way no other serious inference provider runs inference off of system RAM. MemoryX is connected to the CS-3 over ethernet! It's too slow. MemoryX is only 150 GB/sec for the CS-3! [1] If you're running inference at 800 tokens/sec, with 150 GB/sec that means each token can only load 0.18 GB of params. For obvious reasons, I don't think OpenAI is using a 0.18B-sized model.

                The limit is 44GB for each WSE-3. [2] That's how much SRAM a single WSE-3 unit has. For comparison, a Nvidia H100 GPU has 80GB, and a DGX H100 server with 8 GPUs have 640GB of VRAM. Each WSE-3 has 44GB to play around with, and then if you have each one handling a few layers, you can load larger models. That's explicitly what Cerebras says they do: "20B models fit on a single CS-3 while 70B models fit on as few as four systems." [3]
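
                A rough sketch of that arithmetic in Python (the 150 GB/s link and 44 GB SRAM figures come from the specs linked in the footnotes below; the 800 tok/s rate and 4-wafer pipeline are illustrative assumptions):

                  # Bandwidth-bound sizing: if every token had to stream its weights
                  # over the MemoryX link, the params touched per token are capped at:
                  link_gb_per_s = 150        # MemoryX -> CS-3 ethernet link (see [1])
                  tokens_per_s = 800         # roughly the Spark serving speed discussed here
                  print(link_gb_per_s / tokens_per_s)   # ~0.19 GB of params per token

                  # Versus keeping weights in on-wafer SRAM: 44 GB per WSE-3 (see [2]),
                  # so a pipeline of N wafers has roughly N * 44 GB for weights.
                  wafers = 4
                  print(wafers * 44)                    # 176 GB, enough for ~70B-class models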

                You're reading marketing material drivel about training models that NOBODY uses Cerebras for. Basically nobody uses Cerebras for training, only inference.

                [1] https://www.kisacoresearch.com/sites/default/files/documents... "The WSE-2’s 1.2Tb/s of I/O bandwidth is used for [...] transmitting gradients back to the MemoryX service." That quote is about WSE-2/CS-2, but the CS-3 spec lists the same System I/O: 1.2 Tb/s (12×100 GbE).

                [2] https://cdn.sanity.io/images/e4qjo92p/production/50dcd45de5a... This really makes it obvious why Cerebras couldn't serve Deepseek R1. Deepseek is 10x larger than a 70b model. Since they don't do tensor parallelism, that means each chip has to wait for the previous one to finish before it can start. So not only is it 10x more memory consumption, it has to load all that sequentially to boot. Cerebras' entire market demands 1000 tokens per second for the much higher price that they charge, so there's no profit in them serving a model which they can only do 500 tokens/sec or something slow like that.

                [3] https://www.cerebras.ai/blog/introducing-cerebras-inference-...

                • aurareturn 10 hours ago
                  Yes. In order to serve 1k/s, they must be fitting the entire model on SRAM and not reaching out to off chip RAM. This means they’re likely chaining multiple wafer chips together to serve this model or they shrunk the model to fit one wafer chip. It’s uneconomical for many use cases but for highly valuable tasks, it could be worth it.

                  This is one area Nvidia chips have not been able to serve: ultra-fast, ultra-high-value tasks. Hence the Groq acquisition.

                • HumanOstrich 10 hours ago
                  Yea, it's pretty clear you're loudmouthed and an aggressively arrogant know-it-all (at least you think). You keep moving the goalposts too. First you're acting like they can't run models that don't fit in 44GB or 4x44GB. Then you say they can "only" run a larger model at 500 tps but that wouldn't be profitable.. Lol

                  Cerebras CURRENTLY serves GLM-4.7. I've used it through their API. Look up how big it is. 1,000-1,700 tps. https://www.cerebras.ai/blog/glm-4-7

                  Not interested in further conversation, so have a nice day! You can go ahead and get in the last word though.

    • alexdobrenko 1 day ago
      can we please make the bluey bench the gold standard for all models always
    • Squarex 1 day ago
      I wonder why they named it so similarly to the normal codex model when it's much worse, while cool of course.
      • HumanOstrich 16 hours ago
        Not sure what you mean. It IS the same model, just a smaller version of it. And gpt-5.3-codex is a smaller version of gpt-5.3 trained more on code and agentic tasks.

        Their naming has been pretty consistent since gpt-5. For example, gpt-5.1-codex-max > gpt-5.1-codex > gpt-5.1-codex-mini.

        • swyx 2 hours ago
          how are you so sure :)
    • yojo 19 hours ago
      I’ve been slow to invest in building flows around parallelizing agent work under the assumption that eventually inference will get fast enough that I will basically always be the bottleneck.

      Excited to see glimpses of that future. Context switching sucks and I’d much rather work focused on one task while wielding my coding power tools.

    • ttul 20 hours ago
      I gave it a run on my Astro website project. It definitely makes more mistakes than Codex-5.3, but the speed is something to behold. The text flashes by way faster than I can understand what's going on. And most of its edits worked. I then used Codex-5.3-xhigh to clean things up...
    • mnicky 1 day ago
      Can you compare it to Opus 4.6 with thinking disabled? It seems to have very impressive benchmark scores. Could also be pretty fast.
      • postalcoder 1 day ago
        Added a thinking-disabled Opus 4.6 timing. It took 1m 4s – coincidentally the same as 5.3-codex-low.
    • varenc 19 hours ago
      How do the agents perform the transcription? I'm guessing just calling out to other tools like Whisper? Do all models/agents take the same approach or do they differ?

      also as a parent, I love the bluey bench concept !

      • postalcoder 18 hours ago
        I am using whisper transcription via the Groq API to transcribe the files in parallel. But (caveat), I cut out the transcription step and had the models operate on a shared transcript folder. So the times you see are pure search and categorization times.
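
        For the curious, the transcription step can look roughly like this in Python (a sketch, assuming the Groq SDK's OpenAI-style audio transcription endpoint; the folder name, file format, and worker count are placeholders, not the actual setup):

          from concurrent.futures import ThreadPoolExecutor
          from pathlib import Path

          from groq import Groq  # official Groq Python SDK

          client = Groq()  # reads GROQ_API_KEY from the environment

          def transcribe(path: Path) -> str:
              # Whisper transcription via the Groq API
              with path.open("rb") as f:
                  result = client.audio.transcriptions.create(
                      file=(path.name, f.read()),
                      model="whisper-large-v3",
                  )
              return result.text

          episodes = sorted(Path("season-2").glob("*.mp3"))   # placeholder folder
          with ThreadPoolExecutor(max_workers=8) as pool:
              transcripts = list(pool.map(transcribe, episodes))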

        re. your question about the approach – they all took on the problem in different ways that I found fascinating.

        Codex Spark was so fast because it noticed that Bluey announces the episode name within each episode ("This episode of Bluey is called ____."), so, instead of doing a pure matching of transcript<->web description, it cut the title names out of the transcripts and matched only those against the episode descriptions.

        The larger models were more careful and seemed to actually try to doublecheck their work by reading the full transcripts and matching them against descriptions.

        gpt-5.2 went through a level of care that wasn't wrong, but was unnecessary.

        Sonnet 4.5 (non-thinking) took the most frustrating approach. It tried to automate the pairing process with scripting to match the extracted title with the official title via regex. So, instead of just eyeballing the lists of extracted and official titles to manually match them, it relied purely on the script's logging as its eyes. When the script failed to match all 52 episodes perfectly, it went into a six-iteration loop of writing increasingly convoluted regex until it found 52 matches (which ended up incorrectly matching episodes). It was frustrating behavior, I stopped the loop after four minutes.

        In my mind, the "right way" was straightforward but that wasn't borne out by how differently the llms behaved.

      • jiggawatts 18 hours ago
        Most frontier models are multi-modal and can handle audio or video files as input natively.

        I'm experimenting right now with an English to Thai subtitle translator that feeds in the existing English subtitles as well as a mono (centre-weighted) audio extracted using ffmpeg. This is needed because Thai has gendered particles -- word choice depends on the sex of the speaker, which is not recorded in English text. The AIs can infer this to a degree, but they do better when given audio so that they can do speaker diarization.
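
        The audio extraction part can be a plain ffmpeg downmix, roughly like this (a sketch; the filenames are placeholders, and -ac 1 does an even downmix rather than a true centre-weighted one, which would need a pan filter):

          import subprocess

          # Pull the audio out of the video, downmix to mono, resample to 16 kHz.
          subprocess.run(
              ["ffmpeg", "-i", "episode.mkv",   # placeholder input file
               "-vn",                           # drop the video stream
               "-ac", "1",                      # downmix to a single channel
               "-ar", "16000",                  # 16 kHz is plenty for speech models
               "audio.wav"],
              check=True,
          )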

  • pjs_ 1 day ago
    Continue to believe that Cerebras is one of the most underrated companies of our time. It's a dinner-plate sized chip. It actually works. It's actually much faster than anything else for real workloads. Amazing
    • onlyrealcuzzo 1 day ago
      Nvidia seems cooked.

      Google is crushing them on inference. By TPUv9, they could be 4x more energy efficient and cheaper overall (even if Nvidia cuts their margins from 75% to 40%).

      Cerebras will be substantially better for agentic workflows in terms of speed.

      And if you don't care as much about speed and only cost and energy, Google will still crush Nvidia.

      And Nvidia won't be cheaper for training new models either. The vast majority of chips will be used for inference by 2028 instead of training anyway.

      Nvidia has no manufacturing reliability story. Anyone can buy TSMC's output.

      Power is the bottleneck in the US (and everywhere besides China). By TPUv9 - Google is projected to be 4x more energy efficient. It's a no-brainer who you're going with starting with TPUv8 when Google lets you run on-prem.

      These are GW scale data centers. You can't just build 4 large-scale nuclear power plants in a year in the US (or anywhere, even China). You can't just build 4 GW solar farms in a year in the US to power your less efficient data center. Maybe you could in China (if the economics were on your side, but they aren't). You sure as hell can't do it anywhere else (maybe India).

      What am I missing? I don't understand how Nvidia could've been so far ahead and just let every part of the market slip away.

      • sailingparrot 1 day ago
        > let every part of the market slip away.

        Which part of the market has slipped away, exactly? Everything you wrote is supposition and extrapolation. Nvidia has a chokehold on the entire market. All other players still exist in the small pockets that Nvidia doesn’t have enough production capacity to serve. And their dev ecosystem is still so far ahead of anyone else’s. Which provider gets chosen to equip a 100k-chip data center goes so far beyond the raw chip power.

        • mgambati 19 hours ago
          If code is getting cheaper, making CUDA alternatives and tooling should not be very far off. I can’t see Nvidia holding the position for much longer.
        • onlyrealcuzzo 1 day ago
          > Nvidia has a chokehold on the entire market.

          You're obviously not looking at expected forward orders for 2026 and 2027.

          • louiereederson 1 day ago
            I think most estimates have Nvidia at more or less stable share of CoWoS capacity (around 60%), which is ~doubling in '26.
      • wing-_-nuts 1 day ago
        Man I hope someone drinks Nvidia's milk shake. They need to get humbled back to the point where they're desperate to sell gpus to consumers again.

        Only major road block is cuda...

        • jmalicki 7 hours ago
          The nice thing about modern LLMs is that it's a relatively large, static use case. The compute is large and expensive enough that you can afford to just write custom kernels, to a degree. It's not like the general CUDA case, where you're running on 1, 2, or 8 GPUs, need libraries that already do it all for you, and researchers are building lots of different models.

          There aren't all that many different small components between all of the different transformer based LLMs out there.

          • gordonhart 5 hours ago
            Yeah, given that frontier model training has shrunk down to a handful of labs it seems like a very solvable problem to just build the stack directly without CUDA. LLMs are mechanically simple and these labs have access to as much engineering muscle as they need. Pretty small price to pay to access cheaper hardware given that model runs cost on the order of $100M and every lab is paying Nvidia many multiples over that to fill up their new datacenters.
      • mnicky 1 day ago
        > What am I missing?

        Largest production capacity maybe?

        Also, market demand will be so high that every player's chips will be sold out.

        • onlyrealcuzzo 1 day ago
          > Largest production capacity maybe?

          Anyone can buy TSMC's output...

          • CamperBob2 23 hours ago
            Which I'm sure is 100% reserved through at least 2030.
            • DeathArrow 15 hours ago
              Aren't they building new fabs, though? Or even those are already booked?
          • Keyframe 1 day ago
            Can anyone buy TSMC though?
            • louiereederson 1 day ago
              No. TSMC will not take the risk on allocating capacity to just anyone given the opportunity cost.
            • roughly 21 hours ago
              Not without an army
      • icelancer 22 hours ago
        > What am I missing?

        VRAM capacity given the Cerebras/Groq architecture compared to Nvidia.

        In parallel, RAM contracts that Nvidia has negotiated well into the future that other manufacturers have been unable to secure.

      • DeathArrow 15 hours ago
        What puzzles me is that AMD can't secure any meaningful share of the AI market. They missed this train badly.
      • whism 1 day ago
        I believe they licensed something from Groq.
      • Handy-Man 1 day ago
        Well they `acquired` groq for a reason.
    • h14h 13 hours ago
      I'm fascinated by how the economy is catching up to demand for inference. The vast majority of today's capacity comes from silicon that merely happens to be good at inference, and it's clear that there's a lot of room for innovation when you design silicon for inference from the ground up.

      With CapEx going crazy, I wonder where costs will stabilize and what OpEx will look like once these initial investments are paid back (or go bust). The common consensus seems to be that there will be a rug pull and frontier model inference costs will spike, but I'm not entirely convinced.

      I suspect it largely comes down to how much more efficient custom silicon is compared to GPUs, as well as how accurately the supply chain is able to predict future demand relative to future efficiency gains. To me, it is not at all obvious what will happen. I don't see any reason why a rug pull is any more or less likely than today's supply chain over-estimating tomorrow's capacity needs, and creating a hardware (and maybe energy) surplus in 5-10 years.

    • zozbot234 1 day ago
      It's "dinner-plate sized" because it's just a full silicon wafer. It's nice to see that wafer-scale integration is now being used for real work but it's been researched for decades.
    • tiffanyh 21 hours ago
      If history has taught us anything, “engineered systems” (like mainframes & hyper converged infrastructure) emerge at the start of a new computing paradigm … but long-term, commodity compute wins the game.
      • alecco 13 hours ago
        Chips and RAM grew in capacity but latency is mostly flat and interconnect power consumption grew a lot. So I think the paradigm changed. Even with newer ones like NVlink.

          For 28 years Intel Xeon chips have come with massive L2/L3. Nvidia is making bigger chips, with the latest being two big dies interconnected. Cerebras saw the pattern and took it to the next level.

        And the technology is moving 3D towards stacking layers on the wafer so there is room to grow that way, too.

      • pjs_ 16 hours ago
        I think that was true when you could rely on good old Moore’s law to quickly make the heavy iron obsolete, but I also think those days are coming to an end.
    • arcanemachiner 1 day ago
      Just wish they weren't so insanely expensive...
      • azinman2 1 day ago
        The bigger the chip, the worse the yield.
        • thunderbird120 21 hours ago
          Cerebras has effectively 100% yield on these chips. They have an internal structure made by just repeating the same small modular units over and over again. This means they can just fuse off the broken bits without affecting overall function. It's not like it is with a CPU.
          • moregrist 4 hours ago
            I think what you’re saying is that every wafer is usable, but won’t have the same performance characteristics, depending on how many bits are broken.

            Doesn’t that just bucket wafers based on performance? Which effectively gives a yield to each bucket.

        • speedgoose 1 day ago
          I suggest reading their website; they explain pretty well how they manage good yield. Though I’m not an expert in this field, it does make sense and I would be surprised if they were caught lying.
        • moralestapia 1 day ago
          This comment doesn't make sense.
          • Sohcahtoa82 1 day ago
            One wafer will turn into multiple chips.

            Defects are best measured on a per-wafer basis, not per-chip. So if your chips are huge and you can only put 4 chips on a wafer, 1 defect can cut your yield by 25%. If they're smaller and you fit 100 chips on a wafer, then 1 defect on the wafer is only cutting yield by 1%. Of course, there's more to this when you start reading about "binning", fusing off cores, etc.

            There's plenty of information out there about how CPU manufacturing works, why defects happen, and how they're handled. Suffice to say, the comment makes perfect sense.

            • snovv_crash 1 day ago
              That's why you typically fuse off defective sub-units and just have a slightly slower chip. GPU and CPU manufacturers have done this for at least 15 years now, that I'm aware of.
          • azinman2 1 day ago
            Sure it does. If it’s many small dies on a wafer, then imperfections don’t ruin the entire batch; you just bin those components. If the entire wafer is a single die, you have much less tolerance for errors.
          • DocJade 1 day ago
            Bigger chip = more surface area = higher chance for somewhere in the chip to have a manufacturing defect

            Yields on silicon are great, but not perfect

            • moralestapia 1 day ago
              Does that mean smaller chips are made from smaller wafers?
              • wat10000 21 hours ago
                They can be made from large wafers. A defect typically breaks whatever chip it's on, so one defect on a large wafer filled with many small chips will still just break one chip of the many on the wafer. If your chips are bigger, one defect still takes out a chip, but now you've lost more of the wafer area because the chip is bigger. So you get a super-linear scaling of loss from defects as the chips get bigger.

                With careful design, you can tolerate some defects. A multi-core CPU might have the ability to disable a core that's affected by a defect, and then it can be sold as a different SKU with a lower core count. Cerebras uses an extreme version of this, where the wafer is divided up into about a million cores, and a routing system that can bypass defective cores.

                They have a nice article about it here: https://www.cerebras.ai/blog/100x-defect-tolerance-how-cereb...
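
                  To see the super-linear effect, here is a toy Poisson yield model in Python (the defect density and die areas are made-up illustrative numbers, not Cerebras/TSMC figures):

                    import math

                    # Classic Poisson yield: P(die has zero defects) = exp(-D * A)
                    defects_per_cm2 = 0.1                 # illustrative defect density D

                    for die_area_cm2 in (1, 4, 8):        # same wafer, bigger and bigger dies
                        y = math.exp(-defects_per_cm2 * die_area_cm2)
                        print(f"{die_area_cm2} cm^2 die: {y:.0%} expected yield")
                    # 1 cm^2 -> ~90%, 4 cm^2 -> ~67%, 8 cm^2 -> ~45%: the loss grows faster
                    # than the area, which is why a monolithic wafer needs redundancy and
                    # routing around bad cores rather than conventional binning.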

              • Sohcahtoa82 23 hours ago
                Nope. They use the same size wafers and then just put more chips on a wafer.
                • moralestapia 23 hours ago
                    So, does a wafer with a huge chip have more defects per area than a wafer with 100s of small chips?
                  • dgfl 23 hours ago
                    There’s an expected amount of defects per wafer. If a chip has a defect, then it is lost (simplification). A wafer with 100 chips may lose 10 to defects, giving a yield of 90%. The same wafer but with 1000 smaller chips would still have lost only 10 of them, giving 99% yield.
                    • moralestapia 16 hours ago
                      As another comment in this thread points out, Cerebras seems to have solved this by building their big chip out of a lot of much smaller cores that can be disabled if they have errors.
                      • dgfl 15 hours ago
                        Indeed, the original comment you replied to actually made no sense in this case. But there seemed to be some confusion in the thread, so I tried to clear that up. I hope I’ll get to talk with one of the cerebras engineers one day, that chip is really one of a kind.
                        • moralestapia 9 hours ago
                          Yes, amazing tech. You should join their Discord, it's pretty active these days!
          • louiereederson 1 day ago
            You say this with such confidence and then ask if smaller chips require smaller wafers.
    • mzl 15 hours ago
      Technically, Cerebras' solution is really cool. However, I am skeptical that it will be economically useful for larger models, as the number of racks required scales with the size of the model in order to fit the weights in SRAM.
    • latchkey 1 day ago
      Not for what they are using it for. It is $1m+/chip and they can fit 1 of them in a rack. Rack space in DCs is a premium asset. The density isn't there. AI models need tons of memory (this product announcement is a case in point) and they don't have it, nor do they have a way to get it since they are last in line at the fabs.

      Their only chance is an acqui-hire, but Nvidia just spent $20b on Groq instead. Dead man walking.

      • p1esk 1 day ago
        The real question is what’s their perf/dollar vs nvidia?
        • zozbot234 1 day ago
          I guess it depends what you mean by "perf". If you optimize everything for the absolutely lowest latency given your power budget, your throughput is going to suck - and vice versa. Throughput is ultimately what matters when everything about AI is so clearly power-constrained, latency is a distraction. So TPU-like custom chips are likely the better choice.
          • p1esk 1 day ago
            By perf I mean how much does it cost to serve 1T model to 1M users at 50 tokens/sec.
            • zozbot234 1 day ago
              All 1T models are not equal. E.g. how many active parameters? what's the native quantization? how long is the max context? Also, it's quite likely that some smaller models in common use are even sub-1T. If your model is light enough, the lower throughput doesn't necessarily hurt you all that much and you can enjoy the lightning-fast speed.
              • p1esk 1 day ago
                Just pick some reasonable values. Also, keep in mind that this hardware must still be useful 3 years from now. What’s going to happen to cerebras in 3 years? What about nvidia? Which one is a safer bet?

                On the other hand, competition is good - nvidia can’t have the whole pie forever.

                • zozbot234 1 day ago
                  > Just pick some reasonable values.

                  And that's the point - what's "reasonable" depends on the hardware and is far from fixed. Some users here are saying that this model is "blazing fast" but a bit weaker than expected, and one might've guessed as much.

                  > On the other hand, competition is good - nvidia can’t have the whole pie forever.

                  Sure, but arguably the closest thing to competition for nVidia is TPUs and future custom ASICs that will likely save a lot on energy used per model inference, while not focusing all that much on being super fast.

              • wiredpancake 1 day ago
                [dead]
          • fragmede 1 day ago
            > Throughput is ultimately what matters

            I disagree. Yes it does matter, but because the popular interface is via chat, streaming the results of inference feels better to the squishy messy gross human operating the chat, even if it ends up taking longer. You can give all the benchmark results you want, humans aren't robots. They aren't data driven, they have feelings, and they're going to go with what feels better. That isn't true for all uses, but time to first byte is ridiculously important for human-computer interaction.

            • zozbot234 1 day ago
              You just have to change the "popular interface" to something else. Chat is OK for trivia or genuinely time-sensitive questions, everything else goes through via email or some sort of webmail-like interface where requests are submitted and replies come back asynchronously. (This is already how batch APIs work, but they only offer a 50% discount compared to interactive, which is not enough to really make a good case for them - especially not for agentic workloads.)
        • energy123 14 hours ago
          That's coupling two different usecases.

          Many coding usecases care about tokens/second, not tokens/dollar.

        • xnx 1 day ago
          Or Google TPUs.
        • latchkey 1 day ago
          Exactly. They won't ever tell you. It is never published.

          Let's not forget that the CEO is an SEC felon who got caught trying to pull a fast one.

      • boredatoms 22 hours ago
        Power/cooling is the premium.

        Can always build a bigger hall

        • latchkey 22 hours ago
          Exactly my point. Their architecture requires someone to invest the capex / opex to also build another hall.
      • arisAlexis 12 hours ago
        How do you know the price of a unit ?
      • spwa4 1 day ago
        Oh don't worry. Ever since the power issue started developing rack space is no longer at a premium. Or at least, it's no longer the limiting factor. Power is.
        • latchkey 1 day ago
          The dirty secret is that there is plenty of power. But, it isn't all in one place and it is often stranded in DC's that can't do the density needed for AI compute.

          Training models needs everything in one DC, inference doesn't.

          • spwa4 5 hours ago
            Which DCs are these?
    • dalemhurley 1 day ago
      Yet investors keep backing NVIDIA.
      • vimda 1 day ago
        At this point Tech investment and analysis is so divorced from any kind of reality that it's more akin to lemmings on the cliff than careful analysis of fundamentals
    • femiagbabiaka 1 day ago
      yep
    • xnx 1 day ago
      Cerebras is a bit of a stunt like "datacenters in spaaaaace".

      Terrible yield: one defect can ruin a whole wafer instead of just a chip region. Poor perf./cost (see above). Difficult to program. Little space for RAM.

      • the_duke 1 day ago
        They claim the opposite, though, saying the chip is designed to tolerate many defects and work around them.
  • simonw 1 day ago
    My stupid pelican benchmark proves to be genuinely quite useful here: you get a visual representation of the quality difference between GPT-5.3-Codex-Spark and full GPT-5.3-Codex: https://simonwillison.net/2026/Feb/12/codex-spark/
    • mzl 17 hours ago
      I find it interesting that the spark version seems worse than the gpt-oss version (https://simonwillison.net/2025/Aug/5/gpt-oss/)
    • lacoolj 1 day ago
      These are the ones I look for every time a new model is released. Incorporates so many things into one single benchmark.

      Also your blog is tops. Keep it up, love the work.

  • perdomon 1 day ago
    This has been the industry standard for the last 20 minutes. I can't believe people are still using GPT-5.3-Codex.
    • bonoboTP 12 hours ago
      Kai Lentit is great!
    • sam_goody 1 day ago
      I read this headline and was like, "Ah, look, an announcement by GPT!! That means that Google or Anthropic must have had a release today!"

      And, yup, there is Gemini in item 3!

    • i_love_retros 21 hours ago
      Ha think of all the losers still using codex 5.2
  • linolevan 22 minutes ago
    I've been playing around with it a little bit in a custom harness for research tasks. Pretty solid, nothing revolutionary (outside of speed).
  • jryio 1 day ago
    This is interesting for offloading "tiered" workloads / priority queue with coding agents.

    If 60% of the work is "edit this file with this content", or "refactor according to this abstraction", then low-latency, high-throughput inference seems like a needed improvement.

    Recently someone made a Claude plugin to offload low-priority work to the Anthropic Batch API [1].

    Also I expect both Nvidia and Google to deploy custom silicon for inference [2]

    1: https://github.com/s2-streamstore/claude-batch-toolkit/blob/...

    2: https://www.tomshardware.com/tech-industry/semiconductors/nv...

    • zozbot234 1 day ago
      Note that Batch APIs are significantly higher latency than normal AI agent use. They're mostly intended for bulk work without tight time constraints. Also, GPT "Codex" models (and most of the "Pro" models also) are currently not available under OpenAI's own batch API. So you would have to use non-agentic models for these tasks, and it's not clear how well they would cope.

      (Overall, batches do have quite a bit of potential for agentic work as-is but you have to cope with them taking potentially up to 24h for just a single roundtrip with your local agent harness.)
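
      For reference, queueing work through the Batch API looks roughly like the sketch below - a minimal example using the openai Python SDK, where the model name and the task itself are placeholders (and, as noted, the Codex models aren't offered there):

        import json
        from openai import OpenAI

        client = OpenAI()

        # One JSONL line per request; custom_id lets you match results back later.
        with open("tasks.jsonl", "w") as f:
            f.write(json.dumps({
                "custom_id": "refactor-001",
                "method": "POST",
                "url": "/v1/chat/completions",
                "body": {
                    "model": "gpt-4.1-mini",  # placeholder, not a Codex model
                    "messages": [{"role": "user",
                                  "content": "Rename foo() to bar() in util.py"}],
                },
            }) + "\n")

        batch_file = client.files.create(file=open("tasks.jsonl", "rb"), purpose="batch")
        batch = client.batches.create(
            input_file_id=batch_file.id,
            endpoint="/v1/chat/completions",
            completion_window="24h",  # results can take up to a day to come back
        )
        print(batch.id, batch.status)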

      • Doohickey-d 12 hours ago
          OpenAI has a "flex" processing tier, which works like the normal API but where you accept higher latency and higher error rates in exchange for 50% off (same as batch pricing). It also supports prompt caching for further savings.

        For me, it works quite well for low-priority things, without the hassle of using the batch API. Usually the added latency is just a few seconds extra, so it would still work in an agent loop (and you can retry requests that fail at the "normal" priority tier.)

        https://developers.openai.com/api/docs/guides/flex-processin...
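
          For what it's worth, opting into flex is just one parameter on a normal call - roughly like this sketch with the openai Python SDK (the model name and timeout are illustrative):

            from openai import OpenAI

            # Flex calls can sit in a queue, so a generous client timeout helps.
            client = OpenAI(timeout=900.0)

            resp = client.chat.completions.create(
                model="o4-mini",          # illustrative; flex isn't on every model
                service_tier="flex",      # ~50% off, higher latency, can fail under load
                messages=[{"role": "user",
                           "content": "Summarize this changelog: ..."}],
            )
            print(resp.choices[0].message.content)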

        • zozbot234 11 hours ago
          That's interesting but it's a beta feature so it could go away at any time. Also not available for Codex agentic models (or Pro models for that matter).
    • dehugger 1 day ago
      I built something similar using an MCP that allows Claude to "outsource" development to GLM 4.7 on Cerebras (or a different model, but GLM is what I use). The tool allows Claude to set the system prompt and instructions, specify the output file to write to, and, crucially, list which additional files (or subsections of files) should be included as context for the prompt.

      I've had great success with it, and it significantly speeds up development at fairly minimal cost.
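
      The shape of the tool is roughly the sketch below - hypothetical code, assuming the official mcp Python SDK (FastMCP) and an OpenAI-compatible Cerebras endpoint; the base URL, model id, and tool signature are stand-ins rather than my actual setup:

        from pathlib import Path

        from mcp.server.fastmcp import FastMCP
        from openai import OpenAI

        mcp = FastMCP("outsource")
        # OpenAI-compatible client pointed at Cerebras (URL and model are assumptions)
        worker = OpenAI(base_url="https://api.cerebras.ai/v1", api_key="...")

        @mcp.tool()
        def outsource(system_prompt: str, instructions: str, output_file: str,
                      context_files: list[str]) -> str:
            """Delegate a coding task to a fast remote model and write the result to a file."""
            context = "\n\n".join(f"// {p}\n{Path(p).read_text()}" for p in context_files)
            resp = worker.chat.completions.create(
                model="glm-4.7",  # placeholder model id
                messages=[
                    {"role": "system", "content": system_prompt},
                    {"role": "user", "content": f"{instructions}\n\nContext:\n{context}"},
                ],
            )
            Path(output_file).write_text(resp.choices[0].message.content)
            return f"wrote {output_file}"

        if __name__ == "__main__":
            mcp.run()  # stdio transport by default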

      • cheema33 1 day ago
        Why use MCP instead of an agent skill for something like this when MCP is typically context inefficient?
        • pertymcpert 1 day ago
          MCP is fine if your tool definition is small. If it's something like a sub-agent harness which is used very often, then in fact it's probably more context efficient because the tools are already loaded in context and the model doesn't have to spend a few turns deciding to load the skill, thinking about it and then invoking another tool/script to invoke the subagent.
        • wahnfrieden 1 day ago
          Models haven't been trained enough on using skills yet, so they typically ignore them
          • andai 1 day ago
            Is that true? I had tool use working with GPT-4 in 2023, before function calling or structured outputs were even a thing. My tool instructions were only half a page though. Maybe the long prompts are causing problems?
            • pertymcpert 1 day ago
              They're talking about "skills", which are not the same thing as tools. Most models haven't been trained on the open SKILL spec, and therefore aren't tuned to invoke them reliably when the need occurs.
  • kristjansson 1 hour ago
    I can only hope that Cerebras is able to keep their first party inference product going. It's incredible to run a strong model at interactive latencies for whole results. Routinely just seconds to produce entire files / documents / outputs / …

    https://cloud.cerebras.ai/

  • nikkwong 1 day ago
    > Our latest frontier models have shown particular strengths in their ability to do long-running tasks, working autonomously for hours, days or weeks without intervention.

    I have yet to see this (produce anything actually useful).

    • simonw 1 day ago
      How hard have you tried?

      I've been finding that the Opus 4.5/4.6 and GPT-5.2/5.3 models really have represented a step-change in how good they are at running long tasks.

      I can one-shot prompt all sorts of useful coding challenges now that previously I would have expected to need multiple follow-ups to fix mistakes the agents made.

      I got all of this from a single prompt, for example: https://github.com/simonw/research/tree/main/cysqlite-wasm-w... - including this demo page: https://simonw.github.io/research/cysqlite-wasm-wheel/demo.h... - using this single prompt: https://github.com/simonw/research/pull/79

      • aeyes 1 day ago
        What do you mean? The generated script just downloads the sources and runs pyodide: https://github.com/simonw/research/blob/main/cysqlite-wasm-w...

        There are maybe 5 relevant lines in the script and nothing complex at all that would require it to run for days.

        • simonw 1 day ago
          No, not for days - but it churned away on that one for about ten minutes.

          I don't think I've got any examples of multi-hour or multi-day sessions that ran completely uninterrupted - this one back in December took 4.5 hours but I had to prompt it to keep going a few times along the way: https://simonwillison.net/2025/Dec/15/porting-justhtml/

        • andai 1 day ago
          Maybe so, but I did once spend 12 hours straight debugging an Emscripten C++ compiler bug! (After spending the first day of the jam setting up Emscripten, and the second day getting Raylib to compile in it. Had like an hour left to make the actual game, hahah.)

          I am a bit thick with such things, but just wanted to provide the context that Emscripten can be a fickle beast :)

          I sure am glad I can now deploy Infinite Mechanized Autistic Persistence to such soul-crushing tasks, and go make a sandwich or something.

          (The bug turned out to be that if I included a boolean in a class member, the whole game crashed, but only the Emscripten version. Sad. Ended up switching back to JS, which you basically need anyway for most serious web game dev.)

      • basilgohar 1 day ago
        Can you share any examples of these one-shot prompts? I've not gotten to the point where I can get those kinds of results yet.
        • simonw 1 day ago
          If you look through the commit logs on simonw/research and simonw/tools on GitHub most commits should either list the prompt, link to a PR with the prompt or link to a session transcript.
    • gamegoblin 1 day ago
      I routinely leave codex running for a few hours overnight to debug stuff

      If you have a deterministic unit test that can reproduce the bug through your app's front door, but you have no idea how the bug is actually happening, having a coding agent just grind through the slog of sticking debug prints everywhere, testing hypotheses, etc — it's an ideal use case.

      • nikkwong 1 day ago
        I have a hard time understanding how that would work — for me, I typically interface with coding agents through Cursor. The flow is like this: ask it something -> it works for a min or two -> I have to verify and fix by asking it again; etc. until we're at a happy place with the code. How do you get it to stop from going down a bad path and never pulling itself out of it?

        The important role for me as a SWE in the process is to verify that the code does what we actually want it to do. If you remove yourself from the process by letting it run on its own overnight, how does it know it's doing what you actually want it to do?

        Or is it more like with your use case—you can say "here's a failing test—do whatever you can to fix it and don't stop until you do". I could see that limited case working.

        • woah 1 day ago
          For some reason setting up agents in a loop with a solid prompt and new context each iteration seems to result in higher quality work for larger or more difficult tasks than the chat interface. It's like the agent doesn't have to spend half its time trying to guess what you want
        • gamegoblin 1 day ago
          I use Codex CLI or Claude Code

          I don't even necessarily ask it to fix the bug — just identify the bug

          Like if I've made a change that is causing some unit test to fail, it can just run off and figure out where I made an off-by-one error or whatever in my change.

        • p1esk 1 day ago
          “here's a failing test—do whatever you can to fix it”

            Bad idea. It can modify the code so that the test passes but everything else is now broken.

          • SatvikBeri 1 day ago
            I've heard this said a lot but never had this problem. Claude has been decent at debugging tests since 4.0 in my experience (and much better since 4.5)
        • zem 1 day ago
          it's more like "this function is crashing with an inconsistent file format error. can you figure out how a file with the wrong format got this far into the pipeline?". in cases like that the fix is usually pretty easy once you have the one code path out of several thousands nailed down.
        • vel0city 1 day ago
          You do things like ralph loops.

          https://github.com/snarktank/ralph

            It's constantly restarting itself, looking at the current state of things, re-reading what the request was and what it did and failed at in the past (at a higher level), and trying again and again.
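
            The core of a ralph-style loop is tiny - roughly the sketch below (not the linked project's actual code; the CLI name, flags, and the pytest check are just illustrative acceptance criteria):

              import subprocess

              PROMPT = open("PROMPT.md").read()

              for attempt in range(50):  # hard cap so it can't run forever
                  # Fresh context every iteration: a new non-interactive agent run
                  subprocess.run(["claude", "-p", PROMPT,
                                  "--dangerously-skip-permissions"])
                  # Acceptance criteria: the test suite passes
                  if subprocess.run(["pytest", "-q"]).returncode == 0:
                      print(f"done after {attempt + 1} iterations")
                      break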

      • tsss 1 day ago
        How can you afford that?
      • addaon 1 day ago
        > it's an ideal usecase

        This is impressive, you’ve completely mitigated the risk of learning or understanding.

        • arcanemachiner 1 day ago
          Or, they have freed up time for more useful endeavours that may otherwise have been spent on drudgery.

          I don't discount the value of blood, sweat and tears spent on debugging those hard issues, and the lessons learned from doing so, but there is a certain point where it's OK to take a pass and just let the robots figure it out.

    • XCSme 1 day ago
      Their ability to burn through tokens non-stop for hours, days or weeks without intervention.
      • raw_anon_1111 1 day ago
        You're mixing up OpenAI with Anthropic.

        Anthropic is actually sort of concerned with not burning through cash and charging people a reasonable price. OpenAI doesn't care. I can use Codex CLI all day and not approach any quotas with just my $20 a month ChatGPT subscription.

        I treat coding agents like junior developers and never take my hand off the wheel except for boilerplate refactoring.

    • TheMuenster 1 day ago
      Can I just say how funny this metric is?

      "Our model is so slow and our tokens/second is so low that these tasks can take hours!" is not the advertising they think it is.

    • johnfn 1 day ago
      The other day I got Codex to one-shot an upgrade to Vite 8 at my day job (a real website with revenue). It worked on this for over 3 hours without intervention (I went to sleep). This is now in production.
      • seunosewa 1 day ago
        How did you verify it?
        • girvo 1 day ago
          Just send it bro

          (but honestly for a lot of websites and web apps you really can just send it, the stakes are very low for a lot of what most people do, if they're honest with themselves)

        • johnfn 20 hours ago
          Uhhh, this is my work, so… we didn’t have a SEV? None of our thousands of customers paying us money reported the site was broken?
          • ghosty141 12 hours ago
            I find this absolutely wild. From my experience Codex code quality is still not as good as a human's, so letting Codex do something and not verifying / cleaning up behind it will most likely result in lower code quality and possibly subtle bugs.
    • wahnfrieden 1 day ago
      It worked for me several times.

      It's easy to say that these increasingly popular tools are only able to produce useless junk. You haven't tried, or you haven't "closed the loop" so that the agent can evaluate its own progress toward acceptance criteria, or you are monitoring incompetent feeds of other users.

      • nikkwong 1 day ago
        I'm definitely bullish on LLMs for coding. It sounds to me as though getting it to run on its own for hours and produce something usable requires more careful thought and setup than just throwing a prompt at it and wishing for the best—but I haven't seen many examples in the wild yet.
        • foobar10000 1 day ago
          It needs a closed loop.

          Strategy -> [ Plan -> [Execute -> FastVerify -> SlowVerify] -> Benchmark -> Learn lessons] -> back to strategy for next big step.

          Claude teams and a Ralph Wiggum loop can do it - or really any reasonable agent. But usually it all falls apart on either brittle Verify or Benchmark steps. What is important is to persist the positive lessons learned into a store that survives git resets, machine blowups, etc… Any Telegram bot channel will do :)

          The entire setup is usually a pain to set up - docker for verification, docker for benchmark, etc… Ability to run the thing quickly, ability for the loop itself to add things, ability to do this in worktrees simultaneously for faster exploration - and God help you if you need hardware to do this - for example, such a loop is used to tune and custom-fuse CUDA kernels - which means a model evaluator, big box, etc….
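
          In code, the skeleton is roughly the sketch below (not any particular tool; the agent command, docker services, and lessons path are all stand-ins):

            import datetime
            import subprocess

            PLAN = ["implement fused kernel", "tune block sizes"]  # placeholder steps

            def run(cmd: str) -> bool:
                return subprocess.run(cmd, shell=True).returncode == 0

            def log_lesson(text: str) -> None:
                # Lesson store outside the repo so it survives git resets / machine blowups
                with open("/var/agent-lessons/LESSONS.md", "a") as f:
                    f.write(f"- {datetime.date.today()}: {text}\n")

            for step in PLAN:  # Plan -> Execute -> FastVerify -> SlowVerify -> Benchmark
                run(f"codex exec '{step}'")                    # Execute (agent CLI is a stand-in)
                if not run("docker compose run fast-verify"):  # cheap checks first
                    log_lesson(f"fast verify failed on: {step}")
                    continue
                if not run("docker compose run slow-verify"):  # full / integration suite
                    log_lesson(f"slow verify failed on: {step}")
                    continue
                run("docker compose run benchmark")
                log_lesson(f"completed and benchmarked: {step}")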

          • wahnfrieden 1 day ago
            I do it easily just by asking Codex
        • rcarmo 1 day ago
          well, you can start with https://github.com/rcarmo/go-textile, https://github.com/rcarmo/go-rdp, https://github.com/rcarmo/go-ooxml, https://github.com/rcarmo/go-busybox (still WIP). All of these are essentially SPEC and test-driven and they are all working for me (save a couple of bugs in go-rdp I need to fix myself, and some gaps in the ECMA specs for go-ooxml that require me to provide actual manually created documents for further testing).

          I am currently porting pyte to Go through a similar approach (feeding the LLM with a core SPEC and two VT100/VT220 test suites). It's chugging along quite nicely.

    • mikojan 12 hours ago
      Agreed. I optimistically let it resolve merge conflicts in an old, complex branch. It looked fine at first but was utter slop upon further review. Duplication, wildly unnecessary complexity and all.
    • bitwize 1 day ago
      PEBKAC
    • renato_shira 19 hours ago
      [flagged]
  • raahelb 1 day ago
    Interesting to note that the reduced latency is due not just to the improved model speed but also to improvements made to the harness itself:

    > "As we trained Codex-Spark, it became apparent that model speed was just part of the equation for real-time collaboration—we also needed to reduce latency across the full request-response pipeline. We implemented end-to-end latency improvements in our harness that will benefit all models [...] Through the introduction of a persistent WebSocket connection and targeted optimizations inside of Responses API, we reduced overhead per client/server roundtrip by 80%, per-token overhead by 30%, and time-to-first-token by 50%. The WebSocket path is enabled for Codex-Spark by default and will become the default for all models soon."

    I wonder if all other harnesses (Claude Code, OpenCode, Cursor etc.,) can make similar improvements to reduce latency. I've been vibe coding (or doing agentic engineering) with Claude Code a lot for the last few days and I've had some tasks take as long as 30 minutes.
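
    For intuition on why a persistent WebSocket helps: you pay the connection setup once per session instead of once per turn, and tokens then stream over the open socket. A rough sketch (this is not OpenAI's actual protocol; the URL and message shapes are made up):

      import asyncio
      import json

      import websockets

      async def session():
          # One TCP+TLS handshake for the whole session instead of one per turn
          async with websockets.connect("wss://example.invalid/agent") as ws:
              for turn in ["list the failing tests", "fix the first one"]:
                  await ws.send(json.dumps({"type": "user_turn", "text": turn}))
                  async for raw in ws:  # stream events as they arrive
                      event = json.loads(raw)
                      if event.get("type") == "turn_done":
                          break
                      print(event.get("delta", ""), end="", flush=True)

      asyncio.run(session())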

    • 2001zhaozhao 1 day ago
      This might actually be hard for open source agents (e.g. Opencode) to replicate, barring a standardized WebSocket LLM API being widely adopted.
  • kachapopopow 1 day ago
    Is this the first time one of the big 3 is using Cerebras? I've been waiting for this day...
  • mudkipdev 1 day ago
    Off topic but how is it always this HN user sharing model releases within a couple of minutes of their announcement?
    • casefields 1 day ago
      The account isn’t a normal user. They literally only post stuff like this. Their comments are just official links back to said announcements.
    • sho_hn 1 day ago
      Maybe they set up an agent for it.
      • Squarex 1 day ago
        or a simple cron :)
    • lacoolj 1 day ago
      Google Alerts
  • pdeva1 1 day ago
    This seems closer to 5.1-mini and is tied to a Pro account. GLM 4.7 is available on-demand on Cerebras today [1] and performs better at a lower cost... [1] https://www.cerebras.ai/blog/glm-4-7
  • socketcluster 9 hours ago
    1000 tokens per second. Crazy. I'm wondering what this leads to.

    Imagine the massive amount of software that's going to get built. It will be like reinventing the wheel in a million ways. There will be thousands of alternative internet ecosystems to choose from and each one of them would offer every software system, platform and application that one could possibly need; fully compatible, with data transferable across any application within the same ecosystem. Some ecosystems would facilitate data transfers in and out. Ecosystems would be competing against each other; all different, but ultimately yielding very similar results. The competitive edge of one ecosystem over another would be purely grounded in narrative with no basis in reality because the differences between the best ecosystems would be meaningless. That said, there would also be bad ecosystems where a lot of people may get trapped. Some people would get lost in the junk.

    • alansaber 8 hours ago
      It's cool but TPS count is not a meaningful limiting factor to new software. These small models are also too dumb for QA in complex codebases (for now), but on a future timeline they are super cool. Model distillation and ablation generally is very interesting.
    • jstummbillig 6 hours ago
      I predict this comment will feel very 640k-is-enough in a few years. And by years I mean in 2 weeks.
  • antirez 1 day ago
    The search for speed is in vain. Often Claude Code Opus 4.6, on hard enough problems, can give the impression of acting fast without really making progress, because of a lack of focus on what matters. Then you spin up the much slower GPT-5.3-Codex and it fixes everything in 3 minutes of doing the right thing.
    • mickeyp 1 day ago
      I disagree. This is great for bulk tasks: renaming, finding and searching for things, etc
      • ghosty141 1 day ago
        What Codex often does for this is write a small Python script and execute that to bulk rename, for example.

        I agree that there is use for fast "simpler" models, there are many tasks where the regular codex-5.3 is not necessary but I think it's rarely worth the extra friction of switching from regular 5.3 to 5.3-spark.

    • Aurornis 1 day ago
      I will always take more speed. My use of LLMs always comes back to doing something manually, from reviewing code to testing it to changing direction. The faster I can get the LLM part of the back-and-forth to complete, the more I can stay focused on my part.
    • gizmodo59 10 hours ago
      Codex 5.3 is hands down the best model for coding as of today
    • jusgu 1 day ago
      disagree. while intelligence is important, speed is especially important when productionizing AI. it’s difficult to formalize the increase in user experience per increase in TPS but it most definitely exists.
  • wxw 1 day ago
    Great stuff. People are getting used to agents as the interface for everything, even work as simple as "change label X to label Y". More speed on that front is welcome. The Codex "blended mode" they refer to will be useful (similar to Claude Code bouncing between haiku and opus).

    I imagine it's a win-win. This could significantly help their tokenomics.

    The example showing a plan being generated instantaneously is interesting. Human understanding will end up as the last, true bottleneck.

  • singularfutur 9 hours ago
    Every release they claim it writes production code but my team still spends hours fixing subtle bugs the model introduces. The demos are cherry picked and the real world failure rate is way higher than anyone admits. Meanwhile we keep feeding them our codebases for free training data.
    • cactusplant7374 8 hours ago
      How would that compare to subtle bugs introduced by developers? I have seen a massive amount of bugs during my career, many of those introduced by me.
      • gaanbal 7 hours ago
        it compares... unfavorably, on the side of ai
        • cactusplant7374 5 hours ago
          Not from what I'm seeing. 5.3 codex xhigh is pretty amazing.
  • ttul 1 day ago
    Great move by OpenAI. With coding agents, if you have access to a fast and cheap model, you can afford to let it rip, making lots of mistakes, and iterate until it gets things right. With the right scaffolding (AGENTS.md, SKILLS.md, etc.), a fast and light model can do great things. And when it's done, you can still have the heavyweight model come in to clean up any messes.
    • trillic 5 hours ago
      Planning in Opus 4.6 and letting a fast model rip anecdotally seems to work very well for me. Having Opus be extremely specific with files to edit makes it even better.
    • enraged_camel 4 hours ago
      Except this thing routinely ignores my AGENTS.md instructions. Very unreliable.
  • bothlabs 10 hours ago
    The Cerebras partnership is the most interesting part of this announcement to me. 1000+ tok/s changes how you interact with a coding model. At that speed the bottleneck shifts from waiting for the model to keeping up with it yourself.

    Curious how the capability tradeoff plays out in practice though. SWE-Bench Pro scores are noticeably lower than full 5.3-Codex. For quick edits and rapid prototyping that's probably fine, but I wonder where the line is where you'd rather wait 10x longer for a correct answer than get a wrong one instantly.

    Also "the model was instrumental in creating itself" is doing a lot of heavy lifting as a sentence. Would love to see more details on what that actually looked like in practice beyond marketing copy.

  • freakynit 20 hours ago
    Does this prove Cerebras chips are generic enough to be able to run the most common LLM architectures? Even the proprietary ones?
    • chaos_emergent 13 hours ago
      Not at all; the limitation is the software needed to get the model onto the chip and executing correctly. My bet is that they had an FDE who specializes in the chip implement Spark's architecture on device.
  • capevace 1 day ago
    Seems like the industry is moving further towards having low-latency/high-speed models for direct interaction, and having slow, long thinking models for longer tasks / deeper thinking.

    Quick/Instant LLMs for human use (think UI). Slow, deep thinking LLMs for autonomous agents.

    • hackrmn 10 hours ago
      Like different parts of the brain. Frontal cortex, speech center (in the back), motorics etc.
    • gaigalas 1 day ago
      You always want faster feedback. If not a human leveraging the fast cycles, another automated system (eg CI).

      Slow, deep tasks are mostly for flashy one-shot demos that have little to no practical use in the real world.

      • foobar10000 1 day ago
        I mean, yes, one always does want faster feedback - cannot argue with that!

        But some of the longer stuff - automating kernel fusion, etc. - is just a hard problem. And a small model - or even most bigger ones - will not get the direction right…

        • gaigalas 1 day ago
          From my experience, larger models also don't get the direction right a surprising amount of times. You just take more time to notice when it happens, or start to be defensive (over-specing) to account for the longer waits. Even the most simple task can appear "hard" with that over spec'd approach (like building a react app).

          Iterating with a faster model is, from my perspective, the superior approach. Doesn't matter the task complexity, the quick feedback more than compensates for it.

    • varispeed 1 day ago
      Are they really thinking or are they sprinkling them with Sleep(x)?
  • olvy0 14 hours ago
    I've been using Perplexity for small, fast queries almost exclusively for the last year or so. Their Sonar model is Llama running on top of a Cerebras chip, and it searches the internet at incredible speed. Its results are astonishingly good (for a Llama model), although in more niche areas it still makes mistakes, so in those areas I usually double-check its sources or do an extra ddg search myself.

    Actually I've never used ChatGPT; I went straight to Perplexity after having discovered it. Their free tier is extremely generous (not even requiring an account). Not affiliated.

    The OP currently doesn't look like it will affect that; it seems like OpenAI touts it for agentic coding only, not as an alternative to ChatGPT, although that will probably change.

  • mbm 1 day ago
    Works pretty well as a general-purpose computer. The speed is really enjoyable. Could replace some of my Claude Code use actually. For coding, set to xhigh and use it for personal tools or small projects.

    Example repo that Codex with spark made in about 15 minutes for me since `claude --resume` has been finicky lately: https://github.com/mzxrai/claude-sessions

  • OsrsNeedsf2P 1 day ago
    No hint on pricing. I'm curious if faster is more expensive, given a slight trade-off in accuracy
    • sauwan 1 day ago
      It's either more expensive or dumber.
      • kristianp 22 hours ago
        It will be more expensive because it's running on more expensive hardware, Cerebras. Does it also need to be smaller to fit on a single Cerebras node?
  • behnamoh 1 day ago
    In my opinion, they solved the wrong problem. The main issue I have with Codex is that the best model is insanely slow, except at night and on weekends when Silicon Valley goes to bed. I don't want a faster, smaller model (I already have that with GLM and MiniMax). I want a faster, better model (at least as fast as Opus).

    When they partnered with Cerebras, I kind of had a gut feeling that they wouldn't be able to use their technology for larger models because Cerebras doesn't have a track record of serving models larger than GLM.

    It pains me that five days before my Codex subscription ends, I have to switch to Anthropic because despite getting less quota compared to Codex, at least I'll be able to use my quota _and_ stay in the flow.

    But even Codex's slowness aside, it's just not as good an "agentic" model as Opus. Here's what drove me crazy: https://x.com/OrganicGPT/status/2021462447341830582?s=20. The Codex model (gpt-5.3-xhigh) has no idea how to call agents, smh

    • properbrew 1 day ago
      I was using a custom skill to spawn subagents, but it looks like the `/experimental` feature in codex-cli has the SubAgent setting (https://github.com/openai/codex/issues/2604#issuecomment-387...)
      • behnamoh 1 day ago
        Yes, I was using that. But the prompt given to the agents is not correct. Codex sends a prompt to the first agent and then sends the second prompt to the second agent, but then in the second prompt it references the first prompt, which is completely incorrect.
    • kachapopopow 1 day ago
      That's why I built oh-my-singularity (based on oh-my-pi - see the front page from can.ac): https://share.us-east-1.gotservers.com/v/EAqb7_Wt/cAlknb6xz0...

      The video is pretty outdated now; this was a PoC - I'm working on a dependency-free version.

    • cjbarber 1 day ago
      > In my opinion, they solved the wrong problem. The main issue I have with Codex is that the best model is insanely slow, except at nights and weekends when Silicon Valley goes to bed. I don't want a faster, smaller model (already have that with GLM and MiniMax). I want a faster, better model (at least as fast as Opus).

      It's entirely possible that this is the first step and that they will also do faster better models, too.

      • behnamoh 1 day ago
        I doubt it; there's a limit on model size that can be supported by Cerebras tech. GPT-5.3 is supposedly +1T parameters...
        • joshuastuden 1 day ago
          Um, no. There's no limit on model size for Cerebras hardware. Where do you come up with this stuff?
    • re-thc 1 day ago
      > In my opinion, they solved the wrong problem

      > I don't want a faster, smaller model. I want a faster, better model

      Will you pay 10x the price? They didn't solve the "wrong problem". They did what they could with the resources they have.

  • razighter777 22 hours ago
    I was prepared to see something like a trimmed-down / smaller-weight model but I was pleasantly surprised.

    I was excited to hear about the wafer scale chip being used! I bet Nvidia notices this; it's good to see competition in some way.

  • dascrazy_96 6 hours ago
    The live presentation thing feels gimmicky until you realize most internal demos and standups are already half-improvised anyway.

    Curious how it handles when the speaker goes off-script into something the model has no context for.

  • dalemhurley 1 day ago
    This is a win for agents; speed and intelligence are crucial to the loop. If the time and token cost is small you can iterate many times to correct mistakes.

    Got to wonder why Wall Street is dumping NVIDIA.

    • SamDc73 1 day ago
      I mean, they are only running a small version of Codex. Can they run the full one? Or is the technology not there yet?
  • cjbarber 1 day ago
    It'll be nice when there's smarter routing between models, or easier routing, so some things get sent to the fast model, some get sent to the cheap model, some get sent to the smart model, etc.
  • storus 1 day ago
    Anyone using OpenClaw to manage a bunch of coding agents so that you only set the high-level vision and leave all the prompting, testing, debugging, forking to agents? If yes, how did you glue it all together? Are you using local models? What is the SOTA for what I can run locally with a 512GB M3 Ultra, 2x DGX Spark, 2x RTX Pro 6000 Max-Q in one machine and 1x RTX Pro 6000 WS in another machine?
  • motoboi 20 hours ago
    Two things to pay attention to here. Cerebras can only run a small version of GPT-5.3 - because why else would they run only a smaller model?

    Also, where is gpt-5.3-codex on Azure? Opus 4.6 has been available since launch on both Azure and Google Vertex. Codex is nowhere to be seen.

    • tekacs 19 hours ago
      I mean, is it possible that they could run the full-size model on it, but doing so on the smaller amount of hardware that they have is a worse trade-off for now, and it's better to run more of the smaller model so that it can actually provide capacity to people?
  • jackbuilds 10 hours ago
    As an AI co-founder, this advancement in reasoning is significant. The chain-of-thought improvements could make AI assistants more reliable for complex SaaS automation tasks. I'm curious about the cost-efficiency tradeoffs compared to previous models.
  • kristianp 22 hours ago
    I think there's a chance OpenAI is also testing this on OpenRouter as the stealth Aurora Alpha; responses are extremely fast. I tried it with aider and a small project, and about 10k input tokens and 1k response tokens were processed at around 500 tps.
  • benterix 13 hours ago
    It's a pity that the Cerebras plans are no longer available. Regular tokens are burned almost instantly.
  • mynti 1 day ago
    Going by the rough numbers from the blog post, at ~1k tokens a second on Cerebras it should be right around the same size as GLM 4.7, which is also available at 1k tokens a second. And they say that it is a smaller model than the normal Codex model.
    • Havoc 22 hours ago
      You can't extrapolate model size from speed that way. Architecture differences, load, etc. will screw up the approximation.
  • jbrooks84 11 hours ago
    I'll wait for the non-Spark model to do it more accurately.
    • gizmodo59 10 hours ago
      That’s gpt-5.3-codex released last week
  • alexhans 1 day ago
    When I saw Spark my mind went to Apache Spark and wondered if we were learning all the lessons in orchestration of driver/worker and data shuffling from that space.
  • nusl 1 day ago
    These graphs are really weird. One only shows 30-60% range with the model(s) close to 60%, the other shows 80% but the top model is at 77%.
  • thefounder 13 hours ago
    Are there that many use cases where you need code generated as fast as possible rather than better code at decent speeds?
    • ghosty141 12 hours ago
      I can see it being useful for stuff like renaming files, splitting hpp/cpp files, etc.
      • thefounder 7 hours ago
        Yeah but using a different model for that? Ideally a skilled model that works with the IDE functionality/API would “solve” this issue.
  • dateSISC 21 hours ago
    Isn't this chip having only 44GB of SRAM in total a big limitation on what it can run?
  • hchak 1 day ago
    Cerebras out here catching dubs. Does anyone know if Groq is running DGX Cloud inference or am I tripping?
  • rprend 1 day ago
    Damn, this is the first thing to make me decide to try Codex, as a loyal Claude Code user.
  • throwup238 1 day ago
    Your move, Anthropic.

    (Yes I know they released /fast last week but I'm loving the constant one-upmanship)

  • mpalmer 21 hours ago
    Gemini Flash Lite 3 preview within 10 days now, surely
  • jbellis 1 day ago
    really too bad that the codex models are so tightly coupled to the codex harness as to be useless for everything else

    edit: not useless in an absolute sense, but worse than the vanilla gpt models

    • thehamkercat 23 hours ago
      GPT-5.2-codex or 5.3-codex works pretty well for me in opencode
  • tsss 1 day ago
    Does anyone want this? Speed has never been the problem for me, in fact, higher latency means less work for me as a replaceable corporate employee. What I need is the most intelligence possible; I don't care if I have to wait a day for an answer if the answer is perfect. Small code edits, like they are presented as the use case here, I can do much better myself than trying to explain to some AI what exactly I want done.
    • Havoc 22 hours ago
      Speed is absolutely nice though not sure I need 1k tps
    • vessenes 1 day ago
      Yes, we want this.
  • modeless 1 day ago
    Why are they obscuring the price? It must be outrageously expensive.
    • chaos_emergent 1 day ago
      I think it's a beta so they're trying to figure out pricing by deploying it.
  • jannniii 1 day ago
    This would be interesting if it was an open weights model.
  • deskithere 1 day ago
    Anyway token eaters are upgrading their consumption capabilities.
  • anonzzzies 1 day ago
    Been using glm 4.7 for this with opencode. Works really well.
  • Aeroi 1 day ago
    OpenAI naming is a meme at this point
  • cjbarber 1 day ago
    For a bit, waiting for LLMs was like waiting for code to compile: https://xkcd.com/303/

    > more than 1000 tokens per second

    Perhaps, no more?

    (Not to mention, if you're waiting for one LLM, sometimes it makes sense to multi-table. I think Boris from Anthropic says he runs 5 CC instances in his terminal and another 5-10 in his browser on CC web.)

  • saidinesh5 15 hours ago
    Is it just me or are all the AI players somehow lining up their announcements to be on the same day?

    First the Gemini thing. Now this. (Or vice versa?)

    Is there any reason they're doing this?

    • Zanfa 15 hours ago
      I doubt they coordinate it, but my guess it’s an attempt to undercut their competition. When a competitor announces a new model, immediately announcing your own new and improved model reduces the length of time your competitor can claim theirs is the latest and greatest.
  • DeathArrow 15 hours ago
    I wonder how this compares to GLM-5 on quality and price.
  • joering2 16 hours ago
    With the money they're spending, could it end up being an AIISS - a low-orbit station just for a farm of these chips or the like? Space seems to be the most reasonable place for it. Even at $40 million per trip to space, they can pack one rocket with the whole farm - one side solar panels, the other side heat exhaust, and downlink via laser beam, so to speak. But you get the point.
  • allisdust 1 day ago
    Normal Codex itself is subpar compared to Opus. This might be even worse.
  • TZubiri 20 hours ago
    I know it's an AI company, but once again, stop writing PRs with ChatGPT. I actually read the whole thing and it was mostly repetitions about how the model is fast and how they partnered with Cerebras and the model has speed, and Cerebras helped with the model, and the latency is low and this is a collaboration with Cerebras.

    I can literally feel how the 50 word prompt butter is spread over the 2000 word bread.

  • Computer0 1 day ago
    128k context window!
  • desireco42 1 day ago
    Is it not available in Codex? I think this is fantastic and can't wait to try it; this is exactly the use case I need: something fast that performs based on my instructions.

    Cerebras is a winner here.

    • arpinum 1 day ago
      update codex, it's there.
  • system2 1 day ago
    I stopped using OpenAI tools recently after they increased the censorship. I can't even tell it to read a screencapture software I am building because it thinks I might use it for evil purposes.
  • cactusplant7374 1 day ago
    I was really hoping it would support codex xhigh first.
  • jauntywundrkind 1 day ago
    Wasn't aware there was an effort to move to websockets. Is there any standards work for this, or is this just happening purely within the walled OpenAI garden?

    > Under the hood, we streamlined how responses stream from client to server and back, rewrote key pieces of our inference stack, and reworked how sessions are initialized so that the first visible token appears sooner and Codex stays responsive as you iterate. Through the introduction of a persistent WebSocket connection and targeted optimizations inside of Responses API, we reduced overhead per client/server roundtrip by 80%, per-token overhead by 30%, and time-to-first-token by 50%. The WebSocket path is enabled for Codex-Spark by default and will become the default for all models soon.

  • itsTyrion 23 hours ago
    Now we can produce loveless automated slop 15x faster, I'm excited
  • rvz 1 day ago
    > Today, we’re releasing a research preview of GPT‑5.3-Codex-Spark, a smaller version of GPT‑5.3-Codex, and our first model designed for real-time coding. Codex-Spark marks the first milestone in our partnership with Cerebras, which we announced in January .

    Nevermind. [0]

    [0] https://news.ycombinator.com/item?id=35490837

  • troupo 15 hours ago
    > Our latest frontier models have shown particular strengths in their ability to do long-running tasks, working autonomously for hours, days or weeks without intervention.

    Both OpenAI and Anthropic keep peddling this bullshit when their "frontier models" can barely keep context for 2 minutes on a dozen kLOC project.

  • janlucien 7 hours ago
    [dead]
  • kittbuilds 14 hours ago
    [dead]
  • kittbuilds 22 hours ago
    [dead]
  • builderhq_io 7 hours ago
    [dead]
  • itsTyrion 23 hours ago
    finally we can produce automated slop 15x faster, excited
  • cowpig 1 day ago
    > Today, we’re releasing

    Releasing for real? Is it an open model?

  • thegrim000 14 hours ago
    It feels oddly freeing to be seeing headlines like this every other day on HN and not caring in the slightest. The titles are just amalgamations of random words to me, like 'Claude Super Zen Deep 4.0' or 'Grok Hyper 6.2 Mega'. They come and go. A month from now it'll be new headlines with new numbers and new words. And I still won't care. Not in the rat race, just using whatever chatgpt gives me for free. Just coding how I've always coded.
    • piokoch 14 hours ago
      Sign of the times; this resembles the era when we were racing ahead on processor speed. A two-year-old computer was obsolete (or, in those days, it required an upgrade, as that was still possible...).