Very grateful that OpenAI published this article and publicized their usage of Pion [0], a library I work on. If you aren't familiar with WebRTC, it's a super fun space. I also work on a book, WebRTC for the Curious [1], that details how it works.
[0] https://github.com/pion/webrtc
[1] https://webrtcforthecurious.com
Curious if you thought their approach was necessary; it seemed like a ton of complexity to reduce one of the faster parts of a voice AI setup. Having a fast model and accurate VAD seems way more important than fine-tuning WebRTC transit times.
I think it's a case of "you improve what you own." The owners of the WebRTC servers were aggressively improving their part. They don't own the inference servers.
For those unfamiliar with WebRTC, the Pion FAQ page has a good description:
> WebRTC is a standardized protocol for P2P communication. It allows two peers to exchange media and data. It is encrypted by default, and handles connectivity establishment in many different network conditions. It is supported in browsers, and has multiple out of browser implementations.[0]
[0]: https://github.com/pion/webrtc/wiki/FAQ#what-is-webrtc
I read parts of it a while ago when I had an idea of using WebRTC data channels to pass data from databases to browser clients via a CLI. Your book made me understand that it's probably not a great fit for my use case. I just used a centralized control plane and WebSockets instead.
I still feel like there is something fun we could do with WebRTC data channels + zero-copy Apache Arrow ArrayBuffers + DuckDB-WASM, but I haven't figured it out yet.
You can't beat WebSockets :) Especially since you have so much tooling/existing stuff that works with HTTP.
I have been trying to get a website off the ground that does data channels + SQLite in the browser, with users syncing between each other. I have gotten distracted so many times though.
What is preventing the fun is that even though IPv6 is now widely enough available, we still can't have P2P connections in the browser without a cumbersome control plane of servers. If you could join a federation in the browser from some bootstrap IPs, then I think we could have some real distributed fun.
Slightly unrelated, but what's with storing the entire codebase in the root directory instead of a nested src folder? It makes getting to the README a lot more difficult.
To me Go code looks like somebody vomited stuff in the root dir, and I have to wade through that every time. No namespacing, nothing.
I don't like Go as a personal preference, but reducing them to "fanboys" is a bit reductive. I'm sure the same could be said about your own favorite language.
Is it reductive when it's describing a group of people that like something and refuse to hear any ill of it? The comment wasn't shade at people using the language in general.
And you're right, fanboys are in every language community. But resorting to whataboutism to change the argument is a bit reductive.
I'm not a Go fanboy, but I do know from other contexts that so-called "fanboy" behaviour is frequently associated with level-headed supporters getting defensive in the face of imprecise criticism.
There’s an oft-repeated pattern where valid specific criticisms morph into broad criticism, which morphs into judgement, which breeds defensiveness, which feeds the criticism. Once you recognise this pattern, you see it everywhere.
Ok... The question was why it is like that. The answer is: because it's in Go. Nobody was anything other than civil before you neckbearded in here. Chill. There's a sane way to say what you said.
The low latency is more of a pain point than a good thing, the way they have it implemented. When trying to have a casual conversation with it, we naturally pause, as humans do, and GPT takes this as you being "done" and starts blabbing away.
I also struggle to find the word I want as I've gotten older and slower, and this fast-voice GPT just ends up frustrating me more than helping. I have to sit there and think out the whole sentence in my head before I say anything -- not very natural.
I think these are 2 different layers of "latency". The latency in the article is referring to the transport of the audio stream itself while the latency in your scenario is about how quickly to start responding inside the audio stream.
Suppose you have 100ms audio latency and no wait time. Then a natural pause will trigger a response immediately, but you won't notice it has started until ~200ms later (the round-trip time). Twice as annoying.
I think he's saying they took on an insane level of complexity to shave ~100ms off response times in a scenario where that isn't important and might even be a negative.
When GP described reducing conversational latency as a negative, that made sense (and should probably be addressed IMO); it just wasn't the same category of latency the article talks about reducing. I.e., increasing "network latency" just makes the conversation feel more and more out of sync; it doesn't change the rate at which the AI will interrupt ("turn latency"), because the latter is based on the duration of the pause in the audio stream, not the duration it took to deliver that audio stream.
If you meant there is a case where reducing the network latency at the same delivery reliability for a given audio stream is actually a negative then I'd love to hear more about it as I'm a network guy always in search of an excuse for latency :D.
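To put rough numbers on the two knobs, a toy sketch in Go (all values invented for illustration):

```go
package main

import (
	"fmt"
	"time"
)

func main() {
	// "Turn latency": how long the system waits for silence before it
	// decides you are done speaking. Tunable server-side, independent
	// of the network. (Invented value.)
	pauseThreshold := 500 * time.Millisecond

	// "Network latency": one-way transport time for the audio stream.
	// This is what the article is about reducing. (Invented value.)
	oneWay := 100 * time.Millisecond

	// What the user perceives between stopping speech and hearing a
	// reply: the pause has to survive the threshold, plus a round trip.
	perceived := pauseThreshold + 2*oneWay
	fmt.Println(perceived) // 700ms

	// Halving oneWay shaves 100ms off the perceived delay, but it never
	// changes how often you get interrupted; only pauseThreshold does.
}
```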
But you want to be able to interject "hold on…" and have it immediately stop talking when it goes off the rails.
And GP is correctly pointing out that the only negative here (silence waiting latency maybe being too low) is tunable separately from the network latency number.
I want to be able to click the "Stop" button on my earphones remote. I want to be able to interject "woah" or "stop!" or "wait!" or that it would detect that I've inhaled a breath, or that my eyes glazed over. I want the LLM to figure out that every speed setting for its voice output is in "auctioneer" territory rather than "lecturing university professor with tenure and a pension" pacing.
But we won't get any of that, because the prime directive of LLMs is to burn tokens like there's no tomorrow. Burn tokens on a naïve answer without asking clarifying questions. Burn tokens on writing, debugging, and running a Python script or accessing and parsing 10 websites without asking for consent. Burn tokens on half-baked images with misspellings and 31 fingers. Burn tokens arguing "how many 'r's in strawberry?". Burn tokens asking a followup question at the end of every single answer, begging the user to re-engage and burn more tokens.
There is a little red "Stop" control when text output is being produced, at least, but does "Stop" halt everything and throw away the context? Re-prompt from the beginning?
The "maximize tokens burnt" prime directive is not to be found in any system prompt or user documentation. It is seemingly a common feature of the training for any consumer model.
Currently, if I'm using voice for an LLM, I use the voice dictation in the keyboard feature, because then the response is in text. There is no way to prevent "responding in kind" if I query the thing with audio. Or in Swahili.
I’ve also experienced this and it’s really annoying. There is this pressure to keep talking if I’m not done with my thought that feels pretty unnatural at least for me. If I’m searching for the right word, I want the opportunity to find it.
I think the solution is to handle pauses more intelligently rather than having a higher latency protocol. With low latency you can interrupt and the bot can immediately stop rambling.
I often use it while I’m walking and tell it to not respond until I initiate a conversation.
This would be a killer feature for me and something I’ve tried to use on cross-country road trips.
100%. I have to hold the floor by filling the space with "ummmmmmmm.... uhhhh...." which inevitably distracts me from my point altogether. Poor user experience.
Seems like there's a big risk of having that habit leak into human conversation. A lot of people try really hard to train themselves not to add those fillers.
If you're setting this up yourself instead of using a lab's built-in speech functionality, you can run a small LLM in parallel, either a local model or a small one like Haiku, that acts as a gate for either doing TTS on the response or not. Its only job is to decide if the transcription it receives is of someone being done talking or if that person is likely still mid-thought or mid-sentence.
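A minimal sketch of such a gate, assuming a hypothetical local completion endpoint (the URL, payload shape, and DONE/WAIT convention are placeholders, not any real provider's API):

```go
package main

import (
	"bytes"
	"encoding/json"
	"fmt"
	"net/http"
	"strings"
)

// isTurnComplete asks a small, fast model whether the speaker sounds
// finished. Endpoint and payload shape are hypothetical placeholders;
// substitute whatever completion API you actually run.
func isTurnComplete(transcript string) (bool, error) {
	prompt := "Reply DONE if this speaker has finished their thought, " +
		"or WAIT if they are likely mid-sentence:\n\n" + transcript

	body, _ := json.Marshal(map[string]string{"prompt": prompt})
	resp, err := http.Post("http://localhost:8080/v1/complete", // hypothetical gate model
		"application/json", bytes.NewReader(body))
	if err != nil {
		return false, err
	}
	defer resp.Body.Close()

	var out struct{ Text string }
	if err := json.NewDecoder(resp.Body).Decode(&out); err != nil {
		return false, err
	}
	return strings.Contains(out.Text, "DONE"), nil
}

func main() {
	done, err := isTurnComplete("So what I was thinking is, maybe we")
	fmt.Println(done, err) // expect false: the trailing clause looks unfinished
}
```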
I know it's not the perfect solution for you, but I use a voice recorder and send the LLM the transcript. And my god is it working great.
Usually I just explain the things I want it to do. The longest was 30 minutes of rambling, explaining the methods section of a paper in non-chronological order. It worked unbelievably well for me.
I find this is a problem even with human conversations. Some people just aren’t very good at telegraphing when they’ve finished ‘their turn’ talking. Or worse yet, aren’t willing to take turns in the first place.
Knowing when to respond requires semantic understanding, which probably only the model itself is capable of.
Maybe it’s hard for them to train it to only respond once it seems appropriate to do so?
Agreed. It's stressful. I think they need to have an option to adopt a suffix, so they don't start babbling until there is an "over" followed by a pause, like in the old army walkie-talkie days.
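That convention is trivial to check for; a toy sketch (the suffix list is an arbitrary example):

```go
package main

import (
	"fmt"
	"strings"
)

// endsTurn implements the walkie-talkie convention from the comment
// above: a pause only counts as "done" if the user signed off with an
// explicit hand-off phrase.
func endsTurn(transcript string) bool {
	t := strings.ToLower(strings.TrimRight(strings.TrimSpace(transcript), ".!?"))
	for _, suffix := range []string{"over", "go ahead", "your turn"} {
		if strings.HasSuffix(t, suffix) {
			return true
		}
	}
	return false
}

func main() {
	fmt.Println(endsTurn("Book the flight for Tuesday... over.")) // true
	fmt.Println(endsTurn("Book the flight for, hmm"))             // false
}
```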
I am excited for VAD to go away. PersonaPlex totally seems like the future.
However, for things like call-center helplines, turn-based actually seems better! You don't want to be interrupted when giving information back and forth (I think?)
There's a really interesting project in Japanese natural language processing called J-Moshi that had a novel approach and in my opinion good results.
They tried to make it mimic the way Japanese is full of really quick acknowledgement sounds and it seems to allow it to handle those pauses and interruptions really well.
https://en.nagoya-u.ac.jp/news/articles/say-hello-to-j-moshi... (english)
https://nu-dialogue.github.io/j-moshi/ (japanese and english)
I must admit it's a bit weird when LLMs laugh. I don't really know how I feel about that, but it seems to laugh at the right times. Very tangential, but cockatoos have been known to mimic the right time to laugh, presumably based on tonal cues that a joke was just made (I have experienced this first hand with rescue birds who live amongst humans).
Reducing the network latency helps with this exactly. OpenAI can make better-timed decisions about when to begin responding, so it'll feel less like an interruption. I've also seen some research on full-duplex voice models that handle interruption more like an organic conversation, and low latency will help there as well.
This is more of a VAD/turn detection issue. It's gotten a lot better over the last few years, but it's a hard problem. The extra ~100ms of latency makes a huge difference otherwise, especially when you have use cases that require tool calling that can easily add 500ms+ of latency.
If you have tool calling complex enough that it necessitates a higher reasoning level, and you would otherwise have reasoning set to "none", this can easily come out to 500ms.
Hard problem. I find myself adding in filler to stop the thing from jabbering.
I also think it spends most of its IQ on sounding good rather than thinking about the problem. "Yeah absolutely, I can see why you'd like to…" etc. This is likely because it's on a timer, and maybe voice is more expensive to process? Text responses spend more time on the task.
I don’t think it even has reasoning tokens, so it’s no surprise that it’s at most as smart as the “instant” models (i.e., not very).
Strongly agree, some of us like to choose our words more carefully when interacting with an LLM.
I've tried to convey this to OpenAI through various available channels (dev forums, app feedback, etc.).
Grok solves this by having an optional push-to-talk mode, but this is not hands-free and thus more cumbersome than just having a user-configurable variable like seconds_delay_before_sending_voice_input.
With higher latency this would be even more of an issue. When you pause and start talking again, the model wouldn't catch that until it has already interrupted you.
The actual implementation is at fault. I had some luck with instructing the model to only respond with "Mhm" until I've explicitly finished my thought and asked it a question. Makes this much less of an issue.
But I've decided that their voice mode is completely unusable for a different reason: the model feels incredibly dumb to interact with, keeps repeating and re-phrasing what I said, ends every single answer with a "hook" making the entire interaction idiotically robotic, completely ignores instructions when you ask it to stop that, and - most importantly - doesn't feel helpful for brainstorming. I was completely surprised how bad it is in practice; this should be their killer app but the model feels incredibly badly tuned.
Yeh exactly, you cannot get a strong signal that a user is done speaking without some amount of "wait for 500ms of silence". You could kick off processing and abandon it if they continued talking, but that seems over-optimized.
1-2s replies feel natural, and like you pointed out, pausing for 2-3s mid-sentence is super normal.
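For what it's worth, the kick-off-and-abandon idea is only a few lines with Go's context cancellation (the inference call here is just a timed placeholder):

```go
package main

import (
	"context"
	"fmt"
	"time"
)

// speculate starts generating a reply as soon as a pause is detected,
// but the caller can cancel it if the user turns out to still be talking.
func speculate(ctx context.Context) {
	select {
	case <-time.After(800 * time.Millisecond): // placeholder for inference
		fmt.Println("reply ready, play it")
	case <-ctx.Done():
		fmt.Println("user kept talking, reply abandoned")
	}
}

func main() {
	ctx, cancel := context.WithCancel(context.Background())
	go speculate(ctx)

	// Simulate speech resuming 300ms into the pause.
	time.Sleep(300 * time.Millisecond)
	cancel()

	time.Sleep(100 * time.Millisecond) // let the goroutine print
}
```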
Wait a minute... I'm genuinely happy that they are sharing this, but keep in mind that the realtime audio models from OpenAI are still stuck at the 4o family in terms of capabilities, sadly. I still find them so useful; such a pity that there's no real competitor in this segment. Having the experience of a real conversation has helped me so much in expressing ideas and concepts.
Still, it's worth keeping in mind that these are not frontier models, unlike when they were released.
(Please Sam, if you read this, release the new realtime audio models)
Just give me an option to have a slower response but a better model…
Grok voice is surprisingly good, actually. It's still a dumber model than the thinking modes of frontier models, but it's less dumb than the voice modes of other providers.
Yes, the voice part of OpenAI realtime/voice mode is great, but it's pretty dumb compared to newer models and often gets stuck repeating itself.
Google's Gemini Flash Live 3.1 is better, especially used via the API: it can do tool calling (including to other, even smarter LLMs if you set it up yourself), you can set the reasoning level (even high is still close enough to realtime), and it can ground answers in Google Search. I love bidirectional voice, and right now it's probably the best option. You can try it in AI Studio.
Give it a shot, 3.1 live one in AI studio/API and max out reasoning - not the one in Gemini app it’s an older model.
Another option is to use pipecat with their VAD and separate STT and TTS and any (fast) LLM of your choice - but it’s more plumbing and not a true speech to speech model
> Give it a shot, 3.1 live one in AI studio/API and max out reasoning - not the one in Gemini app it’s an older model.
Do you know why this is a thing? Despite the app technically being Gemini, I find it quite crap, while the AI Studio thing with thinking is my favorite LLM. Very jarring tbh.
Haha, wow, I never thought I'd see a voice model that was too quick, but 3.1 live felt like it responded unnaturally quickly! I'm kind of blown away, I'd want to insert a 100ms delay to make it sound more natural, wow. I never thought I'd see that.
Claude voice mode has come a long way! I'd say it's smarter than CGPT AVM last time i tried it.
But personally I've settled on just speaking to the slower models over a custom TTS app; I find being instant was not actually that important, and in the silence I find myself marinating in the discussion more anyway.
Yeah I was quite surprised that the advanced chat gpt voice mode can’t itself go and message the frontier model underneath to retrieve data and then speak it.
I basically tried asking it for that (something like “can you go and ask gpt5.5 to research this more in depth, and while we wait, tell me about XYZ”), but apparently that’s not a thing.
You can feel what is possible using Gemini's speech-to-speech model; it can do tool calls and is very fast. It lacks somewhat in thinking capability, but you can set up a tool call to a smarter model so it acts as a relay. I've been very impressed.
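A rough sketch of that relay pattern (the tool name and both model calls are hypothetical stand-ins):

```go
package main

import "fmt"

// The fast realtime model stays in the voice loop; when a question needs
// real thinking, it emits a tool call and we relay to a slower, smarter
// model.
type toolCall struct{ Name, Arg string }

// handleTool dispatches a tool call from the voice model. askSmartModel
// is a stand-in for your actual client to the bigger model.
func handleTool(call toolCall, askSmartModel func(string) string) string {
	if call.Name == "ask_smart_model" { // hypothetical tool name
		return askSmartModel(call.Arg)
	}
	return "unknown tool"
}

func main() {
	smart := func(q string) string { return "deeply reasoned answer to: " + q }
	out := handleTool(toolCall{"ask_smart_model", "plan my SFU topology"}, smart)
	fmt.Println(out) // the voice model reads this result back to the user
}
```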
> Voice AI only feels natural if conversation moves at the speed of speech […] At OpenAI’s scale, that translates into three concrete requirements: Global reach for more than 900 million weekly active users
Surely the number refers to the total users of ChatGPT overall, and the fraction of those who use voice features is considerably smaller, is it not?
That’s the kind of thing that influences business decisions like knowing how much hardware and software optimization to throw at a problem.
To defend them a little: voice is a little rough around the edges now, so there’s a chicken and egg problem of whether to prioritize improving voice if usage isn’t high partially because it’s clunky.
I wish I had known about Pipecat a lot sooner. I found out about it a few weeks back, and since Gemma 4 launched, I've been building my own entirely local voice assistant using Gemma 4 + Kokoro TTS + Whisper from scratch - https://github.com/pncnmnp/strawberry.
Looks like everyone is building one of these. I have my own little version that uses streaming STT; it can actually be too fast in some cases. I also have a little ring buffer grabbing audio from before the wake-word detection fires (so it can hear "Hey Jarvis, turn on the lights" without a deliberate pause): https://github.com/jaggederest/pronghorn/
https://github.com/zarldev/zarl & https://www.zarl.dev/posts/hal-by-any-other-name
The whole setup works on my M2 MacBook Pro with 16 GB RAM. I use Gemma 4B via LiteRT-LM.
Pipecat's smart turn model is really good for VAD - https://huggingface.co/pipecat-ai/smart-turn-v3
I've found that LiteRT-LM has a much lower DRAM footprint than Ollama. I've also made tons of optimizations in the code - e.g., you can do quite a bit with a 16k context window for a voice assistant while keeping a good footprint, so I keep track of the token usage and then perform an auto-compaction after a while. I use sub-agents and only do deep-think calls with them, so the context window is separated out. In a multi-turn conversation, if Gemma 4 directly processes audio input, the KV cache fills up within a few turns, so I channel it all via Whisper.
Also, by far the biggest optimization is a 3-stage producer-consumer architecture. LiteRT-LM streams tokens and I split them into sentences. A synthesizer thread then converts each sentence to audio via Kokoro TTS, and the main thread plays audio chunks sequentially. There's a parallel barge-in monitor thread. https://github.com/pncnmnp/strawberry/blob/main/main.py#L446
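Their implementation is Python, but the same 3-stage pattern is easy to sketch with Go channels (the token stream and the "audio" stage here are stand-ins):

```go
package main

import (
	"fmt"
	"strings"
)

func main() {
	tokens := make(chan string)    // stage 1: LLM token stream
	sentences := make(chan string) // stage 2: sentence splitter -> TTS input
	audio := make(chan string)     // stage 3: synthesized chunks to play

	// Producer: pretend these tokens are streaming from the LLM.
	go func() {
		for _, t := range []string{"Hello", " there", ".", " How", " are", " you", "?"} {
			tokens <- t
		}
		close(tokens)
	}()

	// Splitter: buffer tokens until sentence-ending punctuation, then
	// hand the full sentence to the synthesizer stage.
	go func() {
		var b strings.Builder
		for t := range tokens {
			b.WriteString(t)
			if strings.ContainsAny(t, ".!?") {
				sentences <- b.String()
				b.Reset()
			}
		}
		close(sentences)
	}()

	// Synthesizer: stand-in for a TTS engine; emits an "audio chunk" per sentence.
	go func() {
		for s := range sentences {
			audio <- "[audio for: " + s + "]"
		}
		close(audio)
	}()

	// Main thread: play chunks in order as they arrive.
	for chunk := range audio {
		fmt.Println(chunk)
	}
}
```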
I did not want to use openWakeWord or Picovoice because they had limitations on which wake word you could choose. The alternative was to train a model of my own. So I created my own wake-word detection pipeline using Whisper Tiny - it works surprisingly well: https://github.com/pncnmnp/strawberry/blob/main/main.py#L143...
Also, I have VAD going with smart turn v3 (like I mentioned above) + I use browser/websocket for AEC + barge-in (https://github.com/pncnmnp/strawberry/blob/main/audio_ws.py).
I'm using the MacBook's built-in microphones for this, though, and I haven't fully tested it with other microphones. I've been ironing out the rough edges on a daily basis. I should write a quick blog on this too.
Check out [0]. You can do 'Voice AI' on small/cheap hardware. It's the most fun you can have in the space ATM :) It's been a while, but I posted a demo here [1].
[0] https://github.com/pipecat-ai/pipecat-esp32
[1] https://www.youtube.com/watch?v=6f0sUEUuruw
If you like Pipecat's focus on speed, you might also try out our open-source alternative, which comes with all the batteries included (knowledge base, telephony/SIP, variables, BYOK any LLM/STT/TTS, speech-to-speech, etc.).
And it's fully OSS, like n8n for voice AI, and you can use it with OpenClaw or Claude Code (recently launched MCPs). GitHub: https://github.com/dograh-hq/dograh, YouTube: https://www.youtube.com/watch?v=sxiSp4JXqws&list=PLDqzGuN7B1...
I wouldn't mind waiting longer for answers that go through a better model with more thinking, as long as it has good support for interrupting, doesn't start answering as soon as I pause for one second, and is smart about knowing when I'm done speaking.
I find OpenAI's speech-to-text model the best of the lot. It can handle my and my 5-year-old daughter's Indian accents pretty well.
I wonder if they run the STT model's output through the current model (the one we're chatting with) as a final pass, since the text seems to be well aligned with the current conversation context.
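If they do, the final pass might look something like this (purely a guess; askModel is a stand-in for an existing chat-completion call):

```go
package main

import "fmt"

// correctTranscript sketches the guessed final pass: give the chat model
// the raw STT output plus recent conversation context and let it fix
// words that only make sense in context.
func correctTranscript(askModel func(prompt string) string, context, raw string) string {
	prompt := "Conversation so far:\n" + context +
		"\n\nRaw transcription (may contain mis-heard words):\n" + raw +
		"\n\nReturn the transcription with errors fixed, nothing else."
	return askModel(prompt)
}

func main() {
	fake := func(string) string { return "Can you profile the Pion ICE agent?" }
	fmt.Println(correctTranscript(fake, "We were discussing WebRTC internals.",
		"Can you profile the peon ice agent?"))
}
```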
For long prompts, I often speak to OAI web/app and copy-paste the text to Claude / Gemini :)
As someone used to podcasts at 3x speed and SAPI text-to-speech at a much higher rate, listening to AI at human speech speed is a chore.
I never use the voice mode in the phone app; it's stuck with 4o for some reason. Same with Claude, which uses Haiku. Why can't they use a better model with thinking disabled?
This is such a good write-up; WebRTC is one of the coolest things ever! It's kinda genius to use the VIP approach. An SFU is also pretty scalable, but now they don't even have to do that.
After all these, I still feel their voice AI interrupts quite a lot, especially when I pause just for 0.5 sec. Interestingly, when I tell it to interrupt less, it seems to be better.
People are very sensitive to timing in conversation. Even if the words are good, a slightly wrong pause or interruption can make the system feel much less intelligent.
It does appear that way. The LiveKit server is not what you would want for this architecture anyway (as they basically say with the SFU discussion), although it does have a lot of useful stuff in the client SDKs.
IMO this probably isn't just about latency. Keeping people in voice gives them training data that text never will. Is that why they were fine going transceiver over SFU and mostly ignoring multi-party?
If a transceiver crashes during a stream, how is the active session recovered? Does the system automatically re-establish the context in a new WebRTC session?
> Global reach for more than 900 million weekly active users
lol, definitely didn't need to know there's 900M weekly users for this post. I mean yeah, there's a lot of users and they serve globally, that's relevant. But this is just pulling out your biggest stat because you can. How many voice users you have would actually be relevant and interesting but, to baselessly speculate on motivation here, might be a number that doesn't add as much fuel to an upcoming IPO as reminding people that you're almost at a billion users does.
I used to use it all the time until about a year ago or so. Its responses are full of filler and the safeguards are really overbearing. It often will just give wrong answers in a way that GPT-5.x does not. I once asked it why a particular celebrity was canceled and it refused to tell me because it may harm me to know what they said!
Fwiw - I found the advanced AI voice feature to be actually detrimental. It's good if you just want a single sentence answer. I've turned it off though when I want a more detailed, structured, considered answer.
Interestingly, that kind of parallels the real world too: if you want a quick and high level answer, talk to someone in person; if you want something detailed and info-dense, get them to write it down.
It's bad enough having to speed-read the waffle of its written answers; even when told to be concise, the thought of having to listen to it waffle on in its smarmy, sycophantic fashion makes me want to reach for the sick bag.
I don't like AI in general, and on YouTube there are soooooo many horrible videos with voice AI. Having said that, I did notice AI has actually worked for some hobbyist-maintained games for the most part. Example: BG2EE (Baldur's Gate 2 Enhanced Edition). Yes, this is a forgotten game, and I actually have background music as audio rather than listen to the dialogue, save for testing it, but for the most part it worked here. So for resource-poor hobbyists, AI is actually not totally useless. On YouTube I find only horribly crap examples. I don't watch any AI-involved videos (if I can spot them; there's so much fake on YouTube these days, and Google does not realise how AI is killing many old users and visitors here).
what i learned from making a webrtc+kubernetes game streaming product:
- openai is wrong. almost all of the issues they described are issues with libwebrtc, not with webrtc, kubernetes, network architecture, etc. the clue was when they said "the conventional one-port-per-session WebRTC model."
- there are no alternatives worth trying. everything else open source in the ecosystem, like pion, coturn, stunner, are too immature.
- libwebrtc is the only game in town.
- they haven't discovered libwebrtc feature flags or how it works with candidates, which directly fix a bunch of the latency issues they are discovering. a correct feature flag can instantly reduce latency for free, compared to paying for twilio-style network traversal solutions.
- 99% of low latency voice END USERS will be in a network situation that can eliminate relays, transceivers, etc. it is totally first class on kubernetes. but you have to know something :)
this is the first time i'm experiencing gell-mann amnesia with openai! look, those guys are brilliant, but there is hardly anyone in the world who is doing this stuff correctly.
Even for clients you have things like libpeer hitting targets that libwebrtc can't.
yes - i used libwebrtc on the backend and, pre-LLM, patched it to work around a lot of the things i discovered that were directly related to low latency AV streaming. pion didn't exist then.
i think the challenge is that pion is an excellent product today. it would benefit me if its innovations were subsumed into libwebrtc, because eventually those innovations will show up in the iOS stack, which is one of the customers that matter to me. it is subjective if it is the MOST important customer, that is my belief and it is probably true of openai, at least until they get their own device out the door.
there can be many, many use cases though! not everything has to be "make the thing for 1b people that has to interact with all the most powerful and meanest businesses on the planet".
When you have hard problems with unclear optimal solutions, taking this approach of a public show & tell will often (always?) solicit lots of interesting ideas the team may not yet have considered :)
Something I noticed is that companies that are vibe-coding their products miss out on the intelligence that (still) only humans can bring to bear. Just the knowledge cutoff alone puts AI at a serious disadvantage in any rapidly changing field.
The problem is the sheer amount of knowledge out there. Particularly when using niche technologies (which WebRTC and Web Audio still are, when measured by how many people develop with them), it is not surprising that AI doesn't have everything available in its responses, unless you specifically ask it about something you already know it should know.
There's a difference between some piece of information being "officially published" and the AIs gaining a sufficient understanding of it.
(Though knowledge cutoffs in practice can be a bit fuzzy.)
Take any popular technology problem that has been around for a few years such as... wrangling Kubernetes with YAML config files. There's probably hundreds of thousands of discussions, source code samples from GitHub, official docs, blogs, bug reports, pull requests, etc... all discussing the nuances, pitfalls, pros/cons, etc. During pre-training the AIs internalise this and can utilise it later.
Now compare this with anything recent and (relatively) obscure, such as the new .NET 10 features, which were first officially published in November 2025, a month before the GPT 5.5 cutoff.
As a human developer, these new language capabilities are on the same "level" for me in my day-to-day work as the features from .NET 9 or .NET 8. Similarly, my IDE has native refactoring and code cleanup support that can take C# code from the previous years and bring it up to the idiomatic style of $currentyear.
The AIs just can't do this, because one single Microsoft release note and one learn.microsoft.com page is nowhere near enough training data! The AI hasn't seen millions of lines of code written with .NET 10, taking advantage of .NET 10 improvements, and hasn't seen thousands of discussions about it. Not yet.
This is a fundamental issue with how LLMs are (currently) trained! Simply moving the cutoff date is not enough.
Human learning is second-order. If I see even the tiniest bit of updated information that invalidates a huge pile of older information, my memory marks everything old as outdated and from that second onwards I use only the new approach.
AI learning is first-order. It has to be given the discussions/blogs/posts that say "Stop using the legacy way, it's terrible! Start using the new hotness"! That, it can learn, but it'll be perpetually behind the rest of us by at least a few years.
Not to mention that, thanks to AI, forums like StackOverflow are dying, so... where is it going to get this kind of training data from in the future!?
AI training needs to switch to "second order", but AFAIK this is an unsolved problem at this time.
> WebRTC + Kubernetes
OpenAI uses Go for the networking implementation for the relays and the services, which makes a ton of sense, instead of something as immature as TypeScript / Node or whatever.
Yet another reason not to consider anything like that for low-latency networking. Golang (or even Rust and C++) is unmatched for this use case.
okay, sure, but one is by microsoft, the other by a 25 year old, and another by rob pike. The one by rob pike is going to be infinitely more mature and thought out than a hacky type system on JS because it isn't his first rodeo
Node.js's initial release was May 27, 2009. Golang's initial release was November 10, 2009. They're different, yes, but it's not like one has decades more maturity behind it than the other.
Shouldn’t, I think - advanced voice is a surprisingly slick feature, and if you’re someone who feels that they can think and speak more naturally than when they think and type, AI voice transcription is kind of huge.
100% .. as a product designer/developer, i use it heavily for early feature ideation .. i’ll do a loose, exploratory back and forth on a long walk .. then pass the transcript to claude to validate and turn into a spec ..