Why would we be ashamed of that? The early Leisure Suit Larry games are a lot of fun; yeah the humor is crass and low-brow, but that's sort of the charm. It's meant to be silly.
That series is over, and the magical feeling of being in an open-ended fantasy world is really hard to replicate when we're not kids anymore. Loom is another game that gave me that feeling.
But there was one idea in QfG that I wish more games would use. Namely, designing three different solutions for every problem the player is facing. This idea works so well to create a sense of possibility in a game that I don't know why it got forgotten.
I remember waiting for an Uber next to him in SF one night 10+ years ago. This dude must be the son of some mafia boss or have some crazy blackmail material, raising billions for companies that are copies of existing products, where he's the 12th company doing the same thing, never turning a profit and yet raising ever more money. It doesn't make sense otherwise.
If you (like me) are hearing about this for the first time, Bret Taylor is the co-founder.
> Bret is Co-Founder of Sierra. Most recently, he served as Co-CEO of Salesforce. Prior to Salesforce, Bret founded Quip and was CTO of Facebook. He started his career at Google, where he co-created Google Maps. Bret serves on the board of OpenAI.
I think this is generally a good product because businesses that previously had zero phone support can now afford to have something. However, the hard work of actually building out the various workflows and decision trees is not automatic. Previously, a call center employee would receive abuse from a caller for being unempowered to make a decision. Instead, an LLM will perform the same role.
Ideally, businesses will escalate to an empowered human for all undefined parts of the flowchart. In practice, I truly hope it will be better than the current pre-recorded phone tree system that leads to a human following a script.
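The escalation idea above can be sketched in a few lines. This is a hypothetical illustration, not Sierra's actual product logic: the bot completes only intents it has a defined workflow for, and anything outside the flowchart goes straight to an empowered human instead of dead-ending the caller. The intent names and responses are made up.

```python
# Workflows the bot is actually empowered to complete (illustrative names).
DEFINED_WORKFLOWS = {
    "reset_password": "Sent a password-reset link.",
    "check_order_status": "Looked up the order and read back its status.",
}

def handle_call(intent: str) -> str:
    """Resolve a classified caller intent, or escalate if it is undefined."""
    if intent in DEFINED_WORKFLOWS:
        return DEFINED_WORKFLOWS[intent]
    # Undefined part of the flowchart: hand off to a human who can decide,
    # rather than looping the caller back through the script.
    return "Escalated to a human agent with authority to decide."

print(handle_call("reset_password"))
print(handle_call("dispute_charge"))  # not in the flowchart -> human
```

The whole question is whether businesses actually wire up that fallback branch, or leave the bot looping.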
I personally only call support because a fix is not available through an organization's website.
As a tech-literate customer, my willingness to entertain AI chatbot decision trees is rock bottom. I have no patience for trying to find the correct incantation to actually fix something (or for the "before I transfer you to a person, let me try to help you first" routine).
For myself - and admittedly maybe I'm just far out on the long tail of customers - I think these need to be treated like self-driving cars, where 98% of the way there just doesn't cut it for me.
Last time I tried using real-time chat support for a technical issue, I spent 30 minutes explaining my problem to a human only to find out they were a sales rep whose only solution was to sell me more services. Once I said I didn't want that, they transferred me to tech support who gaslit me and left me on read long enough to make my session time out.
I think support channels are just there to deflect customers and not really support anything. An AI bot will have infinite patience for that kind of interaction. Empowerment is never part of the equation.
It's always interesting seeing how HN reacts to AI CX (as someone at a Sierra competitor). Yes, the tech-savvy crowd loves to say how they always ask for a human and love old-school phone trees.
In reality, 50-80% of callers come in with easily answerable questions because they don't know how to navigate the website and prefer to ask in natural language.
The vast majority of callers call in to resolve their issue, and most don't care if they are speaking to a bot because they just want their issue fixed. Agents (if implemented well) are an order of magnitude more effective at resolving issues than a call centre worker who is reading off a script and churns within 9 months.
There's also the second-order effect of making CX cheap. Before, there was a perverse incentive for companies to keep you off support, because each call costs them far more than the value they get from it. If your cost per call drops 100x, you can invest in turning a cost centre into a revenue driver (and a better experience).
Their secret is that they have hordes of fake AI customers who will call into their clients' AI customer support and respond to surveys saying they were extremely happy with the support, so the client has to pay for perfect simulated outcomes.
But even a simple implementation that answers questions can knock out something like 50% of callers, the tech-illiterate ones, at 100x cheaper cost. It's just strictly better economics, and better for those customers.
AI support generally sucks but I actually wouldn't mind if everyone used it for the initial call routing portion. Beats an IVR tree or waiting for someone to just redirect your call to the real queue.
I respectfully disagree with the initial routing point. I very strongly prefer a traditional tree to “I’m your voice assistant! In a few words, tell me how I can help!”.
The tree is structured and gives me an immediate sense of how to map my task to the support offering. If I’m calling, I probably have an issue that I can’t self-serve resolve via the customer portal or whatever, so walking the tree lets me get an idea of who can help.
The “voice assistant” gives me no sense of what the system is capable of or how to take advantage of those capabilities. So I’m left guessing at phrases or functions based on the assumption that there’s still some kind of tree-like structure that’s been abstracted away. Same outcome, more cognitive overhead, plus I usually have to shout in my best William … Shatner … impression to get it to understand me.
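The structural point above can be made concrete. In a traditional IVR tree the caller sees the full option set at every level, so mapping a task onto the system is mechanical. A hypothetical sketch (the menu labels and layout are invented for illustration):

```python
# Each node maps a keypress to (label, subtree); None marks a leaf queue.
IVR_TREE = {
    "1": ("Billing", {
        "1": ("Dispute a charge", None),
        "2": ("Update payment method", None),
    }),
    "2": ("Technical support", {
        "1": ("Internet outage", None),
        "2": ("Equipment return", None),
    }),
}

def options(node: dict) -> list[str]:
    """List the choices the caller is explicitly told at this level."""
    return [f"Press {key} for {label}" for key, (label, _) in node.items()]

def walk(node: dict, presses: list[str]) -> str:
    """Follow a sequence of keypresses down the tree to a queue."""
    label = None
    for key in presses:
        label, node = node[key]
    return label

print(options(IVR_TREE))
print(walk(IVR_TREE, ["2", "1"]))  # Technical support -> Internet outage
```

A free-form voice assistant hides `options()` from the caller entirely; the caller has to reverse-engineer the tree by guessing phrases, which is the cognitive overhead the comment describes.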
I broadly agree, though I've noticed it seems to be getting a bit better. I hate how patronizing pretty much every LLM tends to be, but at least the AI support is now better at figuring out what I actually want.
That said, my life hack for getting these things to escalate to a human is to just keep saying or typing curse words. Usually that triggers a "connect to a human" flow. I can't promise it will always work, but I can say it has worked every time I have tried it.
Voice agents for customer support are an extremely crowded market. Sierra seems to be taking a considerable lead.
I don't know much about their product offerings, but I was doing some speech-to-text work and came across https://research.sierra.ai/mubench/ for comparing current models. It felt fairly thoughtful, particularly in regards to coming up with better benchmarking metrics than word error rate.
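For context on the baseline metric that benchmark tries to improve on: word error rate is just Levenshtein edit distance between word sequences, normalized by reference length. A minimal textbook implementation (this is not mubench's code, just the standard definition):

```python
def wer(reference: str, hypothesis: str) -> float:
    """Word error rate: word-level edit distance / reference word count."""
    ref, hyp = reference.split(), hypothesis.split()
    # dp[i][j] = edits needed to turn ref[:i] into hyp[:j]
    dp = [[0] * (len(hyp) + 1) for _ in range(len(ref) + 1)]
    for i in range(len(ref) + 1):
        dp[i][0] = i
    for j in range(len(hyp) + 1):
        dp[0][j] = j
    for i in range(1, len(ref) + 1):
        for j in range(1, len(hyp) + 1):
            sub = 0 if ref[i - 1] == hyp[j - 1] else 1
            dp[i][j] = min(dp[i - 1][j] + 1,        # deletion
                           dp[i][j - 1] + 1,        # insertion
                           dp[i - 1][j - 1] + sub)  # substitution / match
    return dp[-1][-1] / len(ref)

print(wer("the cat sat", "the cat sat"))     # 0.0
print(wer("the cat sat", "a cat sat down"))  # ~0.667: 1 sub + 1 insertion over 3 words
```

The usual criticism, which metrics like the ones on that page try to address, is that WER weights every word equally, so "don't" vs "do" costs the same as a trivial article mistake.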
It’s interesting that the example interaction they use on their homepage is a no-friction example that can be handled without an AI chatbot. Why not something more complex that properly demonstrates the value?
1. https://preview.redd.it/remember-sierra-games-1979-2008-they...
https://www.youtube.com/watch?v=IMQi7olp-tw
1. https://en.wikipedia.org/wiki/Leisure_Suit_Larry
One of the most beautiful game logos, going back to the early nineties.
* https://en.wikipedia.org/wiki/Sierra_Entertainment
There are 26 letters and millions of words; people should choose other ones.
EDIT: holy shit I stand corrected: https://en.wikipedia.org/wiki/Lode_Runner
their moat is distribution
They seem to be a "for pricing, let's go play C-level golf" type of company.
> Ensure you only pay for the value Sierra delivers with outcome-based pricing.
Yeah... that won't last.