30% might be high. I've worked with two different agent creation platforms. Both require a huge amount of manual correction to work with any real accuracy. I'm really not sure what the LLM actually provides other than some natural language processing.
Before human correction, the agents I've tested were right 20% of the time, wrong 30%, and failed entirely 50%. To fix them, a human has to sit behind the curtain, manually review conversations, and program custom interactions for every failure.
In theory, once it is fully set up and all the edge cases are fixed, it will provide 24/7 support in a convenient chat format. But that takes a lot more man-hours than the hype suggests...
Weirdly, ChatGPT does a better job than a purpose-built, purchased agent.
Coke Zero > Pibb Zero > Dr Pepper Zero > Diet Coke >>> Diet Pepsi
I'm on the last day of a business trip and seem to be deep in Pepsi territory (first time I've ever even seen Starry). Diet Pepsi is OK, but I'm hoping there's a Coke vending machine on the other side of airport security at this small airport.