In the Future of Voice AI series of interviews, I ask my guests three questions:
- What problems do you currently see in Enterprise Voice AI?
- How does your company solve these problems?
- What solutions do you envision in the next 5 years?
This episode’s guest is Sumanyu Sharma, CEO and Co-Founder at Hamming.ai.
Sumanyu Sharma is the CEO and co-founder of Hamming.ai. A founder with extensive experience as an AI engineer and data scientist, he has consistently built data-driven growth programs that meaningfully impact revenue, including as Head of Data at Citizen and as a Senior Staff Data Scientist at Tesla. Sumanyu splits his time between Austin and San Francisco and holds a Bachelor of Applied Science in Systems Design Engineering from the University of Waterloo.
Founded in 2024, Hamming.ai provides AI developers with automated experimentation, prompting, and call analytics tools to ensure voice AI agents are reliable and resilient. By deploying its own AI voice agents that act like real people, Hamming.ai can place thousands of phone calls to client voice agents simultaneously, identifying bugs faster and more efficiently than manual testing allows. Hamming.ai also offers LLM prompt management for B2B teams, automated voice agent red-teaming to detect vulnerabilities, and call analytics to track how users engage with AI voice agents in production and to flag cases that need attention.
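The core idea behind this kind of automated testing can be sketched in a few lines: drive many simulated callers against the agent concurrently and score each conversation. The snippet below is a minimal illustration, not Hamming.ai's actual harness; `toy_agent`, the test utterances, and the pass/fail heuristic are all hypothetical stand-ins (a real platform would place phone calls and run speech recognition and synthesis).

```python
import asyncio

def toy_agent(utterance: str) -> str:
    # A deliberately fragile stand-in agent: it only understands one intent.
    if "book" in utterance.lower():
        return "Sure, I can book that appointment."
    return "Sorry, I didn't catch that."

TEST_UTTERANCES = [
    "I'd like to book an appointment",
    "Can I, um, maybe book something for Tuesday?",  # disfluent speech
    "BOOK ME IN PLEASE",
    "What are your opening hours?",  # out-of-scope request
]

async def simulate_call(utterance: str) -> dict:
    # In a real harness this would place a call end-to-end;
    # here we just invoke the stand-in agent directly.
    reply = toy_agent(utterance)
    passed = "sorry" not in reply.lower()
    return {"utterance": utterance, "reply": reply, "passed": passed}

async def run_suite(utterances):
    # Launch all simulated calls concurrently, the way a testing
    # platform might place thousands of calls at once.
    return await asyncio.gather(*(simulate_call(u) for u in utterances))

results = asyncio.run(run_suite(TEST_UTTERANCES))
failures = [r for r in results if not r["passed"]]
print(f"{len(failures)} of {len(results)} simulated calls surfaced a failure")
```

Even this toy suite surfaces the kind of weakness discussed below: the agent handles the "happy path" but stumbles on an out-of-scope request.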
Takeaways
Hamming focuses on testing and improving AI voice agents by simulating real-world conditions to identify weaknesses before deployment.
AI voice agents are often tested in unrealistic, controlled environments, leading to shaky real-world performance.
Manually testing AI agents is time-consuming and impractical, making it difficult to scale improvements and leading to unreliable performance evaluations.
Every small change to the AI system requires repeating the same tedious testing process, making it difficult to measure the exact impact of adjustments.
Without an automated testing framework, companies waste time brute-forcing their way through trial-and-error iterations.
AI models fail because they aren't stress-tested with edge cases.
Hamming’s goal is to systematically "break" voice AI models in testing so they don’t break in deployment.
Scaling AI voice agents depends on testing environments that mirror real customer interactions.
Voice AI is getting better, but there’s still a gap between AI understanding structured requests and navigating open-ended conversations.
Current training is biased toward "ideal" speech patterns, limiting effectiveness.
Companies invest heavily in voice AI but underestimate how much testing and iteration is required for real-world deployment.
A big problem with voice AI is that when it misunderstands something, it struggles to recover.
The real test for voice AI isn’t in perfecting small talk but handling interruptions, misunderstandings, and layered requests.
AI agents shouldn't be judged by their best-case performance but by how they handle worst-case scenarios.
Companies without voice interfaces will fall behind as customer expectations grow.
The stakes are higher for voice AI than for chat: if it isn't tested properly, it can fail in visible ways, damaging brands and eroding customer trust.
Ultimately, mistakes from a human-like AI feel worse than chatbot errors.