The dream of having a truly “smart” home has finally started to materialize in a meaningful way. One aspect of the smart home that was underestimated until very recently, though, is the rise of voice-activated systems and their ability to make sense of spoken commands and requests. Speech is a primary mode of communication for nearly all of us, yet few of us expected to control our households that way, at least not any time soon.
That picture has changed dramatically in just the last few years. Machines have become far better at understanding the spoken word, and as a result that ability is showing up in devices around the house. Examples include voice-controlled smart speakers such as Google Home and Amazon’s Alexa-enabled Echo, and voice-controlled cable TV remotes from companies such as Comcast.
However, using voice commands to control all of the devices in your home is more challenging than it might seem at first glance. Consumers want convenience and reliability, and to deliver both, the machines have to be very clever. Humans are good at dealing with ambiguity; machines, not so much. Consider a specific use case: if I ask my spouse to turn off the light, she will most likely understand which light I mean and (assuming she is listening and I have asked nicely enough) actually turn it off.
By comparison, the machine is left to wonder which light I am referring to. Is it the one closest to me, the one closest to it, or another one altogether? I will probably have to help the machine learn that when I speak from a particular room, I mean a certain light. Alternatively, I might have to give a name to each light and every other device I want to control. Either way, my spouse and children will probably have to do the same, at least until our voice-activated controllers can draw on a more advanced form of machine learning. A simple sketch of that kind of room-aware resolution follows.
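To make that concrete, here is a minimal, purely illustrative sketch of how a controller might resolve “turn off the light” using either the speaker’s room or a user-assigned device name. It is not based on any particular product’s API; all identifiers (Device, DEVICES, resolve_device) are hypothetical.

```python
# Toy resolver: map a spoken request for a kind of device ("light") to one
# specific device, preferring a spoken name, then the speaker's room.
from dataclasses import dataclass

@dataclass
class Device:
    device_id: str
    kind: str   # e.g. "light", "thermostat"
    room: str
    name: str   # user-assigned name, e.g. "reading lamp"

# Hypothetical device registry built during setup.
DEVICES = [
    Device("lamp-1", "light", "living room", "reading lamp"),
    Device("lamp-2", "light", "bedroom", "bedside lamp"),
    Device("ceil-1", "light", "kitchen", "kitchen light"),
]

def resolve_device(kind, speaker_room, spoken_name=None):
    """Pick the device the user most likely means.

    1. A device whose user-assigned name matches what was spoken.
    2. Otherwise, a device of the requested kind in the speaker's room.
    3. Otherwise, return None so the assistant can ask a follow-up question.
    """
    if spoken_name:
        for d in DEVICES:
            if d.name == spoken_name:
                return d
    candidates = [d for d in DEVICES if d.kind == kind and d.room == speaker_room]
    return candidates[0] if len(candidates) == 1 else None

# "Turn off the light", spoken from the bedroom, resolves to the bedside lamp.
print(resolve_device("light", "bedroom"))
# "Turn off the reading lamp" works from any room because the name is unique.
print(resolve_device("light", "kitchen", "reading lamp"))
```

The point of the sketch is simply that the system needs some extra signal, either location context or per-device names, before a command as natural as “turn off the light” becomes unambiguous.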
The benefits will be worth the effort, though. Voice is far more convenient than flipping switches, especially when we are carrying something or the switch is hard to reach, or to find in the dark. It is also much more convenient than scrolling through hundreds of options on a smartphone or TV/DVR remote. And voice-activated commands go beyond controlling things: we can check the weather forecast, request a certain type of music, or even ask a device to tell us a joke. Finally, our homes are really starting to get smart, and, unlike our spouses, they will always be listening to what we have to say.
Visit us at the Wearable Technology Show in London, Stand K50. We are demonstrating a voice-enabled home automation system using Sensory’s voice trigger running on QuickLogic’s EOS™ S3 Sensor Processing Solution. One demo shows how user-specific voice recognition makes the user experience more natural; another shows how the everyday experience of using a microwave oven can be simplified.