QuickLogic Blog

The End(point) of AI As We Know It


Artificial Intelligence (AI) has burst onto the consumer scene in a big way over the last few years. Most of us now have smartphones and smart speakers that let us ask natural-language questions or make requests and (at least most of the time) get appropriate, useful responses. This technology is likely to improve rapidly, and soon enough we will all be able to speak to our devices naturally and hold an intelligent conversation with them. Not only will all sorts of information be available to us just by asking, but our AI-based devices will become smart enough to understand language subtleties such as how word context or voice inflection can change the meaning of a spoken communication. Ultimately, they might even come to understand the most complex and “human” aspects of language, such as satire, humor, irony and sarcasm.

Implementing this type of AI (let’s call it “big AI” for reasons that will become apparent in a minute) requires powerful, high-bandwidth networks with a great deal of compute power sitting in the cloud. That’s fine when we humans are interacting with the digital assistants built into our smartphones and smart speakers. However, there is another version of AI that can’t rely on those resources: artificial intelligence for endpoint applications, in particular Internet of Things (IoT) devices living at the edge of the network. Those devices often lack the resources available to big AI, or have stringent requirements that make big AI impractical.
