
The End(point) of AI As We Know It


Artificial Intelligence (AI) has burst onto the consumer scene in a big way over the last few years. Most of us now have smartphones and smart speakers that let us ask natural-language questions or make requests and (at least most of the time) get appropriate, useful responses. This technology is likely to improve rapidly, and soon enough we will all be able to speak to our devices naturally and hold an intelligent conversation with them. Not only will all sorts of information be available to us just by asking, but our AI-based devices will also become smart enough to understand language subtleties, such as how word context or voice inflection can change the meaning of a spoken communication. Ultimately, they might even come to understand the most complex and “human” aspects of language, such as satire, humor, irony and sarcasm.

Implementing this type of AI (let’s call it “big AI” for reasons that will become apparent in a minute) requires powerful, high-bandwidth networks with a lot of compute power sitting in the cloud. That’s fine when we humans are interacting with the digital assistants built into our smartphones and smart speakers. However, there is another version of AI that can’t use those resources. That version is artificial intelligence for endpoint applications – in particular, Internet of Things (IoT) devices living at the edge of the network. Those devices often lack the resources available to big AI, or have stringent requirements that make big AI impractical.

For example, an endpoint IoT device might have a low-bandwidth or intermittent connection to the internet, or it might not be connected at all for security reasons. It might need its AI to respond quickly, to be extremely reliable, to operate on a limited power budget, or to be very low cost. Furthermore, endpoint applications generally don’t have large data sets available for training, because the applications are often fragmented and highly variable. These limitations and requirements call for a different type of artificial intelligence – one that is local, fast, reliable, low power, low cost and easy to train, but that still has a significant level of reasoning capability.

At QuickLogic, we’ve just launched a new platform initiative called QuickAI™ to address this need. In creating the platform, we’ve partnered with best-in-class solution providers to ensure that we can deliver all of the elements necessary to fully support endpoint IoT requirements. These providers include General Vision, SensiML and Nepes. By combining their technology and products with our EOS™ S3 voice and sensor processing platform and the QuickAI Hardware Development Kit (HDK), we’ll collectively give endpoint IoT developers everything they need to quickly build a very capable AI solution with local processing support, high performance, high reliability, low power consumption and low cost. This platform will bring the vast potential of AI out to the very edge of the network, and we expect an explosion of new endpoint IoT applications as a result.

To sort of paraphrase an old R.E.M. song, it’s the end(point) of AI as we know it and we feel fine. Very fine, actually – and very excited to see what the future has in store for AI at the edge.

Webinar: Artificial Intelligence (AI) & Cognitive Sensing at the Endpoint