Kathleen Martin
Guest
Lately, I’ve been reporting more and more on algorithms that parse incoming data locally in order to make some sort of decision. Last week, for example, I wrote about a company doing voice analysis locally to detect Alzheimer’s. I’ve also covered startups that process machine vibrations locally to detect equipment failures. Each of these examples relies on machine learning (ML) algorithms that run on microcontrollers, or what we call TinyML.
Running machine learning algorithms locally helps reduce latency. That means a burgeoning machine problem can be detected and the machine turned off quickly if needed. Running machine learning algorithms locally also protects privacy, something especially important in the medical sector. Indeed, I would prefer that neither Google nor Alexa be aware if I develop Alzheimer’s.
But as companies push sensitive and essential algorithms out to the edge, ensuring they perform as intended becomes critical. That’s why I spent time this week learning about the security risks facing our future sensor networks running TinyML.
Continue reading: https://staceyoniot.com/how-can-we-make-tinyml-secure-and-why-we-need-to/