They state that they only turn on “when you say the special phrase.”
But in order to do that, they have to be always listening and parsing what you say.
And in order to pay for that processing time, it's getting processed for any data they can sell ads on.
That's not necessarily true. Detection of the trigger phrase is simple enough that it can be done locally. If they are sending all your audio to their servers, it's not because they need to.
It drives me crazy that people insist they're sending a constant audio stream somewhere for nefarious purposes without any evidence. From a networking perspective, this is knowable information.
Publicly they’ve stated that it does that.
However, it wouldn't be the first time Apple, Amazon, and particularly Google have lied.
It’s verifiable, you can observe the connections it makes.
Admittedly, you can't see the contents of the packets themselves, but you can still easily tell whether it's doing anything close to sending a constant stream of audio.
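To make "you can tell easily" concrete, here's a back-of-the-envelope sketch. The 8 kbit/s floor for compressed voice is an assumption (it's roughly the lowest usable Opus voice bitrate; real codecs and devices vary), but the point stands: continuous audio has a hard minimum rate that idle telemetry sits far below.

```python
# Rough sanity check: is a device's sustained upstream traffic anywhere
# near what a continuous audio stream would need? Even heavily
# compressed voice (assumed here: ~8 kbit/s) has a hard floor.
VOICE_BITRATE_BPS = 8_000  # assumed lowest usable compressed-voice bitrate

def looks_like_audio_stream(bytes_sent: int, seconds: float) -> bool:
    """True if the sustained upstream rate could carry continuous voice."""
    observed_bps = bytes_sent * 8 / seconds
    return observed_bps >= VOICE_BITRATE_BPS

# A speaker idling at ~2 KB/min of keepalives and telemetry:
print(looks_like_audio_stream(2_000, 60))    # False
# Streaming even low-bitrate audio (~60 KB/min):
print(looks_like_audio_stream(60_000, 60))   # True
```

The byte counts per interval are exactly what a capture tool like Wireshark shows you, even when the payload itself is encrypted.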
Assuming that they parse everything locally, which appears to be the case, why would it have to send a constant stream of audio? A small list/packet of keywords, a few bytes or KB once a day, would suffice for most telemetry needs (including ad analysis and other possible spying).
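To put a number on "a few bytes or KB once a day": a minimal sketch of what such a hypothetical daily payload might look like (the keywords and counts are made up; the only point is the size).

```python
import json
import zlib

# Hypothetical daily telemetry: hit counts for a tiny watchlist of
# advertiser keywords. The mechanism is invented; only the size matters.
daily_hits = {"cola": 3, "sneakers": 1, "pizza": 7}

# Serialize and compress the day's report.
payload = zlib.compress(json.dumps(daily_hits).encode())
print(len(payload) < 100)  # True: well under 100 bytes for the whole day
```

A payload that small is trivially lost in the noise of routine update checks and diagnostics, which is exactly why traffic *volume* alone can't rule this particular scenario out.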
Also, one ought to be able to see the contents of the packets by retrieving the device's SSL key for the session, so this should also be falsifiable.
Most of the Google Home speakers do not have the processing capacity for true local processing.
Local processing, in the context of a smart home speaker, means searching for a certain trigger keyword and nothing else; that doesn't require much oomph locally.
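A toy illustration of why spotting one keyword is cheap: slide a stored template over incoming samples and fire on high normalized correlation. Real devices use small neural keyword spotters rather than raw correlation, but the compute budget is similar in spirit: a fixed, small amount of math per frame, no network required.

```python
import math

def correlate(window, template):
    """Normalized correlation between an audio window and a template."""
    dot = sum(w * t for w, t in zip(window, template))
    norm = (math.sqrt(sum(w * w for w in window))
            * math.sqrt(sum(t * t for t in template)))
    return dot / norm if norm else 0.0

def detect(samples, template, threshold=0.95):
    """True if the template appears anywhere in the sample stream."""
    n = len(template)
    return any(correlate(samples[i:i + n], template) > threshold
               for i in range(len(samples) - n + 1))

template = [0.1, 0.9, -0.4, 0.7]   # stands in for a learned wake pattern
audio = [0.0, 0.02, 0.1, 0.9, -0.4, 0.7, 0.01]
print(detect(audio, template))     # True: the pattern appears at offset 2
```

This is why even weak embedded chips handle wake-word detection: the per-sample cost is constant and tiny.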
A system like the one you describe is totally possible, but not with the hardware you find in the average smart speaker, so a constant stream of audio would need to be sent off to the cloud somewhere.
Also, yeah, it's not impossible to drop in on an SSL connection, but the embedded nature of the speakers makes it a bit more difficult.
Thank you for the explanation, though the underlying requirements for keeping a list locally appear to remain much the same: you really only need to add a few trigger words (such as your top 1,000 advertisers' company or product names) to the "dumb, always-on" local parser. After all, I'd imagine we don't require context; we only need to know whether a word was said or not, not unlike listening for the "real" trigger word.
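Context-free "was it said?" data really is almost free to store. As a back-of-the-envelope sketch (the 1,000-word watchlist is the hypothetical from above): presence/absence is one bit per word, so the whole day's answer fits in a 125-byte bitmask.

```python
# Presence/absence for 1,000 hypothetical watchlist words: one bit each.
NUM_KEYWORDS = 1_000
heard = [False] * NUM_KEYWORDS
heard[42] = True                        # pretend keyword #42 was spotted

# Pack the booleans into a bitmask, one bit per keyword.
bitmask = bytearray((NUM_KEYWORDS + 7) // 8)
for i, hit in enumerate(heard):
    if hit:
        bitmask[i // 8] |= 1 << (i % 8)

print(len(bitmask))                     # 125 bytes covers all 1,000 words
```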
This is of course only one of many ways to attack such a problem, and I do not know how they would ultimately do it, assuming they were interested in listening in on their users in the first place.
And yes, embedded devices are slightly harder to fiddle with than using your own computer, but I’d bet that they didn’t actually take the time to make a proper gate array and instead just use some barebones Linux, which most likely means UART access!
If they were constantly recording and sending that data home, it would have been noticed very quickly; all it takes is one nerd running Wireshark.
They process locally. You can watch their traffic: there’s very little going out besides their own diagnostics.
So you pay for the processing with your own electricity
Yes, that is how I would much rather my computers work and, in fact, how they have historically done so.
Yeah but that’s in contrast to OP above saying that the companies have to pay for processing with ads.
No dude, they don’t send shit to the cloud to process. It just stores like 5 seconds of voice locally and listens for the wake word. This is why you can only choose a few wake words and not pick anything arbitrary. I’m all for criticizing big tech, but don’t lie about how it works.
Edit: Small correction, they of course send the buffer and begin properly recording once it detects the wake word. Locally, it can only detect "Alexa" and any other wake words it can respond to.
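The "stores like 5 seconds locally" part is just a ring buffer. A minimal sketch (the 16 kHz sample rate and 5-second window are assumptions; actual devices vary): old samples fall off the back automatically, and nothing leaves the buffer until the wake word fires.

```python
from collections import deque

SAMPLE_RATE = 16_000        # assumed sample rate; actual devices vary
PREBUFFER_SECONDS = 5       # assumed pre-buffer length

# Ring buffer holds only the last ~5 s of audio; appending past maxlen
# silently discards the oldest sample.
prebuffer = deque(maxlen=SAMPLE_RATE * PREBUFFER_SECONDS)

def on_sample(sample, wake_word_detected):
    """Buffer one sample; on wake word, hand back the buffered lead-in."""
    prebuffer.append(sample)
    if wake_word_detected:
        captured = list(prebuffer)   # this is what would get uploaded
        prebuffer.clear()
        return captured
    return None

# Feed 10 s of fake audio; trigger the wake word on the final sample.
for i in range(SAMPLE_RATE * 10):
    result = on_sample(i, wake_word_detected=(i == SAMPLE_RATE * 10 - 1))

print(len(result) / SAMPLE_RATE)     # 5.0: only the last 5 s survive
```

This is also why it can prepend the moment just *before* the wake word to the recording it sends: that lead-in was already sitting in the local buffer.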
Yeah, right? It's technology, not magic. Anyone can monitor the traffic from a device on their network, and if it were sending a significant amount of data when not activated, every third-party security researcher would know within minutes. It would be well publicized by respected security research organizations if these devices were constantly sending voice data.
I use Alexa, but only via the touch button. Still easy and convenient, and less "always listening".
I know there will be a comment about how they're already always listening; I choose not to believe that because I haven't given up on the world yet. 😑
Edit: though I must admit, I take precautions at times!
Publicly, they state it is a rolling 5-10 second analyzer, and nothing gets recorded until you say the word.
Allegedly, the processing to listen for the activation phrase is done locally.
Not just allegedly: verifiably. Simple enough to check with Wireshark.
I don’t have one of those devices, and didn’t want to exclude the possibility that it was “chatty” enough with its server (checking for updates etc.) that a speech analysis request couldn’t be hidden within the noise.
Fair enough. I've never checked myself because I'm also not interested in having that sort of thing, but I've read a few blog articles by people who have.