If you’re like about half of adults in the U.S., you use a digital voice assistant, whether it’s on your phone, tablet, or another device such as an Amazon Echo. But who’s listening to what you say, and what are they doing with that information?
Most of the time, digital voice assistants use artificial intelligence (AI) to decipher your requests. Knowing that, people expect that machines are listening to them once they’ve given a “wake up” cue. What they may not realize is that companies retain recordings of what they say, even when they’re not directly talking to their device, and that actual humans at Amazon, Apple, and Google might also be listening in. People have even less reason to expect that services beyond voice assistants are recording and reviewing their statements, yet Facebook has reportedly “paid contractors to transcribe audio clips from users of its Messenger service.”
Tech companies defend these practices as necessary to the development of voice-recognition and virtual assistant AI systems. Still, having people, in effect, eavesdropping on daily conversations — important though it may be for algorithm performance — raises a number of questions. What do companies do with that data? Do they sell it to other companies? How do they protect it from unauthorized access, and how long do they keep it? How identifiable are individual users in their voice recordings? How often are voice-activated devices actually recording — and do users know when they’re being recorded? How many of those recordings are listened to by people rather than computers? These concerns aren’t hypothetical. At least 1,000 Google recordings were leaked this year, many of which contained enough information to identify individual users.
The leaked recordings weren’t merely innocuous requests for weather forecasts or reminders; they included “bedroom conversations, conversations between parents and their children, … blazing rows and professional phone calls containing lots of private information.” Of those recordings, 153 “were conversations that should never have been recorded,” as the trigger command was never given. Nor are leaks the only concern; “last year, a bizarre and exceedingly complex series of errors on behalf of Alexa ended up sending a private conversation to a coworker of the user’s husband.”
In response to the news reports about humans listening to digital assistant recordings, Facebook, Google, and Apple have, at least temporarily, ceased the practice. Amazon has enabled a setting that allows people to delete their recordings. Of course, to delete recordings, people have to know that there are recordings in the first place — and one of the central criticisms has been that companies aren’t doing enough to inform customers about their data practices. How should companies advise users about the data they’re collecting and what they’re doing with it? Will people stop using voice assistants due to concerns about privacy? Only time will tell.