A screenshot from a music video of the Evan Greer song, “Surveillance Capitalism,” which tackles the dangers of commercial surveillance technology.
Sometimes fighting the excesses of the creeping surveillance economy is done through position papers, coalition building and lawsuits. Other times, it’s done with sick guitar riffs and impassioned lyrics about digital freedoms.
The latter approach is how one activist, Evan Greer of Fight for the Future, has chosen to protest a recently approved patent filed by music streaming service Spotify for an artificial intelligence-based voice recognition system. The system, as described in the patent, is designed to suggest songs based on a user’s voice, mood, emotional state, gender and other variables. Such capabilities, Greer and other activists have warned, raise significant moral, privacy and security questions.
The controversy spotlights a challenge faced by some of the most tech-savvy companies: how to walk the line between innovation that serves the innate desires of consumers, and violation of their rights to information security and privacy.
Fight for the Future and other digital rights non-profits like Access Now are part of a campaign to pressure Spotify into making a public commitment not to develop or implement the technology. The organization kicked off its campaign with a new petition website to mobilize opposition, and Greer, a technology privacy activist and musician, released a music album titled “Spotify is Surveillance” that includes “Surveillance Capitalism,” a companion song and music video to the campaign.
“Our concern is not ‘Hey patch this up.’ We fundamentally believe a music app listening to you rather than you listening to it is a bad idea,” said Greer, later adding: “one of the biggest risks associated with features like this is the way that they normalize this idea of surveillance as convenience, or normalize the process of handing over incredible amounts of personal information to private companies who pinky swear to protect us.”
Spotify’s patent, approved by the federal government in January, outlines a system designed to process audio signals with speech recognition software and use artificial intelligence to help the company’s algorithm glean a user’s mood to make more customized music recommendations. In addition to audio, the system would retrieve “content metadata corresponding to the speech,” such as the user’s emotional state, gender, age and accent. So, for example, a user whose speech suggests a jubilant emotional state may be served up more happy pop songs as a result, while someone with church bells ringing in the background might find themselves listening to more Gospel music.
A ‘creepy’ idea
Anxieties around potential abuse or misuse of such a system are numerous and varied, from concerns that it might use junk science principles to discern the emotional state of users to worries that it could lead to a bloated dataset that could make the company a more attractive target for criminal or nation state hackers.
In a letter sent to Spotify last week, digital rights group Access Now groups the potential abuses into four buckets: that such a technology could be leveraged to emotionally manipulate users into using the product more frequently; that it could lead to gender discrimination or misidentifying transgender and non-binary people based on their voice; that it would necessitate intrusive and invasive monitoring of customers who may not grasp the potential implications of giving their biometric data to an app; and that a massive dataset of user voices could be stolen by malicious hackers or folded into existing government or law enforcement surveillance efforts.
In an interview, Greer told SC Media that Spotify’s voice recognition patent represents a dangerous line that corporations shouldn’t cross when it comes to monitoring and collecting the biometric data of their customers.
The system would also collect “environmental metadata corresponding to the background noise,” like the user’s location and physical environment and whether they are alone, with a small group or in a large party. That means that Spotify would likely need to create and collect data points that go far beyond audio conversations, such as religious affiliation or favorite movie genres picked up through background noise.
Such a tactic could create a potentially massive dataset of unique information that could be attractive to numerous third parties. The company has faced numerous credential stuffing attacks recently, though those attacks tend to target user accounts and data, not internal Spotify systems or datasets. It’s also not clear how such a dataset might be leveraged for advertising purposes.
“Harvesting this kind of data could make Spotify a target for third parties seeking information, from snooping government authorities to malicious hackers,” wrote Access Now policy analysts Isedua Oribhabor, Jennifer Brody, Eric Null and Daniel Leufer. “Without strong security protections in place, people’s privacy will likely be even more compromised.”
Spotify did not respond to multiple requests for comment by SC Media for this story, but gave a statement to music news site Pitchfork earlier this year stating that the company “has filed patent applications for hundreds of inventions,” that “some…become part of future products while others don’t” and that they did not have “any news to share at this time” about plans for incorporating the technology.
Right now, it’s just an idea in a patent, making it difficult to fully glean how a system would work in practice, which capabilities would ultimately be developed and what safeguards or policies may be put in place to curb the potential for abuse.
However, Greer said that for such a system to work as intended, it would likely need to be listening to users for long periods of time or be “always on” in order to collect enough data and catch contextual clues to help Spotify’s algorithm make the kind of decisions outlined in the document. To her and other digital privacy advocates, that’s bad enough alone to junk the program.
“Biometric collection is like lead paint: it is something that is never going to be safe, and we need more than regulation or [promises] from companies,” she said.
Rock Against the Machine
That passionate stance is partly what led Greer to write “Surveillance Capitalism” and tie it to the campaign. The song, which doesn’t mention Spotify by name, tackles the dangers of commercial surveillance technology more generally, interspersing audio from speeches by well-known privacy activists like Chelsea Manning and Malkia Cyril between lyrics like “once consent was manufactured, now it’s harvested for clicks; algorithms make decisions, filter bubbles make us sick” and “we don’t want to be seen, but behind the screen, there’s a nightmare dressed up as a dream.”
Greer said the origin of the album and song was “very much a product of the [coronavirus] quarantines.” With everything – including music concerts and shows – shut down and plenty of free time, she bought a microphone and started a garage band. She described the style as “a bit of a 90s indie punk vibe” with lots of layers of guitar.
While Greer has always looked to infuse her music with her activism, she also wanted it to stand on its own artistic merit.
“With this song particularly, it just sort of came together very naturally and I’m obviously immersed in these issues from my full-time work,” she said. “But a lot of times I try not to write [music] that’s too on the nose. I didn’t want to write something that was basically one of my op-eds turned into a punk song. I wanted to actually create a piece of art about this issue.”
She said proceeds from the song will be donated to the Union of Musicians and Allied Workers, which is itself embroiled in a campaign to force Spotify to pay artists more for the music streamed through its platform, and which, as a group, would experience significant financial impact from even small tweaks to Spotify’s algorithm. The choice also reflects Greer’s belief that technologies like the one proposed by Spotify threaten to permanently alter, in negative ways, how humans have traditionally consumed music.
“For me as a musician, I’m just terrified to think about a world where music becomes popular based on emotional surveillance and data harvesting and the cold decisions of an algorithm that’s maximized for profit rather than artistry,” said Greer. “If this is the direction the music industry starts to go in, I think it could profoundly alter the way that humans create art in ways I think are really terrible.”