Facebook announced a range of “social audio” products last week. This move makes sense in theory, but it also raises the thorny question of how they’re going to moderate these new spaces.
The products were formally announced in an interview with Casey Newton on Sidechannel, a Discord server, on April 19. Facebook will add a few different features: an audio-only version of Rooms; Soundbits, which is basically TikTok but for audio; Clubhouse-style live audio rooms; and their own podcasts.
This isn’t a surprising direction for the platform, given that plenty of other social media companies are already in the audio space, including Clubhouse and Twitter’s Spaces. But debuting these products will add three more spaces that Facebook needs to moderate for misinformation, hate speech, and harassment.
If the past few years have taught us anything, it’s that moderation isn’t Facebook’s forte. Between its unconvincing attempts to tackle misinformation and hate speech on its existing platforms, Facebook hasn’t exactly been a leader in the world of moderation. Even its own content moderators have spoken out about Mark Zuckerberg’s hypocrisy and disregard for their health during the pandemic.
During his interview with Newton on Sidechannel, Zuckerberg said the platform is going to use some of what they learned moderating text and video for audio.
“We have a little bit of practice at this, both from all the broader integrity, trust and safety work that we do where I think at this point we probably built by far the most advanced AI tools to be able to identify different harmful content and across all these categories whether it’s in terrorist activity to child exploitation to people inciting violence,” Zuckerberg said. “We’ve just gotten better and better at building tools that can identify that stuff proactively and automatically and I think our team working on all that stuff is like 1000 people are sitting across the company so it’s just on the AI side right or on the engineering side.”
While Facebook has been getting better at moderating text and video, particularly with English-language content in the U.S., live audio is notoriously difficult to moderate.
“This is actually something that the video game industry has struggled with for a very long time,” Daniel Kelley, the associate director for the Center for Technology and Society at the Anti-Defamation League, told Mashable. A survey found that about half of people who experience harassment in online gaming are experiencing harassment in voice chat. And that’s hard to address, Kelley said, because there “isn’t the same levels of development of technology relative to moderation in audio chat that there is in text content.”
“The particular problems of live streaming audio will remain the same whether they are on Facebook or on other platforms,” Kelley said. “I would just point to Facebook’s track record around moderating hate… Facebook has not fixed or done enough to address hate on their core products — on Facebook, on Instagram — with text.”
Facebook isn’t starting from scratch with audio moderation, though. Other platforms, like Discord, have been moderating live audio for years.
“Audio is more difficult to moderate because it’s ephemeral and real-time,” Clint Smith, Discord’s chief legal officer, told Mashable. “It’s harder to investigate after the fact. But we also have been doing it for nearly six years. And so we’ve had live voice since our inception in 2015 and we think our approach to content moderation works really well for live audio.”
Discord doesn’t record any audio at all. Instead, community moderators in the servers have control over who stays, who is blocked from a conversation, and what is and isn’t okay to talk about. If they need extra help, they can message the trust and safety team. Twitter Spaces, on the other hand, retains recordings in case there’s a problem to address. Clubhouse deletes its recordings unless there’s a user report.
“Recording is a complicated topic and I think it’s also best addressed at the community level,” Smith said, adding that Discord does not have a product or feature that enables users to record conversations, but users can use third party tools. “The community decides, at that level, whether voice is made permanent or left ephemeral. I think that community choice is important, and it’s also important that a community’s position on recording audio is known and understood by participants in the community’s events.”
Facebook has not yet laid out how it plans to moderate live — or recorded, for that matter — audio. And we won’t know that until more information about the products, and their moderation guidelines, is released.
“There’s also this question of what you should enforce against,” Zuckerberg said on Sidechannel. “That’s going to be an open debate. If we go back five years, I think a lot more people were more on the free expression side of things. Today, a lot of people still are, but there’s also this rising wave of more people who are basically calling for more stuff to be blocked or limited in some way.”
We’ll see more specifics about how audio will be moderated as each of Facebook’s new tools gets rolled out, but the fact that we have to wait speaks to a larger problem.
“In an ideal world, Facebook would have announced alongside the fact that they’re creating this space, that they have cracked the technology to really do live streaming content moderation well,” Kelley said. “And they didn’t do that.”