On the official YouTube blog last week, the video-sharing platform unveiled plans to roll out new automated software to “more consistently apply age restrictions” on videos deemed inappropriate for younger viewers. Motivated by recent concerns about children on the app, the new system relies on machine-learning software, allowing the platform to forgo human moderators in favor of a more automated process. The issue? YouTube’s automated systems have been accused of singling out LGBTQ+ content and creators simply for existing.

“Machine learning is informed by and created by humans, and it’s possible to have those biases inherent in it or learned by the machine itself,” YouTuber Rowan Ellis said in a phone interview with Lifewire. “Its bias in regard to [LGBTQ+] content has been evident in the previous experiences of [LGBTQ+] YouTubers, and I haven’t seen evidence that anything has been done to stop it happening.”
Baby, Now We Got Bad Blood
Ellis is a YouTuber who creates educational content with a feminist and queer bent, and in 2017 she published a video on the company’s Restricted Mode. As an initial foray into automated content moderation, this mode allowed users to optionally filter out “potentially mature content” from search suggestions and recommendations. The video has garnered over 100,000 views, and Ellis believes there was a conscious effort to exempt her channel from restriction because of her vocal opposition to the excesses of YouTube’s new step toward moderation. Other users on the platform were not so lucky, and they let YouTube know it.

A class-action lawsuit against YouTube was filed in August 2019 by a group of eight LGBTQ+ creators who accused the Silicon Valley company of restricting queer and trans video makers and content. The lawsuit alleges the site uses “unlawful content regulation, distribution, and monetization practices that stigmatize, restrict, block, demonetize and financially harm the LGBT Plaintiffs and the greater LGBT Community.” It is still making its way through the California courts.

In June of that same year, the platform received a flood of media attention after refusing to swiftly reprimand popular conservative commentator Steven Crowder for a months-long, homophobic harassment campaign against Vox journalist and host Carlos Maza. This cemented what Ellis said is a pattern of the online platform ignoring the unique challenges queer creators face. LGBTQ+ creators’ lack of faith in YouTube’s ability to show up for them is not without merit.

“I don’t think they’ve understood the need for there to be transparency in regard to social issues and ensuring equality,” she said. “There are still children all around the world who have grown up with the idea that being gay is wrong, and when they start to question that belief, but find it shut down by a safe search or restriction, it will reinforce this idea that it is wrong, inappropriate, adult, perverse, and dirty.”
Failing to Auto-Learn
Given its sordid history with LGBTQ+ content creators on its platform, worries about the machine-learning software’s ability to discern broader norms still loom. Don Heider, Executive Director at the Markkula Center for Applied Ethics, suggests the potential for folly is too great a risk to gamble on.

“It’s difficult to believe that AI can effectively govern content from multiple countries with different cultural norms and standards,” he wrote in an email interview. “AI is too often seen as the answer to complex questions. At this point, AI and the way it is created struggles to deal with even simple tasks, let alone any content moderation with any level of complexity.”

YouTube decided to use AI technology because of a lack of consistent moderation by human moderators, according to its blog. Increasing its use of computerized filters to take down videos deemed unsuitable became the norm, and implementing the same procedures for its age-restriction policies is seen as a logical next step.

For a company seeking to gradually improve its processes after long-standing criticism of its relationship with child consumers, this decision comes as no surprise. Children have become a key demographic for the video-sharing site. In August, digital-video analytics firm Tubular found that, apart from music videos, content aimed at children topped the month-end list of most-viewed videos on YouTube.

The company’s interest in protecting this lucrative, emerging powerhouse on the platform makes sense. However, the tools used to enforce that protection remain discomforting for those who have already found themselves downstream of the company’s moderation procedures.

“My worry is that it will do a lot of harm and not protect [LGBTQ+] youth who need informative, frank, and honest content that a lot of [LGBTQ+] YouTubers might provide, but gets flagged in its system as inappropriate,” Ellis said. “Even if it’s not malicious, which I don’t think it is, it is a lack of input from diverse voices—or at least a lack of respect.

“We see that all the time in tech. When you’re looking at facial recognition failing to differentiate different Black faces, or when we look at medicine and see that medication has only been tested on a particular gender. These are larger conversations, and YouTube is not exempt from that.”