TikTok recently revealed a new feature that notifies users when they try to share a video that has been flagged as containing misleading information. Users viewing flagged videos also will receive a message prompting them to seek out credible sources. This added level of scrutiny is one of the biggest steps TikTok has taken to slow the spread of misinformation, though experts warn it might not be enough.

“We have heard a lot about fake news over the past five years, but we are entering a period where we have a world of alternative facts where people are only learning the part of the story that supports their political partisanship,” Andrew Selepak, a social media professor at the University of Florida, told Lifewire via email.
For Your Eyes Only
Under the old system, videos marked as containing unverified content could be made ineligible to appear on the For You page, TikTok’s unending video feed that users scroll through to discover new content. Now, TikTok also will place a banner on those videos, as well as a warning whenever users try to share them.

“We love that our community’s creativity encourages people to share TikTok videos with others who might enjoy them,” Gina Hernandez, product manager for trust and safety at TikTok, wrote in the announcement. “We’ve designed this feature to help our users be mindful about what they share.”

With TikTok estimated to have almost 700 million monthly active users, though, just how effective could this feature be? Hernandez revealed in the announcement that, during testing, the sharing warning led to a 24% decrease in the rate at which flagged videos were shared, while the banner label about unverified information led to a 7% decrease in likes. No information was given on the length of the testing phase or how many participants were included.

Twitter introduced a similar feature in October 2020, prompting users to add their own commentary to any tweets they tried to share with their followers. The change was reverted in December 2020, however, with Twitter citing a 20% decrease in sharing through both retweets and quote tweets.
Stuck in a Loop
The reason warning labels and messages to look for credible sources won’t be enough, Selepak warned, is that people tend to find credibility in sources they already know and trust. TikTok might label a video as misleading or unverified, but for some viewers, the person who created it is someone they regularly get information from, making them more likely to share the video without looking into it any further.

“In a world of alternative facts, who decides what is credible when users are inclined to only believe what they want to believe and follow accounts and users who are in line with their beliefs?” Selepak asked.

This essentially creates a loop, or echo chamber, of content being seen by users who believe it, then shared with others. And so, the problem continues to grow instead of shrinking. Sure, some users will see the warning and decide not to share the video, but those who trust the person sharing that information most likely are just going to share it anyway.

Though TikTok has partnered with fact-checkers at PolitiFact, Lead Stories, and SciVerify, the audience on the app is massive, and relying on warnings to keep people from sharing misleading information just isn’t enough, especially when those labels and warnings potentially could hurt the one thing TikTok needs to survive: an active user base.

“If users start to feel they are being pushed toward sources and content that present material opposing their views, they are less likely to spend as much time on the app,” said Selepak. “And as we have seen from social media for a few years now, the platforms don’t really care what you look at while scrolling, as long as you keep scrolling.”