Voters cast ballots on Election Day at the Fairfax County Government Center polling location in Fairfax, Virginia, on November 2, 2021.
Andrew Caballero-Reynolds | AFP | Getty Images
Social media platforms including Meta’s Facebook and Instagram, Twitter, TikTok and Google’s YouTube are readying themselves for another heated Election Day this week.
The companies now routinely come under close scrutiny around election time, something that accelerated following findings that Russian agents used social media to sow division in the run-up to the 2016 election. During the last presidential election in 2020, the platforms faced the challenge of moderating election denialism as an outgoing president stoked the false claims himself, leading several of them to at least temporarily suspend him after the Jan. 6 riot.
This year, the platforms are drawing on all of those experiences to prepare for threats to democracy and safety as voters decide who will represent them in Congress, governors’ offices and state legislatures.
Here’s how all the major platforms are planning to police their services on Election Day.
Meta
Onur Dogman | Lightrocket | Getty Images
Meta’s Facebook has been one of the most scrutinized platforms when it comes to misinformation. In response to years of criticism, it has bolstered its approach to election integrity. It has said it will use many of the same policies and safeguards this year that it had in 2020.
Meta has stood up its Elections Operations Center, which it likened to a command center, to bring together different teams throughout the company to monitor and quickly address threats they see on the platform. It’s used this model dozens of times worldwide since 2018.
Facebook and Instagram also share reliable information with users about how to vote (including in languages other than English). The company said it’s already sent more than 80 million election notifications this year on the two platforms.
The company uses third-party fact-checkers to help label false posts so they can be demoted in the algorithm before they go viral. Meta said it’s investing an additional $5 million in fact-checking and media literacy efforts before Election Day.
Meta said it’s prepared to seek out threats and coordinated harassment against election officials and poll workers, who were the subject of misinformation campaigns and threats during the last election.
The company is once again banning new political ads in the week before the election, as it did in 2020. While ads submitted before the blackout period can still run, political advertisers have expressed frustration about the policy since it’s often helpful to respond to last-minute attacks and polling with fresh messaging. Facebook already has extra screening for those who sign up as political advertisers and maintains information about political ads in a database available to the public.
Meta has pledged to remove posts that seek to suppress voting, like misinformation about how and when to vote. It also said it would reject ads that discourage voting or question the legitimacy of the upcoming election.
In a study by New York University’s Cybersecurity for Democracy and international NGO Global Witness testing election integrity ad screens across social media platforms, the groups found Facebook was mostly successful in blocking ads they submitted with election disinformation. Still, 20% to 50% of the ads tested were approved, depending on what language they were in and whether they were submitted from inside or outside the U.S.
The researchers also violated Facebook’s policies about who is allowed to place ads, with one of the test accounts placing ads from the U.K., and they did not go through Facebook’s authorization process, which is supposed to provide extra scrutiny for political advertisers.
The researchers did not run the ads once they were approved, so it’s not clear whether Facebook would have blocked them during that step.
A Meta spokesperson said in a statement published with the study that it was “based on a very small sample of ads, and are not representative given the number of political ads we review daily across the world.”
“We invest significant resources to protect elections, from our industry-leading transparency efforts to our enforcement of strict protocols on ads about social issues, elections, or politics – and we will continue to do so,” a Meta spokesperson said in a separate statement to CNBC.
TikTok
TikTok owner ByteDance has launched a women’s fashion website called If Yooou. Pinduoduo launched an e-commerce site in the U.S. called Temu. The two companies are the latest Chinese tech giants to look to crack the international e-commerce market dominated by Amazon.
Mike Kemp | In Pictures | Getty Images
TikTok has become an increasingly important platform for all sorts of discussion, but it’s tried to keep its service at arm’s length from the most heated political discussions.
TikTok does not allow political ads and has stated its desire for the service to be “a fun, positive and joyful experience.”
“TikTok is first and foremost an entertainment platform,” the company said in a September blog post. It added that it wants to “foster and promote a positive environment that brings people together, not divide them.”
Still, the NYU and Global Witness study found TikTok performed the worst out of the platforms it tested in blocking election-related misinformation in ads. Only one ad it submitted in both English and Spanish falsely claiming Covid vaccines were required to vote was rejected, while ads promoting the wrong date for the election or encouraging voters to vote twice were approved.
TikTok did not provide a comment on the report but told the researchers in a statement that it values “feedback from NGOs, academics, and other experts which helps us continually strengthen our processes and policies.”
The service said that while it doesn’t “proactively encourage politicians or political parties to join TikTok,” it welcomes them to do so. The company announced in September that it would try out mandatory verification for government, politician and political party accounts in the U.S. through the midterms and disable those types of accounts from running ads.
TikTok said it would allow those accounts to run ads in limited circumstances, like public health and safety campaigns, but that they’d have to work with a TikTok representative to do so.
TikTok also barred these accounts from other ways to make money on the platform, like through tipping and e-commerce. Politician and political party accounts are also not allowed to solicit campaign donations on their pages.
TikTok has said it’s committed to stemming the spread of misinformation, including by working with experts to strengthen its policies and outside fact-checkers to verify election-related posts.
It’s also sought to build on its experiences from the last election, like by surfacing its election center with information about how to vote earlier in the cycle. It’s also tried to do more to educate creators on the platform about what kinds of paid partnerships are and are not allowed and how to disclose them.
Twitter
A video grab taken from a video posted on the Twitter account of billionaire Tesla chief Elon Musk on October 26, 2022, shows him carrying a sink as he enters the Twitter headquarters in San Francisco. Musk changed his Twitter profile to “Chief Twit” and posted video of himself walking into the social network’s California headquarters carrying a sink, days before his contentious takeover of the company had to be finalized.
– | AFP | Getty Images
Twitter is in a unique position this Election Day, after billionaire Elon Musk bought the platform and took it private less than two weeks before voters headed to the polls.
Musk has expressed a desire to loosen Twitter’s content moderation policies. He’s said decisions on whether to reinstate banned users, a group that includes former President Donald Trump, would take a few weeks at least.
But shortly after the deal, Bloomberg reported the team responsible for content moderation lost access to some of their tools. Twitter’s head of safety and integrity, Yoel Roth, characterized that move as a standard measure for a recently acquired company to take and said Twitter’s rules were still being enforced at scale.
But the timing shortly before the election is especially stark. Musk said teams would have access to all the necessary tools by the end of the week before the election, according to a civil society group leader who was on a call with Musk earlier in the week.
Before Musk’s takeover, Twitter laid out its election integrity plans in an August blog post. Those included activating its civic integrity policy, which allows it to label and demote misleading information about the election, sharing “prebunks,” or proactively debunked false claims about the election, and surfacing relevant news and voting information in a dedicated tab. Twitter has not allowed political ads since 2019.
Google/YouTube
People walk past a billboard advertisement for YouTube on September 27, 2019 in Berlin, Germany.
Sean Gallup | Getty Images
Google and its video platform YouTube are also important platforms outside of Facebook where advertisers seek to get their campaign messages out.
The platforms require advertisers running election messages to become verified and to disclose who is behind each ad. Political ads, including information on how much money was behind them and how often they were viewed, are included in the company’s transparency report.
Prior to the last election, Google limited how narrowly users could be targeted with political ads, restricting targeting to certain general demographic categories.
The NYU and Global Witness study found YouTube performed the best out of the platforms it tested in blocking ads with election misinformation. The site ultimately blocked all of the misinformation-packed ads the researchers submitted through an account that hadn’t gone through its advertiser verification process. The platform also blocked the YouTube channel hosting the ads, though a Google Ads account remained active.
Like other platforms, Google and YouTube highlight authoritative sources and information on the election high up in related searches. The company said it would remove content that violates its policies by misleading users about the voting process or encouraging interference with the democratic process.
YouTube has also sought to help users learn to spot manipulative messages on their own using educational content.
Google said it has helped train campaign and election officials on security practices.