WASHINGTON (Circa) — Facebook’s policies aimed at upholding standards for political and offensive content have given rise to new allegations that it is engaging in exactly the kind of bias it is trying to prevent, as new research shows most Americans believe social media companies censor political speech they find objectionable.
Leading up to Independence Day, the Liberty County Vindicator in Liberty, Texas, set out to publish the text of the Declaration of Independence in 12 daily posts. The tenth, containing paragraphs 27-31, was blocked after Facebook determined it “goes against our standards on hate speech.”
Vindicator managing editor Casey Stinnett posted an article about the incident on the paper’s website, complaining about the difficulty of contacting Facebook to appeal the decision. Facebook did not identify the exact cause for offense, but the editor speculated it was the reference to “merciless Indian Savages.”
Facebook ultimately restored the post and apologized for the error. The Vindicator was not the only user to have its patriotic Independence Day content singled out by the site’s algorithms, though.
The Wes Cook Band has blasted Facebook this week for blocking a paid promotion for its song “I Stand for the Flag.” A video for the song was released Monday and the band paid to boost it for 24 hours, but Facebook tagged it as “political content,” subjecting it to stringent new guidelines for political advertising.
Under those rules, anyone posting ads that mention candidates or elections or discuss one of 20 “national issues of public importance” has to have a verified account and include a disclosure that they paid for it. An authorization process is in place to confirm people placing U.S. political ads have U.S. addresses.
On Tuesday, Facebook told Fox News the decision regarding the Wes Cook Band video was overturned and the political label was not needed, with a spokesperson adding that the new system “won’t ever be perfect,” but the company is working to improve it. The band claimed the video was rejected again on Wednesday, but Facebook has not confirmed that.
“If these algorithms are programmed to reject content like ‘I Stand for the Flag,’ then I think that would give a lot of Americans the right to be offended by that level of bias within a company that purports itself to be politically neutral,” band member Nathan Stoops told Fox News Thursday.
Officials in the town of Greece, New York reported similar problems attempting to promote posts detailing Fourth of July events. Facebook acknowledged an error in labeling them as political and said it intends to learn from the mistake.
The incidents marked the latest of many cases in which the company’s new policies have mistakenly ensnared apolitical content. A Wal-Mart ad featuring Bush’s baked beans and a post by a church with a minister named Clinton were also flagged recently.
Media outlets reporting on political news have complained their content has been blocked, while several examples have been noted of blatantly political ads being approved. Critics say names of candidates and the 20 “national issues of public importance,” which include crime, immigration, and terrorism, come up often in legitimate journalism and labeling such posts as political advertising is inappropriate.
“It is Facebook’s responsibility to retrofit its original policy and exclude from their political advertising archive those who produce news and cover political events around the world. This responsibility resides with Facebook alone,” heads of media trade organizations wrote in a letter to CEO Mark Zuckerberg last month.
The current controversies stem from Facebook’s imperfect solution to a previous problem. During the 2016 presidential campaign, the platform was utilized by foreign trolls to spread misinformation, leading to calls for social media sites to police content better and make clear to users who is responsible for paid political content in their feeds.
“We spent the first decade, 12 years, really focused on building social experiences and what we’ve learned across a number of really hard issues from election interference to fake news to data privacy is that we underinvested in prevention and we underinvested in proactively policing the ecosystem that we had built,” Facebook COO Sheryl Sandberg told reporters last week.
Allegations of political bias and censorship have dogged social media sites for years, but they flared up recently against Facebook in particular, with Republicans questioning Zuckerberg about it personally on Capitol Hill in April. Though he admitted Silicon Valley is generally a very liberal place, he insisted there is no intentional censorship of conservatives at play.
“This is actually a concern that I have and that I try to root out in the company, is making sure that we do not have any bias in the work that we do, and I think it is a fair concern that people would at least wonder about,” he told Sen. Ted Cruz, R-Texas.
In early 2016, Facebook faced claims that curators of its trending topics section were deliberately ignoring conservative news sources. The company denied that, but it still replaced the human curators with algorithms, a move that liberal critics said helped fake news and misinformation spread in the months before the presidential election.
“They have to constantly be weighing the risk of somebody being offended because they took something down against the risk of somebody being offended because they didn’t,” said Elizabeth Cohen, an assistant professor of communication studies at West Virginia University.
Liberal groups are now alarmed Facebook is under conservative pressure again in the run-up to the midterm elections in November. Among other things, they point to the company’s announcement that it will partner with former Republican Sen. Jon Kyl and the Heritage Foundation to review its practices.
“If we do nothing, Facebook will continue to allow conservatives to game the platform's attempts at responsiveness,” Angelo Carusone, president of liberal media watchdog group Media Matters, said in a CNN.com op-ed last week. “This will mean devastating consequences for the truth and for our democracy.”
It is too soon to tell whether policies enacted in the last few months to protect users’ news feeds have succeeded, but Mike Horning, an assistant professor at Virginia Tech and a faculty affiliate with the Center for Human Computer Interaction, said the company clearly has invested in improving the experience.
“Whether it’s improved or not, the proof will be somewhere in the pudding as we get closer to an election…I think they’ve made good faith efforts to do better with the process,” he said.
Twitter and Facebook have been adamant that they do not have any deliberate political slant, but polling suggests their users have doubts. A survey released by the Pew Research Center last week found 72 percent of Americans believe it is likely that social media platforms actively censor political views they find objectionable.
<img width="422" height="307" src="http://assets.pewresearch.org/wp-content/uploads/sites/14/2018/06/26142258/PI_2018.06.28_tech-companies_0-01.png" class="attachment-large size-large" alt="Roughly seven-in-ten Americans think it likely that social media platforms censor political viewpoints">
Republicans and conservatives are more likely to have that opinion, with 85 percent saying it is somewhat likely or very likely, but 62 percent of Democrats and liberals agree. More than four in ten respondents said technology companies favor liberals over conservatives, though 43 percent said both sides are treated equally.
With partisan attitudes toward the media continuing to intensify, experts were not surprised to learn that polarization is growing over social media as well.
“I’m surprised the number is not higher,” said Ray Klump, a software engineer and director of the Master of Information Security program at Lewis University. “We live in a hyper-partisan environment and we’re all looking to be offended. We all feel victimized by not having our voices heard.”
The survey suggests many users are dissatisfied with how social media platforms combat offensive speech and political propaganda, but a previous Pew study found a vast majority of Americans do want the companies to step in and restrict content when needed.
“This is what the public seemed to want Facebook to do,” Cohen said. “This is what you get when algorithms are making decisions for you.”
In each of the cases where complaints have been raised about misidentification, it appears the algorithm’s conclusion was not reviewed by humans until after the user objected. That is one area where experts say Facebook could be more proactive and have staffers review the content before taking action, if they can hire enough people to handle that workload.
<img width="422" height="430" src="http://assets.pewresearch.org/wp-content/uploads/sites/14/2018/06/26142259/PI_2018.06.28_tech-companies_0-02.png" class="attachment-large size-large" alt="Majority of Republicans say major technology companies support the views of liberals over conservatives">
“There probably needs to be a process that if your content is flagged, there is a very quick turnaround with checking the validity of that claim,” Horning said.
Because of the complexities of human language, nuance, and context, it is difficult to construct a foolproof algorithm for these circumstances, so some false positives are inevitable.
“I’m a techie person and I’m as guilty of this as anyone…but we all have too much faith in the power of machines, and we minimize or reduce too much the involvement of humans,” Klump said. “We’ve lost sight of the very real need for humans to double-check, verify, apply advanced reason in areas where computing falls short.”
If a machine identified political or offensive words it had been directed to find, it was technically working properly, even if the content was in a historical document or a civic event announcement.
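That failure mode can be illustrated with a minimal sketch of a purely lexical content flagger. The keyword list and function below are hypothetical, chosen only to mirror the examples in this story (a passage from the Declaration of Independence and a civic event announcement); they do not represent Facebook’s actual system.

```python
# Hypothetical keyword-based flagger: matches terms with no regard for
# context, so a historical document trips it just as easily as a slur.
FLAGGED_TERMS = {"savages", "election", "crime", "immigration", "terrorism"}

def flag_content(text: str) -> bool:
    """Return True if any flagged term appears as a word, ignoring context."""
    words = {w.strip(".,;:!?\"'()").lower() for w in text.split()}
    return not FLAGGED_TERMS.isdisjoint(words)

declaration_excerpt = (
    "He has endeavoured to bring on the inhabitants of our frontiers, "
    "the merciless Indian Savages..."
)
civic_post = "Join us for the town's Fourth of July fireworks and parade."

flag_content(declaration_excerpt)  # flagged: the word match is "correct"
flag_content(civic_post)           # not flagged: no keyword happens to appear
```

By this logic the machine is “working properly” in both cases: it found exactly the words it was told to find. The misjudgment lies in treating word presence as a proxy for intent, which is why experts quoted here argue for human review of flagged content.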
“Do I think eventually, given enough time and effort, that machines will be better than humans at avoiding misinterpretation?” Klump asked. “I think it’s definitely possible, but it’s going to take time.”
Speaking with reporters last week about new features that let users see all of the ads an organization is running, Sandberg emphasized transparency and accountability, and experts say those are the key principles for combating the perception of bias.
“I think it is important for Facebook to continue to communicate processes and be transparent about the decisions they make,” Horning said.
This may be more of a public relations problem than a technological problem for Facebook, but it is still a problem for a beleaguered company that already has many others to address. If all else fails, Facebook could turn to another ad campaign like the one currently running in response to the data privacy controversy to promote its neutrality.
“They want people to come and it is actually against their best interest to offend conservatives…,” Cohen said. “They don’t want to have to take content off their site. This is not pleasurable for them.”