Facebook: Platform, Publisher Or Ministry Of Truth?

 


Facebook is being pilloried for its unwillingness to remove a paid advertisement by President Trump’s reelection campaign after the Biden campaign notified Facebook that the ad contained false statements about Joe Biden and his son.

On the surface, this seems simple, and Facebook’s decision not to remove the ad seems wrong. If someone is paying to run an ad that lies or makes false claims, some policy, regulation, or truth-in-advertising law should be in place to ensure that the lie or false claims are removed. This is just common sense.

If the President Can Lie, Why Can’t I?

After all, if Facebook is OK with President Trump lying and making false claims in a paid ad, Facebook should be OK with me (or anyone) lying and making false claims in an ad too. If so, I’m going to quit my day job and start running ads on Facebook for “eat anything you like and lose weight” programs and “get rich quick” schemes. If Facebook doesn’t care about the truth, why should I?

Does Facebook have truth-in-advertising policies? Do those policies apply to everyone, or is President Trump “above the law”?

There Is No Soundbite Answer – You Need to Read This

The moment I learned about Facebook’s decision to let the Trump ad run, I reached out to my friends and clients at Facebook. I discussed this specific issue (and the size and scope of the problem) with several senior executives. The conversations were thoughtful, and to be fair, this issue is wildly more complex than it appears at first glance.

First and foremost, this is not a First Amendment issue. Often summarized as “freedom of expression,” the First Amendment says, “Congress shall make no law respecting an establishment of religion, or prohibiting the free exercise thereof; or abridging the freedom of speech, or of the press; or the right of the people peaceably to assemble, and to petition the Government for a redress of grievances.”

Facebook is not the government; it’s a company, and it can make any policies it likes. If Facebook creates a policy that states, “Employees who do not wear blue shirts will be fired,” the First Amendment will not protect workers who are “freely expressing themselves” by wearing green shirts from being fired. If Facebook says it will not allow the president of the United States to lie on its platform, the First Amendment does not apply. Facebook is not the government.

What Is True?

If Facebook has been notified that claims in an ad are false, how can Facebook possibly leave a false ad on its platform?

In this specific case, Facebook (its leadership, its AI, or its human workers) may believe that President Trump’s campaign is lying. Facebook may believe that the Biden campaign is right to call attention to what it believes is a lie. Facebook may believe that Joe and Hunter Biden are innocent of any wrongdoing.

But Facebook has no firsthand knowledge of the facts. It only knows what has been reported. It cannot know with absolute certainty what is, or is not, true. No one can. Regardless of what Facebook may want to believe, there are others who will want to believe the exact opposite. Who gets to decide what is or is not actually true?

That said, Facebook has an enormous amount of power; it is the largest communications platform ever created; and with its size and influence, it must be held to the highest standards. But what does that even mean?

In an effort to help you think about the magnitude of the “what should be allowed on Facebook” problem, I offer the text of an email Facebook sent me as a follow-up to my conversations with them. I asked for this in writing, and I want you to read it and let me know what you think.

What follows is an excerpt from an email Facebook sent me:

Our approach to free speech is grounded in Facebook’s fundamental belief in free expression and respect for the democratic process, as well as the fact that, in mature democracies with a free press, political speech is already arguably the most scrutinized speech there is.

We rely on third-party fact-checkers to help reduce the spread of false news and other types of viral misinformation, like memes or manipulated photos and videos. We don’t believe, however, that it’s an appropriate role for us to referee political debates and prevent a politician’s speech from reaching its audience and being subject to public debate and scrutiny.

That’s why Facebook exempts politicians from our third-party fact-checking program. We have had this policy on the books for over a year now, posted publicly on our site under our eligibility guidelines. This means that we will not send organic content or ads from politicians to our third-party fact-checking partners for review. However, when a politician shares previously debunked content, including links, videos, and photos, we plan to demote that content, display related information from fact-checkers, and reject its inclusion in advertisements. You can find more about the third-party fact-checking program and content eligibility here.

When it comes to our third-party fact-checking program, the only time we would demote content on a politician’s page is if they shared a link to an article, video, or photo created by someone else that has been otherwise debunked. We would also display related information from fact-checkers and reject its inclusion in advertisements.

But this is different from a politician’s own claim or statement—even if that claim has been debunked in another context.

Transparency:

While we aren’t sending ads from politicians to our third-party fact-checking partners, they still must comply with our Advertising Policies (which, on the whole, are stricter than our Community Standards), and their ads still go through review systems to check against those policies:

  • Our approach to ads about social issues, elections or politics has been to focus on transparency and authenticity – which means ads from politicians are already held to a higher standard. We have implemented transparency requirements that are stricter than our approach to organic political speech, and it’s essential that political figures on Facebook comply.
    • To help prevent foreign interference, people first need to be authorized: we confirm your ID and location as being in the country you want to advertise in.
    • People need to confirm their organization (through an FEC ID or tax ID, or by going through steps to confirm their business) and then include it in a “Paid for by” disclaimer on the ad before it can run.
    • We then house the ads in the publicly available, searchable Ad Library for up to seven years for voters, regulators, journalists, and researchers to examine.
  • These requirements play a central role in the broader ecosystem of journalism, cable news, social media and punditry that analyzes, debates and scrutinizes political speech.
  • In turn, advertisers are more accountable and responsible for their words and actions.
  • We recently tightened our rules in the U.S. around how we authorize advertisers who want to run ads about social issues, elections and politics, putting even more guardrails in place against people who want to game the system and obscure who is running these kinds of ads.
  • Misleading Claims: We don’t allow ads that include misleading or deceptive claims about products or services (like confusing delivery times, or products claiming they can cure cancer).
  • Unacceptable Business Practices: We don’t allow ads that promote products, services, schemes or offers using deceptive or misleading practices, including those meant to scam people out of money or personal information (like ads claiming to boost Facebook likes).
  • Misinformation: We’re being more explicit about this. We already don’t allow ads that include content debunked through our third-party fact-checking program. Now it’s outlined in a distinct section on our policy page.

How we are protecting elections:

Over the past two and a half years, we’ve developed smarter tools, greater transparency, and stronger partnerships to help us do just that. We’ve blocked millions of fake accounts so they can’t spread misinformation. We’re working with independent fact-checkers to reduce the spread of fake news.

In 2016, we were on the lookout for traditional cyber threats like hacking and stealing information. What happened was a much different kind of attack, one meant to sow discord around hot political issues. We’ve learned lessons from 2016 and have seen threats evolve, and we work to ensure that our defenses stay ahead of those efforts, making it harder to use our platform for election interference.

We know that security is never finished and we can’t do this alone — that’s why we continue to work with policymakers and experts to make sure we are constantly improving.

Smarter Tools

Our teams are working to build innovative new tools, combining stronger artificial intelligence with expert investigations, to find and prevent abuse, including:

  • stopping millions of fake accounts from being created every day,
  • finding and removing thousands of Pages, Groups, and accounts involved in coordinated inauthentic behavior,
  • and reducing the spread of false news and misinformation, as independent studies have confirmed.

Specifically, we’ve introduced:

  • new systems that can detect foreign pages and accounts targeting civic content in the US,
  • sophisticated teams of specialized investigators to locate, analyze, and disrupt bad actors,
  • better technology to proactively find and block voter suppression and other content that violates our Community Standards,
  • improved capabilities to remove violating content in bulk and prevent it from being posted again,
  • and new abilities to detect tampered images, which we’ve already deployed for elections in the EU and India.

We are also improving our rapid response efforts. We now have more than 30,000 people working on safety and security, with 40 teams contributing to our work on elections globally. Our work builds on efforts and investments that began in 2017, and in early 2019 we established a dedicated team focused on the US 2020 elections. The team has conducted detailed risk assessments and threat analysis and continues to run scenario exercises so that we can anticipate and address emerging threats. Their work also includes proactive sweeps looking for impersonation of campaigns and candidate accounts and Pages.

Stronger Partnerships

We also continue to improve our coordination and cooperation with law enforcement, including the DNI, DHS, and FBI, as well as federal officials, state election officials, and other technology companies, to allow for better information sharing and threat detection.

We are also working with academics, civil society groups, and researchers, including the Atlantic Council’s Digital Forensic Research Laboratory, to get the best thinking on these issues.

We know we can’t do this alone, and these partnerships are an important piece of our comprehensive efforts to fight election interference.

We know there is more to do, and we are committed.

Security is an arms race, and as we continue to improve our defenses, our adversaries evolve their tactics. We will never be perfect, and we are up against determined opponents. But we are committed to doing everything we can to prevent people from using our platforms to interfere in elections.

Facebook also provided a link to a post and speech by Nick Clegg, Facebook’s VP of Global Affairs and Communications, which is certainly worth a look.

Shelly Palmer is Fox 5 New York's On-air Tech Expert (WNYW-TV) and the host of Fox Television's monthly show Shelly Palmer Digital Living.