Interview with Facebook Whistleblower Ryan Hartwig


Filming with a hidden camera, former Facebook content moderator Ryan Hartwig documented how important decisions affecting free speech online are made at Facebook, and he has been uncovering the censorship practices of Silicon Valley companies as well as their political influence on a global scale. In September 2020, he founded the Hartwig Foundation for Free Speech. Lotuseaters.com invited Hartwig to talk about his experience with tech giants, to explain how the curation process on social media works, and to blow the whistle on some of Facebook’s alarming practices. Readers can find more about Hartwig and his work at https://ryanhartwig.org/.

What is the structure of Facebook's moderation department?

“As a content moderator, I was working directly for Cognizant. Cognizant is a very large technology company that does a lot of IT and cloud infrastructure work. They had the contract with Facebook for three years, which they ended early this past February.

So I got hired as a content moderator, but I didn’t know it was for Facebook. They would train us on their policy, and we would review content one piece at a time: we would get a video, a post, or a comment that had either been reported by a user or, sometimes, dropped into our queue by the AI.

And we would make a decision, based on Facebook’s policy, on whether that content would be deleted or ignored and allowed to stay on the platform.”
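To make that workflow concrete, here is a minimal, purely illustrative Python sketch of the queue-and-decision loop Hartwig describes: one item at a time, arriving from a user report or an AI flag, actioned as either delete or ignore. None of this is Facebook’s or Cognizant’s actual tooling; every name and the placeholder policy check are invented for illustration.

```python
from dataclasses import dataclass

@dataclass
class QueueItem:
    """One piece of content awaiting review: a video, post, or comment."""
    content: str
    source: str  # "user_report" or "ai_flag" - the two routes into the queue

def violates_policy(item: QueueItem) -> bool:
    # Stand-in for the judgment a human moderator makes against
    # Facebook's written policy; this phrase list is hypothetical.
    banned_phrases = ["example banned phrase"]
    return any(p in item.content.lower() for p in banned_phrases)

def action(item: QueueItem) -> str:
    # The two outcomes Hartwig describes: delete the content,
    # or ignore it and leave it on the platform.
    return "delete" if violates_policy(item) else "ignore"

# Example: a user-reported comment that passes the policy check stays up.
print(action(QueueItem("a perfectly ordinary comment", "user_report")))  # -> ignore
```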

Who orders the exceptions for content, as seen in your footage?

“Cognizant has its own policy manager, so we had someone at our site called Shawn Browder who interfaced with the client very regularly. The example you’re referencing is about Don Lemon. Don Lemon said ‘White males are terror threats’. It was a very general blanket statement about white males, calling them terrorists, and that phrase violates Facebook’s hate speech policy because it’s comparing one category/race to terrorism.

So it’s implying that all white males are terrorists. They said they know it violates Facebook’s hate speech policy, but they made a ‘newsworthy exception’ to allow Don Lemon to say that.

So that decision is made at the very top, by Facebook. Facebook gives us guidance and tells us at Cognizant how to action it, because it was important for us to always action the same piece of content the same way, so that we don’t get docked points on our scores [the moderator’s accuracy score].”

So Facebook has some form of council or chief censor who makes these decisions and then hands them down to you at Cognizant?

“That’s correct. There is a lead policy team in San Francisco, I believe about six people, who make all the policy decisions.

We can provide feedback at our level. Shawn, the manager I mentioned, can make a decision on an interim basis if something is really trending and we all need to be actioning it the same way.

We did that with the Ukraine whistle-blower - I can’t say his name. For about six hours we deleted the Ukraine whistle-blower’s name, and Facebook came back and confirmed that guidance, to continue deleting it. So we have a little bit of autonomy, but the final say always comes from Facebook.”

Do censors recognize they are deleting a piece of speech or art?

“Yeah, the general attitude we had was ‘Hey, we are here to protect the internet; we are doing a great thing for society, keeping the internet clean’. I started on the Spanish side, so I saw more content from Latin America: more beheadings, a lot of cartel violence. We would see graphic videos and we would delete them. And I agree with that: if there is a cartel or a terrorist organisation promoting their content, that should be taken down.

And of course I saw child pornography and a lot of nudity, and a lot of the other content we saw was ‘hate speech’. So if I say ‘Keep Venezuelans out of the UK’ or ‘Keep Syrians out of the UK’, that violates Facebook’s ‘hate speech’ policy because it targets people by their country of origin.

That is something I don’t necessarily agree with, because we need to be able to discuss immigration. We need to be able to discuss politics. And this idea of ‘hate speech’ - what is the definition of ‘hate speech’?

Facebook’s definition is very, very wordy, so for the average person to discuss immigration, they would have to read a novel’s worth of Facebook’s legalese - their policy, their implementation standards - just to know if they are allowed to talk about these things.”

If someone were to say ‘Keep the migrants out of the UK’ with respect to the small boat crossings in the English Channel, would that come under ‘hate speech’?

“There are always changes in the policy, and I think they have made it more nuanced. Prior to 2019 you could not say ‘Muslims did 9/11’, but they have now changed that. So if you are referencing a specific event, you can talk about it in certain terms.

So Facebook has done a little bit of good in making the policy more nuanced, allowing some of what was previously considered ‘hate speech’.

But that’s the thing - the policy is always changing, so how are you and I supposed to know what’s acceptable and what’s not when we’re trying to discuss something that affects all of us, such as refugees?”

Do you know any moderators who have banned things that they have personally enjoyed? 

“Yeah, yeah, all the time. Like with memes - and it kind of twists your sense of humour. As content moderators we see a lot of F’ed-up stuff. We would see memes about Kobe Bryant, who passed away in a helicopter crash; we saw a lot of memes about that.

It’s kind of like those memes you see where the moderator laughs at the meme before he deletes it. And believe me, that happened all the time. We would laugh at things; some weren’t as dark as Kobe Bryant’s death, some of them were more like... joking about sex or whatnot.

There were times when I had to delete, for example, the word ‘snowflake’. Someone was calling someone else a snowflake, and if that person reports it, it gets deleted; but if I call him a ‘Trump humper’, that stays up no matter what. So there is a double standard where Trump supporters would get deleted more, while leftists or ‘feminazis’ would get more protection.”

Should moderators give interviews to the media to increase transparency? 

“We had a journalist from verge.com come on site in February 2018, and he posted a story about some of the conditions there.

He was one of the first people to tour the facility. And keep in mind we are not allowed to bring phones onto the production floor, so security is very tight - but what do they have to hide? If the community standards are public, what is Facebook hiding? Why not give the public free access?”

Do you think moderators giving feedback to creators on specifics of their censorship would help?

“There was a change about a year ago. We don’t normally allow female nipples or female nudity on the platform, but if you post an image of a bare-breasted female and caption it ‘she’s non-binary’, those nipples are no longer female nipples but neutral nipples. So that would not be deleted.

We had the same kind of debate about Michelle Obama. A lot of people push the conspiracy theory that she is a man, and there’s a photo where there is a bulge where her groin would be, which is supposedly proof that she’s a guy.

So we had a debate, policy-wise, about whether we would mark that.

That’s the kind of debate we would have with policy. As you can see, it’s all very, very detailed and nuanced. So yeah, I think Facebook should help the average person to be able to... not to escape the rules, but in cases where someone has more than a million followers and is getting deleted for some minor infraction - why wouldn’t Facebook have a team to help that person, who probably spends $100,000 a year on advertising, make sure his posts don’t get deleted?

I don’t see anything wrong with that.”

Do you think the moderation contract should be with the government?

“Actually, I haven’t thought about that. That’s a fairly good solution. It’s almost like having a military contractor - a civilian doing military contracts for the government - where they have stricter requirements.

That might work, having a government entity working on behalf of Facebook, because the thing that bothered me the most was seeing how they monitor elections. Like, okay, they have an election oversight committee at Facebook. But who from the federal government is working within Facebook to make sure they are following the rules, or to see what they are doing with all that information about the election?”

Rather than deleting offensive speech, would classifying it as offensive work?

“That’s a good idea, actually; I hadn’t thought about that. It wouldn’t delete the content entirely. We had the same sort of thing for content mocking an event. If something was ‘cruel or insensitive’ - like a meme mocking the Boston Marathon bombing, which is a horrible thing to mock - and it showed real individuals from the event, I would delete it; but if it was just mocking the event, there was an action called ‘mark as cruel’ which would kind of limit its distribution.

So instead of deleting it, Facebook could put a little screen in front of it. Yeah, that’s one possible solution. And the Left keeps complaining that Facebook isn’t doing enough; I think about a month ago a young Facebook engineer quit because ‘Facebook wasn’t doing enough to silence these white supremacists on the platform’.

It’s funny, because everything that comes out of the media is the opposite of what I uncovered. I have the proof - 200 hours of footage - while these people come out with no proof, just their personal testimony, and it gets distributed everywhere on left-wing media.

Here I am with footage and screenshots, and that’s ‘unsubstantiated’. The Washington Post said my video was ‘unsubstantiated’.”

Follow Ryan Hartwig on Twitter.
