A week ago, The Wall Street Journal began to publish a series of stories about Facebook based on the internal findings of the company’s researchers. The Facebook Files, as they are known, lay out a dizzying number of problems unfolding on the world’s biggest social network.
The stories detail an opaque, separate system of government for elite users known as XCheck; provide evidence that Instagram can be harmful to a significant percentage of teenage girls; and reveal that entire political parties have changed their policies in response to changes in the News Feed algorithm. The stories also uncovered massive inequality in how Facebook moderates content in foreign countries compared to the investment it has made in the United States.
The stories have galvanized public attention, and members of Congress have announced a probe. And scrutiny is growing as reporters at other outlets contribute material of their own.
For instance: MIT Technology Review found that despite Facebook’s significant investment in security, by October 2019, Eastern European troll farms reached 140 million people a month with propaganda — and 75 percent of those users saw it not because they followed a page but because Facebook’s recommendation engine served it to them. ProPublica investigated Facebook Marketplace and found thousands of fake accounts participating in a wide variety of scams. The New York Times revealed that Facebook has sought to improve its reputation in part by pumping pro-Facebook stories into the News Feed, an effort known as “Project Amplify.” (To date this has only been tested in three cities, and it’s not clear whether it will continue.)
Most Facebook scandals come and go. But this one feels different from past Facebook scandals, because it has been driven by Facebook’s own workforce.
The last time Facebook found itself under this much public scrutiny was 2018, when the Cambridge Analytica data privacy scandal rocked the company. It was a strange scandal for many reasons, not least of which was the fact that most of its details had been reported years previously. What turned it into an international story was the idea that political operatives had sought to use Facebook’s vast trove of demographic data in an effort to manipulate Americans into voting for Donald Trump.
Today nearly everyone agrees that what Cambridge Analytica called “psychographic targeting” was overblown marketing spin. But the idea that Facebook and other social networks are gradually reshaping whole societies with their data collection, advertising practices, ranking algorithms and engagement metrics has largely stuck. Facebook is an all-time great business because its ads are so effective in getting people to buy things. And yet the company wants us to believe it isn’t similarly effective at getting people to change their politics?
There’s a disconnect there, one that the company has never really resolved.
Still, it plowed $13 billion into safety and security. It hired 40,000 people to police the network. It developed a real aptitude for disrupting networks of fake accounts. It got more comfortable inserting high-quality information into the News Feed, whether about COVID-19 or climate change. When the 2020 US presidential election was over, Facebook was barely a footnote in the story.
But basic questions lingered. How is the network policed, exactly? Are different countries being policed equitably? And what does looking at a personalized feed like that every day do to a person, or to a country and its politics?
As always, there’s a risk of being a technological determinist here: to assume that Facebook’s algorithms are more powerful than they are, or operate in a vacuum. Research that I’ve highlighted in this column has shown that often, other forces can be even more powerful — Fox News, for example, can inspire a much greater shift in a person’s politics.
For a lot of reasons, we would all stand to benefit if we could better isolate the effect of Facebook — or YouTube, or TikTok, or Twitter — on the larger world. But because they keep their data private, for reasons both good and bad, we spend a lot of time arguing about subjects for which we often have little grounding in empiricism. We talk about what Facebook is based on how Facebook makes us feel. And so Facebook and the world wind up talking past each other.
At the same time, and to its credit, Facebook did allocate resources to investigating some of the questions on our minds. Questions like: what is Instagram doing to teenage girls?
In doing so, Facebook planted the seeds of the current moment. The most pressing questions in the recent reporting ask the same question Cambridge Analytica did — what is this social network doing to us? But unlike with that story, this time we have real data to look at — data that Facebook itself produced.
When I talk to some people at Facebook about some of this, they bristle. They’ll say: reporters have had it out for us forever; the recent stories all bear more than a faint trace of confirmation bias. They’ll say: just because one researcher at the company says something doesn’t mean it’s true. They’ll ask: why isn’t anyone demanding to see internal research from YouTube, or Twitter, or TikTok?
Perhaps this explains the company’s generally dismissive response to all this reporting. The emotional, scattered Nick Clegg blog post. The CEO joking about it. The mainstream media — there they go again.
To me, though, the past week has felt like a turning point.
By now, the majority of Facebook researchers to ever speak out about the company in public have taken the opportunity to say that their research was largely stymied or ignored by their superiors. And what we have read of their research suggests that the company has often acted irresponsibly.
Sometimes this is unintentional — Facebook appears to have been genuinely surprised by its researchers’ finding that Instagram can worsen anxiety and depression among teenage girls.
Other times, the company acted irresponsibly with full knowledge of what it was doing, as when it allocated massively more resources to removing misleading content in the United States than it did in the rest of the world.
And even in the United States, it arguably under-invested in safety and security. As Samidh Chakrabarti, who ran Facebook’s civic integrity team until this year, put it, the company’s much-ballyhooed $13 billion investment represents about four percent of revenue.
Despite all this, of course, Facebook is thriving. Daily users are up seven percent year over year. Profits are up. The post-pandemic ad business is booming so hard that even digital ad also-rans like Pinterest and Twitter are having a banner year. And Facebook’s hardware business is quietly turning into a success, potentially paving a road from here all the way to the metaverse.
But still that question nags: what is this social network doing to us? It now seems apparent that no one at the company, or in the world at large, has really gotten their arms around it. And so the company’s reputation is once again in free fall.
One natural reaction to this state of affairs, if you were running the company, would be to do less research: no more negative studies, no more negative headlines! What’s Congress going to do, hold a hearing? Who cares. Pass a law? Not this year.
When Facebook moved this week to make it harder for people to volunteer their own News Feed data to an external research program, it signaled that this is the direction it is heading.
But what if it did the reverse? What if it invested dramatically more in research, and publicly pressured its peers to join it? What if Facebook routinely published its findings and allowed its data to be audited? What if the company made it dramatically easier for qualified researchers to study the platform independently?
This would be unprecedented in the history of American business, but Facebook is an unprecedented thing in the world. The company can’t rebuild trust with the larger world through blog posts and tweet storms. But it could start by helping us understand its effects on human behavior, politics, and society.
That doesn’t seem to be the way things are going, though. Instead, the company is doing different kinds of research — research like “what happens if we show people good news about Facebook?” I’m told one story that appeared in the recent test informed users of an incident in which the social network helped a woman find her lost horse. Maybe that will move the needle.
But I shouldn’t joke. There’s a real idea embedded in that test, which is that over time you can reshape perception through the narratives you promote. That what appears in the News Feed may be able to shift public opinion, gradually, toward the views of whoever is running the feed.
It is this suspicion that the News Feed can drive such changes that has driven so much of the company’s own research, and fears about the company’s influence, even as that possibility has been relentlessly downplayed by Facebook’s PR machine.
But now the company has decided to see for itself. To the public, it will promise it can’t possibly be as powerful as its apostate researchers say it is.
And then, with Project Amplify, Facebook will attempt to see whether those researchers might actually be right.
This column was co-published with Platformer, a daily newsletter about Big Tech and democracy.