Researchers at AlgorithmWatch say they were forced to abandon their research project monitoring the Instagram algorithm after legal threats from Facebook. The Berlin-based project went public with the conflict in a post published Friday morning, citing the platform’s recent ban of the NYU Ad Observatory.
“There are probably more cases of intimidation that we don’t know about,” the post reads. “We hope that more organizations will speak up about their experiences.”
Launched in March 2020, AlgorithmWatch provided a browser plug-in that allowed users to collect data from their own Instagram feeds, giving insight into how the platform prioritizes pictures and videos. The project published its findings regularly, indicating that the algorithm encouraged photos showing bare skin and that photos of faces ranked higher than screenshots of text. Facebook disputed the methodology, but otherwise took no action against AlgorithmWatch during the project’s first year.
“We only collected data related to content that Facebook showed to the volunteers who installed the add-on,” the researchers said in their defense. “In other words, users of the plug-in were only accessing their own feed, and sharing it with us for research purposes.”
Still, the researchers ultimately decided to shut down the project, believing they would face legal action from the company if they continued it.
The social nature of Facebook’s platforms makes it difficult to isolate individual users: even when a user opts in, their feed is necessarily composed of content from other people, who are unlikely to have agreed to take part in the study. Facebook has been particularly sensitive to research projects since the Cambridge Analytica scandal, in which data collected for academic research was ultimately used for commercial and political manipulation.
Yet the broader pattern is worrying. The algorithms that govern Facebook and Instagram’s news feeds are immensely powerful but poorly understood, and Facebook’s practices make it difficult to study them objectively. The NYU Ad Observatory, which monitored political advertising on the platform, saw its researchers banned earlier this month over accusations of scraping data. In November, the company sent similar legal threats against a friendly browser extension that let users rearrange their feeds in chronological order. CrowdTangle, another popular tool for Facebook research, was acquired by the company in 2016.
Facebook did not immediately respond to a request for comment.
Facebook does maintain some mechanisms that allow researchers to obtain data directly from the company, including its Ad Library and its Social Science One partnership. But AlgorithmWatch says the adversarial nature of its research makes such data inherently unreliable.
“Researchers cannot rely on data provided by Facebook, because the company cannot be trusted,” the researchers say. “There is no reason to believe Facebook would provide useful data if researchers were to exchange their independently collected data for data supplied by the company.”