
Lurking with the Radical Right: The ethics of online covert research

Antonia Vaughan

Antonia is a Doctoral Fellow at the Centre for Analysis of the Radical Right (CARR). She is a PhD Candidate at the University of Bath focusing on the ethics of researching the far right, and the inter-/intra-platform flows of alt right discourse.

The use of internet data in academic research without informed consent (covert research) is subject to a number of ethical quandaries. Here, the particular considerations that researching the radical right imposes are teased out: whether terms of service can be taken as a form of consent; whether internet data is always public; and how the practice of covert research affects researcher safety.
 
The use of covert methodologies in research on the radical right online is incredibly widespread, especially since the pandemic forced most activities online. Using this approach, the researcher does not collect informed consent, and has access to a vast amount of data without the safety or logistical concerns that consent brings. The discussion around the approach touches on debates in the wider discourse including privacy concerns, the accessibility of extreme content online, and the increasingly discussed issue of researcher safety.

As noted by Daniel Jones in their blog on the ethics of archival research, ‘ethics involved a degree of the personal within them’. Like them, this post aims to tease out the ethical conundrums that covert research using online data prompts, rather than offer a black and white distinction. Covert research, research involving internet data, and researching the radical right all run up against murky dilemmas and balancing acts, requiring thoughtful critiques of our approaches. Our understanding of what is acceptable is constantly developing, and remains hugely context-dependent and personal.
 

Informed Consent and the Use of Internet Data

A key contention in the practice of covert research is that this approach does not collect informed consent from the participants, in this case the content producers on forums, websites, and social platforms. For projects involving thousands of participants, collecting informed consent is logistically difficult; for projects involving extreme communities it is risky, requiring the researcher to spotlight themselves in front of a potentially hostile community. Massanari has critiqued (1) ‘long-standing traditions’ in an environment where the researcher is frequently made vulnerable to the broader radical right.
 

Terms of Service as a form of consent?

Research involving online data often turns to the terms of service as a form of informed consent: by accepting terms of service that include a provision to reshare the data, the user supposedly accepts that their data might be used in academic research. However, Casey Fiesler has shown that most Twitter users do not know that their data is being used in research. The EU Commission (2) has indicated that terms of service are an insufficient substitute for informed consent, and that the ‘public’ nature of content is no substitute either.
 

Data has a better idea. Photo by Franki Chamaki on Unsplash.

Internet data as public or private?

A second critical justification for covert research is the conceptualisation of forums and social platforms as public, equivalent to collecting data in a public square. On this view, users are aware that their content might be viewed by strangers, including academics, which removes the need to obtain informed consent. But this in turn creates a number of quandaries: are offline public/private spaces equivalent to online public/private spaces? What do researchers of the radical right do if there is a level of gatekeeping to access a critical group? And how far must we comply with the terms of service if they prohibit research?

Legally, the final question was clarified only last year; the American Civil Liberties Union (ACLU) filed a lawsuit in 2016 challenging the Computer Fraud and Abuse Act, which criminalised violations of websites’ terms of service, including prohibitions on scraping. Scraping is the practice of collecting data automatically using programmes, allowing large-scale analyses of sites of interest, and is commonly used in research involving internet data. The ACLU argued that prohibitions on scraping (including for investigating algorithmic discrimination) amounted to a violation of the First Amendment: specifically, that they prevent “everyone, including academics and journalists, from gathering the publicly available information necessary to understand and speak about online discrimination”. It is easy to see how research on the radical right falls into this conceptualisation. Automated content collection is one of the most straightforward ways of gathering data, but could cause researchers to fall afoul of anti-scraping and anti-terror laws. Rather than treating these methodologies as the first port of call because of the ease of collecting vast amounts of data, we must critically evaluate the risk they bring.
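To illustrate what automated collection involves, here is a minimal sketch in Python using only the standard library. It parses post text out of saved HTML rather than contacting a live site, and the `div class="post"` markup is a hypothetical example rather than any real forum’s structure:

```python
from html.parser import HTMLParser

class PostScraper(HTMLParser):
    """Collect the text of every <div class="post"> in a page.
    The 'post' class name is a hypothetical example; real forums
    use their own markup."""
    def __init__(self):
        super().__init__()
        self._depth = 0     # nesting depth inside a post div (0 = outside)
        self._current = []  # text fragments of the post being read
        self.posts = []     # completed post texts

    def handle_starttag(self, tag, attrs):
        if self._depth:
            if tag == "div":
                self._depth += 1  # track nested divs inside a post
        elif tag == "div" and ("class", "post") in attrs:
            self._depth = 1       # entered a post

    def handle_endtag(self, tag):
        if self._depth and tag == "div":
            self._depth -= 1
            if self._depth == 0:  # left the post: store its text
                self.posts.append("".join(self._current).strip())
                self._current = []

    def handle_data(self, data):
        if self._depth:
            self._current.append(data)

page = '<div class="post">First comment</div><div class="post">Second comment</div>'
scraper = PostScraper()
scraper.feed(page)
print(scraper.posts)  # ['First comment', 'Second comment']
```

Pointing such a script at thousands of pages is trivial, which is precisely why the legal and ethical questions above matter more than the technical ones.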

When it comes to the public nature of forums, some radical right and extreme right forums make the question easier by explicitly acknowledging the public nature of their content. The rules of patriots[dot]win include the point: ‘Be Vigilant / You represent the movement against communism – your posts and comments may become news.’ Users there are very aware that non-users, including news organisations, will be viewing, and potentially sharing, content from the site. But this does not necessarily extend to all sites, and then there is the issue of anonymity. Michael Krona has suggested a traffic light system to classify types of content, evaluating both the ‘public’ nature of the content and the level of gatekeeping required for access.
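To make the two-axis idea concrete, the sketch below is my own illustrative interpretation of such a traffic-light rule, not Krona’s published scheme; the categories, labels, and mapping are all assumptions:

```python
def classify_content(publicly_stated: bool, gatekeeping: str) -> str:
    """Toy traffic-light classification of online content for research use.

    publicly_stated: does the site explicitly acknowledge that posts may
                     be viewed or shared by outsiders?
    gatekeeping:     'none' (open web), 'registration' (account needed),
                     or 'vetting' (invite-only / screened membership).

    Returns 'green' (treat as public), 'amber' (case-by-case ethical
    review), or 'red' (treat as private: consent or avoidance needed).
    This mapping is an illustrative assumption, not Krona's scheme.
    """
    if gatekeeping == "vetting":
        return "red"
    if gatekeeping == "registration":
        return "amber"
    return "green" if publicly_stated else "amber"

print(classify_content(True, "none"))      # green
print(classify_content(False, "vetting"))  # red
```

Even a toy rule like this makes the point that ‘publicness’ is not binary: the same quote can sit at green on one site and red on another.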

If we use this data without consent and then share it in articles, conference papers, blogs and so on, then we jeopardise another fundamental ethical tenet: the right to be anonymous in research. Using internet data challenges this as you can always google a quote, and pseudonymity can only go so far. Conway has grappled (3) with several conundrums around anonymity and extreme movements, including how to determine whether the participant is a ‘public figure’ or not. Ultimately, researchers have a duty to do no harm to participants, and with internet research that is primarily through the vector of identification.

One possible route is to paraphrase the content, avoiding both the risk of identification and the risk of harming the reader through the reproduction of toxic discourse. Daniel Jones briefly sketched out the risk this research poses to the wellbeing (4) of the researcher; if we quote toxic, racist, sexist, or homophobic content verbatim, we risk extending that harm to the reader.
 

The user as publisher or sharer

Further muddying the waters is the line between the user as publisher and the user as an individual sharing content, each prompting different ethical quandaries. The user as sharer encounters the issues of informed consent addressed above. Alexis Henshaw, in a recent panel at the Global Network on Extremism and Technology (GNET) Conference, noted that we must respect the labour that went into the production of content; conceiving of the user as merely ‘sharing’ the content, and thus not citing them, removes this respect.

If the user ‘publishes’ the comments, posts, videos and so on, then researchers need to follow the proper lines of attribution and accreditation, most significantly by citing the original user. This is ethically problematic as it could require the sharing of unedited data, including toxic, harmful language and hate speech. More problematic still is the necessity to fully cite the users that produce the content: a full citation (including the origin website) leads readers to communities and websites that they may not previously have been aware of. Moreover, the practice of linking to extremist or far right media on web pages has been criticised by Vanessa Fox, as it helps boost the ranking of the page on search engines. To ranking algorithms, linking to the content lends it a veneer of respectability and reference that it would not otherwise have acquired.
 

Covert research and researcher safety

Finally, the issue of covert research closely intersects with networked harassment (5), a practice that aims to silence and censor academics (6) through mass harassment campaigns. Networked harassment is a phenomenon that is increasingly being discussed and shared in academic circles (7), particularly targeting marginalised scholars, and those digitally active. Researchers such as Massanari (8) have indicated that covert research should be more normalised in extremism research because it has an important role in safeguarding researchers. The process of gaining informed consent from radical right users effectively puts the researcher on a billboard, making them highly visible and highly known to extreme communities. Massanari has critiqued this approach as, with extremist communities, the balance of power often tips away from the researcher, making them incredibly vulnerable. Challenging the ‘long-standing tradition’ of overt research, Massanari advocates a covert approach to publication, suggesting that we could move towards anonymous publication routes to protect the safety of vulnerable scholars.

The vulnerability of the researcher is a significant issue, even without the direct identification of self to the community. There are numerous examples of networked harassment being directed at researchers, with many taking preventative steps to protect themselves.
 

Where does this leave us?

Researching extremist content on the internet places ethical approaches at the centre of a number of contestations and dilemmas. On the one hand, we must respect the users of the forums, websites, and social platforms and the content that they produce; on the other, we must safeguard the researchers of these topics and the research itself. In this piece I have sketched a few of the dilemmas that researchers must work through when utilising this content, such as the public/private nature of the content and the risk that informed consent poses to the safety of the researcher.

The ethical conundrums of using internet data, researching the radical right, and researching as a marginalised scholar are increasingly being discussed in the academic community, bringing personal, contextual ethical decisions into the open forum. Recently, several conferences have hosted panels explicitly considering this topic, and the Centre for Analysis of the Radical Right (CARR) has announced a new research unit on ethics. Explicit consideration of the multi-faceted concerns around the ethics of researching the radical right can only strengthen the practice of research; as Markham stated (9), ‘Ethics as method, method as ethic’.

 


References:

  1. Massanari, A., “Rethinking Research Ethics, Power, and the Risk of Visibility in the Era of the ‘Alt-Right’ Gaze”, Social Media + Society, 2018
  2. European Commission, “Ethics in Social Science and Humanities”, 2018
  3. Conway, M., “Online Extremism and Terrorism Research Ethics: Researcher Safety, Informed Consent, and the Need for Tailored Guidelines”, Terrorism and Political Violence, 2021
  4. Winter, C., “Researching Jihadist Propaganda: Access, Interpretation, and Trauma”, Resolve Network, 2019
  5. Marwick, A.E. and Caplan, R., “Drinking Male Tears: Language, the Manosphere, and Networked Harassment”, Feminist Media Studies, 2018
  6. Doerfler, P. et al., “‘I’m a Professor, which isn’t usually a dangerous job’: Internet-Facilitated Harassment and its Impact on Researchers”, Computing Research Repository, 2021

This article was provided by CARR (Centre for Analysis of the Radical Right).

Ready: 06.09.21. Editor: Omaina H. Aziz
