Combatting terrorist use of the internet: online prevention and offline integration (part 1)

By Jonathan Russell (International Director, Violence Prevention Network) and Rebecca Visser (International Project Manager, Violence Prevention Network)

Countering terrorism and violent extremism online has predominantly focused on a narrow subset of the problem: the content produced to spread or support it.

This is unsurprising. Terrorist and Violent Extremist Content (TVEC) is the most visible manifestation of the problem, and the easiest thing to count. Since we interact with content across a variety of platforms, the focus is understandably on removing it so that vulnerable audiences and the general population are not exposed to it.

The assumption is that this may stop terrorists from communicating effectively, or vulnerable people from radicalising. Indeed, a robust regulatory landscape has emerged in many jurisdictions to mandate the removal of this content from the internet.

But it remains exactly that: an assumption.

Terrorist and violent extremist activity online is part of a set of behaviours that is not limited to one post, one individual, or one platform, nor exclusive to the internet.

Removing content does little to tackle the demand for this content. Someone who has content taken down will still hold the beliefs that originally prompted them to propagate terrorist or violent extremist messages. Nothing has been done to address the individual’s needs, vulnerabilities, or propensity to violence. And someone who is deplatformed may even return to the same platform with modified account information or move to smaller platforms where they can continue to perpetrate harm.

We need to think more holistically, and bring some of what we’ve learnt offline to bear on this problem. Effective prevention requires an individual (user)-oriented, ecosystemic perspective that more accurately reflects the nature of terrorist and violent extremist use of the internet.

The Big Picture

Violence Prevention Network, together with tech and government partners, is pursuing a long-term strategy with this understanding at its core. Our approach has three parts:

  • Diversions – supporting tech platforms to accurately identify and reach individuals exhibiting online behaviours that indicate a propensity to violence, and co-designing tech and communications interventions to initiate behaviour change
  • Last Mile – ensuring those individuals are diverted to Violence Prevention Network’s centralised off-platform triage centre, where practitioners use our longstanding social diagnostics approach to understand each individual’s needs so they can be diagnosed, triaged, and matched with the practitioners best placed to support them, prioritising retention and continuity of care throughout
  • INDEX – levelling up tertiary prevention through practitioner-to-practitioner exchange, via a network that aims to be the first global professional association for disengagement and exit practitioners in P/CVE. Practitioner members will receive training to take online referrals through this new pathway and will be supported to deliver effective interventions and outcomes for these clients.

This post is the first in a series on these topics, and focuses on the first of the three:

Diversions

Many tech platforms already have robust systems to identify users who break their terms of service. They are uniquely placed to understand the ecosystems around these individuals, and the online behaviours that come before such violations. Together, we can better identify those who are indicating vulnerability to radicalisation or a propensity to violence.

Tech platforms are so very nearly there. They already collect data on interesting and valuable signals that may indicate membership of the target audience. But perhaps they aren’t asking the right questions of those signals, or aren’t combining them in a meaningful way. Or perhaps they are not yet convinced of the need to do so. And let’s be clear: this sort of precision in target audience identification is not possible without partnership with tech platforms.

But together, we can fully explore a range of audience identification options. We could consider:

  • Users who have broken terms of service relating to terrorism or violent extremism multiple times within a short period
  • Users who have a high degree of connection with other users who have been permanently banned for terrorism or violent extremism
  • Users who consume and/or engage with a high volume of terrorist or violent extremist content
  • Users who search for keywords or phrases related to terrorism or violent extremism

All these signals are interesting and valuable, though likely with diminishing precision as you move down the list. All of them, however, likely indicate some degree of vulnerability to radicalisation or propensity to violence, and the picture becomes more precise when the signals are combined and layered.
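To make the idea of combining and layering concrete, here is a minimal sketch of how the four signals above might be weighted and folded into a single score. Everything in it is hypothetical: the field names, weights, caps, and combination bonus are illustrative placeholders, not a description of any platform’s actual system, and a real implementation would need to be calibrated against labelled data and policy review.

```python
from dataclasses import dataclass

@dataclass
class UserSignals:
    recent_tos_violations: int       # TVEC-related ToS violations in a short window
    banned_connections_ratio: float  # share of connections permanently banned (0-1)
    tvec_engagement_volume: int      # items of TVEC consumed or engaged with
    tvec_search_count: int           # searches for terrorism/VE-related keywords

# Weights decrease down the list, mirroring the diminishing precision noted above.
WEIGHTS = {
    "violations": 4.0,
    "connections": 3.0,
    "engagement": 2.0,
    "searches": 1.0,
}

def composite_score(s: UserSignals) -> float:
    """Combine the four signals into one weighted score (hypothetical)."""
    base = (
        WEIGHTS["violations"] * min(s.recent_tos_violations, 5)
        + WEIGHTS["connections"] * s.banned_connections_ratio * 5
        + WEIGHTS["engagement"] * min(s.tvec_engagement_volume / 10, 5)
        + WEIGHTS["searches"] * min(s.tvec_search_count / 10, 5)
    )
    # Layering: several independent signals firing together is more informative
    # than any one signal alone, so apply a modest multiplier per extra signal.
    active = sum([
        s.recent_tos_violations > 0,
        s.banned_connections_ratio > 0.2,
        s.tvec_engagement_volume > 10,
        s.tvec_search_count > 5,
    ])
    return base * (1 + 0.25 * max(active - 1, 0))
```

The design choice worth noting is the multiplier: a user who trips several of the weaker signals at once ends up scoring higher than a user who trips a single signal heavily, which is the layering effect described above.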

Tech platforms already have the systems for a scaled response to these individuals, but so far they use those systems exclusively for enforcement actions, such as removing content or banning users. We are supporting platforms to pivot even 1% of that capacity towards a user-oriented, prevention-centred approach.

This Diversions approach simply says: if you’re not confident that users exhibiting these signals (even in combination) meet your threshold for an enforcement action, consider a targeted preventative action.
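Expressed as an equally hypothetical sketch, the decision rule might look like the following: above a high-confidence threshold the existing enforcement pipeline applies as before, while a band just below it triggers a targeted preventative action instead of no action at all. The thresholds are placeholders (a score such as the one sketched earlier could feed them), not recommendations.

```python
# Hypothetical thresholds; in practice these would be set by the platform's
# trust and safety policy and validated empirically.
ENFORCEMENT_THRESHOLD = 25.0  # confident ToS violation: remove content, ban user
DIVERSION_THRESHOLD = 10.0    # below enforcement confidence, above baseline concern

def route_user(score: float) -> str:
    """Route a user by combined signal score (illustrative only)."""
    if score >= ENFORCEMENT_THRESHOLD:
        return "enforcement"  # existing pipeline: removal, suspension, bans
    if score >= DIVERSION_THRESHOLD:
        return "diversion"    # targeted preventative action: pop-up,
                              # notification, or referral to support services
    return "no_action"
```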

Without the need to share data or information with third parties, tech platforms can communicate with these users. This should be evidence-based, and there is a strong body of research on behaviour change communications, both from the P/CVE world and from neighbouring harm areas. Tech platforms may decide to communicate in their own brand or “voice”, or work with third parties to devise online interventions.

This range of interventions should be appropriate to the architecture of the online space in which the user is engaging, and could be delivered through pop-ups, account-level notifications, and/or required activity to reinstate certain user access. The measures themselves could include:

  • Informational/educational resources
  • Warning/deterrence-based messages
  • Inoculation/safety messages

The gold standard would be to encourage engagement with off-platform, civil society practitioner support services like Violence Prevention Network; this would form a key element of a targeted online-offline referral pathway.

Conclusion

Of course, we can look at this problem from the opposite angle. Offline, Violence Prevention Network receives a considerable number of referrals to our advice centres through frontline workers such as teachers and healthcare professionals, or from concerned bystanders such as family and friends. We also train these stakeholders to spot the signs of radicalisation and raise their awareness of the support available.

And yet, we have no comparable professionals, institutions, or bystanders online. This does not match the emerging nature of the threat, or the significance of the online ecosystem to the radicalisation process.

Engaging tech platforms that can identify, reach, and communicate with the target audience as precisely online as these stakeholders do offline is an absolute necessity for effective prevention.

And effective prevention relies on building a patchwork of interventions, understanding and respecting people and their ability to change, and working together in multistakeholder environments. Diversions sets a framework for how we can do this.