As delivered by Jonathan Russell (International Director, Violence Prevention Network) at a UNGA80 side event hosted by the Global Internet Forum to Counter Terrorism (GIFCT)
A huge thank you for the warm welcome to New York and the invitation to join the Global Internet Forum to Counter Terrorism. As Chair of your Independent Advisory Board, I want to commend you for the international, multistakeholder panel you have put together here. It is an honour to lead 18 governmental, academic, and civil society experts in advising GIFCT and its operating board. And earlier this week we were pleased to deliver our quarterly advisory report on online gender-based violence and misogynistic extremism.
Two big themes have run through today's discussions: extremist behaviour online, and the need to coordinate in order to prevent it. My presentation builds on both, focusing in particular on online-offline coordination and the unique role tech can play in identifying, addressing, and facilitating change in extremist behaviours.
The story of Violence Prevention Network
Violence Prevention Network is best known for supporting the disengagement of over 4,000 radicalised individuals in Germany’s prisons and delivering targeted prevention interventions to many thousands more at-risk individuals outside of prisons.
This work is effective because it is targeted. It allows us to fully understand our target audience and personalise interventions to them, without the risks that come with reaching a general population. And it can be targeted because we have worked with the German government to build structures in partnership with many different institutions and stakeholder groups: educators, law enforcement, healthcare, the prison estate. These professions are important – they understand the target audience and have the opportunity to intervene. They are motivated to safeguard them. And with our help, they can develop the capability to do so.
So we develop robust referral mechanisms, grounded in an understanding of the target audience, their experiences of radicalisation, and these stakeholders. And we have trained a vast number of frontline workers in these professions to be active bystanders: they now know how to spot the signs of radicalisation and make referrals through to our services.
At that point, our 100+ practitioners – social workers, theologians, psychotherapists – take over and do their thing. And it works. We reduce reoffending; we reduce the violent threat to society; and we support individuals to become engaged, fulfilled members of their communities.
But here’s the problem: Radicalisation chiefly happens online now. And these institutions and frontline professionals? They’re not online. There’s a dramatic and widening gap between the digital native and the digital naïve. So where are the active bystanders online? And where are the referral mechanisms?
Trust and Safety teams at tech platforms as important players
The answer is Trust and Safety teams at tech platforms, and the AI systems that underpin them. These teams and systems are set up to understand the target audience. They collect a vast amount of behavioural data which, if interrogated properly, can identify the same warning signs of radicalisation, or indicators of a propensity to violence. This enables precision targeting, at scale, of the same audience that we support offline.
But at the moment, they don't use this data in this way. On the one hand, they use it to make a small number of reports to law enforcement agencies when individuals indicate illegal behaviour or an imminent threat to life. On the other hand, they use it to make a large number of content moderation decisions: removing content that violates their platform's terms of service, and deplatforming users who repeatedly violate those terms.
There is a huge gap between these two uses. And that gap should be filled by interventions where the goal is prevention. Prevention of escalation to violence when early warning signs are present. And prevention of further on-platform violations of terms of service. To fill this gap, we have created Diversions.
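To make that gap concrete, here is a minimal sketch, in Python, of how a platform's existing signals might be routed into one of three tiers – law enforcement reporting, content moderation, or a prevention diversion. The signal names, threshold, and three-tier split are illustrative assumptions made for this argument, not any platform's actual system.

```python
from dataclasses import dataclass
from enum import Enum, auto

class Action(Enum):
    REPORT_TO_LAW_ENFORCEMENT = auto()  # small number: illegality / imminent threat to life
    MODERATE_CONTENT = auto()           # large number: terms-of-service violations
    DIVERT_TO_PREVENTION = auto()       # the gap this talk describes
    NO_ACTION = auto()

@dataclass
class UserSignals:
    """Illustrative behavioural signals a platform might already hold (assumed names)."""
    imminent_threat: bool   # e.g. a credible threat-to-life indicator
    tos_violations: int     # recent terms-of-service strikes
    risk_score: float       # 0.0-1.0 propensity-to-violence score from existing models

def route(signals: UserSignals) -> Action:
    if signals.imminent_threat:      # tier 1: report to law enforcement
        return Action.REPORT_TO_LAW_ENFORCEMENT
    if signals.tos_violations > 0:   # tier 2: content moderation / user enforcement
        return Action.MODERATE_CONTENT
    if signals.risk_score >= 0.6:    # the gap: early warning signs below both thresholds
        return Action.DIVERT_TO_PREVENTION  # (illustrative threshold)
    return Action.NO_ACTION
```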
Diversions is the ultimate answer to online prevention of extremist use of the Internet
Diversions supports platforms, in consultation with experts, to design these prevention interventions across three behavioural aspects (a hypothetical configuration sketch follows this list), by:
- determining the right on-platform behavioural signals to indicate inclusion in the target audience
- designing the right tech and behaviour change communications interventions
- bridging these to off-platform programming to enable long-term behaviour change support, such as that provided by Violence Prevention Network
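As a rough illustration of how those three aspects could hang together, here is a hypothetical configuration for a single diversion; every field name and value is an assumption made for this sketch, not the actual Diversions specification.

```python
from dataclasses import dataclass, field

@dataclass
class DiversionConfig:
    """Hypothetical shape of one Diversions intervention (all fields assumed)."""
    # 1. On-platform behavioural signals that define the target audience.
    inclusion_signals: list[str] = field(default_factory=lambda: [
        "repeated engagement with borderline extremist content",
        "membership of flagged groups",
    ])
    # 2. The on-platform behaviour change communication to deliver.
    intervention_message: str = (
        "It looks like you're exploring some difficult topics. "
        "Confidential, judgement-free support is available."
    )
    # 3. The bridge to off-platform, long-term behaviour change support.
    referral_partner: str = "Violence Prevention Network"
    requires_user_consent: bool = True  # the user chooses to click through
```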
But why should platforms be motivated to do this?
- Safety: This helps them achieve their core safety goal of reducing harm on their platforms.
- Trust: This improves the health of their platforms' ecosystems, giving users a better experience – and they can publicise their work with this approach.
- Commercial: This reduces the need for negative actions like law enforcement reporting, content moderation, or user enforcement, which can save money and keep users on platform for longer.
- Regulatory: This helps demonstrate proactive engagement with risk assessment guidance under the Digital Services Act and the Online Safety Act.
- Moral: This uses their data for good in a way that they are uniquely positioned to do, contributes to the broader counter-terrorism landscape, and protects children.
And are they motivated to do this?
- At a working level, trust and safety teams have called for this. But they don't always feel they have the backing of their leadership and cross-functional networks to innovate in this way. They have asked Violence Prevention Network to codify this work into a series of recommendations and guidelines – which we are now doing – leading through to the co-design of a voluntary Code of Conduct. This will help elicit public commitments from senior leaders and enable inclusion in future roadmaps.
- Trust and safety teams have also asked us to demonstrate an evaluation framework, so that on-platform metrics and off-platform behaviour change data can together quantify the impact of this approach (a minimal sketch of such a framework follows this list). We are now doing this through a series of pilots with several partner platforms in several different locations.
- And we have found that some platforms are more keen to try this with jihadist audiences; others with young audiences; others see value in this approach bridging their terrorism, child safety, and suicide and self-harm teams; and others are specific about the locations where they are interested in pursuing it.
- And through consultation, we have understood some tech platforms’ barriers to adoption of these approaches, which we have now addressed:
- They want this to be scaled globally, so they would rather build one partnership than many. At Violence Prevention Network we have therefore built a centralised off-ramp for online-offline referrals. With this service, our practitioners make first contact with the target users, triage and diagnose them, and then route them, with warm hand-offs, to the best source of onward support. We have integrated this with INDEX – the International Network for Disengagement and Exit – which enables practitioners from over 20 countries to feed into the advisory work for tech platforms and to act as the end referral points.
- They find direct engagement with public/police-led prevention services difficult, because this target audience falls below the threshold of illegality; so, they appreciate Violence Prevention Network as a civil society organisation acting as a “middleman”.
- They want to balance privacy and security, so they are tentative about sharing data on these users with a third party, and keen for prevention interventions to be consent-driven: users must have the autonomy to click through to Violence Prevention Network, even where precision targeting is a key part of the programme. They are, however, keen to collaborate on impact measurement across the whole user journey.
- And they want this to be integrated with their current trust and safety systems: so we are collaborating with them to explore the engineering and data science lift, and the ability to scale this across their products, services, and apps, to make it as easy as possible to operationalise.
- And they're interested in the cross-industry effects of this: all tech platforms using consistent language to talk about prevention interventions, consistent metrics to measure their impact, and a shared sense of risk mitigation.
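As a flavour of what the evaluation framework mentioned above might look like, here is a minimal sketch that joins on-platform metrics with off-platform behaviour change data for a cohort of consented, pseudonymised cases; the metric names are assumptions for illustration, not agreed pilot measures.

```python
from dataclasses import dataclass
from statistics import mean

@dataclass
class CaseOutcome:
    """One consented referral, tracked across the whole user journey (fields assumed)."""
    case_id: str            # pseudonymised identifier, never raw user data
    clicked_through: bool   # on-platform: user accepted the diversion
    violations_before: int  # on-platform: ToS strikes before the intervention
    violations_after: int   # on-platform: ToS strikes after the intervention
    engaged_offline: bool   # off-platform: took up practitioner support

def summarise(cases: list[CaseOutcome]) -> dict[str, float]:
    """Aggregate impact metrics across a pilot cohort."""
    if not cases:
        return {}
    return {
        "click_through_rate": mean(c.clicked_through for c in cases),
        "offline_engagement_rate": mean(c.engaged_offline for c in cases),
        "mean_violation_change": mean(
            c.violations_after - c.violations_before for c in cases
        ),
    }
```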
With this, can you imagine a future where prevention interventions and deradicalisation support are available to all those whose online behaviours indicate propensity to violence? And where all tech platforms have an additional tool in their toolbox to keep their users safe online? We can.
So how can you get involved?
- Come and partner with us at Violence Prevention Network to help us build this.
- If you're a researcher, come help us build out this framework and proof it against adversarial shift.
- If you’re a policymaker or regulator, come explore what this might look like in your country, or how it could support your online terrorism prevention goals.
- If you’re a practitioner, come and join INDEX, so you can advise platforms and receive cases; or ensure that your interventions are targeted to the right people.
- And if you’re a tech platform, come have a conversation about building this for your users. It is truly a multistakeholder endeavour, and we are excited to collaborate.
Get in touch: international@violence-prevention-network.de or dive deep here: https://violence-prevention-network.com/who-we-are/