By Jonathan Russell (International Director, Violence Prevention Network), Emily Traynor Mayrand and Rebecca Visser (co-authors, International Department – Violence Prevention Network)
On 8 May 2025, the Australian Department of Home Affairs, Violence Prevention Network, and Senate SHJ co-hosted an event in Sydney, bringing together over 60 participants from tech platforms, academia, practice, and policymaking to discuss new avenues to combat terrorist use of the Internet. The discussions were held to inform the development of a pilot programme for diverting at-risk individuals towards disengagement support.
Outcomes
Through the keynote, several multistakeholder panels, and two breakout sessions, participants arrived at the following high-level outcomes:
- Safety and prevention need to be built and integrated into platforms and systems. Current content moderation systems are still missing obvious terrorist and violent extremist content, and borderline content remains challenging for platforms.
- In addition to evolving current systems, the countering violent extremism (CVE) space needs to move beyond content removal towards more proactive approaches to combatting terrorist use of the Internet. We need to intervene earlier and tailor interventions to individuals. A wide spectrum of observable online behaviours can be used to identify people at risk and to tailor such interventions accordingly.
- Though technological advances, including artificial intelligence (AI), have a role to play in supporting these efforts, participants underscored the need to retain the human element, citing human-in-the-loop approaches. In particular, there was consensus around the need to maintain awareness of, and connection to, offline contexts when engaging at-risk individuals.
- Collaboration and cooperation with youth, and with practitioners outside the CVE space, are essential; these stakeholders need to be integrated into meaningful consultation processes so the CVE field can learn from best practice in neighbouring disciplines.
- Effective prevention relies on a patchwork of interventions that is resilient to the transnational and constantly shifting nature of the threat, and sustainable enough to survive external shocks, emerging technologies, and shifting political and funding dynamics.
Highlights
Participants engaged in four panel discussions covering the role of prevention in combatting terrorist use of the Internet, government perspectives on the issue, how technology is used to find those at risk, and how disengagement practitioners engage with clients online and offline.
In the first panel, participants noted that tech platforms use a variety of tools to combat terrorist content online and do work with governments and regulators, but that these tools and processes have limitations and challenges. Benchmarking across the tech industry was raised as a potential solution, as were whole-of-government, whole-of-industry, and whole-of-society approaches. Panellists highlighted that terrorist content is present across platforms and spaces, and as such requires collaboration and a wider set of actions than are currently being taken.
Participants highlighted the challenge of young people being increasingly drawn into these spaces without intentionally seeking them out, and the constellation of risk factors at play. During the Q&A, this session also turned to the need for the CVE space to learn from other harm types, such as suicide prevention, child sexual exploitation material (CSEM), and eating disorders, which have had to navigate similar challenges in engaging with those at risk, including youth, online.
During the government panel, speakers noted that they work with tech platforms where they can, and in spite of them when they must, highlighting non-compliance and platform hesitancy as key issues. Speakers observed that the problem is becoming more complex and demands a more proactive approach, stressing the need to fix the online environments in which people, especially young people, encounter this content, rather than simply blaming them. They noted that a variety of good initiatives are underway, but that global collaboration is needed, including with less traditional partners such as youth justice, child protection, and family group counselling.
The sheer scale of terrorist content online was raised during the government panel and discussed further in the tech panel. As technological capabilities grow, the problem is expanding faster than practitioners can respond, with terrorists and extremists proving incredibly adaptable and prolific in their output. Speakers also highlighted pilots of online-to-offline referral systems, noting that the scalability and sustainability of these efforts have always been the sticking points. In response to both issues, speakers underscored the need for sustainable funding and systems.
The tech panel also discussed the spectrum of risk indicators present in individuals' online behaviour, which makes it possible to identify people who are at risk. Panellists highlighted linkages between platforms and networks: terrorists and violent extremists operate on mainstream platforms and direct people to decentralised platforms where more extreme content is found, reinforcing the importance of transparency and collaboration across industry and sectors to better respond to the issue. Speakers also noted the need to build safety and resilience into platforms; tools exist to support moderation, but intervention opportunities need to be built into platforms as well.
The practitioner panel discussed how practitioners are adapting to online work, noting that the environment is complex for them and that adaptation will take time. The panel also discussed the importance of treating individuals as individuals, leveraging the protective factors that are present (which can at times include the Internet itself), and working with clients' networks and environments. Ongoing consent was highlighted as key to a successful positive intervention.
Recommendations
In addition to these discussions, participants took part in two roundtable exercises on online prevention, brainstorming potential positive interventions and how these could be built and implemented. Recommendations put forward included:
Adapting more from the public health approach:
- Recognising the strengths of a public health model focused on prevention and cross-cutting measures, while considering how to adapt it to online (and online-to-offline) contexts.
- Building on this to include more effective strategic communications, ensuring clear target audiences for intervention messaging, increasing public understanding of risk, and building confidence in taking proactive steps towards prevention.
- Increasing and supporting NGO development to better underpin the work of practitioners and other sectors, keeping government at arm’s length to give NGOs autonomy, professionalising practice, and establishing clear roles and responsibilities.
- Understanding the barriers to collaboration between NGOs and other sectors (government, tech), and proactively including NGOs in co-creation, consultation, and collaboration, even for online-first interventions.
Integrating needs assessment:
- Understanding the spectrum of needs that at-risk individuals may have, and continually reappraising cases, appreciating that ideology, if present, may be slow to identify.
- Working to address the individual’s behaviours in the context of their full vulnerabilities and needs, rather than fixating on ideology.
- Increasing the capacity of frontline workers to understand the variety of needs an individual may have, and building a strong community of support to address them.
- Considering how to adapt best practice from Social Diagnostics work to online-first (or online-only) conversations.
- Building these lessons into every step of the user journey and every communications intervention.
Positive diversion interventions to offline spaces or elsewhere:
- If an individual is engaging with concerning material and behaviours online, they could be asked to complete a survey before continuing their access to a given site. The survey could provide a moment of self-reflection and learning, while also gathering information about how the individual started engaging with the material and what their broader needs are, helping to inform further intervention if needed. It could also provide an opportunity to engage with and understand underlying issues, rather than leading with a direct anti-terror framing (a minimal sketch of such a survey gate follows this list).
- Directing at-risk individuals to pro-social activities and communities, offline or elsewhere online, and building safe online and offline spaces to direct people towards.
- Building such diversions to match the ecosystem and needs currently filled by extremist communities, particularly in spaces such as gaming platforms, might increase engagement.
- Thinking outside CVE by learning from best-practice interventions in other harm areas, including minimising the risk of alienating people and addressing the system and organisational issues that can hinder collaboration.
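To make the survey-gate idea above more concrete, below is a minimal sketch in Python. Everything in it is hypothetical: the `REFLECTION_QUESTIONS`, the `gate_access` hook, and the follow-up heuristic stand in for whatever mechanism a platform and its practitioner partners would actually design.

```python
from dataclasses import dataclass, field

# Hypothetical reflective questions shown before access to a flagged page
# is restored. Real questions would be designed with practitioners.
REFLECTION_QUESTIONS = [
    "How did you first come across this material?",
    "What are you hoping to find here?",
    "Is anything offline worrying you at the moment?",
]

@dataclass
class SurveyResponse:
    user_id: str
    answers: dict = field(default_factory=dict)

    def needs_followup(self) -> bool:
        # Placeholder heuristic: flag for practitioner follow-up if the user
        # reports offline distress. A real system would use a practitioner-
        # designed needs assessment, not this simple check.
        worry = self.answers.get(REFLECTION_QUESTIONS[2], "").strip().lower()
        return bool(worry) and worry not in ("no", "nothing")

def gate_access(user_id: str, answers: dict) -> dict:
    """Pause access, record the reflective survey, and decide next steps."""
    response = SurveyResponse(user_id=user_id, answers=answers)
    return {
        # Access is restored either way: the gate is a moment of
        # self-reflection, not a punishment or a ban.
        "restore_access": True,
        "refer_to_practitioner": response.needs_followup(),
        "log_for_needs_assessment": True,
    }

if __name__ == "__main__":
    decision = gate_access(
        "user-123",
        {REFLECTION_QUESTIONS[2]: "I've been having a hard time at school."},
    )
    print(decision)
```

Note that access is restored regardless of the answers: keeping the gate non-punitive is what allows it to engage underlying issues rather than lead with an anti-terror framing.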
Opportunities for AI:
- Recognising the potential for AI to go beyond monitoring platforms for violating or illegal content and to proactively scan for legal-but-harmful content, in order to identify target audiences, networks, and ecosystems where positive interventions may be valuable.
- Training AI to identify and predict behaviours indicating a propensity to violence, in order to reach an at-risk target audience, and using AI to automate the deployment of positive interventions.
- Training AI on practitioner best practice to scale up screening, triage, and referral, enabling at-scale online-to-offline referrals without running into practitioner capacity or safety issues (see the sketch after this list).
- Considering consent and privacy concerns, as well as technical challenges, with all of these potential applications of AI.
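As a rough illustration of what a human-in-the-loop pipeline for these AI applications might look like, the sketch below routes model outputs by confidence rather than acting on all of them automatically. The `Signal` structure, the `AUTOMATION_CEILING` threshold, and the consent check are assumptions for illustration, not a recommended configuration.

```python
from dataclasses import dataclass

@dataclass
class Signal:
    user_id: str
    content_id: str
    risk_score: float    # model-estimated risk, 0.0-1.0 (stubbed upstream)
    consent_given: bool  # has the user agreed to proactive contact?

# Illustrative threshold only; in practice it would be set and continually
# reviewed with practitioners, not hard-coded.
AUTOMATION_CEILING = 0.5

def triage(signal: Signal) -> str:
    """Route a risk signal, automating only low-stakes positive interventions
    and keeping a human practitioner in the loop for everything else."""
    if not signal.consent_given:
        # Without consent, limit action to passive, privacy-preserving options
        # such as publicly surfaced support resources.
        return "passive_resources_only"
    if signal.risk_score < AUTOMATION_CEILING:
        # Low-stakes automated step, e.g. surfacing pro-social communities
        # or the reflective survey sketched above.
        return "automated_positive_intervention"
    # Higher-risk signals go to a practitioner queue for screening, needs
    # assessment, and a possible online-to-offline referral.
    return "practitioner_review_queue"

if __name__ == "__main__":
    print(triage(Signal("user-1", "post-1", risk_score=0.3, consent_given=True)))
    print(triage(Signal("user-2", "post-2", risk_score=0.8, consent_given=True)))
    print(triage(Signal("user-3", "post-3", risk_score=0.8, consent_given=False)))
```

The design point is that AI widens the funnel (screening and triage at scale) while the highest-stakes decisions, and any online-to-offline referral, stay with practitioners.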
Further Reading
Moonshot – Adapting Violence Prevention to the Digital World: A Framework for Action – https://moonshotteam.com/news-and-resources/
C. Winter and B. Crawford (2023) The Virtualisation of Terror – Violent Extremism on the Internet Today
J. Russell and R. Visser (2025) Combatting Terrorist Use of the Internet: Online Prevention and Offline Integration