Working with Heather Hamilton, and in collaboration with key organisations throughout our sector, CCDH produced a situation analysis and an overview of change pathways, key excerpts from which can be found below.


Identity-based hate appears to be increasing in the UK. Unpredictable and heretofore unthinkable electoral outcomes occur more frequently, with the far right seeing electoral gains across Europe. Young people appear more likely to support authoritarian politics, reversing trends in previous generations. Social media enabled and in some cases incentivised problematic behaviour by political parties in the 2019 UK General Election.


A highly likely, though unquantified, cause of these changes to the fundamental political beliefs, worldviews and concerns of the voting public, and of the resulting appetite for radical, often chaotic, change, is the growing reach, sophistication and ambition of the hate ecosystem, particularly its exploitation of digital spaces and Web 2.0.


There is now considerable evidence of a deliberate, organised and dynamic ecosystem of actors who exploit the modern structure of the internet and social media to organise, fundraise, recruit and spread malinformation, misinformation and disinformation, as well as promote their ideologies. Their ideological goals include rejection of the tenets and key institutions of democracy, erosion of respect for a free press, increased belief in conspiracy theories and fake news, increased support for authoritarianism, and increased intolerance of ethnic and religious minorities, migrants/refugees, women, and other protected groups. Taken as a whole, this threatens not only to increase identity-based hate and violence, but also to suppress democratic discourse and participation by the most disadvantaged, and to undermine fundamental human rights, our ability to enact evidence-based policy, the capacity of conventional politics to provide effective solutions, and popular support for the longstanding foundations of our democracy.


Collaborative work to address this ecosystem of hate is in a somewhat more advanced state in the United States and Europe. While the broader impact of the hate ecosystem is beginning to be recognised in the UK, it is still not prioritised outside a small group of organisations and individuals, nor does there appear to be a common understanding of shared goals and strategies.

Situation Analysis

Online hate 

  • The internet and social media have created new channels and methods to spread hate and disinformation and places for extremist and hate groups to organise and recruit.

  • The structure of the platforms and their algorithms gives weight to extreme views, conspiracy theories and outrage, spreading them further.

  • A combination of anonymity and the dynamics within social media groups drive accelerated polarisation, a well-established phenomenon that has been turbocharged in digital spaces.

  • The interactive nature and significant penetration of social media platforms have increased the amount of online hate and the public's exposure to hate messages, normalising previously fringe beliefs, attitudes and behaviours.

  • Online and offline hate crimes are increasing, and while the link between online and offline hate is still being explored, some studies show correlations between the two.

  • Hate groups are encouraging mass violence online, and individuals are using the platforms to both actively encourage others to commit killings and attacks, and to livestream their crimes.

  • Online hate and disinformation also subtly change public perceptions through repeated exposure to proselytising materials, creating new spaces for radical political actors.

  • Young internet users are particularly vulnerable to extreme content and grooming.


Political impact

  • Platforms are being used to influence elections and politics, and undermine the norms and values of pluralist democracy, through disinformation campaigns, segmented issue campaign advertising and other targeted event-driven interventions.

  • Formerly fringe ideas are finding an audience and becoming normalised.

  • Targeted ads, memes and stories spread ideologies of exclusion, ethno-nationalism and extremism.

  • A broader discourse of “othering” (part of overall affective polarisation) also strongly influences policies, elections, national political discourse and public opinion.

  • The structure of social media incentivises outrage and hate speech, online hate crimes, and enables ordinary citizens to spread hate, conspiracy theories and misinformation.

  • Journalists in print and broadcast media are using online arguments as evidence of public opinion, which skews representation of the public and benefits the most extreme positions.

  • This sanctions the expression of Islamophobia, anti-Semitism, racism, misogyny, anti-LGBTQ and anti-migrant views, increasing hate crimes and prejudice.

  • Online hate silences minority and women’s voices disproportionately, further isolating them from the conversation.


Not happening in a vacuum: the online ecosystem of hate

  • The internet allows previously unconnected groups and individuals, often with limited resources, to reinforce and amplify common messages, as well as share strategies and templates for action.

  • Authoritarian sectarian actors (including some politicians & parties), opponents of open migration, and socially conservative groups with nativist and patriarchal views are finding strategic and, increasingly, tactical alignment with racial and religious bigots.

  • Identity-based hate and ‘othering’, as well as disinformation and dark digital arts such as trolling, and abuse of opponents, are instrumentalised by political actors worldwide to build political support for their parties and positions, or promote chaos in society.

  • Mainstream political parties, grassroots groups, and third party groups have increasingly directly adopted online hate techniques which originated with fringe hate actors.

  • This not only increases identity-based hate, but significantly contributes to multiple trends threatening liberal democracies today: 

    • decreased support for the tenets and institutions of democracy;  

    • lack of confidence in a free press;

    • suspicion of expertise; 

    • increased belief in conspiracy theories and fake news; 

    • increased support for authoritarianism; and, 

    • increased intolerance of ethnic and religious minorities, migrants/refugees, women, and other protected groups.


UK Government response

  • The government’s responses to date have varied in their success and, for the main part, been enacted through a security lens.

  • Local government is often able to identify problems and direct scarce resources at them, but is of mixed effectiveness, and claims to be hindered by a siloed, centralised policy apparatus that changes focus too reactively.

  • The Commission on Countering Extremism, which reported earlier this year, has advanced thinking, identifying hateful extremism as a serious problem and proposing a definition for the government. It is currently negotiating the scope of its work with the new government. 

  • The government issued an Online Harms White Paper in April 2019 and subsequently conducted a consultation in the summer, to which many civil society organisations contributed. The current government said in the December 2019 Queen’s Speech that it will bring forward an Online Harms Bill. A private member’s bill, the ‘Online Harms Reduction Regulator (Report) Bill’, received its First Reading in the House of Lords on 14 January 2020. This is a short ‘paving Bill’ requiring OFCOM and the Government to take steps to prepare for forthcoming legislation on online harms.


Why we need a stronger response from a broader cross-section of civil society

  • The online hate and disinformation ecosystem is undermining our democracy and civil society’s ability to function and support the social good, particularly for the most disadvantaged.

  • Online hate has a disproportionate impact on both the individuals and policy goals of social groups under attack (ethnic and religious minorities, migrants/refugees, women, and other protected groups).

  • Solutions need to encompass platforms, actors and culture simultaneously, because online hate networks are complex, dynamically shifting and platform-agnostic in achieving their goals. Remove one component of their infrastructure alone, and the system can quickly restore full functionality by using different actors, different tactics or different platforms. The speed and negligible cost of internet communication mean this can happen almost instantly.

  • A coordinated, comprehensive approach from civil society, drawing on multiple issue areas and perspectives, is needed to address this problem.

Pathways to Change

Building a common understanding of the goals we are seeking to enact and how we expect change to happen is a critical endeavour for any social change collaboration. 


The framework below, outlining these pathways to change, is presented as a draft for feedback from the community of organisations and individuals working to address online hate. It reflects a synthesis of the field mapping, key informant interviews and survey research. Its purpose is to demonstrate common goals and objectives, while identifying strategic approaches that not all organisations may share.


The framework is intended to help organisations situate their efforts in a broader theory of change and identify their contributions. It may also help identify areas where there are gaps in responses, and areas where progress may be impossible without collective action.


We present this both as a reflection of the research we have conducted (a synthesis of explicit and implicit descriptions by civil society organisations and experts of how change will happen) and as a framework for discussion and debate.


Notes on the Pathways to Change chart


Vision & long-term outcomes: Ultimately, addressing online hate is a means to an end, that of achieving the broader vision of a better society outlined above, and the outcomes of reducing violence, enforcing rights, upholding democracy and evidence-based policymaking. By surfacing a longer-term and broader vision, we hope that ending online hate is situated more clearly as an important contributor to this vision.


Causality: The chart is not intended to show causality between specific strategies/tactics and affiliated outcomes, as one activity or intervention may contribute to several outcomes in significant ways, and a visual representation of these chains of causality is too complex for this exercise. However, individual organisations (or groups of organisations working on joint initiatives) may find it worthwhile to use this depiction of pathways to change to describe their own understanding of how their work or approaches lead to specific changes.


Platforms: Efforts to influence the platforms are flagged as a group of associated tactics to achieve larger strategic goals, and not a larger strategy in and of themselves. While this work is critical to achieving change, over-focusing on the platforms as a strategy in itself, rather than as a means to a strategic end, may lead to other ways of achieving the same strategic goals being overlooked.


Strategies: There is no single silver-bullet strategy for addressing online hate. Current strategic approaches and programmes are skewed towards efforts to stop bad actors, with far less action and thinking on broader social or behavioural approaches to socialising the online space. This area warrants further thought.

Indicators: It may be worth investing in developing a robust set of baselines and indicators for measuring success, particularly against the mid-term outcomes. Currently, assigning outcome indicators or proxy measurements in this area is quite difficult, as the data is for the most part not being gathered at scale by either governments or the platforms themselves (at least not transparently). Platform transparency on a broad range of metrics, such as the number of reports of hate speech or the number of individuals or groups investigated or removed, would be useful here. It is worth considering whether anti-hate groups could develop a common ask around the specific metrics that would demonstrate success (as part of a broader transparency push).

At the same time, even those data that are publicly available do not necessarily reflect the scope of online hate accurately. For example, the Home Office Statistical Bulletin 20/18, “Hate Crime, England and Wales,” contained an annex on experimental data on online hate crimes. While it found that in 2017/18 two per cent (1,605 offences) of all hate crime offences were indicated as having an online element, this relied on police flagging of a crime as online in the reporting, and “it is thought that the use of the online flag is prone to a higher degree of undercounting than other flags.” In addition, civil society organisations that handle complaints of hate crime say that online crimes are under-reported. Therefore, some work on developing accurate indicators across different outcomes, and advocating for the data to be collected and reported, may be worth considering.