

We present an online experiment in which we investigate the impact of perceived social acceptability on online hate speech and measure the causal effect of specific interventions. We compare two types of interventions: counter-speaking (informal verbal sanctions) and censoring (deleting hateful content). The interventions are based on the belief that individuals infer acceptability from the context, using previous actions as a source of normative information, and on the two conceptualizations of social norms found in the literature: (i) what others normally do, i.e. descriptive norms, and (ii) what happens to those who violate the norm, i.e. injunctive norms.

Participants were significantly less likely to engage in hate speech when prior hate content had been moderately censored. Our results suggest that norm adherence in online conversations might, in fact, be motivated by descriptive norms rather than injunctive norms. With this work we present some of the first experimental evidence on the social determinants of hate speech in online communities. The results could advance the understanding of the micro-mechanisms that regulate hate speech, and such findings can guide future interventions in online communities to help prevent the spread of hate.
