Anyone who manages or participates in online communities will have observed "bad" behaviour at one time or another. John Davis at Microsoft Research has published an eight-page report, The Experience of ‘Bad’ Behavior in Online Social Spaces: A survey of users, that many will find useful for understanding the prevalence, causes and effects of anti-social behaviour across a variety of internet media.
In the conclusion to the report, Davis writes that "respondents reported that they experience such behavior frequently in several online social environments, and it adversely impacts their own behavior in several ways – they leave and avoid contexts where bad behavior occurs. The cost of bad behavior is therefore high."
Davis cites anonymity and a lack of accountability as two of the primary causes of anti-social behaviour online.
So how can community managers curtail bad behaviour? Davis says that "because many system level responses, like having live moderators to respond to bad behavior, are extremely expensive, it would be preferable to have individual users deal with bad behavior instead. However, the results also show that typical user responses directed at perpetrators of bad behavior are ineffectual." He suggests the use of online profiles to encourage responsible participation and accountability. Reputation systems, Davis adds, are also useful in that they give other users some ability to encourage positive behaviour from their peers. [see paper]
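To make the reputation idea concrete, here is a minimal sketch of how a peer-reputation tally might work: members rate each other's contributions up or down, and anyone whose net score falls below a threshold is flagged for moderator or community attention. This is purely illustrative; Davis's paper does not specify an implementation, and the names (Member, receive_rating, FLAG_THRESHOLD) and the threshold value are invented for this example.

```python
from dataclasses import dataclass

# Hypothetical peer-reputation tally. Nothing here comes from Davis's paper;
# the class, methods and threshold are assumptions made for illustration.

FLAG_THRESHOLD = -5  # arbitrary example value


@dataclass
class Member:
    name: str
    score: int = 0            # running net reputation from peer ratings
    ratings_received: int = 0

    def receive_rating(self, delta: int) -> None:
        """Apply a peer rating of +1 (helpful) or -1 (anti-social)."""
        self.score += delta
        self.ratings_received += 1

    def needs_review(self) -> bool:
        """True if this member's behaviour should be reviewed by a moderator."""
        return self.score <= FLAG_THRESHOLD


# Example usage: a persistently anti-social member accumulates negative
# ratings and is eventually flagged for review.
if __name__ == "__main__":
    troll = Member("anon123")
    for _ in range(6):
        troll.receive_rating(-1)
    print(troll.name, "flagged for review:", troll.needs_review())
```

The point of the sketch is simply that reputation shifts the cost of policing from paid moderators to the peers who witness the behaviour, which is the trade-off Davis highlights.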
Related Cybersoc Entries:
How not to deal with message board users
Moderation and Hosting: What? Who?
Flame Warriors: recognise yourself?
His comment about reputation methods is unfounded; he's just surmising.
I have long believed that community designers don't know about the philosophical discussions that have occurred around justice and punishment, and so have not implemented all the aspects of "justice": punishment, retribution, repayment and rehabilitation. When these four are fully and appropriately addressed, I think better online interaction will occur.
As long as we do not create *The* Leviathan, but thousands of micro-level leviathans, it can work. By "leviathans" I mean the moderators who can be chosen from among the users in forums and groups. If we choose one of our own as our sheriff, this is quite sane and seems to meet what Davis suggests. I don't see the cost problem with moderators, at least not in this case, though it might emerge if we are talking about "bad behavior online" across so many fields. Obviously synchronous communication causes much more difficulty.
Still, I can see a smart "sheriff" role being given to players in MMORPGs, so that the players themselves deal with the bad behavior problem. It happened in IRC, and it happens in chats…