

Title: Moving beyond abuse filters?

Speakers:

Leaderboard


Room:

Start time:

End time:

Type: Lecture

Track: Technology

Submission state: submitted

Duration: 30 minutes

Do not record: false

Presentation language: en


Abstract & description

Abstract

Abuse filters have been a mainstay of Wikimedia projects since ~2010, and are many wikis' primary line of defence against vandalism and abusive edits. Yet in that time, little has changed in how they operate. Should we be doubling down on filters, or is it time to look at other options?

Description

Historically, projects have used abuse filters as a mechanism to pre-emptively handle cases of vandalism and abusive edits. As the number of LTAs (long-term abusers) has grown, so has the use of abuse filters. Abuse filters are surprisingly customisable with the help of regular expressions (regex). Even so, many communities are struggling to maintain a balance between keeping out bad-faith users (especially those who actively seek to bypass a filter) and ensuring that legitimate users, especially new ones, can continue to contribute without being caught (or worse, blocked) by a filter. That balance matters because many newcomers struggle to understand what abuse filters are and why their edits aren't getting through.
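
To make the regex point concrete, below is a minimal Python sketch of the idea (the patterns and the function are made up for illustration; real filters are written in AbuseFilter's own rule language, not Python):

    import re

    # Hypothetical patterns, loosely analogous to what a wiki's filters
    # might match against the text added by an edit.
    BLOCKED_PATTERNS = [
        re.compile(r"(?i)buy\s+cheap\s+\w+"),  # spam-like phrasing
        re.compile(r"(.)\1{9,}"),              # a character repeated 10+ times
    ]

    def filter_hits(added_text):
        """Return the patterns that the added text of an edit trips."""
        return [p.pattern for p in BLOCKED_PATTERNS if p.search(added_text)]

    print(filter_hits("BUY CHEAP watches here!!!"))     # trips the spam pattern
    print(filter_hits("Fixed a typo in paragraph 2."))  # trips nothing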

In this lecture, I plan to use my experience of maintaining abuse filters on Wikibooks (and helping communities cross-wiki) to discuss the limitations of abuse filters, and to explain how alternatives such as neural networks can potentially allow communities to minimise false positives while keeping bad-faith edits out of the wiki.
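
As a rough illustration of what such an alternative could look like, here is a minimal sketch of a small neural network that scores edit text, using scikit-learn and toy data (the example edits, labels, and model settings are assumptions made for illustration; a real system would train on a large corpus of patroller-reviewed edits):

    from sklearn.feature_extraction.text import HashingVectorizer
    from sklearn.neural_network import MLPClassifier

    # Toy labelled edits; 0 = good faith, 1 = bad faith.
    edits = [
        "fixed a typo in the introduction",
        "added a citation to the history section",
        "BUY CHEAP PILLS at example dot com",
        "aaaaaaaaaaaaaaaaaaaa lol lol lol",
    ]
    labels = [0, 0, 1, 1]

    vectoriser = HashingVectorizer(n_features=2**12, alternate_sign=False)
    X = vectoriser.transform(edits)

    # A small feed-forward neural network over bag-of-words features.
    clf = MLPClassifier(hidden_layer_sizes=(32,), max_iter=500, random_state=0)
    clf.fit(X, labels)

    # Unlike a regex, the model returns a score rather than a hard match.
    new_edit = vectoriser.transform(["reverted unexplained removal of text"])
    print(clf.predict_proba(new_edit))

Because the output is a probability, a community could tune the blocking threshold to trade false positives against missed vandalism, instead of a regex either firing or not.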

It should be noted that while this is classified as a "lecture", I've no issues with people asking questions.

The nature of this event means that participants are expected to have experience with abuse filters or at least dealing with vandalism; some understanding of Python and/or machine learning would be an advantage.

Further details

Qn. How does your session relate to the event themes: Diversity, Collaboration, Future?

Diversity: abuse filters are often difficult for many users to understand. While most wikis have a process for reporting false positives, in my experience many new users (who may be making their first edit on Wikimedia and have little idea of how wikis work) struggle with that process, and ironically they are the group most abuse filters end up catching! As a result, many good edits go unnoticed, since most wikis have no mechanism to regularly comb through the filter logs for good edits. My proposal is an attempt to reduce the chances of this happening.

Future: abuse filters have their limitations, and my proposal discusses how related approaches (such as neural networks) can potentially serve as the "future" of what we currently have.

Collaboration: anti-vandalism is a community effort, and my lecture is likely to help like-minded users, such as (but not limited to) stewards and members of the SWMT (Small Wiki Monitoring Team, the group that focuses on routine anti-vandalism on small wikis). It can also help users who are regularly harassed by LTAs (long-term abusers).

Qn. What is the experience level needed for the audience for your session?

The session is for an experienced audience

Qn. What is the most appropriate format for this session?

  •   Onsite in Singapore
  •   Remote online participation, livestreamed
  •   Remote from a satellite event
  •   Hybrid with some participants in Singapore and others dialing in remotely
  •   Pre-recorded and available on demand