Sunday, December 6, 2015

Disparate impact of damage-detection on anonymous Wikipedia editors

Today, I'm writing briefly about a problem that I expect to be studying and trying to fix over the course of the next few weeks.

The problem: The damage detection models that ORES supports seem to be overly skeptical of edits by anonymous editors and newcomers.

I've been looking at this problem for a while, but I was recently inspired by the framing of disparate impact. Thanks to Jacob Thebault-Spieker for suggesting I look at the problem this way.
In United States anti-discrimination law, the theory of disparate impact holds that practices in employment, housing, or other areas may be considered discriminatory and illegal if they have a disproportionate "adverse impact" on persons in a protected class.
(via Wikipedia's Disparate Impact, CC-BY-SA 4.0)
So, let's talk about some terms and how I'd like to apply them to Wikipedia.

Disproportionate adverse impact.  The damage detection models that ORES supports are intended to focus attention on potentially damaging edits.  Still, human judgement is not perfect, and there's a lot of fun research suggesting that "recommendations" like this can affect people's judgement.  So, by encouraging Wikipedia's patrollers to look at a particular edit, we are likely also making them more likely to find flaws in that edit than if it were not highlighted by ORES.  Having an edit rejected can demotivate the editor, but it may be even more concerning that the rejection of content from certain types of editors may lead to coverage biases, as the editors most likely to contribute to a particular topic may be discouraged or prevented from editing Wikipedia.
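This kind of adverse impact is measurable, by the way.  If the model flags good edits from anons at a much higher rate than good edits from registered editors, then anons bear a disproportionate share of the false positives.  Here's a minimal sketch of that measurement.  It's illustrative only, not ORES code, and the column names (damaging, predicted, is_anon) are assumptions for the sake of the example.

```python
# Illustrative sketch (not ORES code): quantify disparate impact as the gap
# in false-positive rates between anonymous and registered editors.
# Assumes a labeled dataframe with hypothetical columns:
#   'damaging'  -- the true label
#   'predicted' -- the model's flag
#   'is_anon'   -- whether the editor was anonymous
import pandas as pd

def false_positive_rate(df):
    # Of the edits that were actually fine, what fraction did the model flag?
    good_edits = df[~df['damaging']]
    return good_edits['predicted'].mean()

def disparate_impact(df):
    fpr_anon = false_positive_rate(df[df['is_anon']])
    fpr_registered = false_positive_rate(df[~df['is_anon']])
    return fpr_anon - fpr_registered  # > 0 means anons get flagged more often

# Toy data: every good anon edit gets flagged, no good registered edit does.
edits = pd.DataFrame({
    'damaging':  [False, False, False, False, True, True],
    'predicted': [True,  True,  False, False, True, False],
    'is_anon':   [True,  True,  False, False, True, False],
})
print(disparate_impact(edits))  # 1.0 in this toy example
```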

Protected class.  In US law, it seems that this term is generally reserved for race, gender, and ability.  In the case of Wikipedia, we don't know these demographics.  They could be involved, and I think they likely are, but I think that anonymous editors and newcomers should also be considered a protected class in Wikipedia.  Generally, anonymous editors and newcomers are excluded from discussions and are therefore subject to the will of experienced editors.  I think that this has been having a substantial, negative impact on the quality and coverage of Wikipedia.  To state it simply, I think that there is a collection of systemic problems around anonymous editors and newcomers that prevents them from contributing to the dominant store of human knowledge.

So, I think I have a moral obligation to consider the effect that these algorithms have in contributing to these issues, and to work toward rectifying them.  The first and easiest thing I can do is remove the features user.age and user.is_anon from the prediction models.  So I did some testing.  Here are fitness measures (see AUC) for all of the edit quality models, with the current features and with the user features removed (a rough sketch of the experiment follows the table).

wiki    model      current AUC  no-user AUC  diff
dewiki  reverted   0.900        0.792        -0.108
enwiki  reverted   0.835        0.795        -0.040
enwiki  damaging   0.901        0.818        -0.083
enwiki  goodfaith  0.896        0.841        -0.055
eswiki  reverted   0.880        0.849        -0.031
fawiki  reverted   0.913        0.835        -0.078
fawiki  damaging   0.951        0.920        -0.031
fawiki  goodfaith  0.961        0.897        -0.064
frwiki  reverted   0.929        0.846        -0.083
hewiki  reverted   0.874        0.800        -0.074
idwiki  reverted   0.935        0.903        -0.032
itwiki  reverted   0.905        0.850        -0.055
nlwiki  reverted   0.933        0.831        -0.102
ptwiki  reverted   0.894        0.812        -0.082
ptwiki  damaging   0.913        0.848        -0.065
ptwiki  goodfaith  0.923        0.863        -0.060
trwiki  reverted   0.885        0.809        -0.076
trwiki  damaging   0.892        0.798        -0.094
trwiki  goodfaith  0.899        0.795        -0.104
viwiki  reverted   0.905        0.841        -0.064
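
In case you want to try this kind of ablation yourself, here's a minimal sketch of the with/without-user-features comparison.  This is not the actual revscoring training pipeline; the classifier choice, the file name, and the column names are assumptions for illustration.

```python
# Minimal ablation sketch (assumptions throughout, not the revscoring
# pipeline): train the same classifier with and without the user features
# and compare ROC-AUC on a held-out test set.
import pandas as pd
from sklearn.ensemble import GradientBoostingClassifier
from sklearn.metrics import roc_auc_score
from sklearn.model_selection import train_test_split

USER_FEATURES = ['user.age', 'user.is_anon']

def fit_and_score(X, y):
    X_train, X_test, y_train, y_test = train_test_split(
        X, y, test_size=0.2, random_state=0)
    model = GradientBoostingClassifier().fit(X_train, y_train)
    return roc_auc_score(y_test, model.predict_proba(X_test)[:, 1])

edits = pd.read_csv('enwiki.labeled_edits.csv')  # hypothetical labeled data
y = edits.pop('damaging')  # true labels; remaining columns are features

current_auc = fit_and_score(edits, y)
no_user_auc = fit_and_score(edits.drop(columns=USER_FEATURES), y)
print(f"current: {current_auc:.3f}, no-user: {no_user_auc:.3f}, "
      f"diff: {no_user_auc - current_auc:.3f}")
```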

So, to summarize what this table tells us: we'll lose between about 0.03 and 0.11 AUC per model, which brings us from beating the state of the art to not.  That makes the quantitative glands in my brain squirt some anti-dopamine out.  It makes me want to run the other way.  It's really cool to be able to say "we're beating the state of the art".  But on the other hand, it's kind of lame to know "we're doing it at the expense of the users who are most sensitive and most necessary."  So, I've convinced myself.  We should deploy these models that look less fit by the numbers, but also reduce the disparate impact on anons and new editors.  After all, the practical application of the model may very well be better despite what the numbers say.

But before I do anything, I need to convince my users.  They should have a say in this.  At the very least, they should know what is happening.  So, next week, I'll start a conversation laying out this argument and advocating for the switch.  

One final note: this problem may be a blessing in disguise.  By reducing the fitness of our models, we have a new incentive to redouble our efforts toward finding alternative sources of signal that can win that fitness back.
