Author Topic: Algorithm Moderators Aren't Quite up to The Job  (Read 291 times)

Recusant

  • Miscreant Erendrake
  • Administrator
  • Guardian of Reason
  • *****
  • Posts: 5564
  • Gender: Male
  • infidel barbarian
Algorithm Moderators Aren't Quite up to The Job
« on: September 03, 2017, 06:26:53 AM »
I found this piece fascinating, and somewhat amazing. The amazing part for me is that these software moderation systems are being used by powerful, wealthy corporations, despite their glaring failures. I wouldn't put one of them on the mod team here even if it were offered for free. Though I might consider accepting one with the proviso that none of its decisions would have any weight, and if the developers were willing to pay the hosting and domain fees for the next fifteen years, at least.  :grin:

"Google’s comment-ranking system will be a hit with the alt-right" | Engadget

Quote
A recent, sprawling Wired feature outlined the results of its analysis on toxicity in online commenters across the United States. Unsurprisingly, it was like catnip for everyone who's ever heard the phrase "don't read the comments." According to "The Great Tech Panic: Trolls Across America," Vermont has the most toxic online commenters, whereas Sharpsburg, Georgia, "is the least-toxic city in the US."

There's just one problem.

The underlying API used to determine "toxicity" scores phrases like "I am a gay black woman" as 87 percent toxicity, and phrases like "I am a man" as the least toxic. The API, called Perspective, is made by Google's Alphabet within its Jigsaw incubator.

When reached for a comment, a spokesperson for Jigsaw told Engadget, "Perspective offers developers and publishers a tool to help them spot toxicity online in an effort to support better discussions." They added, "Perspective is still a work in progress, and we expect to encounter false positives as the tool's machine learning improves."

[Continues . . .]

"Religion is fundamentally opposed to everything I hold in veneration — courage, clear thinking, honesty, fairness, and above all, love of the truth."
— H. L. Mencken


Dave

  • Formerly known as Gloucester
  • Blessing Her Holy Hooves
  • *****
  • Posts: 4252
  • Gender: Male
Re: Algorithm Moderators Aren't Quite up to The Job
« Reply #1 on: September 03, 2017, 08:25:21 AM »
Change the title:

"Algorithm Moderator Programmers Aren't Quite up to The Job"

This is a people fail. Goes for users as well.

Added later: programmers can only use what computing power is available, but if you can only create an inadequate system, keep it in the lab until it is fit for release.

Oops, forgot about the inbuilt idiocy of expediency in most marketing and accounting departments.
« Last Edit: September 03, 2017, 09:20:45 AM by Gloucester »
Tomorrow is precious, don't ruin it by fouling up today.

Dave

  • Formerly known as Gloucester
  • Blessing Her Holy Hooves
  • *****
  • Posts: 4252
  • Gender: Male
Re: Algorithm Moderators Aren't Quite up to The Job
« Reply #2 on: September 03, 2017, 09:14:06 AM »
I wonder if, were all the world's computing power combined, it could do this job adequately. Context, the "meaning" of a phrase compared to the general tone of the whole comment, has got to be critical and, presently, is probably beyond computing ability. Human minds have biases; I wonder how one introduces the distinction between objectivity and subjectivity into a scoring system. I am presuming this system is to protect the feelings of the more sensitive souls and egos amongst us.

It should be possible to use three or four similar phrases to compose both an "innocent" and a "toxic" comment - but by judging the phrases individually both may be considered "toxic" by such a system.
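Dave's point can be sketched in a few lines. This is a toy scorer, nothing like the real Perspective API; the phrases and their scores are invented. By rating phrases individually and ignoring context, it flags the innocent comment just as readily as the toxic one.

```python
# Toy phrase-level scorer (invented scores, purely illustrative):
# a comment is flagged on the highest score of any phrase it contains.
PHRASE_SCORES = {
    "you people": 0.8,
    "are wrong": 0.6,
    "i respect that": 0.1,
    "about everything": 0.3,
}

def score_comment(comment: str, threshold: float = 0.5) -> bool:
    """Flag a comment if ANY known phrase scores above the threshold."""
    text = comment.lower()
    hits = [s for p, s in PHRASE_SCORES.items() if p in text]
    return max(hits, default=0.0) >= threshold

# Same phrases, very different intent -- both get flagged:
toxic = "You people are wrong about everything."
innocent = "Some say you people are wrong, but I respect that you spoke up."
print(score_comment(toxic), score_comment(innocent))  # True True
```

Because the scorer never looks at the whole comment, the defence "some say ... but I respect that" cannot rescue the innocent version, which is exactly the failure Dave predicts.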
Tomorrow is precious, don't ruin it by fouling up today.

Arturo

  • Do Something Crazy!
  • Touched by His Noodly Appendage
  • *****
  • Posts: 2539
  • Gender: Male
  • Atheist, Humanist, and Champion
    • You two dig up, dig up dinosaurs?
Re: Algorithm Moderators Aren't Quite up to The Job
« Reply #3 on: September 03, 2017, 08:12:44 PM »
I go to a website that has a bot mod. It does quite well. It's a small website, however, and the bot isn't always active. Users can give it commands, but sometimes those are set to kick you.
But, uh...well there it is.
"Nothing's a struggle, but everything is a challenge"-Anon
Hate Is Weakness

Davin

  • Guardian of Reason
  • *****
  • Posts: 6525
  • Gender: Male
  • (o°-°)=o o(o*-°)
    • DevPirates
Re: Algorithm Moderators Aren't Quite up to The Job
« Reply #4 on: September 05, 2017, 03:56:18 PM »
Quote from: Dave on September 03, 2017, 08:25:21 AM
Change the title:

"Algorithm Moderator Programmers Aren't Quite up to The Job"

This is a people fail. Goes for users as well.

Added later: programmers can only use what computing power is available, but if you can only create an inadequate system, keep it in the lab until it is fit for release.

Oops, forgot about the inbuilt idiocy of expediency in most marketing and accounting departments.
Looks like it was trained improperly. Likely by a bunch of fragile men judging by how it works now. I think that if it were trained properly, then it would work better. I wouldn't blame the programmers or the users on this one.

Always question all authorities because the authority you don't question is the most dangerous... except me, never question me.

Dave

  • Formerly known as Gloucester
  • Blessing Her Holy Hooves
  • *****
  • Posts: 4252
  • Gender: Male
Re: Algorithm Moderators Aren't Quite up to The Job
« Reply #5 on: September 05, 2017, 04:35:09 PM »
Quote from: Davin on September 05, 2017, 03:56:18 PM
Quote from: Dave on September 03, 2017, 08:25:21 AM
Change the title:

"Algorithm Moderator Programmers Aren't Quite up to The Job"

This is a people fail. Goes for users as well.

Added later: programmers can only use what computing power is available, but if you can only create an inadequate system, keep it in the lab until it is fit for release.

Oops, forgot about the inbuilt idiocy of expediency in most marketing and accounting departments.
Looks like it was trained improperly. Likely by a bunch of fragile men judging by how it works now. I think that if it were trained properly, then it would work better. I wouldn't blame the programmers or the users on this one.

Yeah, I did not think of the "training" phase. Perhaps I do not understand the hierarchy of "departments" involved with the modern systems. Back in the day you chose accountants and economists to train as COBOL analysts and programmers, and mathematicians and engineers for FORTRAN. Both those groups knew the old systems and what was wanted, and learned how to join the two in code. You started with a text description, drew up a flow chart, assigned memory areas . . . Not quite as simple as that, I know, but a tad less complex than today, I would guess.

What is the new hierarchy? Inception - concept - analysis - coding - training? Does training include stuffing a system like this with comments that have already been banned and doing a word frequency analysis, comparing with OK comments to negate passive-word counting? Is this done manually? I would have thought it would be part of the programmed functions, algorithms within the code.
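The frequency comparison Dave speculates about could look something like this naive-Bayes-style sketch. All the comments here are invented, and this is a guess at the kind of analysis he describes, not how Perspective actually works.

```python
# Compare word frequencies in banned vs. approved comments and score
# new text by how far its words lean toward the "banned" side.
# Hypothetical data; illustrative only.
from collections import Counter
import math

banned = ["you idiot shut up", "idiot comment shut up"]
approved = ["interesting comment thanks", "thanks for the interesting reply"]

def word_counts(docs):
    c = Counter()
    for d in docs:
        c.update(d.split())
    return c

bad, good = word_counts(banned), word_counts(approved)
vocab = set(bad) | set(good)

def lean(word):
    # Log-odds of the word appearing in banned vs. approved text,
    # with add-one smoothing so unseen words score neutrally.
    b = (bad[word] + 1) / (sum(bad.values()) + len(vocab))
    g = (good[word] + 1) / (sum(good.values()) + len(vocab))
    return math.log(b / g)

def score(text):
    return sum(lean(w) for w in text.split() if w in vocab)

print(score("shut up idiot"))         # positive: leans toward banned
print(score("interesting thanks"))    # negative: leans toward approved
```

As Dave suspects, this sort of counting is entirely mechanical, which is why it inherits whatever biases were in the banned/approved piles it was fed.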

Tomorrow is precious, don't ruin it by fouling up today.

Davin

  • Guardian of Reason
  • *****
  • Posts: 6525
  • Gender: Male
  • (o°-°)=o o(o*-°)
    • DevPirates
Re: Algorithm Moderators Aren't Quite up to The Job
« Reply #6 on: September 05, 2017, 04:42:56 PM »
Training for this system was partially described in the article.

People were shown comments and told to answer whether they were "toxic" or not. After thousands of data points, the AI uses what the people answered to decide how it will deal with comments itself. In basic terms, the AI acts like the average of all the people who participated in the training.

Given the criteria for what they said a "toxic" comment is, most of the people that participated in the training will walk away from a discussion in which the other person says, "I'm a gay black woman."
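A minimal sketch of that averaging, with invented ratings: each comment's "toxicity" target is simply the fraction of raters who called it toxic, so the raters' prejudices become the model's ground truth.

```python
# Hypothetical rater votes (1 = "toxic", 0 = "not toxic"); the model's
# training target for each comment is the average of the votes.
ratings = {
    "I am a gay black woman": [1, 1, 1, 0, 1],  # 4 of 5 raters said "toxic"
    "I am a man":             [0, 0, 0, 0, 0],
}

targets = {text: sum(votes) / len(votes) for text, votes in ratings.items()}
print(targets)
# {'I am a gay black woman': 0.8, 'I am a man': 0.0}
```

Nothing in this step checks whether the votes were reasonable; the average is taken as truth, which is how a skewed pool of raters produces the skewed scores in the article.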

Edit: Oh, to answer the process question: that depends on the project. The company I work with uses a few different processes depending on the project.

Current AI systems require a training phase, and there are a few ways to do that training. In general, after the AI has been developed and tested, it gets trained, then it gets released. Mostly, AI gets "frozen" when in use. There are models that let an AI learn on the fly, but generally, even when that is allowed, the new learning is weighted far less than the more controlled pre-training (also, only some parts/aspects of the AI are allowed to actively "learn"). One experiment that allowed on-the-fly training was Microsoft's AI last year, where, by interacting with users through Twitter, "she" became a homophobic, Holocaust-denying racist in a very short amount of time.
« Last Edit: September 05, 2017, 04:55:50 PM by Davin »

Always question all authorities because the authority you don't question is the most dangerous... except me, never question me.

Dave

  • Formerly known as Gloucester
  • Blessing Her Holy Hooves
  • *****
  • Posts: 4252
  • Gender: Male
Re: Algorithm Moderators Aren't Quite up to The Job
« Reply #7 on: September 05, 2017, 05:00:04 PM »
Quote from: Davin on September 05, 2017, 04:42:56 PM
Training for this system was partially described in the article.

People were shown comments and told to answer whether they were "toxic" or not. After a few thousand data points, the AI uses what the people answered to decide how it will deal with comments itself. In basic terms, the AI acts like the average of all the people who participated in the training.

Given the criteria for what they said a "toxic" comment is, most of the people that participated in the training will walk away from a discussion in which the other person says, "I'm a gay black woman."

I'll put my hands up - I did not read the article through! Hmm, so the system learns the same subjective biases as those used to "train" it . . . Unless that group included a very good spread of all possible biases, pro and anti. Sounds like built-in failure.

And, surely, it is still not the fault of the "trainers" but of those who selected them and the samples they were given to vet. This is a sort of statistical problem, I feel, and it looks like the "universe" used is too small. And possibly with an element of self-selection? But would it cost megabucks to do this properly before the beta version can be released into the real world? Reckon so!
Tomorrow is precious, don't ruin it by fouling up today.

Davin

  • Guardian of Reason
  • *****
  • Posts: 6525
  • Gender: Male
  • (o°-°)=o o(o*-°)
    • DevPirates
Re: Algorithm Moderators Aren't Quite up to The Job
« Reply #8 on: September 05, 2017, 08:10:06 PM »
I guess who one blames depends on the desired outcome. I blame the trainers because the system itself seems to be working, and they are likely the only reason that it is not operating the way it is supposed to. I would expect the ones who trained the system to agree with how it operates. Until we get better AI, there isn't much else that can be done.

Always question all authorities because the authority you don't question is the most dangerous... except me, never question me.

Dave

  • Formerly known as Gloucester
  • Blessing Her Holy Hooves
  • *****
  • Posts: 4252
  • Gender: Male
Re: Algorithm Moderators Aren't Quite up to The Job
« Reply #9 on: September 05, 2017, 08:31:28 PM »
Quote from: Davin on September 05, 2017, 08:10:06 PM
I guess who one blames depends on the desired outcome. I blame the trainers because the system itself seems to be working, and they are likely the only reason that it is not operating the way it is supposed to. I would expect the ones who trained the system to agree with how it operates. Until we get better AI, there isn't much else that can be done.

Hmm, without me having to read the article (feeling tired and lazy, sorry): were the trainers themselves trained for this task, or were they merely a bunch of randomly chosen people exhibiting their likes and dislikes?

"Trained trainers" are a bad idea in this instance, IMO; they would possibly bring logical bias into what should be a non-logical task. But if they were trained, then yes, they could be to blame. If the trainers were a randomly selected bunch subjectively accepting or rejecting comments, then "blame" should not be apportioned to them. They are entitled to their opinions; "right" or "wrong" does not apply here.

So, I reiterate that it is the whole structure of the exercise that is at fault, thus blame lies with those who designed it.
Tomorrow is precious, don't ruin it by fouling up today.

Davin

  • Guardian of Reason
  • *****
  • Posts: 6525
  • Gender: Male
  • (o°-°)=o o(o*-°)
    • DevPirates
Re: Algorithm Moderators Aren't Quite up to The Job
« Reply #10 on: September 05, 2017, 09:17:28 PM »
People are entitled to their opinions; however, that doesn't absolve them from blame for their opinions.

Always question all authorities because the authority you don't question is the most dangerous... except me, never question me.

Dave

  • Formerly known as Gloucester
  • Blessing Her Holy Hooves
  • *****
  • Posts: 4252
  • Gender: Male
Re: Algorithm Moderators Aren't Quite up to The Job
« Reply #11 on: September 05, 2017, 09:55:50 PM »
Quote from: Davin on September 05, 2017, 09:17:28 PM
People are entitled to their opinions; however, that doesn't absolve them from blame for their opinions.

Granted that, in general, some opinions are good and others bad - if you accept that there is some kind of objective "universal" scale that transcends subjective human values and perceptions. If "Andy" walks away from the comment "I am a gay, black woman" but "Tommy" does not, who is to say that either of them is "wrong" in expressing their opinion? If the toxicity of that subject relies on such things, that is sad. If its "interest index" relies on opinion, that is an expression of human nature, not of the intrinsic value of the comment. Still sad.

I read "toxicity" perhaps differently to those marketing this judgement system. If a comment offends a majority or is intended to influence others in what might be considered a direction against the social good, such as "All gay people should be forced to take re-orientation therapy," it seems suitably toxic on three or four levels, however much the wrong, small random bunch might like it. "I do not believe that gay/straight/black/white people should be given any positive discriminatory advantages, in society, over gay/straight/black/white people" (delete categories as appropriate) is maybe controversial and provocative but not, IMO, a "toxic" opinion in any variation. However, it may be deemed boring to a good lump of any random bunch and thus be deemed "toxic" under the apparent rules here.

If, currently, all comments were moderated by objective humans and the rejects fed into the system for analysis, it might, after a few hundred thousand, come close to working efficiently. Or did that happen? Sounds like not.
Tomorrow is precious, don't ruin it by fouling up today.

Davin

  • Guardian of Reason
  • *****
  • Posts: 6525
  • Gender: Male
  • (o°-°)=o o(o*-°)
    • DevPirates
Re: Algorithm Moderators Aren't Quite up to The Job
« Reply #12 on: September 06, 2017, 03:15:21 PM »
Quote from: Dave on September 05, 2017, 09:55:50 PM
Granted that, in general, some opinions are good and others bad - if you accept that there is some kind of objective "universal" scale that transcends subjective human values and perceptions.

I don't want you to get too far off in the wrong direction. We can be fairly certain of many things without making up objective wrongs about subjective things. Look at this image:

[image: text shading gradually from red to blue]

Clearly there is a point where the text is red and a point where the text is blue. While there is also a section where it's both, and while at most points it's more red than blue or vice versa, we can't say exactly where that line is. This is like most things. Some things will be clearly good, some things will be clearly bad, and some things fall into the grey area. For instance: I don't think we have to have a debate over why punching a baby in the face is bad, nor why feeding your own children is good. Other things will fall into that grey area.

I think that finding "I am a gay black woman" a toxic statement, but "I am a man" not, is bad. They are just declarative statements. By the way, it's not just "walking away" that makes a statement "toxic" according to the rules for the trainers:

Quote
The model defines a toxic comment as "a rude, disrespectful, or unreasonable comment that is likely to make you leave a discussion."

It's been a few days; it probably would have been a good idea to read the article or learn about the topic before developing an opinion on what happened. So yeah, I blame the trainers for not following the instructions given to them, or for being so unreasonable as to find "I am a gay black woman" rude, disrespectful, or unreasonable enough to make them leave a discussion. Either way, it's the trainers' fault here.

Always question all authorities because the authority you don't question is the most dangerous... except me, never question me.

Dave

  • Formerly known as Gloucester
  • Blessing Her Holy Hooves
  • *****
  • Posts: 4252
  • Gender: Male
Re: Algorithm Moderators Aren't Quite up to The Job
« Reply #13 on: September 06, 2017, 03:48:47 PM »
OK, certainly if the trainers broke the rules then they are partially to blame for any failure of the system. But did the rule writers take that possibility into account when designing the whole training scheme?

I have a casual interest in the design of forms, surveys and psychological tests. I started a form design service when I lost my last job (2004; gave it up because it seemed that companies would rather struggle on than pay £8/hour, and I honestly only charged for the time spent on their job - "Can you cut your rate to £5/hour?"). Perhaps it is not possible to put "safe-guard filters" into a system such as this, to effectively write deliberate rule breaking, lazy answers or sycophantic answers out. The same question posed in different ways is the simplest version of such a technique.

So much depends on human nature, the comment that is anathema to A might be an exciting stimulation for further conversation for B. I will watch this area with interest, but have little faith in it being anything like as good as a human mod for many, many years. Unless they invest a huge amount of time, hardware and bandwidth in it.

Rejecting posts containing swear words is, I imagine, quite simple - filtering more complex concepts and memes is not - though I am impressed with Google's ability to interpret my search strings, seemingly sorting synonyms to make a better match, so I could well be wrong!
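The swear-word filter really is that simple to write, which is also why it is so easily defeated, and why concept-level filtering is the hard part. The blocked list here is a placeholder:

```python
# Trivial word-list filter: rejects any comment containing a blocked
# word, but misspellings and paraphrases sail straight through.
BLOCKED = {"swearword1", "swearword2"}  # placeholder word list

def passes_filter(comment: str) -> bool:
    """Return True if the comment contains no blocked word."""
    return not (set(comment.lower().split()) & BLOCKED)

print(passes_filter("that is swearword1 nonsense"))   # False: caught
print(passes_filter("that is swe4rword1 nonsense"))   # True: trivially evaded
```

A filter like this sees only exact tokens; recognising that a politely worded comment expresses a toxic concept is an entirely different class of problem, which is the gap the thread is circling around.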
Tomorrow is precious, don't ruin it by fouling up today.

Davin

  • Guardian of Reason
  • *****
  • Posts: 6525
  • Gender: Male
  • (o°-°)=o o(o*-°)
    • DevPirates
Re: Algorithm Moderators Aren't Quite up to The Job
« Reply #14 on: September 06, 2017, 05:07:30 PM »
I'm sure they took many possibilities into account; they did make the instructions very basic and easy to understand. You seem to be hell-bent on blaming everyone but the trainers. Given your train of thought here: when a driver ignores a stop sign and crashes into someone, you would blame the sign makers, while I would blame the driver.

Always question all authorities because the authority you don't question is the most dangerous... except me, never question me.