in reply to New Power Proposal
I think that it's a great idea to delete old nicks. It sure is annoying to be in op=randomnode mode and keep hitting nodes of users that have never even been to the Monastery.
However, I think that deleting user accounts based on what their nicks are is a REALLY bad idea, and anathema to one of the central reasons the Net exists: freedom of expression, even when it's offensive.
Some things are offensive. That's it. There is no way around it. If you don't like it, look the other way. What possible *good* reason could we have for deleting accounts with offensive nicks? "Because they are offensive" is NOT a good reason. If we wanted to filter the site for offensive content, why don't we just have an elite few approve every single node before it is visible to anyone else, and again after any editing by the poster?
This is a bad idea. I am staunchly opposed.
redmist
Silicon Cowboy
Re: (redmist) Re: New Power Proposal
by footpad (Abbot) on Jul 27, 2001 at 01:47 UTC
Hang on a second. You might be reading more into this than I intended.
We already have tools to keep an eye on blatantly offensive or otherwise objectionable nodes. Friars and above help moderate content; janitors clean up titles, add <code> tags, and handle other clean-up.
We routinely edit (reap) nodes that are inflammatory, obscene, and personally directed. There are no hard and fast rules, so we leave it to the community--or appropriately empowered users--to work together and decide what to do with the material. While we occasionally have problems, the community leans toward tolerance more than conservatism.
Today's anti-handle activities suggest that the same sort of trollish behavior that leads to nodes being routinely reaped and edited could be applied to handles. (To some degree, it already has been.)
Consider, for example: Suppose you personally dislike the word "bozo" and want to go ballistic whenever you see someone called a bozo. Now, suppose someone signs on as "RedmistIsABozo," just to take a swipe at you--for whatever reason.
You'd be seeing that every time you ran across a node they posted. Think about the situation for a moment. We don't like name calling, and we strongly discourage "questionable" content (by whatever yardstick).
If I posted a node saying "You're a bozo, Redmist!" then it would most likely get reaped. (BTW, I'm just using you as an example; you're not really a bozo.)
In reality, the node would probably just be voted down and left alone, but what if you replace "bozo" with one of George Carlin's Seven Words? Go on, be creative.
My idea is a proactive one, designed to determine whether or not we need tools to keep an eye on handles in the same fashion that we keep an eye on content--tools similar to the ones we've already agreed to.
I am not, in any way, shape, or form saying that anyone's sense of propriety is better than anyone else's, nor am I asking the Monastery to submit to some right-wing conservatism (compassionate or not). Instead, I'm wondering whether we shouldn't have the same level of community input on handles that we have over content.
After all, I can think of various handles that I would find offensive but would not publicly object to. (Actually, there are a few already.) I can also think of many that I would patently and vocally object to.
Unlike the CB, there's no global /ignore flag. Every time someone posts using a patently offensive handle, we'd all see it. So, now that you've thought of a really creative handle, imagine what you'd think seeing it on a regular basis.
If that handle were violent, personally insulting, or otherwise trollish, there is little we could do today, save lobbying vroom until it got taken down. As we saw this morning, a large number of such handles can be registered in short order. Furthermore, as was done today, the registrant could log in and out and play games with the ChatterBox and with posting.
I believe there are lines we've already agreed to, lines that we will not accept anyone crossing. And we've lobbied for tools to make sure those lines aren't crossed.
I'm simply asking whether or not it's worth developing similar tools for handle selection. Tools to be used by trusted people working together to draw a consensus about the next action. Is it censorship? Yes, absolutely. But, it's community (or rather, team) driven, not dictatorial. We use the same approach to censor as a community; we just call it moderation.
The system isn't perfect, but it's getting better. I think these tools would help strengthen it further, provided they were community--and not individually--based.
As a side benefit, we could use these tools to discuss--again, as a group--whether or not old, unused handles could be recycled and used by other people.
--f
While I recognize that your idea is a natural extension of measures already implemented on Perl Monks, I can't say I agree with any of them fully. Personally, I have a difficult time drawing the line between features that manage offensive content, and features that censor offensive content. (Some features move, edit for formatting, etc. and others delete, edit for censorship, etc.) I am, and will be, ALWAYS against censor features. (Of course it's not my site, but I suppose I can still have an opinion ;).)
I spent a couple minutes thinking about how I would feel if someone registered and posted with the nick "redmistIsaBozo", and I can't say I wouldn't be really pissed off. I would. But I wouldn't want to delete the account (unless it was inactive). Same thing if there was a negative post about me. It would make me mad, but I just don't see censorship as an option.
TBH, there is a point when I think I would censor. For example, if my mom killed herself, and someone posted something mocking her death, the lines would blur between my emotions and my beliefs and I would do whatever I could to censor it. I can't explain this flaw in my belief system...yet.
As for the idea of moderation as censorship, I would have to disagree. Moderating a post to a negative level does not NECESSARILY mean that it will be deleted or the offensive content edited. With the advent of editors and the NodeReaper, this has changed, but the moderation itself is not censorship. Only when other features are added that use moderation as an input for censorship, could moderation be interpreted (IMHO) as censorship (albeit indirect).
redmist
Purple Monkey Dishwasher
I spent a couple minutes thinking about how I would feel if someone registered and posted with the nick "redmistIsaBozo", and I can't say I wouldn't be really pissed off. I would. But I wouldn't want to delete the account (unless it was inactive). Same thing if there was a negative post about me. It would make me mad, but I just don't see censorship as an option.
I guess I have a thick skin in this regard; if a user registered on Perlmonks as JonadabIsABigFatLoser, I'd figure they were trying to be funny. As for a negative post about me, I'd just be annoyed that it was gratuitously and uselessly off-topic and uninteresting (as opposed to this thread, which is off-topic but not entirely uselessly so and not altogether uninteresting). I wouldn't be any more upset about it than I would be if someone were using Perlmonks to discuss, say, professional football team rankings, and in either case I'd be more inclined to look for another thread than to downvote. (If anything, I'd be more likely to downvote the football, since I'd figure someone else would downvote the personal attack.)
It would make me mad, but I just don't see censorship as an option
We're going further off-topic here, but censorship of one kind or another is absolutely necessary and unavoidable. It has always been and will always be true on this Earth that publishing resources are outstripped by the vast seething mass of content that various people would like to have published. (If you're familiar with economics, this is a special case of the fundamental economic problem.)
Every publishing institution practices one form or another of censorship. Every newspaper turns down some things that some readers would like to have printed. Every book publisher turns down books, because there aren't resources to print all the ones people write. They make some attempt to turn down the ones that would be least worthwhile for them to publish, based on their goals (in most cases, money), but in the end they just have to turn down most of the available material because they only have the capacity to publish so many. Authors don't like this, but that's too bad; if they can afford to foot the bill themselves, they can hire a printing shop to print the books and then sell them from a booth on the street. If not, that's not the publisher's problem.
Every website also turns down things that some people want to have published. I get spam every week asking me to add links to my personal website (such as it is), linking to things that are totally unrelated to the content of my site. I seldom even look at the sites they're asking me to link to; I have other things to do with my time. Every single day I also silently turn down numerous requests from people (spammers, mostly) asking me to send various things by email. Chain forwards are in this category--someone somewhere wants you to send them out to everyone you know; if you don't, you're practicing a form of censorship, determining what you will and will not publish with your resources (time, bandwidth, reputation, contact list, ...). Censorship is not only the right of the publisher, it's his responsibility and a vital function.
Now, third-party censorship (wherein someone ELSE tells you what you are ALLOWED to publish) is more arguable, but that's not what we're talking about.
TBH, there is a point when I think I would censor. For example, if my mom killed herself, and someone posted something mocking her death, the lines would blur between my emotions and my beliefs and I would do whatever I could to censor it. I can't explain this flaw in my belief system...yet.
Strong emotion isn't the only reason for censorship. (In fact, what you're talking about is dangerously close to third-party censorship, since you're proposing to unilaterally decide what perlmonks.org (which is not your site) should publish. Though in fairness I doubt any of the gods would raise an objection in that circumstance, since it would be really hard to argue that such a post contributes anything beneficial to the site.)
Sometimes, however, there are important practical concerns that dictate a need for censorship, even if we're totally dispassionate and level-headed about the matter. The publishing mechanism has to be protected from excessive unwanted content, or else it becomes worthless and ultimately non-functional. As an extreme example, consider what we would do if an advertiser (say, a major car dealership in the Silicon Valley area) wrote a bot that continuously registered accounts on perlmonks and posted replies to every new node, with an advertisement as the entire body of every reply. Nobody would have any question about whether that's the sort of content perlmonks.org exists to publish, would they? It's not a question of whether perlmonks.org should practice censorship or not; it's only a question of where to draw the lines and how to enforce them.
(The car dealership is a completely fictitious example, BTW. Substitute an abortion-rights activist if you prefer, posting automated replies to every node, talking about abortion rights.)
Bringing it back to topic... I don't think it's necessary for vroom to solicit help from a group of other monks in censoring objectionable monikers. As others have pointed out, never-once-used monk accounts seem to be a larger issue, and it ought to be possible to deal with those easily with a quick one-off script whenever the gods decide that it is necessary. The reason community help was enlisted for nodes is because there are a lot more nodes than monks, and it could be hard for one person to keep up with even looking at all of them.
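That "quick one-off script" could be as simple as the sketch below. To be clear, this is a hypothetical illustration: the field names (num_posts, experience, created) are assumptions and not the actual PerlMonks schema, and a real version would of course read from the site's database rather than an in-memory hashref.

```perl
#!/usr/bin/perl
use strict;
use warnings;

# Hypothetical criterion for reaping never-used accounts:
# no posts, no experience earned, and idle longer than a cutoff.
# Field names are assumptions, not the real PerlMonks schema.
sub is_reapable {
    my ($user, $now, $max_idle_days) = @_;
    return 0 if $user->{num_posts};     # has posted at least once
    return 0 if $user->{experience};    # has earned XP somehow
    my $idle_days = ($now - $user->{created}) / 86400;
    return $idle_days > $max_idle_days ? 1 : 0;
}

# Example: an account registered 400 days ago that never posted
my $now  = time();
my $user = { num_posts => 0, experience => 0, created => $now - 400 * 86400 };
print is_reapable($user, $now, 365) ? "reap\n" : "keep\n";
```

The point is only that the policy fits in a dozen lines, which is why it doesn't need an ongoing committee the way node moderation does.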
;$;=sub{$/};@;=map{my($a,$b)=($_,$;);$;=sub{$a.$b->()}}
split//,".rekcah lreP rehtona tsuJ";$\=$;[-1]->();print