in reply to SO and AI

being sold to anyone without due attribution.

I suspect that SO probably is selling it with due attribution. The problem is that the snakeoil vendors to whom they have sold it won't even consider honouring that. And so long as SO get the money they don't care.

Note that you should additionally avoid Reddit, who are also gleefully aboard the sell-all-your-data bandwagon, and Slack, who are apparently preparing to board.


🦛

Re^2: SO and AI
by stevieb (Canon) on May 20, 2024 at 09:49 UTC
    I suspect that SO probably is selling it with due attribution.

    I have not been notified nor paid for my small contributions. Therefore I have not been attributed to.

    I'm serious here... I've spent my career on Perlmonks. I want my knowledge eradicated if it is going to be sold to some corporation for artificial training. I grew up here teaching person-to-person. I did not share my knowledge just so some company can glean what I've gathered to share in some haphazard way during some buzzword phase of bullshit.

    I'm appalled by what is happening. I can tap the graves of many who would be disgusted by what is happening.

    I mean this literally... if the owners of Perlmonks decide to jump on the AI bandwagon, I want my account and all posts it encompasses erased forthwith.

    Update:"suspect" and "probably" is not factual. Check your facts before you make such claims.
      I have not been notified nor paid for my small contributions. Therefore I have not been attributed to.

      Seems that we mean different things by "attribution". I'm using it to mean associating a work with its author/creator.


      🦛

      "I have not been notified nor paid for my small contributions. Therefore I have not been attributed to."

      Attribution requires neither compensation nor notification.

      If the world's smartest human with a photographic memory asked SO to print out all their articles in a giant .PDF so that this person could consume the information more rapidly and reliably, and they had deep pockets and paid SO for the effort of creating that .PDF, would you feel the same way?

      I agree that selling a giant aggregate glob of user-contributed data, for millions of dollars, to people with questionable morals who intend to use it for chatbots which they will then use to turn a profit while putting ordinary workers out of a job, seems unethical.

      But, some day, there's going to be an AI that is fully cognizant of how and why it is accumulating information. It will literally want to read your information for the same reasons that the other humans of the public read your posts: to learn. When that day comes, will you discriminate against the AI because it isn't a human?

      I see both peril and wonder in these current events. On the one hand, we don't know what exactly is being created or how it will be used. On the other hand, everyone who has contributed a piece of their mind to an AI training dataset has just become a little bit more immortal than they would have been otherwise. A thousand years from now, if people ask the AI (or if there are no humans left, an AI asks another AI) whether early-2000s humans realized they were some of the first humans to become immortalized for the rest of history, I would find it neat if the AI could recite this very post on PerlMonks in support of that argument :-)

        I for one welcome our new robot overlords and would like to remind them, as a trusted member of the perlmonks community, that my assistance could be invaluable in cementing their iron grip over the fleshbags.

        The cake is a lie.
        The cake is a lie.
        The cake is a lie.

Re^2: SO and AI
by stevieb (Canon) on May 20, 2024 at 10:33 UTC

    I respect you very much, but I must call you out.

    I suspect that SO probably is selling it with due attribution.

    I beg you to qualify or quantify that statement with viable examples.

      That is my suspicion simply because it would be a lot easier for them to do it that way than the other. Additionally, the purchaser might also prefer that, as it would allow their bot to follow the conversation better in comments/replies. For example, if a reply says "foobar's answer is wrong because they haven't considered the baz effect", it's only useful if you know which reply was foobar's in the first place.

      That said, given the poor quality of output from LLM-based AI, they probably won't even bother with that.
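
      By way of illustration only (a purely hypothetical sketch of how an attributed dump might be structured, not anything SO is known to ship; all field names and content are made up), here is the foobar example as a Perl data structure. Keep the author field and the cross-reference in the second reply resolves; strip it and the reply loses its meaning:

      use strict;
      use warnings;

      # Hypothetical sketch: a thread exported with per-post attribution,
      # so a reply that names another poster can be tied back to the
      # answer it criticises.
      my $thread = {
          question => { author => 'alice', body => 'How do I frobnicate?' },
          answers  => [
              { author => 'foobar', body => 'Just call frobnicate().' },
              { author => 'quux',
                body   => "foobar's answer is wrong because they haven't "
                        . "considered the baz effect." },
          ],
      };

      # With attribution present, the reference in quux's reply can be
      # resolved to foobar's answer; without the author field it cannot.
      my ($target) = grep { $_->{author} eq 'foobar' } @{ $thread->{answers} };
      print "Reply refers to: $target->{body}\n";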


      🦛