• wsippel@kbin.social · 1 year ago

    The idea is to monitor internal communications and run sentiment analysis to check whether developers are toxic, overly stressed, or burned out. While the tech could of course be abused, the idea itself sounds pretty good, as long as the AI is on-prem for privacy reasons and the employer is transparent and honest about it. Making sure employees are healthy, happy, and productive sounds like a worthwhile goal. I wouldn’t want a human therapist monitoring communications to look for negative signs, but an AI can screen the material, focus exclusively on what it was told to, and forget everything on command.
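
    For illustration, a minimal sketch of what that screening pass could look like, assuming a locally cached Hugging Face sentiment model (the model name, messages, and threshold here are just examples, not anything from the article):

    ```python
    # Hypothetical on-prem screening pass; names and values are illustrative.
    from transformers import pipeline

    # Locally cached sentiment model; inference runs entirely on-prem.
    classifier = pipeline(
        "sentiment-analysis",
        model="distilbert-base-uncased-finetuned-sst-2-english",
    )

    messages = [
        "Another weekend release, I can't keep doing this.",
        "Nice catch in that review, thanks!",
    ]

    for msg in messages:
        result = classifier(msg)[0]  # e.g. {'label': 'NEGATIVE', 'score': 0.99}
        if result["label"] == "NEGATIVE" and result["score"] > 0.9:
            print("Flag for aggregate burnout stats:", msg)
    ```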

    • addie@feddit.uk · 1 year ago

      I’d have to disagree with that. If you don’t trust your managers enough to talk to them directly about toxicity, stress, and overload, how on earth would you trust them to monitor all of your communications to determine the same? I suspect the actual result would be that employees would only discuss sensitive matters in person or through some non-monitored channel, while quietly looking for another job. Also, call me cynical, but I’ve seen enough leadership decisions along the lines of ‘we’ve asked for all these powers, but don’t worry, we promise not to abuse them!’ that did, in fact, turn out to be abused.

      And after reading all the stories about AI’s copyright-infringing ways, slurping up decades of Twitter and Reddit comments, you’d trust the authors to ‘keep it on site’ and ‘forget everything on demand’?

      • wsippel@kbin.social · 1 year ago

        AIs don’t judge, don’t remember and don’t hold anything against me, so I’d rather have an AI screening my stuff than a human, especially my superiors.

        And yes, I trust an AI I run myself. I know it doesn’t phone home (because it literally can’t) and doesn’t remember anything unless I go through the effort of connecting something like a Chroma or Weaviate vector database, which I then also host and manage myself. The beauty of open source. I would certainly never accept using GPT-4 or Bard or some other third-party cloud solution for something this sensitive.
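
        As a rough sketch of what I mean, assuming llama-cpp-python and chromadb are installed (the model path and collection name are made up):

        ```python
        # Fully local sketch: model path and names are hypothetical.
        from llama_cpp import Llama
        import chromadb

        # A local GGUF model; inference never touches the network.
        llm = Llama(model_path="./models/mistral-7b-instruct.gguf", verbose=False)

        out = llm("Rate the sentiment of: 'this sprint is killing me'", max_tokens=64)
        print(out["choices"][0]["text"])

        # Memory exists only if you deliberately wire it up, on disk you control.
        client = chromadb.PersistentClient(path="./screening-db")
        collection = client.get_or_create_collection("weekly_sentiment")

        # "Forget everything on command": drop the collection and it's gone.
        client.delete_collection("weekly_sentiment")
        ```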

        • moon_matter@kbin.social · 1 year ago

          > AIs don’t judge, don’t remember and don’t hold anything against me, so I’d rather have an AI screening my stuff than a human, especially my superiors.

          They do judge, in the sense that managers are going to want statistics, and those stats are going to be interpreted a certain way. It’s a “numbers don’t lie or show bias, but anything lower than a 7/10 is bad according to humans” situation.

  • ScreaminOctopus@sh.itjust.works · 1 year ago

    Do people actually believe anything they do on company computers is private? I honestly assume someone is at least skimming my work messages already.

    • MomoTimeToDie@sh.itjust.works · 1 year ago

      > Do people actually believe anything they do on company computers is private?

      A very narrow set of people: stupid enough not to realize that work systems are managed by their workplace, but not stupid enough to have gotten punished yet for doing stupid shit on them.