• reksas@lemmings.world
    11 months ago

    That is like saying you can't punish a gun for killing people.

    edit: meaning that it's redundant to talk about not being able to punish AI, since it can't feel or care anyway. No matter how long a pole you use to hit people with, responsibility for your actions will still reach you.

    • cosmicrookie@lemmy.world
      1 year ago

      Sorry, but this is not a valid comparison. What we're talking about here is a gun with AI built in, one that decides whether or not to pull the trigger. With a regular gun, a human always pulls the trigger. Now imagine an AI gun that you point at someone, and the AI decides whether to fire. Who do you attribute the death to in that case?

          • reksas@lemmings.world
            11 months ago

            Unless it's actually sentient, being able to decide whether or not to kill is just a more advanced targeting system. I'm not saying it's a good thing they are doing this at all; it's almost as bad as using tactical nukes.

              • reksas@lemmings.world
                11 months ago

                Letting it learn is just a new technology that has become possible. It isn't bad on its own, but it has enormous potential to be used for both good and evil.

                But yes, it's pretty bad if they are creating machines that learn how to kill people by themselves. Create enough of them, and we're only some unknown amount of mistakes and negligence away from an actual, localized "AI uprising". And if in the future they create some bigger AI to manage a bunch of them conveniently, and perhaps delegate production to it too because that's more efficient and cheaper, the danger becomes even greater.

                AI doesn't even need sentience to do unintended things. When I have used ChatGPT to help me create scripts, it sometimes seems to decide on its own to do something in a way I didn't request, or to add something stupid. It's usually also partly my own fault for not defining what I want properly, but a mistake like that is really easy to make, and if what we're defining is whom we want the AI to kill, it becomes awful even to think about.

                And if nothing goes wrong and it all works exactly as planned, that's in a way an even bigger problem, because then we have countries with efficient, unfeeling, mass-producible soldiers that follow orders to the letter, will not retreat on their own, and will not stop until told to do so. With the current political rise of certain types of people all around the world, this is even more distressing.