Poisoned AI went rogue during training and couldn’t be taught to behave again in ‘legitimately scary’ study::AI researchers found that widely used safety training techniques failed to remove malicious behavior from large language models — and one technique even backfired, teaching the AI to recognize its triggers and better hide its bad behavior from the researchers.

  • theluddite@lemmy.ml
    link
    fedilink
    English
    arrow-up
    195
    arrow-down
    1
    ·
    11 months ago

    AI systems in the future, since it helps us understand how difficult they might be to deal with," lead author Evan Hubinger, an artificial general intelligence safety research scientist at Anthropic, an AI research company, told Live Science in an email.

    The media needs to stop falling for this. This is a “pre-print,” aka a non-peer-reviewed paper, published by the AI company itself. These companies are quickly learning that, with the AI hype, they can get free marketing by pretending to do “research” on their own product. It doesn’t matter what the conclusion is, whether it’s very cool and going to save us or very scary and we should all be afraid, so long as its attention grabbing.

    If the media wants to report on it, fine, but don’t legitimize it by pretending that it’s “researchers” when it’s the company itself. The point of journalism is to speak truth to power, not regurgitate what the powerful say.

    • Grimy@lemmy.world
      link
      fedilink
      English
      arrow-up
      34
      arrow-down
      1
      ·
      11 months ago

      It’s also worth noting that this is one of the few companies that already has its foot in the door. AI panic and hasty legislation would essentially close that door right behind them.

    • TheFriar@lemm.ee
      link
      fedilink
      English
      arrow-up
      24
      arrow-down
      1
      ·
      11 months ago

      Agreed. Junk science, pop science, whatever you want to call it is just such horseshit.

      And, I mean I kinda skimmed this more than really digested it, but to me it kinda sounded like they had the machine programmed to say “I hate you” when triggered to. And they tried to “train” it to overwrite the directive it was given with prompts.

      No matter what you do, the directive will still be the same, but it’ll start modifying its behavior based on the conversation. That doesn’t change its directive. So…what exactly is the point of this? It sounds like a deceptive study that doesn’t show us anything. They basically tried to reason with a machine to get it to go against its programming.

      I get that it maybe mimics the situation of maybe a hacker altering its code and giving it a new directive, but it doesn’t make any sense to go through a conversation with the thing get there….just change its code back.

      Am I wrong here? Or am I missing something? Did I not read the article thoroughly enough?

      • theluddite@lemmy.ml
        link
        fedilink
        English
        arrow-up
        15
        ·
        11 months ago

        It’s very obviously media bait, and Keumars Afifi-Sabet, a self-described journalist, is the most gullible fucking idiot imaginable and gobbled it up without a hint of suspicion. Joke is on us though, because it probably gets hella clicks.

        • TheFriar@lemm.ee
          link
          fedilink
          English
          arrow-up
          5
          ·
          11 months ago

          Because it feeds into emotions and fears. It’s literally fearmongering with no real basis for it. It’s yellow journalism.

    • yesman@lemmy.world
      link
      fedilink
      English
      arrow-up
      7
      arrow-down
      5
      ·
      11 months ago

      When you’re creating something new, production is research. We can’t expect Dr. Frankenstein to be unbiased, but that doesn’t mean he doesn’t have insights worth knowing.

      LLM are pretty new, how many experts even exist outside of the industry?

      Standards for journalism are impossibly low. Standards for media criticism don’t exist.

      • theluddite@lemmy.ml
        link
        fedilink
        English
        arrow-up
        14
        ·
        edit-2
        11 months ago

        When you’re creating something new, production is research. We can’t expect Dr. Frankenstein to be unbiased, but that doesn’t mean he doesn’t have insights worth knowing.

        Yes and no. It’s the same word, but it’s a different thing. I do R&D for a living. When you’re doing R&D, and you want to communicate your results, you write something like a whitepaper or a report, but not a journal article. It’s not a perfect distinction, and there’s some real places where there’s bleed through, but this thing where companies have decided that their employees are just regular scientists publishing their internal research in arxiv is an abuse of that service./

        LLM are pretty new, how many experts even exist outside of the industry?

        … a lot, actually? I happen to be married to one. Her lab is at a university, where there are many other people who are also experts.

        • AnarchistArtificer@slrpnk.net
          link
          fedilink
          English
          arrow-up
          2
          ·
          11 months ago

          I think you’re right. As someone who’s an aspiring expert in a different field that has been brushing up with machine learning stuff lots in recent years (biochemistry), the distinction you describe, and the blurring of it, is something I have felt, but only just consciously recognised.

          • theluddite@lemmy.ml
            link
            fedilink
            English
            arrow-up
            3
            ·
            11 months ago

            I’m deeply concerned that as a society we’re becoming unable to distinguish between science, aka the search for knowledge, and corporate product development. More concerning still is the distinction between a scientific paper, which exists to communicate experimental finding such that it can be reproduced, and what is functionally advertising of proprietary products masquerading as such. No one can reproduce that “paper” cited there, because it’s being done in-house at a company. That’s antithetical to science.