ChatGPT generates cancer treatment plans that are full of errors

Study finds that ChatGPT provided false information when asked to design cancer treatment plans: researchers at Brigham and Women’s Hospital found that cancer treatment plans generated by OpenAI’s revolutionary chatbot were full of errors.

  • NigelFrobisher@aussie.zone
    1 year ago

    People really need to understand what LLMs are, and also what they are not. None of the messianic hype or even the use of the term “AI” helps with this, and most of the ridiculous claims made in the space make me expect Peter Molyneux to be involved somehow.

    • dx1@lemmy.world
      1 year ago

      LLMs fit in the “weak AI” category. If I could redefine the term, I’d be inclined not to call them “AI” at all, since there is no intelligence, just the illusion of intelligence. It’s possible to build intelligent AI, but probabilistic text construction isn’t even close.
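
      To make “probabilistic text construction” concrete, here is a toy sketch of next-word sampling. The word table and probabilities are invented purely for illustration; real LLMs learn distributions over tokens with neural networks and are not implemented like this.

      ```python
      # Toy illustration of probabilistic text construction: each next word is
      # sampled from a probability distribution conditioned on the words so far.
      # The table below is made up for illustration only.
      import random

      next_word_probs = {
          ("the",): {"patient": 0.6, "tumor": 0.4},
          ("the", "patient"): {"should": 0.7, "has": 0.3},
          ("the", "patient", "should"): {"receive": 0.5, "avoid": 0.5},
      }

      def generate(prompt_words, steps=3):
          words = list(prompt_words)
          for _ in range(steps):
              dist = next_word_probs.get(tuple(words))
              if not dist:
                  break
              choices, weights = zip(*dist.items())
              words.append(random.choices(choices, weights=weights)[0])
          return " ".join(words)

      print(generate(("the",)))  # e.g. "the patient should receive"
      ```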

      • fsmacolyte@lemmy.world
        1 year ago

        “It’s possible to build intelligent AI”

        What does intelligent AI that we can currently build look like?

        • dx1@lemmy.world
          1 year ago

          There’s a difference between “can build” and “have built”. The basic idea is to continuously aggregate data, perform pattern analysis, and do cognitive schema assimilation/accommodation in essentially the same way humans do. It’s absolutely doable, at least I think so.

          • fsmacolyte@lemmy.world
            1 year ago

            I haven’t heard of cognitive schema assimilation. That sounds interesting. It sounds like it might fall prey to challenges we’ve had with symbolic AI in the past though.

            • dx1@lemmy.world
              1 year ago

              It’s a concept from psychology. Instead of just a model of linguistic construction, the model has to actually be a comprehensive, data-forged model of reality, at least as far as human observation goes or as far as we care about. In poorly tuned, low-information scenarios it would fall into mostly the same traps humans do (e.g. falling for propaganda or pseudoscientific theories), but, if finely tuned, it should emulate accurate theories and even produce predictive results over an expansive enough domain.
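
              As a rough sketch of that assimilation/accommodation loop (the terms come from Piaget), here is a hypothetical illustration. The Schema class, the thresholds, and the swan example are all invented for this comment; nothing here describes an existing system.

              ```python
              # Hypothetical sketch of a schema update loop: observations that fit
              # the current schema reinforce it (assimilation); repeated
              # contradictions weaken it until the rule itself must change
              # (accommodation).
              from dataclasses import dataclass
              from typing import Callable

              @dataclass
              class Schema:
                  rule: str                      # human-readable statement of the belief
                  predict: Callable[[str], str]  # maps an observation to a predicted label
                  confidence: float              # how strongly the rule is currently held

              def update(schema: Schema, observation: str, actual: str) -> Schema:
                  if schema.predict(observation) == actual:
                      # Assimilation: the observation fits, so reinforce the schema.
                      schema.confidence = min(1.0, schema.confidence + 0.05)
                  else:
                      # Contradiction: weaken the schema; past a threshold, accommodate
                      # by revising the rule (flagged here rather than rewritten).
                      schema.confidence -= 0.2
                      if schema.confidence < 0.5:
                          schema.rule += " (needs revision)"
                  return schema

              s = Schema(rule="all swans are white",
                         predict=lambda obs: "white",
                         confidence=0.9)
              for observed in ["white", "white", "black", "black", "black"]:
                  s = update(s, observed, observed)
              print(s.rule, round(s.confidence, 2))  # repeated contradictions force accommodation
              ```

              The only point of the sketch is the two branches: observations that fit the current schema reinforce it, while enough contradictory evidence forces the schema itself to change.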