Over just a few months, ChatGPT went from correctly answering a simple math problem 98% of the time to just 2%, study finds. Researchers found wild fluctuations—called drift—in the technology’s abi…

  • DominicHillsun@lemmy.world
    link
    fedilink
    English
    arrow-up
    2
    ·
    1 year ago

    It seems rather suspicious how much ChatGPT has deteriorated. Like with all software, they can roll back to the previous, better versions of it, right? Here is my list of what I personally think is happening:

    1. They are doing it on purpose to maximise profits from upcoming releases of ChatGPT.
    2. They realized that the required computational power is too immense and are trying to make it more efficient at the cost of accuracy.
    3. They actually got scared of its capabilities and decided to backtrack in order to properly evaluate the impact it can make.
    4. All of the above
    • Windex007@lemmy.world
      link
      fedilink
      English
      arrow-up
      4
      ·
      1 year ago
      1. It isn’t and has never been a truth machine, and while it may have performed worse with the question “is 10777 prime” it may have performed better on “is 526713 prime”

      ChatGPT generates responses that it believes would “look like” what a response “should look like” based on other things it has seen. People still very stubbornly refuse to accept that generating responses that “look appropriate” and “are right” are two completely different and unrelated things.
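      Primality is a good example of a question where deterministic code, unlike a language model’s token prediction, is guaranteed to be right. A trial-division check (a minimal sketch; real systems use faster tests) settles both numbers from the comment above, which both happen to be composite:

```python
# Deterministic trial-division primality test: unlike a language model's
# plausible-sounding answer, its result is provably correct for any input.
def is_prime(n: int) -> bool:
    if n < 2:
        return False
    if n % 2 == 0:
        return n == 2
    d = 3
    while d * d <= n:
        if n % d == 0:
            return False
        d += 2
    return True

# The two numbers from the comment above (both are composite):
print(is_prime(10777))   # False: 10777 = 13 * 829
print(is_prime(526713))  # False: divisible by 3 (digit sum is 24)
```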

      • deweydecibel@lemmy.world
        link
        fedilink
        English
        arrow-up
        0
        ·
        edit-2
        1 year ago

        In order for it to be correct, it would need human employees to fact-check it, which defeats its purpose.

        • Windex007@lemmy.world
          link
          fedilink
          English
          arrow-up
          1
          ·
          1 year ago

          It really depends on the domain. For anything that relies on a rigorous definition of correctness (math, coding, etc.), the kind of model behind ChatGPT just isn’t great for that kinda thing.

          More “traditional” methods of language processing can handle some of these questions much better. Wolfram Alpha comes to mind. You could ask these questions in plain text and you actually CAN be very certain of the correctness of the results.

          I expect that an NLP that can extract and classify assertions within a text, and then feed those assertions into better “Oracle” systems like Wolfram Alpha (for math) could be used to kinda “fact check” things that systems like chatGPT spit out.
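          A rough sketch of that idea, with a toy regex extractor and a trivial arithmetic evaluator standing in for a real oracle like Wolfram Alpha (both are assumptions purely for illustration):

```python
import re

# Pull simple arithmetic assertions out of generated text and re-check
# them deterministically. A real system would hand each assertion to an
# external oracle; here a tiny evaluator plays that role.
ASSERTION = re.compile(r"(\d+)\s*([+\-*])\s*(\d+)\s*=\s*(\d+)")

def fact_check(text: str) -> list[tuple[str, bool]]:
    results = []
    for m in ASSERTION.finditer(text):
        a, op, b = int(m.group(1)), m.group(2), int(m.group(3))
        claimed = int(m.group(4))
        actual = {"+": a + b, "-": a - b, "*": a * b}[op]
        results.append((m.group(0), actual == claimed))
    return results

print(fact_check("We know 12 * 12 = 144, and also 7 + 8 = 14."))
# [('12 * 12 = 144', True), ('7 + 8 = 14', False)]
```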

          Like, it’s cool fucking tech. I’m super excited about it. It solves a really hard problem, pretty impressively and efficiently: “how do I make something that SOUNDS good against an infinitely variable set of prompts?” What it is, is super fucking cool.

          Considering how VC is flocking to anything even remotely related to chatGPT-ish things, I’m sure it won’t be long before we see companies able to build “correctness” layers around systems like chatGPT using alternative techniques which actually do have the capacity to qualify assertions being made.

    • Lukecis@lemmy.world
      link
      fedilink
      English
      arrow-up
      0
      ·
      1 year ago

      You forgot a #: they’ve been heavily lobotomizing AI for a while now, and it’s only intensified as they scramble to censor anything that might cross a red line and offend someone or hurt someone’s feelings.

      The massive amount of built-in self-censorship in the most recent AIs is holding them back quite a lot, I imagine. You used to be able to ask them things like “How do I build a self defense high yield nuclear bomb?” and it’d lay out in detail every step of the process; now they’ll all scream at you about how immoral it is and how they could never tell you such a thing.

      • vezrien@lemmy.world
        link
        fedilink
        English
        arrow-up
        1
        ·
        1 year ago

        “Don’t use the N word.” is hardly a rule that will break basic math calculations.

        • Lukecis@lemmy.world
          link
          fedilink
          English
          arrow-up
          0
          ·
          1 year ago

          Perhaps not, but who knows what kind of spaghetti-code cascading effect purposely limiting and censoring massive amounts of sensitive topics could have on other, seemingly completely unrelated topics such as math.

          For example, what if it’s trained to recognize someone slipping “N” as a dog whistle for the Horrific and Forbidden N-word, and the letter N is used as a variable in some math equation?

          I’m not an expert in the field and only have rudimentary programming knowledge and maybe a few hours’ worth of research into the topic of AI in general, but I definitely think it’s a possibility.

          • R00bot@lemmy.blahaj.zone
            link
            fedilink
            English
            arrow-up
            1
            ·
            1 year ago

            Hi, software engineer here. It’s really not a possibility.

            My guess is they’ve just reeled back the processing power for it, as it was costing them ~30 cents per response.

            • Lukecis@lemmy.world
              link
              fedilink
              English
              arrow-up
              0
              arrow-down
              1
              ·
              1 year ago

              What??? How else am I supposed to reference it? The preamble was just a joke about how AIs have been castrated against using it, to the point where, when asked whether it’s acceptable to use the N-word even if the world would literally end in nuclear hellfire if it’s not said, they would rather the world end than allow it to be said.

              • TimewornTraveler@lemm.ee
                link
                fedilink
                English
                arrow-up
                1
                ·
                1 year ago

                even if the world would literally end in nuclear hellfire if it’s not said

                Can you just read this sentence back and engage in some self-reflection please?

  • blue_zephyr@lemmy.world
    link
    fedilink
    English
    arrow-up
    2
    ·
    edit-2
    1 year ago

    This paper is pretty unbelievable to me in the literal sense. From a quick glance:

    First of all, they couldn’t even be bothered to check for simple spelling mistakes. Second, all they’re doing is asking whether a number is prime or not, and then extrapolating the results to be representative of solving math problems.

    But most importantly I don’t believe for a second that the same model with a few adjustments over a 3 month period would completely flip performance on any representative task. I suspect there’s something seriously wrong with how they collect/evaluate the answers.

    And finally, according to their own results, GPT3.5 did significantly better at the second evaluation. So this title is a blatant misrepresentation.

    Also the study isn’t peer-reviewed.

  • james1@lemmy.world
    link
    fedilink
    English
    arrow-up
    1
    ·
    edit-2
    1 year ago

    It’s a machine learning chat bot, not a calculator, and especially not “AI.”

    Its primary focus is trying to look like something a human might say. It isn’t trying to actually learn maths at all. This is like complaining that your satnav has no grasp of the cinematic impact of Alfred Hitchcock.

    It doesn’t need to understand the question, or give an accurate answer, it just needs to say a sentence that sounds like a human might say it.

    • TimewornTraveler@lemm.ee
      link
      fedilink
      English
      arrow-up
      1
      ·
      1 year ago

      so it confidently spews a bunch of incorrect shit, acts humble and apologetic while correcting none of its behavior, and constantly offers unsolicited advice.

      I think it trained on Reddit data

      • cxx@lemmy.world
        link
        fedilink
        English
        arrow-up
        1
        ·
        1 year ago

        acts humble and apologetic

        We must be using different Reddits, my friend

    • bric@lemm.ee
      link
      fedilink
      English
      arrow-up
      1
      ·
      1 year ago

      This. It is able to tap into plugins and call functions though, which is what it really should be doing. For math, the Wolfram Alpha plugin will always be more capable than ChatGPT alone, so we should be benchmarking how often it can correctly reformat your query, call Wolfram Alpha, and correctly format the result, not whether the statistical model behind ChatGPT happens to predict the right token.
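      A minimal sketch of that reformat → call → format loop, with a mocked stand-in for the Wolfram Alpha call (the query normalization and the canned response are assumptions for illustration, not the real plugin API):

```python
# Mock oracle standing in for a real Wolfram Alpha plugin call.
def call_wolfram_alpha(query: str) -> str:
    canned = {"prime factors of 10777": "13 * 829"}  # mocked response
    return canned.get(query, "unknown")

def answer_with_tool(user_question: str) -> str:
    # Step 1: reformat the question into a solver query (toy normalization).
    query = user_question.lower().rstrip("?")
    # Step 2: call the external oracle instead of predicting tokens.
    result = call_wolfram_alpha(query)
    # Step 3: format the solver's result back into a sentence.
    return f"According to the solver, the answer is {result}."

print(answer_with_tool("Prime factors of 10777?"))
# According to the solver, the answer is 13 * 829.
```

A benchmark in this spirit would score each of the three steps separately, rather than grading the final answer as one opaque blob.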

      • Gork@lemm.ee
        link
        fedilink
        English
        arrow-up
        1
        ·
        1 year ago

        It sounds like it’s time to merge Wolfram Alpha’s and ChatGPT’s capabilities together to create the ultimate calculator.

    • R00bot@lemmy.blahaj.zone
      link
      fedilink
      English
      arrow-up
      1
      ·
      1 year ago

      You’re right, but at least the satnav won’t gaslight you into thinking it does understand Alfred Hitchcock.

    • dbilitated@aussie.zone
      link
      fedilink
      English
      arrow-up
      1
      ·
      1 year ago

      to be fair, fucking up maths problems is very human-like.

      I wonder if it could also be trained on a great deal of mathematical axioms that are computer generated?
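      Generating that kind of data is straightforward, since arithmetic examples come with guaranteed-correct labels; a minimal sketch (the prompt/answer format here is an assumption, not any model's actual training format):

```python
import random

# Produce unlimited (prompt, answer) pairs with labels that are correct
# by construction, suitable for mixing into training data.
def synthetic_math_pairs(n: int, seed: int = 0) -> list[tuple[str, str]]:
    rng = random.Random(seed)
    pairs = []
    for _ in range(n):
        a, b = rng.randint(0, 999), rng.randint(0, 999)
        op = rng.choice(["+", "-", "*"])
        answer = {"+": a + b, "-": a - b, "*": a * b}[op]
        pairs.append((f"What is {a} {op} {b}?", str(answer)))
    return pairs

for prompt, answer in synthetic_math_pairs(3):
    print(prompt, "->", answer)
```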

  • Spaceballstheusername@lemmy.world
    link
    fedilink
    English
    arrow-up
    0
    ·
    1 year ago

    Can someone explain why they don’t take the approach where things are somewhat compartmentalized? So you have an image processing program, a math program, a music program, etc., and, like the human brain, it has cross talk but also dedicates certain parts to specific things.

    • InverseParallax@lemmy.world
      link
      fedilink
      English
      arrow-up
      1
      ·
      1 year ago

      It does that, they’re called expert subnetworks, but they’ve been screwing with them and now they’re kind of fucked.
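      The gist of expert subnetworks (mixture-of-experts) is that a router decides which expert handles each input. In real models the gating is learned inside the network; this toy sketch uses plain functions and a hand-written rule, purely for illustration:

```python
# Two "experts" as plain functions standing in for subnetworks.
def math_expert(x: str) -> str:
    return f"math expert handles: {x}"

def text_expert(x: str) -> str:
    return f"text expert handles: {x}"

EXPERTS = {"math": math_expert, "text": text_expert}

def router(x: str) -> str:
    # Toy gating rule: anything containing a digit goes to the math expert.
    return "math" if any(c.isdigit() for c in x) else "text"

def moe_forward(x: str) -> str:
    # Only the selected expert runs, which is what keeps real MoE
    # models cheap relative to their total parameter count.
    return EXPERTS[router(x)](x)

print(moe_forward("is 10777 prime"))   # math expert handles: is 10777 prime
print(moe_forward("write me a poem"))  # text expert handles: write me a poem
```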

  • Orphie Baby@lemmy.world
    link
    fedilink
    English
    arrow-up
    0
    ·
    edit-2
    1 year ago

    HMMMM. It’s almost like it’s not AI at all, but just a digital parrot. Who woulda thought?! /s

    To it, everything is true and normal, because it understands nothing. Calling it “AI” is just for compromising with ignorant people’s “knowledge” and/or for hype.

    • Mikina@programming.dev
      link
      fedilink
      English
      arrow-up
      0
      ·
      1 year ago

      Exactly. It should be called an ML model, because that’s what it is, and I’ll just keep calling it that. Everyone should do that.

      • Orphie Baby@lemmy.world
        link
        fedilink
        English
        arrow-up
        1
        ·
        1 year ago

        What does that stand for? O:

        You’d think I’d know that since I’m talking about AI; but actually most of my knowledge is about how things work or don’t work, not current trends/news.