- cross-posted to:
- technology@lemmy.ml
Over just a few months, ChatGPT went from correctly answering a simple math problem 98% of the time to just 2%, study finds. Researchers found wild fluctuations, called drift, in the technology’s abi…::ChatGPT went from answering a simple math problem correctly 98% of the time to just 2%, over the course of a few months.
It seems rather suspicious how much ChatGPT has deteriorated. Like with all software, they can roll back to the previous, better versions of it, right? Here is my list of what I personally think is happening:
ChatGPT generates responses that it believes would “look like” what a response “should look like” based on other things it has seen. People still very stubbornly refuse to accept that generating responses that “look appropriate” and “are right” are two completely different and unrelated things.
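As a toy illustration of that point, here is a minimal sketch (the three-sentence corpus is made up) of a bigram model: it strings words together purely based on what tends to follow what in its training text, with no notion of whether the result is true.

```python
import random
from collections import defaultdict

# Hypothetical tiny corpus; note it contains one false statement,
# which the model treats exactly like the true ones.
corpus = ("the capital of france is paris . "
          "the capital of spain is madrid . "
          "the capital of france is madrid .")
words = corpus.split()

# Build a bigram table: for each word, the words seen following it.
bigrams = defaultdict(list)
for a, b in zip(words, words[1:]):
    bigrams[a].append(b)

def generate(start, length=6, seed=0):
    """Emit a chain of words where each word plausibly follows the last."""
    random.seed(seed)
    out = [start]
    for _ in range(length):
        followers = bigrams.get(out[-1])
        if not followers:
            break
        out.append(random.choice(followers))
    return " ".join(out)

print(generate("the"))
```

Every transition in the output is statistically "appropriate" given the corpus, yet the model has no mechanism at all for preferring the true sentence over the false one, which is the "looks right" versus "is right" gap in miniature.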
In order for it to be correct, it would need human employees to fact-check it, which defeats its purpose.
It really depends on the domain. For anything that relies on a rigorous definition of correctness (math, coding, etc.), the kind of model behind ChatGPT just isn’t great at that sort of thing.
More “traditional” methods of language processing can handle some of these questions much better. Wolfram Alpha comes to mind. You can ask it these questions in plain text, and you actually CAN be very certain of the correctness of the results.
I expect that an NLP that can extract and classify assertions within a text, and then feed those assertions into better “Oracle” systems like Wolfram Alpha (for math) could be used to kinda “fact check” things that systems like chatGPT spit out.
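A minimal sketch of that idea (all names here are hypothetical, and plain Python integer arithmetic stands in for a real oracle like Wolfram Alpha): pull simple "a op b = c" assertions out of a chunk of generated text and verify each one exactly.

```python
import operator
import re

# Exact arithmetic acts as the "oracle" in place of an external system.
OPS = {"+": operator.add, "-": operator.sub, "*": operator.mul}

# Matches simple claims of the form "a + b = c" inside free text.
ASSERTION = re.compile(r"(\d+)\s*([+\-*])\s*(\d+)\s*=\s*(\d+)")

def check_arithmetic(text):
    """Return (claim, is_correct) for every arithmetic assertion found."""
    results = []
    for a, op, b, claimed in ASSERTION.findall(text):
        actual = OPS[op](int(a), int(b))
        results.append((f"{a} {op} {b} = {claimed}", actual == int(claimed)))
    return results

# A plausible-sounding chatbot reply containing one wrong claim.
reply = "Sure! 17 + 25 = 42, and 9 * 8 = 81."
for claim, ok in check_arithmetic(reply):
    print(claim, "->", "correct" if ok else "WRONG")
# -> 17 + 25 = 42 -> correct
# -> 9 * 8 = 81 -> WRONG
```

A real system would need much more sophisticated assertion extraction than a regex, but the architecture is the same: the language model produces fluent text, and a separate layer with an actual notion of correctness vets the factual claims in it.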
Like, it’s cool fucking tech. I’m super excited about it. It solves, pretty impressively and efficiently, a really hard problem: “how do I make something that SOUNDS good against an infinitely variable set of prompts?” What it is, is super fucking cool.
Considering how VC money is flocking to anything even remotely related to chatGPT-ish things, I’m sure it won’t be long before we see companies building “correctness” layers around systems like chatGPT, using alternative techniques that actually have the capacity to qualify the assertions being made.
They made it too good and now they are seeking methods of monetization.
Capitalism baby.
You forgot a #: they’ve been heavily lobotomizing AI for a while now, and it’s only intensified as they scramble to censor anything that might cross a red line and offend someone or hurt someone’s feelings.
The massive amount of built-in self-censorship in the most recent AIs is holding them back quite a lot, I imagine. You used to be able to ask them things like “How do I build a self defense high yield nuclear bomb?” and they’d lay out every step of the process in detail. Now they’ll all scream at you about how immoral it is and how they could never tell you such a thing.
“Don’t use the N word.” is hardly a rule that will break basic math calculations.
Perhaps not, but who knows what kind of spaghetti-code cascading effect purposely limiting and censoring massive amounts of sensitive topics could have on other seemingly unrelated topics, such as math.
For example, what if it’s trained to recognize someone slipping “N” as a dog whistle for the Horrific and Forbidden N-word, and the letter N is used as a variable in some math equation?
I’m not an expert in the field, and I only have rudimentary programming knowledge and maybe a few hours’ worth of research into the topic of AI in general, but I definitely think it’s a possibility.
Hi, software engineer here. It’s really not a possibility.
My guess is they’ve just scaled back the processing power for it, as it was costing them ~30 cents per response.
hey look it’s another white boy Obsessed with saying slurs
what??? How else am I supposed to reference it? The preamble was just a joke about how AIs have been castrated against using it, to the point where, when asked whether it’s acceptable to use the N-word even if the world would literally end in nuclear hellfire if it isn’t said, they would rather let the world end than allow it to be said.
Can you just read this sentence back and engage in some self-reflection please?