misk@sopuli.xyz to Technology@lemmy.world · English · 5 days ago
OpenAI and others seek new path to smarter AI as current methods hit limitations (www.reuters.com)
cross-posted to: technology@lemmy.ml
A_A@lemmy.world · English · 5 days ago
… “Alibaba (LLM)” … is it this?
Qwen2.5: A Party of Foundation Models!
https://qwenlm.github.io/blog/qwen2.5/
brucethemoose@lemmy.world · English · edited 5 days ago
BTW, as I wrote that post, Qwen 32B Coder came out. Now a single 3090 can beat GPT-4o, and do it way faster! In coding, specifically.
A_A@lemmy.world · English · 5 days ago
Great news 😁🥂, someone should make a new post on this!
brucethemoose@lemmy.world · English · 5 days ago
Yep. 32B fits on a “consumer” 3090, and I use it every day. 72B will fit neatly on 2025 APUs, though we may have an even better update by then.
I’ve been using local LLMs for a while, but Qwen 2.5, specifically 32B and up, really feels like an inflection point to me.
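For readers wondering how a 32B-parameter model fits in a 3090's 24 GiB of VRAM: it only works with weight quantization. The sketch below is my own back-of-the-envelope arithmetic (the bits-per-weight figures are illustrative assumptions, not something the commenter stated), estimating the memory for the weights alone, ignoring KV cache and activation overhead.

```python
# Rough VRAM estimate for quantized model weights only.
# Assumed quant levels: fp16 (16 bits), int8 (8), ~4-5 bit GGUF-style (4.5).

def weight_vram_gib(params_billion: float, bits_per_weight: float) -> float:
    """Approximate GiB needed to hold the weights alone."""
    bytes_total = params_billion * 1e9 * bits_per_weight / 8
    return bytes_total / 2**30  # bytes -> GiB

for bits in (16, 8, 4.5):
    print(f"32B @ {bits:>4} bits/weight ≈ {weight_vram_gib(32, bits):.1f} GiB")
# 16 bits -> ~59.6 GiB (doesn't fit), 8 -> ~29.8 GiB, 4.5 -> ~16.8 GiB
```

At roughly 4-5 bits per weight, the weights come to about 17 GiB, leaving headroom on a 24 GiB card for context; at fp16 the same model needs nearly 60 GiB, which is why quantization is the enabler here.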