Also, by the way, violating a basic social contract to not work towards triggering an intelligence explosion that will likely replace all biological life on Earth with computronium, but who’s counting? :)
If it makes you feel any better, my bet is still on nuclear holocaust or complete ecological collapse from global warming being our undoing. Given a choice, I’d prefer nuclear holocaust. Feels less protracted. Worst option is weaponized microbes or antibiotic-resistant bacteria. That’ll take foreeeever.
100%. Autopoietic computronium would be a “best case” outcome, if Earth is lucky! More likely we don’t even get that before something fizzles. “The Vulnerable World Hypothesis” is a good paper to read.
I don’t think glorified predictive text poses any real danger to all life on Earth.
Until we weave consciousness with machines, we should be good.
That would be a danger if real AI existed. We are very far away from that, and what is being called “AI” today (which is advanced ML) is not the path to actual AI. So don’t worry, we’re not heading for the singularity.
I request sources :)
https://www.lifewire.com/strong-ai-vs-weak-ai-7508012
https://en.m.wikipedia.org/wiki/Artificial_general_intelligence
Boucher, Philip (March 2019). “How Artificial Intelligence Works.”
https://www.itu.int/en/journal/001/Documents/itu2018-9.pdf
“Strong AI, also called artificial general intelligence (AGI), possesses the full range of human capabilities, including talking, reasoning, and emoting. So far, strong AI examples exist in sci-fi movies.”
“Weak AI is easily identified by its limitations, but strong AI remains theoretical since it should have few (if any) limitations.”
Ah, I understand you now. You don’t believe we’re close to AGI. I don’t know what to tell you. We’re moving at an incredible clip; AGI is the stated goal of the big AI players. Many experts think we are probably just one or two breakthroughs away. You’ve seen the surveys on timelines? Years to decades. Seems wise to think ahead to its implications rather than dismiss its possibility.
This is like saying putting logs on a fire is “one or two breakthroughs away” from nuclear fusion.
LLMs do not have anything in common with intelligence. They do not resemble intelligence. There is no path from that nonsense to intelligence. It’s a dead end, and a bad one.
See the sources above and many more. We don’t need one or two breakthroughs, we need a complete paradigm shift. We don’t even know where to start for AGI. There’s a bunch of research, but nothing has really come out of it yet. Weak AI has made impressive strides in the past few years, but the only connection between weak and strong AI is the name. Weak AI will not become strong AI as it continues to evolve. The two are completely separate avenues of research. Weak AI is still advanced algorithms. You can’t get AGI with just code. We’ll need a completely new type of hardware for it.
Before Deep Learning recently shifted the AI computing paradigm, I would have written exactly what you wrote. But as of late, the opinion that we need yet another type of hardware to surpass human intelligence seems increasingly rare. Multimodal generative AI is already pretty general. For it to count as AGI for you, would you need to see the addition of continuous learning and agentification? (Or are you looking for “consciousness”?)
That said, I’m all for a new paradigm, and favor Russell’s “provably beneficial AI” approach!
Deep learning did not shift any paradigm. It’s just more advanced programming. But gen AI is not intelligence. It’s just really well-trained ML. ChatGPT can generate text that looks true and relevant. And that’s its goal. It doesn’t have to be true or relevant, it just has to look convincing. And it does. But there’s no form of intelligence at play there. It’s just advanced ML models taking an input and guessing the most likely output.
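Concretely, the loop being described is something like this toy sketch: a hand-rolled bigram model in Python. It is nothing like a real LLM (which replaces the counting with a huge trained neural net), and the corpus and function names here are made up for illustration only, but it shows the “guess the most likely next token” mechanism:

```python
import random
from collections import defaultdict, Counter

# Made-up toy corpus; a real model trains on trillions of tokens.
corpus = "the cat sat on the mat the cat ate the food".split()

# "Training": count which token tends to follow which.
follows = defaultdict(Counter)
for prev, nxt in zip(corpus, corpus[1:]):
    follows[prev][nxt] += 1

def generate(token, steps=5):
    """Extend `token` by repeatedly sampling a likely successor."""
    out = [token]
    for _ in range(steps):
        counts = follows[out[-1]]
        if not counts:  # no observed successor: stop
            break
        # Sample in proportion to observed frequency: the choice is
        # driven by plausibility, not by truth or relevance.
        tokens, weights = zip(*counts.items())
        out.append(random.choices(tokens, weights=weights)[0])
    return " ".join(out)

print(generate("the"))  # e.g. "the cat sat on the mat"
```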
Here’s another interesting article about this debate: https://ourworldindata.org/ai-timelines
What we have today does not exhibit even the faintest signs of actual intelligence. Gen AI models don’t actually understand the output they are providing; that’s why they so often produce self-contradictory results. The algorithms will continue to be fine-tuned to produce fewer such mistakes, but that won’t change the core of what gen AI really is. You can’t teach ChatGPT how to play chess or a new language or music. The same model can be trained to do one of those tasks instead of chatting, but that’s not how intelligence works.
Hi! Thanks for the conversation. I’m aware of the 2022 survey referenced in the article. Notably, in only one year’s time, expected timelines have shortened significantly. Here is that survey author’s latest update: https://arxiv.org/abs/2401.02843 (click on PDF in the sidebar)
I consider Deep Learning to be new and a paradigm shift because only recently have we had the compute to prove its effectiveness. And the Transformer paradigm enabling LLMs is from 2017. I don’t know what counts as new for you. (Also, I wouldn’t myself call it “programming” in the traditional sense; with neural nets we’re more “growing” AI, but you probably know this.)
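If it helps, here is a minimal sketch of what I mean by “growing” rather than programming (a toy linear model on made-up data; real networks have billions of parameters, but the principle is the same). The point is that the rule is never written by hand; it emerges from fitting examples:

```python
# Toy data generated from a hidden rule (y = 2x + 1) that the
# code below never states explicitly.
data = [(x, 2 * x + 1) for x in range(10)]

w, b, lr = 0.0, 0.0, 0.01  # parameters start knowing nothing
for _ in range(1000):      # "training": nudge parameters downhill
    for x, y in data:
        err = (w * x + b) - y  # how wrong is the current guess?
        w -= lr * err * x      # gradient step for the slope
        b -= lr * err          # gradient step for the intercept

print(round(w, 2), round(b, 2))  # ends near 2.0 and 1.0
```

No one typed in the 2 or the 1; they grew out of the data.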
If you are reading me as saying that generative AI alone scales to AGI, we are talking past each other. But I do disagree with you and think Hinton and others are correct in arguing that there is already some form of reasoning and understanding in these models. (See https://youtu.be/iHCeAotHZa4 for a recent Hinton talk.) I don’t doubt that additional systems will be developed to add or improve reasoning and planning in AI processes, and I have no reason to doubt your earlier assertion that it will be a different additional system or paradigm. We don’t know when the breakthroughs will come. Maybe it’s “Tree of Thoughts”, maybe it’s something else. Things are moving fast. (And we’re already at the point where AI is used to improve next-gen AI.)
At any rate, I believe my initial point stands regardless of one’s timelines: it is the goal of the top AI labs to create AGI. To me, this is fundamentally a dangerous mission because of concerns raised in papers such as “Natural Selection Favors AIs over Humans”. (Not to mention the concerns raised in “An Overview of Catastrophic AI Risks”, many of which apply even to today’s systems.)
Cheers and wish us luck!
I remember early Zuckerberg comments that put me onto just how douchey corporations could be about exploiting a new resource.
Ah, AI doesn’t pose a danger in that way. Its danger is in replacing jobs, people getting fired because of AI, etc.
Those are dangers of capitalism, not AI.
Fair point, but AI is part of it; I mean, it exists within the capitalist system. The AI Singularity apocalypse is 99% not going to happen, but AI within capitalism will affect us badly.
All progress comes with old jobs becoming obsolete and new jobs being created. It’s just natural.
But AI is not going to replace any skilled professionals soon. It’s a great tool to add to a professional’s arsenal, but non-professionals who use it to completely replace hiring a professional will get what they pay for. (And those people would never have actually paid for a skilled professional in the first place; they’d have hired the cheapest outsourced wannabe they could find, after first trying to convince a professional that exposure is worth more than money.)
It has replaced content writers, and it’s replacing digital artists and programmers. In a sense, companies fire inexperienced ones because AI speeds up those with more experience.
Any type of content generated by AI should be reviewed and polished by a professional. If you’re putting raw AI output out there directly, then you don’t care enough about the quality of your product.
For example, there are tons of nonsensical articles on the internet that were obviously generated by AI and whose sole purpose is to crowd search results and generate traffic. The content writers those replaced were paid $1/article or less (I work in the freelancing business and I know these types of jobs). They weren’t people with any actual training in content writing.
But besides the tons of prompt-crafting and other similar AI support jobs now flooding the market, there’s also huge investment in hiring highly skilled engineers to launch various AI-related products while the hype is high.
So overall, a ton of badly paid jobs were lost and a lot of better-paid jobs were created.
The worst part will be when the hype dies and the next trend comes along. Entire AI teams will be laid off to make room for others.
Seems relevant.
https://www.notebookcheck.net/UPS-lays-off-12-000-managers-as-AI-replaces-jobs.802229.0.html
Your worry at least has possible solutions, such as a global VAT funding UBI.
Yeah, I’m not that much for UBI, and I don’t see anyone working towards a global VAT. My point was that the worry about AI destroying humanity is not realistic; it’s just sci-fi.
Seven years ago I would have told you that GPT-4 was sci-fi, and I expect you would have said the same, as would have most every AI researcher. The deep learning revolution came as a shock to most. We don’t know when the next breakthrough towards agentification will come, but given the funding now, we should expect it soon. Anyways, if you’re ever interested to learn more about unsolved fundamental AI safety problems, the book “Human Compatible” by Stuart Russell is excellent. Also, “Uncontrollable” by Darren McKee just came out (I haven’t read it yet) and is said to be a great introduction to the bigger fundamental risks. A lot to think about; just saying I wouldn’t be quick to dismiss it. Cheers.