Deep learning did not shift any paradigm; it's just more advanced programming. And gen AI is not intelligence; it's just very well-trained ML. ChatGPT can generate text that looks true and relevant, and that's its goal: the output doesn't have to be true or relevant, it just has to look convincing. And it does. But there's no form of intelligence at play there, just advanced ML models taking an input and guessing the most likely output.
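(For concreteness, the "guessing the most likely output" mechanism can be sketched in a deliberately toy form: a lookup table mapping a context to a probability distribution over next tokens, from which we pick the most likely. The table and probabilities below are invented purely for illustration; real LLMs learn this distribution with billions of parameters.)

```python
# Toy sketch of next-token prediction: map a context to a probability
# distribution over candidate next tokens and emit the most likely one.
# The probabilities here are made up for illustration only.
next_token_probs = {
    ("the", "sky"): {"is": 0.7, "was": 0.2, "looks": 0.1},
    ("sky", "is"): {"blue": 0.6, "falling": 0.3, "clear": 0.1},
}

def predict_next(context):
    """Return the most probable next token for a two-word context."""
    dist = next_token_probs[context]
    return max(dist, key=dist.get)

tokens = ["the", "sky"]
for _ in range(2):
    tokens.append(predict_next((tokens[-2], tokens[-1])))

print(" ".join(tokens))  # the sky is blue
```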
What we have today does not exhibit even the faintest signs of actual intelligence. Gen AI models don't actually understand the output they produce, which is why they so often contradict themselves. The algorithms will continue to be fine-tuned to produce fewer such mistakes, but that won't change what gen AI fundamentally is. You can't teach ChatGPT to play chess, or a new language, or music. The same model can be trained to do one of those tasks instead of chatting, but that's not how intelligence works.
Hi! Thanks for the conversation. I’m aware of the 2022 survey referenced in the article. Notably, in only one year’s time, expected timelines have advanced significantly. Here is that survey author’s latest update: https://arxiv.org/abs/2401.02843 (click on PDF in the sidebar)
I consider deep learning to be new and a paradigm shift because only recently have we had the compute to prove its effectiveness. And the Transformer architecture enabling LLMs is from 2017. I don't know what counts as new for you. (Also, I wouldn't myself call it "programming" in the traditional sense; with neural nets we're more "growing" AI, but you probably know this.)
If you are reading me as saying that generative AI alone scales to AGI, we are talking past each other. But I do disagree with you, and I think Hinton and others are correct where they show there is already some form of reasoning and understanding in these models. (See https://youtu.be/iHCeAotHZa4 for a recent Hinton talk.) I don't doubt that additional systems will be developed to add further reasoning and planning to AI processes, and I have no reason to doubt your earlier assertion that it will be a different additional system or paradigm. We don't know when the breakthroughs will come. Maybe it's "Tree of Thoughts", maybe it's something else. Things are moving fast. (And we're already at the point where AI is used to improve next-gen AI.)
At any rate, I believe my initial point remains regardless of one's timelines: it is the goal of the top AI labs to create AGI. To me, this is a fundamentally dangerous mission because of concerns raised in papers such as "Natural Selection Favors AIs over Humans" (not to mention the concerns raised in "An Overview of Catastrophic AI Risks", many of which apply even to today's systems).
Here’s another interesting article about this debate: https://ourworldindata.org/ai-timelines
Cheers and wish us luck!