It is your responsibility to prove your assertion that if we just throw enough hardware at LLMs they will suddenly become alive in any recognizable sense, not mine to prove you wrong.
You are anthropomorphizing LLMs. They do not reason and they are not lazy. The paper discusses a way to improve their predictive output, not a way to actually make them reason.
But don’t take my word for it. Go talk to ChatGPT. Ask it questions like these:
“If an LLM is provided enough processing power, would it eventually be conscious?”
“Are LLM neural networks like a human brain?”
“Do LLMs have thoughts?”
“Are LLMs similar in any way to human consciousness?”
Just always make sure to check the output of LLMs. Since they are complicated autosuggestion engines, they will sometimes confidently spout bullshit, so their answers must be examined for correctness. (As my initial post discussed.)
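If you’d rather not click through the web UI, here’s a minimal sketch of asking those same questions through the API and printing the replies for you to judge. This assumes the openai Python package, an API key in your environment, and a placeholder model name; nothing more.

```python
# Rough sketch: loop the questions above through the chat API and print the replies.
# Assumes the `openai` Python package and OPENAI_API_KEY set in the environment.
from openai import OpenAI

client = OpenAI()  # picks up OPENAI_API_KEY from the environment

questions = [
    "If an LLM is provided enough processing power, would it eventually be conscious?",
    "Are LLM neural networks like a human brain?",
    "Do LLMs have thoughts?",
    "Are LLMs similar in any way to human consciousness?",
]

for q in questions:
    response = client.chat.completions.create(
        model="gpt-4o",  # placeholder; use whatever model you have access to
        messages=[{"role": "user", "content": q}],
    )
    # Print the answer so you can judge it yourself -- don't take it at face value either.
    print(q)
    print(response.choices[0].message.content)
    print("-" * 40)
```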
You’re assuming I’m saying something that I’m not, and then arguing with that instead of my actual claim.
I’m saying we don’t know for sure what they will be able to do when they’re scaled up. That’s the end of my assertion. I don’t have to prove that they will suddenly come alive; I’m not claiming they will. I’m just claiming we don’t know what will happen when they’re scaled, and they seem to have emergent properties as they scale up. Nobody has devised a way of predicting which emergent properties appear at which scale, and nobody has made any progress whatsoever on knowing what scaling up accomplishes.
Can they reason? Yes, but poorly right now. Will that get better? Who knows.
The end of my claim is that we don’t know what’ll happen when they scale up, and that you can’t just write it off like you are.
If you want proof that they reason, see the research article I linked. If they can do that in the rudimentary form we’ve built in so little time, we can’t write off the possibility that they will improve as they scale.
Whether or not they reason LIKE HUMANS is irrelevant if they can do the job.
And I’m not anthropomorphizing them without reason; there aren’t established terms for this already. What would you call this behavior of answering questions significantly better when asked to fully explain the reasoning? I would say it is taking the easiest option that still meets the requirements of what it was asked to do, following the path of least resistance. I don’t have a better word for that than laziness.
https://www.downtoearth.org.in/news/science-technology/artificial-intelligence-gpt-4-shows-sparks-of-common-sense-human-like-reasoning-finds-microsoft-89429
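To make concrete what I mean by “answering better when asked to fully explain reasoning,” here’s a rough sketch of the same question asked two ways. The client library, model name, and example question are placeholders of mine, not anything taken from the article above; run both and compare the answers, because that difference is the behavior I’m pointing at.

```python
# Rough, untested sketch: the same question asked directly, then with a request
# to explain the reasoning first. Assumes the `openai` Python package and
# OPENAI_API_KEY in the environment; the model name is only a placeholder.
from openai import OpenAI

client = OpenAI()

QUESTION = (
    "A bat and a ball cost $1.10 together. The bat costs $1.00 more than "
    "the ball. How much does the ball cost?"
)

def ask(prompt: str) -> str:
    """Send a single user message and return the model's reply text."""
    response = client.chat.completions.create(
        model="gpt-4o",  # placeholder; use whatever model you have access to
        messages=[{"role": "user", "content": prompt}],
    )
    return response.choices[0].message.content

# Style 1: ask for the answer directly.
print(ask(QUESTION))

# Style 2: ask it to fully explain its reasoning before answering.
print(ask(QUESTION + " Explain your reasoning step by step before giving the final answer."))
```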
Furthermore, predictive power is just another way of achieving reasoning: better predictive power IS better reasoning, because you can’t predict well without reasoning.
It’s your job to prove your assertion that we know enough about cognition to make reasonable comparisons.
May as well ask me to prove that we know enough about calculators to say they won’t develop sentience while I’m at it.