It is not hand-waving; it is the difference between an LLM, which, again, has no cognizance, no agency, and no thought, and humans, who do. Do you truly believe humans are simply mechanistic processes that, when you ask them a question, run a cascade of mathematics and spit out an output? People actually have an internal reality. For example, they could refuse to answer your question! Can an LLM do even something that simple?
I find it absolutely mystifying that you claim you've studied this when you so confidently analogize humans and LLMs, when they are truly nothing alike.
Define your terms. And explain why any of them matter for producing valid and “intelligent” responses to questions.
Do you truly believe humans are simply mechanistic processes that, when you ask them a question, run a cascade of mathematics and spit out an output?
Why are you so confident they aren’t? Do you believe in a soul or some other ephemeral entity that wouldn’t leave us as a biological machine?
People actually have an internal reality. For example, they could refuse to answer your question! Can an LLM do even something that simple?
Define your terms. And again, why is that a requirement for intelligence? Most of the things we do each day don't involve conscious internal planning and reasoning. We simply act and, if asked, generate justifications and reasoning after the fact.
It's not that I'm claiming LLMs = humans; I'm saying you're throwing out all these fuzzy concepts as if they're essential features lacking in LLMs, in order to explain their failures on some question answering as something other than just a data problem. Many people want to believe in human intellectual specialness, and more recently people are scared of losing their jobs to AI, so there's always a kneejerk reaction to redefine intelligence whenever an animal or machine is discovered to have surpassed the previous threshold. Your thresholds are facets of the mind that you don't define, have no means to recognize (I assume you are conscious, but I cannot test it), and have not explained why they matter for generating facts rather than BS.
How the brain works, and what's important for various capabilities, is not a well-understood subject, and many of these seemingly essential features are not really testable or comparable between people, and sometimes simply don't exist in a given person, due either to brain damage or a quirk of development. People with these conditions (and a host of other psychological anomalies) seem to function just fine and would not be considered unthinking. They can certainly answer (and get wrong) questions.
Do you truly believe humans are simply mechanistic processes that, when you ask them a question, run a cascade of mathematics and spit out an output? People actually have an internal reality.
Those two things can be true at the same time.
I find it absolutely mystifying that you claim you've studied this when you so confidently analogize humans and LLMs, when they are truly nothing alike.
"Nothing alike" is kinda harsh; we have about as much in common with ChatGPT as we have with flies purpose-bred to fly left or right when exposed to certain stimuli.
No, they can’t. The question is fundamentally: do humans have any internal thoughts or feelings, or are they algorithms? If you believe other people aren’t literally NPCs, then they are not LLMs.
That doesn't even begin to be a dichotomy. Unless you want to claim humans are more than Turing complete (hint: that's not just physically but logically impossible), we can be expressed as algorithms. That includes the fancy-pants feature of having an internal world, and, more so, of being aware of having that world. (A thermostat also has an internal world, but (a) it's rather limited, and (b) the thermostat has no system to regulate its internal world; the outside world does that for it.)
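To make the point above concrete, here is a deliberately trivial sketch (all names are hypothetical, and this claims nothing about how LLMs actually work): a purely mechanistic program with a few bytes of internal state can "refuse to answer a question," which is why refusal alone can't serve as the dividing line between algorithms and minds.

```python
# Toy illustration: a mechanistic process with a tiny "internal world"
# (a patience counter) that can refuse to answer. The internal state is
# invisible to the caller and changes with each interaction.

class GrumpyAnswerer:
    def __init__(self, patience: int):
        self.patience = patience  # hidden internal state

    def ask(self, question: str) -> str:
        if self.patience <= 0:
            return "I refuse to answer that."
        self.patience -= 1  # each question wears the state down
        return f"An answer to: {question}"

bot = GrumpyAnswerer(patience=1)
print(bot.ask("What is 2 + 2?"))  # answers while patience remains
print(bot.ask("And 3 + 3?"))      # refuses once patience runs out
```

The sketch is about ten lines of deterministic code, yet from the outside it exhibits the "refusal" behavior held up earlier in the thread as evidence of an internal reality.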