• 0 Posts
  • 35 Comments
Joined 1 year ago
Cake day: June 12th, 2023

  • So they’ve failed at pushing full return to office and now they’re commissioning unscientific studies to try to make hybrid seem necessary?

    These results really can’t be applied to all jobs. Some jobs obviously require being in person, but many white-collar jobs can be done entirely remotely, saving workers time and money and freeing up infrastructure for those who need or want to go in. Not to mention the other benefits: improved mental health and reduced emissions from commuting.







  • You don’t really have one lol. You’ve read too many pop-sci articles from AI proponents and haven’t understood any of the underlying tech.

    All your retorts boil down to copying my arguments because you seem to be incapable of original thought. Therefore it’s not surprising you believe neural networks are approaching sentience and consider imitation to be the same as intelligence.

    You seem to think there’s something mystical about neural networks but there is not, just layers of complexity that are difficult for humans to unpick.

    You argue like a religious zealot or a Trump supporter; at this point it seems you don’t understand basic logic or how the scientific method works.




  • > You obviously have hate issues

    Says the person who starts chucking out insults the second they get downvoted.

    From what I gather, anyone who disagrees with you is a tech bro with issues, which is pathetic to the point that it barely warrants a response, but here goes…

    I think I understand your viewpoint. You like playing around with AI models and have bought into the hype so much that you’ve completely failed to consider their limitations.

    People do understand how they work; it’s clever mathematics. The tech is amazing and will no doubt bring numerous positive applications for humanity, but there’s no need to go around making outlandish claims like they understand or reason in the same way living beings do.

    You consider intelligence to be nothing more than parroting, which is, quite frankly, dangerous thinking and says a lot about your reductionist worldview.

    You may redefine the word “understanding” and attribute it to an algorithm if you wish, but others and I are allowed to disagree. No rigorous evidence currently exists that we can replicate any aspect of consciousness using a neural network alone.

    You say pessimistic, I say realistic.








  • Possible, yes. It’s also entirely possible there are interactions we are yet to discover.

    I wouldn’t claim it’s unknowable. Just that there’s little evidence so far to suggest any form of sentience could arise from current machine learning models.

    That hypothesis is not verifiable at present as we don’t know the ins and outs of how consciousness arises.

    > Then it would logically follow that all the other functions of a human brain are similarly “possible” if we train it right and add enough computing power and memory. Without ever knowing the secrets of the human brain. I’d expect the truth somewhere in the middle of those two perspectives.

    Lots of things are possible; we use the scientific method to test them, not speculative logical arguments.

    > Functions of the brain

    These would need to be defined.

    > But that means it should also be reproducible by similar means.

    Can’t be sure of this… For example, what if quantum interactions are involved in brain activity? How does the grey matter in the brain affect the functioning of neurons? How do the heart/gut affect things? Do cells which aren’t neurons provide any input? Does some aspect of consciousness arise from the very material the brain is made of?

    As far as I know all the above are open questions and I’m sure there are many more. But the point is we can’t suggest there is actually rudimentary consciousness in neural networks until we have pinned it down in living things first.




  • > I’d appreciate it if you could share evidence to support these claims.

    Which claims? I am making no claims other than that AIs in their current form do not fully represent what most humans would define as a conscious experience of the world. They therefore do not understand concepts as most humans understand them. My evidence for this is that the hard problem of consciousness is yet to be solved and we don’t fully understand how living brains work. As stated previously, the burden of proof for anything further lies with you.

    > What definitions? Cite them.

    The definition of how a conscious being experiences the world. Defining it is half the problem. There are no useful citations, as you have entered the realm of philosophical debate, which has no real answers, just arguments about definitions.

    > Explain how I’m oversimplifying, don’t simply state that I’m doing it.

    I already provided a precise example of your reductionist arguing methods. Are you even taking the time to read my responses or just arguing for the sake of not being wrong?

    > I’ve already provided my proof. I apologize if I missed it, but I haven’t seen your proof yet. Show me the default scientific position.

    You haven’t provided any proof whatsoever because you can’t. To convince me you’d have to provide compelling evidence of how consciousness arises within the mind and then demonstrate how that can be replicated in a neural network. If that existed it would be all over the news and the Nobel Prizes would be in the post.

    > If you have evidence to support your claims, I’d be happy to consider it. However, without any, I won’t be returning to this discussion.

    Again, I don’t need evidence for my standpoint, as it’s the default scientific position and the burden of proof lies with you. It’s like asking me to prove you didn’t see a unicorn.