For a long time I’ve thought it would be cool to upload my consciousness into a machine and be able to talk to a version of myself that didn’t have emotions and cravings.
It might tell me that being around my parents has consistently had a negative effect on my mood for years now, even if I don’t see it. Or that I don’t really love X, I just like having sex with her. Maybe it could determine that Y makes me uncomfortable, but has had an overall positive effect on my life. It could mirror myself back to me in a highly objective way.
Of course this is still science fiction, but @[email protected] has pointed out to me that it’s now just a little bit closer to being a reality.
With PrivateGPT, I could set up my own local AI.
https://generativeai.pub/how-to-setup-and-run-privategpt-a-step-by-step-guide-ab6a1544803e
https://github.com/imartinez/privateGPT
I could feed this AI information that I wasn’t comfortable showing to anyone else. I’ve been keeping diaries for most of my adult life. Once PrivateGPT was set up with its base language model, I could feed it my diaries and then have a chat with myself.
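For the curious, the workflow would look roughly like this. It’s just a sketch assuming the repo layout described in its README at the time (a `source_documents/` folder scanned by `ingest.py`, and `privateGPT.py` for the interactive prompt); the `~/diaries` folder and plain-text entries are placeholders of my own:

```python
# Sketch: stage diary files for PrivateGPT's ingestion step, then run it.
# Assumes a local clone of https://github.com/imartinez/privateGPT with the
# layout from its README: source_documents/, ingest.py, privateGPT.py.
import shutil
import subprocess
from pathlib import Path

REPO = Path("privateGPT")                 # local clone of the repo
DIARIES = Path("~/diaries").expanduser()  # hypothetical folder of .txt diary entries

# Copy diary entries into the folder that ingest.py scans for documents.
dest = REPO / "source_documents"
dest.mkdir(exist_ok=True)
for entry in DIARIES.glob("*.txt"):
    shutil.copy(entry, dest / entry.name)

# Build the local vector index over the diaries (runs fully offline).
subprocess.run(["python", "ingest.py"], cwd=REPO, check=True)

# Start the interactive prompt; answers are drawn from the indexed text.
subprocess.run(["python", "privateGPT.py"], cwd=REPO)
```

Nothing leaves the machine at any point, which is the whole appeal for something as private as a diary.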
I realize PrivateGPT is not sentient, but this is still exciting, and my mind is kinda blown right now.
Edit 1: Guys, this isn’t about me creating a therapist-in-a-box to solve any particular emotional problem. It’s just an interesting idea about using a pattern recognition tool on myself and having it create summaries of things I’ve said. Lighten up.
Edit 2: It was anticlimactic. This thing basically spits out word salad no matter what I ask it, even if the question has a correct answer, like a specific date.
I think there’s an (understandable) urge from the technically minded to strive for rationality not only above all, but to the exclusion of all else. There is nothing objectively better about strict objectivity without relying on circular logic (or, indeed, arguing that subjective happiness is perfectible through objectivity).
I am by no means saying that you should not pursue your desire, but I would like to suggest that removing a fundamental human facet like emotions isn’t necessarily the utopian outlook you might think it is.