I think the end goal would be AR/VR built into glasses that are as light as current-day glasses. We are probably a long time away from that, but I feel like most VR headsets right now are beta versions of that end goal.
“a long time” probably means like a decade for this kind of stuff, so at least there’s that to look forward to
No, for one simple reason: I have a wife. We like to experience content together (watching movies/TV, playing games), none of which we could do unless we bought not one but two of these things. No thanks.
It’s funny how obvious this point is and yet it seems to be getting kind of quietly ignored.
I’ve heard a lot of pundits excitedly talking about using this headset to get rid of TVs in their house. I keep wondering how they think that’ll go over with their families.
The Vision product is more like a monitor than AR glasses you wear all day. It makes nods to the practicality of strapping a monitor to your face: you can unplug, slide the battery into your pocket, and stand up and walk into a different room to get something without disengaging from the monitor. If someone wants to chat, you can fade in reality and let them see your eyes so that the two of you can more comfortably (we’ll see about this!) exchange a few words.
Without things like that, strapping a monitor to your face to get great eye tracking, immersive photos/video, and the giant digital canvas for your application windows might prove too inconvenient. For example, needing to pull the goggles off to answer a quick question from someone else in the room could make the whole endeavor not worth the hassle in some settings. If those settings turn out to be popular (e.g. using this at work in an office), then Apple is one step ahead.
I think that AR glasses you wear when out and about will be a different product. Admittedly, the photography aspect of Vision is a tentative move in this direction. I think it’s being positioned more as a thing where you’d pull it out to capture a particular scene, then put it away again, rather than something you’d wear for an entire outing (the battery life largely precludes such a use, after all). I don’t think it’s a great fit for this now as it seems like it’d require the equivalent of a camera bag to bring with you, but undoubtedly some people will capture some amazing images.
Y’all remember Google Glass? Seems like a decade ago.
I can’t imagine walking around in public with something like this Apple headset on, let alone with the insane price tag… which means that people are definitely going to do it.
There was a hilarious few-week period in 2013 where I saw multiple people slam into the handrails and doors on Muni buses in SF while glassed out. There were also people yelling at them about not consenting to being recorded, etc., but that was much less amusing. Within a couple of weeks you entirely stopped seeing them in public spaces.
Yep. I remember their users being called ‘glassholes’ too.
A new nickname for Apple Vision users: iSores?
Nope
That’s why, when Google revealed Glass, I thought, “That’s it, that’s the headwear device that will be the future. It’s literally just glasses!”
Alas, I should have known back then there was one thing going against that device’s survival odds: it was a project from the project slayer, Google.
Short answer : no.
Long answer : noooooooooooooooo.
For all the faults Google Glass had, at least it was similar in size to regular glasses. I would only consider these things if they were as non-intrusive as possible, aka not ski goggles.
No one talks about how bad it is to have tons of little LEDs in your eyes. My eyes are already messed up, and I can’t use VR for more than an hour before I feel like I want to die. So it’s a HARD pass from people like me. Talk to me once you put screens in the walls, not on them.
I don’t even want to wear clothes half the time never mind a giant computer that’s tracking my eyeballs.
No. The future of tech should be about getting more capabilities out of fewer (and/or less intrusive) screens. Would love to see more advances in e-ink displays and open-source, ‘ambient’ voice-controlled UIs.
Oh no, I hate voice-controlled tech; that’s a no from me. I would not use it at all.
Agreed. I don’t want to use voice controls for anything, but I agree with the OP’s more general point about getting more capabilities out of fewer screens.
I don’t see any downside at all if it’s layered on top of some other (very capable) keyboard-driven UI that can do all the same things.
The downside is that no existing tech company has enough self-control to actually keep these kinds of recordings private.
That’s why we need something open-source and self-hosted.
Several such solutions already exist. Problem is, only folks like us mess around with it. Non-geeks, not so much.
There doesn’t need to be a keyboard, just good hand gestures that can’t be performed by accident, and good face recognition software. If the Apple headset has this, I’m going to go bankrupt.
I got to try messing around with a HoloLens a couple of years back. The hand tracking wasn’t perfect, but it was pretty cool. It read my “typing in the air” gestures to set a WPA2 key very accurately (much to my surprise). The demo I was playing around in, picking up and moving virtual packages around a model city to control drones flying around that part of the convention center, was pretty neat.
No. Absolutely not.
I hope this trend won’t last. My smartphone consumption is too high already.
Headsets already feel outdated. They seem inconvenient and uncomfortable, and they take you away from life instead of enhancing it. Whatever happened to Google Glass? I disliked that for many reasons, but at least it wasn’t a headset.
Google happened to it. Right when some of us started doing practical things with it. Still haven’t forgiven them for that.
Can’t have a product potentially get past the early-adopter phase, can they?
I still don’t think I should have told them I was working on a software prosthetic for it.
Oh, what is that?
I was writing code for Google Glass that implemented facial recognition. A friend of mine suffered a TBI in an automobile wreck and developed partial facial prosopagnosia as a result. I was basically writing software that would recognize faces within 15 feet of the wearer, compare them against photos of the wearer’s contacts from their Google account, and, on a match, throw up an AR subtitle identifying the person. Not too long after I filed the developer applications and outlined my project, the Glass project flatlined.
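The core matching loop is conceptually simple. Here’s a rough Python sketch of the general idea using the open-source face_recognition library, just to illustrate the concept rather than anything that ran on Glass (all the names below are illustrative):

```python
# Rough sketch of the matching idea: encode known contact photos once,
# then compare each face the camera sees against them and surface the
# matched name as an AR subtitle. Uses the open-source face_recognition
# library; all names here are illustrative, not the original Glass code.
import face_recognition

def build_contact_index(contact_photos):
    """contact_photos: dict mapping a contact's name to a reference photo path."""
    index = {}
    for name, path in contact_photos.items():
        image = face_recognition.load_image_file(path)
        encodings = face_recognition.face_encodings(image)
        if encodings:  # skip photos with no detectable face
            index[name] = encodings[0]
    return index

def identify_faces(frame, index, tolerance=0.6):
    """frame: an RGB camera frame (numpy array). Returns names of matched contacts."""
    names = list(index.keys())
    known = [index[n] for n in names]
    matched = []
    for encoding in face_recognition.face_encodings(frame):
        hits = face_recognition.compare_faces(known, encoding, tolerance=tolerance)
        matched.extend(name for name, hit in zip(names, hits) if hit)
    return matched  # each returned name would become an AR subtitle
```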
Did you end up taking it anywhere from there?