- cross-posted to:
- [email protected]
- [email protected]
The attack has been dubbed GoFetch: https://gofetch.fail/
This requires local access and, at present, an hour or two of uninterrupted processing time on the same CPU as the encryption algorithm.
So if you’re like me, using an M-chip based device, you don’t currently have to worry about this, and may never have to.
On the other hand, the thing you have to worry about has not been patched out of nearly any algorithm:
The second comment on the page sums up what I was going to point out:
I’d be careful making assumptions like this; the same was true of exploits like Spectre until people managed to get it running efficiently in JavaScript in a browser (which did not take very long after the Spectre paper was released). Don’t assume that because the initial PoC is time-consuming and requires a bunch of access, it won’t be refined into something much less demanding in short order.
Let’s not panic, but let’s not get complacent, either.
That’s the sentiment I was going for.
There’s reason to care about this but it’s not presently a big deal.
Sure. Unless law enforcement takes it, in which case they have all the time in the world.
Yup, but they’re probably as likely to beat you up to get your passwords.
Ah yes, good old Rubber-hose cryptanalysis.
Apple is not a secure ecosystem.
No system is free from vulnerabilities.
No system is perfect, sure, but rolling their own silicon was sorta asking for this problem.
As opposed to what? Samsung, Intel, AMD, NVIDIA, and others are also “rolling their own silicon”. If a vulnerability like this were found in Intel chips, it would be much more problematic.
“Govt-mandated backdoor in Apple chips revealed”
There, fixed that for you.
Wow, what a dishearteningly predictable attack.
I have studied computer architecture and hardware security at the graduate level—though I am far from an expert. That said, any student in the classroom could have laid out the theoretical weaknesses in a “data memory-dependent prefetcher”.
My gut says (based on my own experience having a conversation like this) the engineers knew there was an “information leak” but management did not take it seriously. It’s hard to convince someone without a cryptographic background why you need to {redesign/add a workaround/use a lower-performance design} because of “leaks”. If you can’t demonstrate an attack, they will assume the issue isn’t exploitable.
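The core weakness is easy to sketch. A data memory-dependent prefetcher speculatively dereferences loaded values that *look like* pointers, so cache state ends up depending on the data itself, even in otherwise constant-time code. The toy model below is purely illustrative (all names, the address range, and the cache model are invented here); it is not how Apple's actual DMP or the real GoFetch attack is implemented:

```python
# Toy simulation of a data memory-dependent prefetcher (DMP) leak.
# Everything here is an invented model for illustration only.

SECRET = 0x2000  # a secret value sitting in memory

class ToyCache:
    """Tracks which addresses have been touched (the attacker can probe this)."""
    def __init__(self):
        self.lines = set()
    def access(self, addr):
        hit = addr in self.lines  # True if this address was touched before
        self.lines.add(addr)
        return hit

def looks_like_pointer(value):
    # Toy heuristic: the DMP treats values in a plausible address
    # range as pointers and prefetches whatever they point to.
    return 0x1000 <= value < 0x10000

def dmp_load(cache, memory, addr):
    cache.access(addr)
    value = memory[addr]
    if looks_like_pointer(value):
        cache.access(value)  # prefetch: cache state now depends on the DATA
    return value

# "Constant-time" code only ever loads the secret; it never branches
# on it or uses it as an index...
memory = {0x100: SECRET}
cache = ToyCache()
dmp_load(cache, memory, 0x100)

# ...yet probing the cache tells the attacker whether the secret value
# itself was dereferenced by the prefetcher.
leaked = cache.access(SECRET)
print(leaked)  # True: the data value influenced microarchitectural state
```

The point of the sketch is that the constant-time programming model assumes only *addresses* you explicitly touch affect the cache; a DMP breaks that assumption by acting on the *contents* of loads.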
The more probable answer is that the NSA asked for the backdoor to be left in. They do this all the time; it’s public knowledge at this point. AMD and Intel chips have the requisite backdoors by design, and so does Apple. Chinese- and Russian-designed chips using the same architecture models do not. Hmmmm… They have other backdoors, of course.
It’s all about security theatre for the public but decrypted data for large organizational consumption.
I don’t believe that explanation is more probable. If the NSA had the power to compel Apple to place a backdoor in their chip, it would probably be a proper backdoor. It wouldn’t be a side channel in the cache that is exploitable only under specific conditions.
The exploit page mentions that the Intel DMP is robust because it is more selective. So this is likely just a simple design error of making the system a little too trigger-happy.
They do have the power, and they do compel US companies to do exactly this. When it’s discovered publicly, they usually limit it to the first level of the “vulnerability” until more is discovered later.
It is not conjecture; there are leaked documents that prove it. And anyone who works in semiconductor design (cough cough) is very much aware.
newly discovered side channel
NSA: “haha yeah… new…”
Any chance of recall?
Nope, since it’s an intended feature.
Oops
Whoopsie