Prompt for a login, or check for updates on every start or once a week. It wouldn't be difficult to get the number of online devices up that way.
I feel like this question has been valid for almost as long as I can remember (e.g. the Mr. Robot extension incident). I find myself struggling to tell whether Mozilla is an inherently flawed company or whether these problems are just inherent to trying to survive in such a space.
Would expect() with a message meet that criterion of exiting with a more helpful error message? From the postmortem it seems to me like they just didn't even know it was panicking.
Pixi is great! It doesn't purely use uv though. I just love it. It solves the "create a repo that runs natively on any developer's PC" problem quite well. It handles different dependency trees per OS for the same library too!
Using The Approved Set™ from your browser or OS carries no privacy issues: it's just another little bit of data your machine pulls down from some mothership periodically, along with everyone else. There's nothing distinguishing you from anyone else there.
You may want to pull landmarks from CAs outside of The Approved Set™ for inclusion in what your machine trusts, and this means you'll need data from somewhere else periodically. All the usual privacy concerns over how you get what from where apply; if you're doing a web transaction a third party may be able to see your DNS lookup, your connection to port 443, and the amount of traffic you exchange, but they shouldn't be able to see what you asked for or what you got. Your OS or browser can snitch on you as normal, though.
I don't personally see any new privacy threats, but I may not have considered all angles.
Different machines will need to vary when they grab updates, to avoid thundering-herd problems.
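A minimal sketch of what that jitter could look like (the names and intervals here are made up for illustration, not taken from any particular implementation):

    import random
    import time

    BASE_INTERVAL = 24 * 60 * 60   # nominal check-in period: once a day
    JITTER_FRACTION = 0.25         # spread fetches over +/- 25% of the period

    def next_check_delay() -> float:
        """Seconds to sleep before the next update check, with random jitter."""
        jitter = random.uniform(-JITTER_FRACTION, JITTER_FRACTION) * BASE_INTERVAL
        return BASE_INTERVAL + jitter

    def run_update_loop(fetch_updates):
        # Each client picks its own random offset, so a fleet of machines
        # spreads its requests out instead of hitting the server at once.
        while True:
            time.sleep(next_check_delay())
            fetch_updates()   # e.g. pull the latest landmark/tree-head bundle

Spreading each client's fetch time uniformly over a window keeps the server seeing a roughly flat request rate rather than a daily spike.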
I could see the list of client-supplied available roots being added to client fingerprinting code for passive monitoring (e.g. JA4) if it’s in the client hello, or for the benefit of just the server if it’s encrypted in transit.
Vaguely. Basically each CA will run one of these. Relying parties (browsers) will need to periodically fetch the Merkle tree heads for at least the CAs that sign certificates for the sites the user visits, or maybe for all of the WebPKI CAs rather than just those. There are on the order of 100 CAs for the whole web though, so knowledge that your browser fetched the MT heads for, say, 5 CAs, or even 1 CA, wouldn't leak much of anything about what specific sites the user is visiting. Though if the user is visiting, say, only Chinese sites, then you might see them fetch only the MT heads for Chinese CAs, and then you might say "aha! it's a Chinese user", or something, but... that's not a lot of information leaked, nor terribly useful.
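To make the mechanics concrete, here's a rough sketch of what a relying party does with a fetched tree head: the certificate carries a short inclusion proof (a list of sibling hashes), and the client just recomputes the root. This uses RFC 6962-style hashing and assumes a complete tree for simplicity; the actual draft's encoding differs.

    import hashlib

    def h(data: bytes) -> bytes:
        return hashlib.sha256(data).digest()

    def verify_inclusion(leaf: bytes, index: int, proof: list[bytes], root: bytes) -> bool:
        """Recompute the tree head from a leaf and its audit path (simplified)."""
        node = h(b"\x00" + leaf)                      # leaf hash (0x00 prefix)
        for sibling in proof:
            if index % 2 == 1:
                node = h(b"\x01" + sibling + node)    # we are the right child
            else:
                node = h(b"\x01" + node + sibling)    # we are the left child
            index //= 2
        return node == root

If the recomputed root matches the signed tree head the browser already has, the certificate checks out without any per-certificate signature verification.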
I think most folks involved are assuming the landmarks will be distributed by the browser/OS vendor, at least for end-user devices, where privacy matters the most, similar to how CRLSets/CRLite/etc. are pushed today.
There are also "full certificates" defined in the draft, which include signatures, for clients that don't have landmarks pre-distributed.
The privacy aspect isn't that you fetched the heads. It's that the client sends these heads in the TLS ClientHello, so websites get data they can likely combine with other leaked data to fingerprint you and track you across sites.
The heads don't change often. I think we should list all the metadata that gets leaked to see just how many bits of fingerprint we might get. Naively I think it's rather few bits.
Sure, but every few bits is enough to disambiguate just a little bit more. Then each little trickle is combined to create a waterfall to completely deanonymize you.
For example, your IP + screen resolution + the TLS handshake head might be enough of a fingerprint to disambiguate your specific device among the general population.
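Back-of-the-envelope, treating the attributes as independent and using made-up bit counts purely for illustration:

    import math

    # Identifying one device among ~8 billion needs about log2(8e9) ~= 33 bits,
    # and (roughly) independent attributes add their bits of entropy together.
    population = 8_000_000_000
    bits_needed = math.log2(population)      # ~32.9 bits

    attributes = {                           # hypothetical contributions
        "public IP (behind shared NAT)": 20,
        "screen resolution": 4,
        "TLS tree-head / root set": 3,
        "timezone + locale": 5,
    }
    total = sum(attributes.values())         # 32 bits: close to unique
    print(f"need ~{bits_needed:.1f} bits, have ~{total} bits")

So even a field that only contributes a few bits on its own can be the difference between "one of thousands of candidates" and "probably this exact device".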
ESNI/ECH nominally exists, but it's not really seeing very widespread deployment. Last I checked Caddy was the only web server/reverse proxy that fully supports it. The rate of adoption is glacial.
My point is that there really hasn't been a point where domain-level traffic information has been truly anonymous. Whether this is an oversight or state actors have deliberately kept it that way, I have no idea. Probably a bit of both.
Widely known amongst very niche groups, most of whom have either been burnt by the issue or heard about someone who has, and have it ingrained in their minds out of fear of ever having to debug such a thing.
I’d bet the majority of ML people are unaware, including those doing lower level stuff.