Everyone talks about how evil browser fingerprinting is, and it is, but I don’t get why people are only blaming the companies doing it and not putting equal blame on browsers for letting it happen.
Go to Am I Unique and look at the kind of data browsers let JavaScript read unconditionally, with no user prompt. Here’s a selection of ridiculous ones that pretty much no website needs (a sketch of how little code this takes follows the list):
- Your operating system (Isn’t the whole damn point of the internet that it’s platform-independent?)
- Your CPU architecture (JS runs in the most virtual of virtual environments; why the hell does it need to know what processor you have?)
- Your JS interpreter’s version and build ID
- List of plugins you have installed
- List of extensions you have installed
- Your accelerometer and gyroscope (so any website can figure out what you’re doing by analyzing how you move your phone, i.e. running vs walking vs driving vs standing still)
- Your magnetic field sensor AKA the phone’s compass (so websites can figure out which direction you’re facing)
- Your proximity sensor
- Your keyboard layout
- How your mouse moves every moment it’s in the page window, including how far you scroll, what bit of text you hovered over or selected, left and right clicks, etc.
- Everything you type on your keyboard while the window is active. You don’t need to be typing into a text box or anything; you can set a general event listener for keystrokes just like you can for the mouse.
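To show how low the bar is, here’s a minimal sketch of a page reading some of the above in plain JavaScript, with zero prompts (the `report` function is a hypothetical stand-in for whatever beacon a real tracker uploads to):

```js
// Passive identifiers: readable by any script, no permission involved.
console.log(navigator.platform);   // OS/CPU hint, e.g. "Linux x86_64"
console.log(navigator.userAgent);  // browser, version, OS
console.log(Array.from(navigator.plugins, (p) => p.name)); // plugin list
console.log(navigator.language, navigator.hardwareConcurrency);

// Global listeners: no text box, no focused input required.
document.addEventListener("mousemove", (e) => report("mouse", e.clientX, e.clientY));
document.addEventListener("keydown", (e) => report("key", e.key));

// Hypothetical stand-in for a tracker's upload path,
// e.g. navigator.sendBeacon("/collect", JSON.stringify(fields)).
function report(...fields) { console.log(fields); }
```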
If you’re wondering how the sensors are used to fingerprint you: I think it comes down to manufacturing imperfections that skew each device’s readings in unique ways. But websites could just as easily straight-up record those sensors without you knowing; it’s not a lot of data, all things considered, so you likely wouldn’t notice.
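Recording a sensor outright is about this much code (a sketch; Android browsers fire this event with no prompt, while iOS Safari has required an explicit permission call since iOS 13):

```js
// Raw motion stream: acceleration in m/s² and rotation rate in deg/s,
// delivered many times per second for as long as the page is open.
window.addEventListener("devicemotion", (e) => {
  const a = e.accelerationIncludingGravity; // may be null without hardware
  const r = e.rotationRate;
  // A tracker would batch and upload these; logging stands in for that.
  if (a && r) console.log(a.x, a.y, a.z, r.alpha, r.beta, r.gamma);
});
```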
Also, canvas and WebGL rendering differences are each more than enough to uniquely identify your browser instance. Not a bit of effort put into making their results more consistent, I guess.
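For reference, a bare-bones version of the canvas technique looks something like this (a sketch: real scripts draw more elaborate scenes, and the FNV-1a hash over the pixel data URL is just one way to collapse the result into an ID):

```js
// Render text to a canvas; tiny differences in fonts, antialiasing,
// and GPU rasterization make the resulting pixels device-specific.
const canvas = document.createElement("canvas");
canvas.width = 220;
canvas.height = 30;
const ctx = canvas.getContext("2d");
ctx.textBaseline = "top";
ctx.font = "14px Arial";
ctx.fillStyle = "#f60";
ctx.fillRect(100, 1, 62, 20);
ctx.fillStyle = "#069";
ctx.fillText("How unique am I?", 2, 15);

// FNV-1a over the encoded pixels -> a stable per-device identifier.
let hash = 0x811c9dc5;
for (const ch of canvas.toDataURL()) {
  hash = Math.imul(hash ^ ch.charCodeAt(0), 0x01000193) >>> 0;
}
console.log("canvas fingerprint:", hash.toString(16));
```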
All of these are accessible to any website by default. Actually, there’s not even a way to turn most of them off. WHY?! These are niche features that only a tiny fraction of websites need. Browser companies know fingerprinting is a problem and have done nothing about it. Not even Firefox, at least not by default.
Why is the web, where you’re by far the most likely to execute malicious code, not built on zero-trust policies? Let me allow the functionality I need on a per-site basis.
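The bitter irony is that per-feature gating already exists in the platform, it’s just pointed the wrong way: the Permissions-Policy response header lets the site (not you) switch these APIs off for a page and its iframes, e.g.:

```
Permissions-Policy: accelerometer=(), gyroscope=(), magnetometer=(), camera=(), microphone=(), geolocation=()
```

So the plumbing for “this page gets these features and nothing else” is there; it’s just that the server opts in instead of the user allowing per site.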
Fuck everything about modern websites.
There are two separate universes here.
Devs and tech companies care only about UX, convenience, and reduced friction to use any service. They would put their granny’s home address and SSN in the headers if it made a page load 10ms faster. Their incentives are all short-sighted: hit the next goal, outcompete the other devs/companies, ship their end-of-history killer app that will solve all problems (and still get bloated and enshittified within 18 months).
Then there’s us: a subset of rational people, educated about how much data gets transmitted, horrified by the general state of being online, and hard to impress by anything that promotes itself just by shouting “privacy!”.
IMO, we have to DIY and cobble together so much of our own protection that we’re closer to artists: living a strange life that few people understand, that looks weird from the outside, but that we love for the peace of mind. Which is not enough of a market segment to move the needle on any product worth real money.
Wow! That’s a great way to put it!
Now I understand why my neighbors look at me like I’m one of the guys performing this act:
https://www.youtube.com/watch?v=byz7JCf5thM
That’s beautiful
Have they ever considered that pages would load faster if they didn’t include 20MB of JavaScript?
Just yesterday I was on a news website. I wanted to support it and the author of the piece, so I opened a clean session of Firefox. No extensions or blocking of any kind.
The “initial” payload (i.e., the point at which I lost patience, roughly 30s after the page started loading, and decided to call it) was 14.79MB transferred. But the traffic never stopped. In the network view you could see the browser continually running ad auctions, and about every 15s the ads on the page would cycle. The combination of auctions and ads kept that tab fully occupied at 25-40% of my CPU. Firefox self-reported the tab as taking over 400MB of RAM.
This was so egregious that I had to run one simple test. I set my DNS on my desktop to my PiHole and re-ran my experiment.
Initial payload went from 14.79MB down to 4.00MB (much of which was fonts and oversized images previewing other articles), and the page took a quarter of the RAM and almost no CPU.
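If anyone wants to reproduce this kind of tally without eyeballing the network tab, the resource timing API gives a rough number from the DevTools console (transferSize reads 0 for cached responses and for cross-origin resources without Timing-Allow-Origin, so this undercounts):

```js
// Sum the on-the-wire size of everything the page has fetched so far.
const entries = performance.getEntriesByType("resource");
const bytes = entries.reduce((sum, e) => sum + e.transferSize, 0);
console.log(`${entries.length} requests, ~${(bytes / 1e6).toFixed(2)} MB transferred`);
```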
Modern web is dogshit.
This was the website in question. https://www.thenation.com/article/politics/welcomefest-dispatch-centrism-abundance/
Yes, but the manager with a shitty MBA doesn’t care about the performance of the company as a whole, as long as their department looks good on paper. And they figured that’s easier to achieve by pulling in four different external libraries and letting another department figure out the rest.
Yeah, this is so fucked up! When you archive Reddit pages, those are over 20 fucking MB for just a conversation! That’s fucking insane…
I can reduce it to less than 500KB with alternative frontends, but still… This makes absolutely no sense, and I’m scared to find out what they’re hiding in between all those lines of code!