The EU is poised to pass a sweeping new regulation, eIDAS 2.0. Buried deep in the text is Article 45, which returns us to the dark ages of 2011, when certificate authorities (CAs) could collaborate with governments to spy on encrypted traffic—and get away with it. Article 45 forbids browsers from...
Centralized CAs were and are a mistake. HTTPS should work more like SSH keys: the first time you connect to a website it’s untrusted, but once you have validated that it’s the website you want, it never bothers you again unless the private key changes. Key rotations can be announced on public forums, or by email, or any number of other ways. Users who don’t care can ignore the warnings like they do anyway, while users who DO care can perform their own validation through other channels.
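For concreteness, here is roughly what that known_hosts-style flow could look like, as a minimal Python sketch (the cache file location and the choice of the leaf certificate’s SHA-256 fingerprint are illustrative assumptions, not a spec):

    import hashlib
    import json
    import socket
    import ssl
    from pathlib import Path

    KNOWN_HOSTS = Path.home() / ".known_https_hosts.json"  # illustrative cache location

    def leaf_fingerprint(host: str, port: int = 443) -> str:
        """Fetch the server's leaf certificate and hash it, with no CA validation at all."""
        ctx = ssl.create_default_context()
        ctx.check_hostname = False        # deliberately skip CA/hostname checks: pure TOFU
        ctx.verify_mode = ssl.CERT_NONE
        with socket.create_connection((host, port), timeout=5) as sock:
            with ctx.wrap_socket(sock, server_hostname=host) as tls:
                der = tls.getpeercert(binary_form=True)
        return hashlib.sha256(der).hexdigest()

    def check_tofu(host: str) -> None:
        known = json.loads(KNOWN_HOSTS.read_text()) if KNOWN_HOSTS.exists() else {}
        fp = leaf_fingerprint(host)
        if host not in known:
            known[host] = fp              # trust on first use, like an SSH known_hosts entry
            KNOWN_HOSTS.write_text(json.dumps(known, indent=2))
            print(f"{host}: pinned {fp[:16]}... on first use")
        elif known[host] != fp:
            raise RuntimeError(f"{host}: certificate changed - possible MITM or a key rotation")
        else:
            print(f"{host}: matches the pinned fingerprint")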
The most important aspect is that there is no “authority” that can be corrupted, except for the service you are connecting to.
There is no way a user can know the website is real the first time it’s visited without it presenting a verifiable certificate. It would be disastrous to trust a site just because you connected to it once before. Users shouldn’t need to care about security to get the benefits of it. It should just be seamless.
There are proposals out there to do away with the CAs (Decentralized PKI), but they require adoption by Web clients. Meanwhile, the Web clients (Chrome) are often owned by the same companies that own the certificate authorities, so there’s no real incentive for them to build and adopt technology that would kill their $100+ million CA industry.
There is no way a user can know that their traffic hasn’t been man-in-the-middled by a compromised CA either. And why is it “disastrous” to trust a website after you have cryptographically verified it’s the same website you visited before? It would present the same public/private key pair that you already trust.
Where does the initial cryptographic verification come from? I’m not arguing that you can’t pin certificates.
That’s where the SSH analogy comes from. On the initial connection you get the signature of the website you are trying to visit, and your browser trusts it from then on. If something changes later, the scary warning comes up.
I hope, for your sake, that you don’t SSH into any random machine and just import its cert.
Usually you know the machines you are trying to connect to. That gives you the ability to add their cert to your trusted hosts before connecting the first time. For browsing the WWW this doesn’t make much sense, since you connect to far too many unknown hosts. It would create a ‘red is green’ mentality where users just import any unknown cert.
The only similar case I see where this makes sense would be e-banking and such. The bank could send you their certificate along with the login credentials by post.
Why? There is absolutely zero risk in SSHing into “random” machines, especially since I’m using public SSH keys. Of course the first time I connect to a machine it’s going to be untrusted, but who cares? I’m using SSH to ensure others can’t sniff my traffic.
If I want to sniff your traffic, I’ll set up another machine as a MITM.
I guess as long as you stay inside a secure company network, it wouldn’t be that bad. But if you go out over the WWW, my advice is to manually add trusted hosts.
Setting up a MITM on the internet is a non-trivial task, and I’m quite confident you have neither the access nor the ability to do that. Very few people do. So let’s just say that isn’t an attack vector anyone should be concerned with.
No one can remove all risk, but the security thresholds for intercepting an initial connection and for compromising a CA are vastly different. The latter would be much more difficult to pull off, which is why we use them. Sounds like this EU rule is going to put a ceiling on that, though.
Making sure one small part is very secure vs. having to verify every domain I visit? Yeah, let me keep using the current system… are you aware of the number of domains you connect to every day?
Also, I might be wrong, but if I remember correctly browsers/OSes tend to come with a list of trusted certificate keys already, which makes adding compromised keys to that list not as easy as you suggest. (I don’t even know if that happens, or if they just get updated as part of OS/browser security updates.)
Yeah, except you aren’t supposed to TOFU.
Literally everybody does SSH wrong. The point of host keys is to exchange them out-of-band so you know you have the right host on the first connection.
And guess what certificates are.
Also keep in mind that although MS and Apple both publish trusted root lists, Mozilla is also one of the biggest players, if not the biggest. They maintain the list that ultimately gets distributed as ca-certificates in pretty much every Linux distro. It’s also the source of the Python certifi trusted root bundle, which is required by requests and probably makes its way into every API script/bot/tool using Python (which is probably most of them).
And there’s literally nothing stopping you from curating your own bundle or asking people to install your cert. That takes care of the issue of TOFU. The idea being that somebody who accepts your certificate trusts you to verify that any entity using a certificate you attach your name to was properly vetted by you or your agents.
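As a quick illustration of how that bundle gets consumed, and of pointing a client at a bundle you curate yourself (the custom path below is just a placeholder):

    import ssl
    import certifi  # the Mozilla-derived root bundle mentioned above

    # Default: trust the roots that Mozilla curates and certifi ships.
    ctx = ssl.create_default_context(cafile=certifi.where())

    # Or trust only a bundle you maintain yourself (placeholder path).
    # ctx = ssl.create_default_context(cafile="/etc/my-org/trusted-roots.pem")

    # requests accepts the same idea via its verify parameter:
    # requests.get("https://example.com", verify="/etc/my-org/trusted-roots.pem")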
You are also welcome to submit your CA to Mozilla for consideration for inclusion in their master list. They are very transparent about the process.
Hell, there’s also nothing stopping you from rolling your own CA and using certificates for host and client verification with SSH. That’s actually preferable at scale.
A lot of major companies also use their own internal CA and bundle their own trusted root into their app or hardware (Sony does this with the PlayStation, Amazon does this for a lot of AWS apps like WorkSpaces, etc.).
In fact, what you are essentially suggesting is functionally the exact same thing as self-signed certificates. And there’s absolutely (technically) nothing wrong with them. They are perfectly fine, and probably preferable for certain applications (like machine-to-machine communication or a closed environment), because they can be issued with far longer lifetimes than the one-year max you can get from most public CAs. But you still aren’t supposed to TOFU them. That flies right in the face of a zero-trust philosophy.
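For instance, a long-lived self-signed cert for a closed, machine-to-machine environment could be generated roughly like this (a sketch using the pyca/cryptography library; the hostname and the ten-year lifetime are arbitrary placeholder choices):

    import datetime
    from cryptography import x509
    from cryptography.x509.oid import NameOID
    from cryptography.hazmat.primitives import hashes, serialization
    from cryptography.hazmat.primitives.asymmetric import rsa

    HOST = "internal.example"  # placeholder name for a machine-to-machine endpoint

    key = rsa.generate_private_key(public_exponent=65537, key_size=2048)
    name = x509.Name([x509.NameAttribute(NameOID.COMMON_NAME, HOST)])
    now = datetime.datetime.now(datetime.timezone.utc)

    cert = (
        x509.CertificateBuilder()
        .subject_name(name)      # self-signed: subject and issuer are the same
        .issuer_name(name)
        .public_key(key.public_key())
        .serial_number(x509.random_serial_number())
        .not_valid_before(now)
        .not_valid_after(now + datetime.timedelta(days=3650))  # far past public-CA limits
        .add_extension(x509.SubjectAlternativeName([x509.DNSName(HOST)]), critical=False)
        .sign(key, hashes.SHA256())
    )

    # Distribute cert.pem to the peers that should trust it out-of-band; don't TOFU it.
    with open("cert.pem", "wb") as f:
        f.write(cert.public_bytes(serialization.Encoding.PEM))
    with open("key.pem", "wb") as f:
        f.write(key.private_bytes(serialization.Encoding.PEM,
                                  serialization.PrivateFormat.TraditionalOpenSSL,
                                  serialization.NoEncryption()))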
The whole point of certificates is to make up for the problem of TOFU: instead, you agree that you trust whoever maintains your root store, which is ultimately going to be either your OS or app developer. If you trust them to maintain your OS or an essential app, then you should also trust them to maintain a list of companies they trust to properly vet their clientele.
And that whole process is probably the single best example of properly working, applied capitalism. The top-level CAs are literally selling honesty. Fucking that up has huge business ramifications.
Not to mention, if you don’t trust Bob’s House of Certificates, there’s no reason you can’t distrust it on your system. And if you trust Jimbo’s Certificate Authority, you are welcome to tell your system to accept certificates they issue.
A better solution would be to have both at the same time.
Browser says: x number of CAs say that this site is authentic (click here for a list). Do you trust this site? Certificate fingerprint: … Certificate randomart: …
And then there would be options to trust it once, trust it temporarily, or trust it and save the cert. The first two could also block JS if wanted.
I can see this would annoy the mainstream users, so probably this should be opt-in, asked at browser installation or something like that.
But you only really need one to say it’s authentic. There are levels of validation that require different levels of effort. Domain Validation (DV) is the simplest and requires that you prove you own the domain, either by creating a special DNS record for them to validate (usually a long string that they provide over their HTTPS site) or by responding to an email sent to the registered domain owner from the WHOIS record. Organization Validation (OV) and Extended Validation (EV) are the higher tiers, and usually require proof of business ownership and an in-person interview, respectively.
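The DNS flavor of that DV check boils down to the CA looking up the challenge string it handed you in a TXT record. A rough sketch with dnspython (the record label and token are made-up placeholders; ACME’s dns-01 challenge works along these lines with an _acme-challenge label):

    import dns.resolver  # dnspython

    def dv_challenge_present(domain: str, token: str) -> bool:
        """Roughly what a CA's DNS-based domain validation does: check that the
        challenge token it issued shows up in a TXT record under the domain."""
        label = f"_validation-challenge.{domain}"  # placeholder label, not any real CA's
        try:
            answers = dns.resolver.resolve(label, "TXT")
        except (dns.resolver.NXDOMAIN, dns.resolver.NoAnswer):
            return False
        return any(token in rdata.to_text() for rdata in answers)

    # e.g. dv_challenge_present("example.com", "randomly-issued-challenge-string")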
Now, if you want to know whether the site is compromised or malicious, that’s a different problem entirely. Certificates do not and cannot serve that function, and it’s wrong to place that role on CAs. That is a security and threat-mitigation problem, and it’s better solved by client-based applications, web filtering services, and next-gen firewalls that use their own reputation databases.
A CA is not expected to prevent me from hosting rootkits. Doesn’t matter if my domain is rootkits-are.us or totallylegitandsafe.net. It’s their job to make sure I own those domains. Nothing more. For a DV cert at least.
Public key cryptography, and certificates in particular, are an amazing system. They don’t need to be scrapped just because there’s a ton of misunderstanding about their role and responsibilities.
Oh, yes, sorry, I had a brainfart. Certs don’t usually (or at all?) have more than one root cert.
I thought that was the goal. Not to make sure that the website is secure, but that the connection is secure, and that I’ve connected to the server that I expected.
I don’t really care if a site is who they say they are; I’m the one connecting to the site, and if the site does what I expect, it’s serving its purpose. The only thing I use SSH/HTTPS for is to make sure that whatever passes between me and the site can’t be snooped. A CA allows a third party to snoop that traffic, and I have no indication they are doing it.
You are missing half the purpose of PKI. Identity is just as important as encryption, if not more so.
Who gives a shit if your password is encrypted if somebody intercepts DNS for yourbank.com, points it at their own server hosting a carbon copy of the homepage, and collects passwords?
And DNS isn’t the only attack vector for this. It can be done at the IP level with attacks that spoof BGP. It can be done by sticking a single-board computer in a trashcan at a subway stop: have it broadcast a ton of well-known SSIDs, and a ton of phones in the area will auto-connect to it and have their traffic intercepted. Hell, if not for trusted CAs, it’d be very easy to just MITM all the HTTPS traffic anyway.
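That identity check is exactly what a TLS client enforces during the handshake. A minimal Python sketch (yourbank.com is just the illustrative name from above):

    import socket
    import ssl

    ctx = ssl.create_default_context()   # loads the OS/Mozilla-derived root store
    ctx.check_hostname = True            # these are already the defaults; shown for emphasis
    ctx.verify_mode = ssl.CERT_REQUIRED

    try:
        with socket.create_connection(("yourbank.com", 443), timeout=5) as sock:
            # An impostor reached via spoofed DNS, a rogue SSID, or a BGP hijack
            # cannot present a CA-signed cert for this name, so the handshake fails.
            with ctx.wrap_socket(sock, server_hostname="yourbank.com") as tls:
                print("identity verified:", tls.getpeercert()["subject"])
    except ssl.SSLCertVerificationError as exc:
        print("refusing to talk: certificate does not prove the expected identity:", exc)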
In reality, you would TOFU the first website you visited and not know whether it got intercepted or whether they just rotated keys. Key rotation is a common security practice, is handled by renewing certificates, and is part of the reason public CAs are trending toward shorter certificate lifetimes; it’s not a big deal for admins because of easy automation. HSTS and cert pinning are more of a PITA, but really barely any effort when you consider their benefits.
Now, what certificates don’t protect against, nor claim to protect against, is typosquatting. If you instead go to yorbank.com, that’s on you, and protecting you from a malicious site that happened to buy it is a job for host-based security, web filters, and NGFWs.