81 points by aragilar 3 days ago | 11 comments
  • If you like this sort of thing, perhaps you'll enjoy my SSL/TLS and PKI history where I track a variety of ecosystem events starting with the creation of SSL in 1994: https://www.feistyduck.com/ssl-tls-and-pki-history/
  • The short-lived certificates started making a lot more sense to me when I discovered I could get Let's Encrypt to issue IP address certs. In that context of use, we clearly need our certificates to die quickly.

    You can now make any web server operate with a publicly valid TLS certificate without paying any money, registering a domain, configuring DNS or disclosing any personally identifiable information. It can be entirely automatic and zero configuration. The only additional service required is something like a STUN server so the public IP can be discovered and updated over time.
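The discovery step mentioned above can be sketched at the STUN wire level (RFC 5389). This is a minimal, offline illustration of the message format only, not a full client; the helper names and the sample address are illustrative, and a real client would send the request over UDP to a public STUN server and parse the reply.

```python
import os
import socket
import struct

MAGIC = 0x2112A442  # fixed STUN magic cookie (RFC 5389)

def binding_request() -> bytes:
    """Build a STUN Binding Request: type 0x0001, zero-length body,
    magic cookie, and a random 96-bit transaction ID (20 bytes total)."""
    return struct.pack("!HHI12s", 0x0001, 0, MAGIC, os.urandom(12))

def encode_xor_mapped_v4(ip: str, port: int) -> bytes:
    """Encode an XOR-MAPPED-ADDRESS attribute value (IPv4), as a server
    would; used here only to demonstrate the round trip offline."""
    xport = port ^ (MAGIC >> 16)
    xaddr = struct.unpack("!I", socket.inet_aton(ip))[0] ^ MAGIC
    return struct.pack("!BBHI", 0, 0x01, xport, xaddr)

def decode_xor_mapped_v4(attr: bytes) -> tuple:
    """Decode an XOR-MAPPED-ADDRESS attribute value back into (ip, port).
    This is the field a client reads to learn its public address."""
    _, family, xport = struct.unpack("!BBH", attr[:4])
    assert family == 0x01, "IPv4 only in this sketch"
    port = xport ^ (MAGIC >> 16)
    addr = struct.unpack("!I", attr[4:8])[0] ^ MAGIC
    return socket.inet_ntoa(struct.pack("!I", addr)), port

# Round-trip demo with a documentation-range address:
attr = encode_xor_mapped_v4("203.0.113.7", 54321)
print(decode_xor_mapped_v4(attr))  # ('203.0.113.7', 54321)
```

The XOR masking is why the attribute is called XOR-MAPPED-ADDRESS: the server XORs the reflexive address with the magic cookie so NATs that rewrite literal IPs in payloads don't corrupt it.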

    • I am reading your comment and find the proposition interesting, but I can't quite understand the part about the STUN server - doesn't that "just" help me find my own public IP address? Do you mean that I could then give out this address to others (instead of them having to do a DNS lookup) so they can connect to the web server?
      • > I am reading your comment and find the proposition interesting, but I can't quite understand the part about the STUN server - doesn't that "just" help me find my own public IP address?

        He is hosting his domain on a machine behind a reverse proxy over which he has no control (common enough); in this case the server will not know its own public IP, since all lookups of (for example) `www.mydomain.com` return the address of the proxy. To get the public IP he uses a STUN (or similar) public-facing service.

        Not quite sure why he needs the public IP, though: from what I remember, the certs include the domain, not the IP.

        • You can issue a TLS certificate with a SAN that is a literal IPv4 address. You do not need a domain to serve TLS to clients. A domain definitely helps with the UX, but it's not mandatory for browsers and other web tech to function.
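To make the IP-SAN mechanics concrete: a self-signed certificate with a literal IP address in the SAN can be produced with OpenSSL 1.1.1+ (the address 203.0.113.7 is a documentation placeholder, and this only demonstrates the X.509 side - a publicly valid cert would come from an ACME CA such as Let's Encrypt, not `openssl req -x509`):

```shell
# Self-signed cert whose SAN is a literal IP address.
# -addext requires OpenSSL 1.1.1 or newer.
openssl req -x509 -newkey rsa:2048 -nodes \
  -keyout ip.key -out ip.crt -days 7 \
  -subj "/CN=203.0.113.7" \
  -addext "subjectAltName = IP:203.0.113.7"

# Confirm the SAN made it into the certificate.
openssl x509 -in ip.crt -noout -ext subjectAltName
```

Browsers match the connection's destination IP against the `IP Address` SAN entry exactly as they match hostnames against `DNS` entries.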
          • If you're running a private PKI, sure, you'll do that.

            But what value does it have when you're behind a proxy whose IP can change? I mean, I'm going on the assumption that the proxy is not under his control, nor does it do the TLS termination.

            • If your public interface address can change, it does dramatically reduce the value of a purely IP-addressed host. But I don't think it eliminates it entirely.

              With a dynamic IP you can still detect a change, reissue a cert for the new IP and proceed automatically. There are self-hosting and machine-to-machine scenarios where this amount of autonomy could be welcome.
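The detect-and-reissue loop described above could be sketched as follows. Both callables are hypothetical stand-ins not taken from the thread: in practice `get_public_ip` would be a STUN query and `reissue` an ACME order for a fresh IP-SAN certificate.

```python
import time
from typing import Callable, Optional

def watch_ip(get_public_ip: Callable[[], str],
             reissue: Callable[[str], None],
             poll_seconds: float = 300.0,
             rounds: Optional[int] = None) -> None:
    """Poll the current public IP and call `reissue` whenever it changes.

    `rounds=None` runs forever (the real deployment case); a bounded
    `rounds` makes the loop easy to exercise in tests.
    """
    last = None
    seen = 0
    while rounds is None or seen < rounds:
        ip = get_public_ip()
        if ip != last:   # first run, or the address moved
            reissue(ip)  # e.g. order a new cert for the new IP
            last = ip
        seen += 1
        if rounds is None or seen < rounds:
            time.sleep(poll_seconds)

# Exercise the logic with a stubbed IP source (no network needed):
ips = iter(["198.51.100.4", "198.51.100.4", "198.51.100.9"])
issued = []
watch_ip(lambda: next(ips), issued.append, poll_seconds=0.0, rounds=3)
print(issued)  # ['198.51.100.4', '198.51.100.9']
```

The loop only reacts to changes, so the reissue path stays quiet while the address is stable - which is the autonomy the comment is describing.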

      • Yes, the point is simply to discover the public IP you present on the internet. It's not a particularly hard problem to solve, but you often can't learn your public address from inside the machine alone. Being behind a NAT with TCP 80/443 forwarded to the actual web server is one example.
  • It's worth noting that while splitting the PKI hierarchies is a good thing, the CABF does provide rules for S/MIME (email signing) and Code Signing as well. Also, "WebPKI" never actually appears in the BR documents from what I can see, nor do they require the use of HTTP (hence why you can use these certs for SMTP).
  • There's a huge suggestion in here which would make PKI vastly more respectable: disallowing root programs (browser operators) from also being CAs. I loudly suggested at the time that Google Trust Services should be rejected, but the Mozilla rep loudly defended approving a CA operated by a root-program company, one that happens to pay their entire salary.

    PKI as it stands is only a few steps from Google just deciding everyone must have a short-lived certificate from Google to be on the web.

  • I feel this is a perfect complement to the current #1 link: https://satproto.org/, which implements its own CA system with different trade-offs.