
2013-12-19 Minutes

Notes of meeting

CAB Forum

19 December 2013

Version 2

1. Roll Call: Phill Hallam-Baker, Atsushi Inaba, Ben Wilson, Doug Beattie, Atilla Biler, Dean Coclin, Mads Henriksveen, Wayne Thayer, Caroline Oldenburg, Steve Roylance, Eddy Nigg, Rich Smith, Ryan Sleevi, Eneli Kirme, Gerv Markham, Jeremy Rowley, Erwann Abalea, and Rick Andrews

2. Agenda Review: Rick Andrews asked that we discuss, under “Other Business,” whether the CAB Forum should adopt Microsoft’s deprecation of SHA-1 by ballot.

3. Minutes: Minutes of 5 December 2013 and of the face-to-face meeting in Ankara were approved for publication as circulated. Dean said that Jeremy had done a great job of pulling the RFC 3647 document together and asked whether others besides him had reviewed Jeremy’s RFC 3647 Formatting Comparison attached to the email of 7 November 2013. He said he thought the document looked fine and asked what the next step would be.

Jeremy: The first step was to create an outline for mapping and have people approve it, and the step after that would be to revise some of the wording. I did start that in the second draft I sent around, to write it more like a CP than what we currently do. I’ll need to fill in the missing sections, and there are a lot, but before I send around the EV one I want to make sure that everyone is on board with what I’ve done so far, and if it is, then I’ll send around the EV one.

4. Discussion of Ballots: Ballots 89 and 107 will be reviewed after New Year’s Day.

Ballot 103

Ben asked Phill what we should do about “must staple.”

Phill: IANA has said they will give us a number, but Brian Smith and I have to go back and ask for it, and I’ll send the email out today.

Ben: But we do have a CAB Forum number. Do you want to include it as part of the draft to see if that’s acceptable, or do you want to wait? Well, we can send you what we have, and then you can decide. Should I send it to you and Brian Smith?

Phill: The issue is that we need to get the thing published as an RFC, but send it to me, and that can give us some leverage to get it moving. Because they were shutting down PKIX it was floating between PKIX and IANA and now it has gone back to IANA under regular control.

Rick: If you send Phill the number that we chose, then I just get confused about where the OID needs to come from.

Phill: At the end of the day, we are the last major application of ASN.1. Everything else is going to be replaced by XML or JSON, so I don’t see why we’re having a problem getting these arcs set up, because the only important thing is that everybody uses the same number.

5. Upcoming Face-to-Face Meetings:

Ben: There was some discussion about whether the meeting scheduled for Eilat should be moved to Tel Aviv.

Eddy: I don’t think it would be the same thing for us because we would have to move to Tel Aviv, which would be a problem for a few days, but I can make transportation to Eilat as smooth as possible – I will work on it over the next two months or so. It could be that we have a meeting near Ben Gurion Tel Aviv airport, but we’ll have to discuss this more to see what the members want.

Dean: It can be very difficult to meet as a group, I think, because from past experience people come in at different times from different places, and it would then be difficult to gather for a bus ride.

Ben: I guess what we can do is just continue the discussion of the logistics. Are there any new logistical items for Mountain View?

Dean: We’ve been discussing which working groups would be held at what time, since that was what Kirk was wondering so that he could plan his trip accordingly, and I think it was the Code Signing, the EV, and the Performance working groups, the last chaired by Brian Smith from Mozilla.

Gerv: Brian is planning on attending the meeting, so you should definitely allocate time, and I will try to get feedback from him.

Ryan: Yes. Brian was specifically expecting a block of time, so you should allocate time for it.

Dean: We could start with the code signing working group in the morning, with 3 to 4 hours for that, and then it could be Brian’s group, followed by the last group for 2 hours. I really want Tom there for code signing, so even though he hasn’t RSVP’ed yet, we need to make sure he is coming to the meeting.

6. Scope of BRs

Jeremy: I did post this morning to move this discussion over to the public list and asked Iñigo to re-post his comments there. He argued that Qualified Certificates were not SSL certificates and therefore were outside the scope of the BRs, and that Qualified Certificates had to meet their own standard. He referred to RFC 3739, but the point he missed is that “SSL certificate” is not defined, so saying that something is out of scope if it meets other industry standards doesn’t really get us anywhere, since we don’t really know what an SSL certificate is; it falls under the “you’ll know it when you see it” standard that we’ve been using so far.

It’s an issue because the Baseline Requirements cover internal server names, which you could argue should not be covered, because they are not intended for public use, or are not intended for use on the public internet, which is what the current scope is. So, they’re obviously things we’ve tried to put rules around, yet they are technically outside the scope of the BRs.

Rick: In some cases we define the scope of the BRs as anything that chains up to a public root.

Jeremy: We need to define “SSL certificate” because you could take a Qualified Certificate, which can have the anyEKU in it, and use it as an SSL certificate because it has that EKU in it. Coming up with the definition of “SSL Certificate” has been a little harder than I thought it would be. If you have a good definition of SSL certificate, then that is what we should use.

Rick: We learned a few months ago that each of the browsers has different definitions, so there is no commonly accepted standard.

Jeremy: That is the problem we’re running into with the Baseline Requirements, so what if we put that an SSL certificate is whatever is defined by each browser? But then you have a different standard depending on where you are. I was looking at the Mozilla policy and I didn’t see a definition there of what an SSL certificate is. They have a bunch of rules about SSL certificates, but they don’t tell you what one is.

Ryan: Mozilla’s policy covers all certificates that chain to a Mozilla root.

Jeremy: There are rules for code signing, client, and SSL certificates, and the three terms are never well defined. I’ve always based it on what the EKU is, but we’ve realized that you can’t do that, because a lot of certificates have the anyEKU or no EKU in them, which functions just as well as an SSL certificate, especially when there is a domain name present.
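
For illustration, a minimal sketch of the kind of heuristic Jeremy describes, assuming the pyca/cryptography Python library; the function name and the exact rule (serverAuth or anyEKU, or no EKU at all, plus a DNS name in the SAN or, failing that, a CN) are hypothetical and not a definition the Forum has adopted.

```python
# Sketch only: one possible "is this effectively an SSL certificate?" heuristic,
# illustrating why anyEKU or an absent EKU makes the scope question hard.
from cryptography import x509
from cryptography.x509.oid import ExtendedKeyUsageOID, NameOID


def looks_like_server_cert(cert: x509.Certificate) -> bool:
    # EKU test: serverAuth, anyExtendedKeyUsage, or a missing EKU extension
    # can all end up being honored by TLS clients for server authentication.
    try:
        eku = cert.extensions.get_extension_for_class(x509.ExtendedKeyUsage).value
        eku_ok = (ExtendedKeyUsageOID.SERVER_AUTH in eku
                  or ExtendedKeyUsageOID.ANY_EXTENDED_KEY_USAGE in eku)
    except x509.ExtensionNotFound:
        eku_ok = True  # no EKU at all: nothing restricts the certificate's use

    # Name test: a DNS name in the SAN, or (legacy) a hostname carried only in the CN,
    # which is where internal server names often appear.
    try:
        san = cert.extensions.get_extension_for_class(x509.SubjectAlternativeName).value
        has_name = bool(san.get_values_for_type(x509.DNSName))
    except x509.ExtensionNotFound:
        has_name = bool(cert.subject.get_attributes_for_oid(NameOID.COMMON_NAME))

    return eku_ok and has_name
```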

Rick: Does anyone think we can propose a definition that the browsers could agree on and modify their code around, so that only certificates meeting that definition are treated as SSL certificates?

Phill: Have we asked the browsers the right questions in the questionnaire you’ve sent out, so we can find out what they do on this issue?

Rick: I’ll make sure to ask that question.

Phill: What a browser vendor considers to be a certificate and what their code considers to be a certificate are two different questions. As we’re getting to SHA1 phase-out, it could be that a lot of the code that doesn’t process EKUs and so on will also be dropping out of the system.

Gerv: I believe that there was a thread on the mailing list on what our code considers to be a certificate, and I did a code-read.

Jeremy: That’s the same discussion going on now.

Rick: And I think it was Ryan who said what Safari did, and I am thinking of posting what the three different browsers do up on the wiki.

Jeremy: My main concern in doing this exercise is mainly with the CAs that don’t have access to the wiki, and those that typically do not issue SSL certificates and so think they might not need to comply with the BRs. Part of what prompts this concern, and I don’t know if some of you saw the Microsoft paper that was posted on the Mozilla mailing list, is that there is research coming out in February that evaluates CA compliance with the BRs. It’s critical of smaller CAs especially, and it doesn’t draw any real conclusions, but I’m speculating that part of it may be that they don’t know they have to comply with the BRs, or that we’re using mismatched definitions. So it would be nice to get this resolved by February so that at least we can start trying to understand what the hold-ups with compliance are, and whether or not knowledge is actually one of the big problems.

Ryan: So Jeremy, I responded on the list basically about this. If you look through Mozilla’s policies, Microsoft’s policies, and Apple’s policies, we feel there shouldn’t be any problem, as the root stores formally require their own requirements, or the Baseline Requirements with documented caveats. If you look through the January 2013 communication from Kathleen to all of the members of the root certificate database, they were all informed of the need to comply with the Baseline Requirements, almost all documented some degree of noncompliance, and I would say probably half of them requested extensions for that noncompliance. So, as I said, it’s a pretty well documented issue as to what is going on, so I don’t know if that’s the same conclusion I can reach, especially since the CA communication has gone out several times.

Jeremy: It would be nice to eliminate that potential completely, because of the CAs that only issue Qualified Certificates. If I got the email from Kathleen and I was hardly issuing any SSL certificates, or I was issuing Qualified Certificates for devices, which technically are SSL certificates, or Grid certificates for devices, or FBCA certificates for devices – all of these should be covered by the BRs because they enable TLS server authentication. But if that’s all I’m doing, I’d probably look at them and say they aren’t intended for SSL, or web servers, they’re intended for devices, and then I’d probably stop reading and determine that I’m compliant. My concern is that, sure, they got the email from Kathleen, but there is a lot of interpretation you could honestly make and not realize that certificates that should be compliant are not, because there are a lot of different interpretations that could be made without any malice.

Ben: How do we resolve this? What are our next steps? What action do we have to take to make it more uniform about what an SSL certificate is?

Jeremy: Right now I’d like to see the discussion continue on the list. I don’t think it’s a problem we’re going to solve short-term. I think what is going to have to happen, and I’ve posted this on the list, is that we are going to have to start talking to the people who require the anyEKU, or those requiring omission of the EKU from the certificate, and get them to update their policies. Since these are governments and research communities, it will take time, but I’d like to ask them and collect at least what their rationale is for including the anyEKU in the certificate versus actually specifying what they want it for. If we can work with them to find out the rationale, then we can eventually get them to adopt a more limited approach so that their code signing certs are no longer server certs as well.

Rick: Is this an argument for eventually having CAs split roots for those that are used in browsers and those that are used for SSL and devices?

Ryan: That is our preferred solution.

Rich: Those other certificates you mentioned, like grid certificates, if you leave out the EKU, would they have a SubjectAltName or DNS name in the SubjectAltName or CN field?

Jeremy: Some do. Most do not. They have what I would refer to as an internal server name. It could be used with an internal server name.

Rich: Would that be in the SubjectAltName field or the CN?

Jeremy: Some of them have it, but most of them have it just in the CN. If the CN were fully deprecated and the SubjectAltName required, you’d probably eliminate about 80% of the certificates, but having only the CN and not including the SubjectAltName does not cause the certificate to fail.

Rich: But it does cause it to fail BR compliance, so why not define BR coverage to include “must have either a server or client EKU, and a SubjectAltName with a DNS name”?

Jeremy: Because the certificate could have just the CN field and work just as well. So basically we’re saying that a BR certificate, or a server certificate, is one that is compliant with the BRs, which doesn’t really make it a good regulation, because a CA could just exclude the SubjectAltName, have a valid server cert, and ignore the rest of the BR requirements.

Rich: The browsers can enforce that down the road as legacy stuff expires. The browsers then have a clear guideline to say, “okay, if I encounter a certificate being used to validate a web site and encrypt data from that website, I can identify that.”

Ryan: There are two replies to that. Our survey of the net indicates that between five and seven percent of certificates still contain these names and will for several years, so I don’t think that is a viable short-term solution. The other thing is that while I am very appreciative of technical definitions meant to resolve this, and this goes to my responses to Rick earlier, the BRs don’t just cover issuance practices. They also look at some of the operational security and other elements or practices that really apply to all certificates issued from a given path. Like, how is the key managed? How is your life cycle management? And so I’d be very concerned with this notion of having both BR-compliant and non-BR-compliant certificates issued from the same issuance hierarchy, because then you create the opportunity for any number of incidents, whether security or operational, where the CA says, “oh, this is on or not on our BR side, so it shouldn’t affect our BR side.”

Jeremy: We already have that today with those who issue Qualified Certificates from a single root. We’ll have to hear from the people issuing Qualified Certificates, but I suspect that they’re issuing their SSL certificates from the same root as their Qualified Certificates, and so you have a non-BR-compliant certificate issued alongside a traditional SSL cert. Both can be used for server authentication because both can include the anyEKU, and so we already have that very situation.

Ryan: I am well aware that we already have that. I see that as a problem, not a feature. We’ve already seen incidents where the wrong certificate template metadata was associated and a mis-issuance happened; that was the Turktrust analysis. I’m concerned about the situation. That’s why I have highlighted that, while I appreciate the effort to formalize the definition of what a server certificate is, we individually are much more supportive of efforts to say that all certificates issued from a particular subset of the hierarchy are going to conform to the Baseline Requirements. Then if a CA has additional issuance practices that require different things, they can cut those from a different subset of their overall hierarchy, and that way it poses no risk to the browser side of things.

Wayne: During our BR audit, one of the things our auditors told us was that they were not willing to treat end entity certs issued from the same root as outside the scope of a BR audit. So they basically said that anything from a root that is BR-audited needs to comply with the BRs. I think that supports this position. I know it’s not that way today, but that’s not how they want it to be.

Jeremy: How does that work? Initially at least, CAs got one EV root. If someone only enabled one EV root, you can’t do EV code signing and EV SSL until you get that root embedded. So this might not be a short-term fix, or there would have to be an exception for EV Code Signing because it falls under a standard that is similar to the BRs.

Rich: The other thing we run into is the limitation that the platform vendors place on how many roots a given CA can have embedded in the platform, if we start getting into splitting off everything into its own compartmentalized root hierarchies.

Phill: We also need to be careful we don’t cut off future projects by saying that you … really, the BR requirements are requirements on the subjects of the certificates and the validation practices. The actual use then made of the certificate – being a server certificate – should be a sufficient condition for applying them, but not a necessary one. There are some cases where you’d want at least that degree of validation for an end entity certificate. And so I don’t want to say that the BR requirements only apply to server certificates.

Jeremy: But they do, because that is what the scope is.

Phill: Yes, but I’m thinking that that’s probably wrong because if we’re going to do things like re-do S/MIME so we can use it more widely, using a BR, or better, EV validation for S/MIME being sent by a bank, or whatever, would make a great deal of sense for authentication purposes.

Jeremy: Let’s continue this on the mailing list for now, and get this to a few people outside the CAB Forum to make a list of those that use the anyEKU or no EKU, so we can see where we are having conflicts.

7. Automating the inclusion of new gTLDs in domain name screening

Ben: Has anybody seen anything that might imply where we are on this issue?

Steve R.: How can we weigh in with some bigger pressure on this? I believe Mozilla maintains the Public Suffix List, and I believe it would be good if we could have some kind of single source for the industry. Otherwise, we’re just going to end up in a report in the future where a few CAs haven’t followed the rules or haven’t noticed that something has or hasn’t appeared on an ICANN list.

Ryan: I’ve had some discussions with Gerv and some of the other PSL maintainers trying to get a good understanding of the goals and objectives of the PSL. I don’t know whether or not we can really recommend it for CAs at this time. We need to understand the interaction between the IANA root zone database, the distinction between the IANA-contracted but not yet activated domains and those that are going through the gTLD notifications, and what represents a potentially registerable domain, which is sort of what the Public Suffix List is covering. The PSL today covers a mixture of IANA root zones, IANA contracted zones, and also reduced scoping for some of those zones based on the stated policy of the domain name operator. Getting a good data set, one that is both good enough and one that a CA can reasonably rely upon, is the issue. Obviously there have been discussions within the IETF about various ways of doing this using DNSSEC piggybacking mechanisms, but that is reasonably five to ten years out. The PSL is a good solution, but until we nail down some of these issues I don’t think it’s necessarily the best solution. I think a CA should be looking at the same thing the Chrome browser looks at, which is to combine multiple data sources. Trying to get a more authoritative source is a good thing.

Gerv: I see the PSL as a superset of the root zone database. That is to say, if something is in the root zone database, it should be listed in the PSL. The PSL should also contain a bunch of information about the shape of the DNS below the root. When the PSL was created, the algorithm stated that if a suffix is not listed, you treat it as if it is a flat suffix, like .com (i.e. registration immediately below the root). We did include flat suffixes in the list for completeness, even though their presence or absence made no difference according to the algorithm. Since then, people have started to use the database to decide what is and what is not a publicly available domain name. The fact that it is complete can be important, so we plan to maintain it. So, we are on the mailing list that announces the new gTLDs. We have that automatically feeding into a spreadsheet, and someone generates patches that are used to patch the PSL. Given that the PSL is updated when contracts are signed and not when things are placed under the root, we figure people should have the names before the other databases have them, so that they can percolate through the system as quickly as possible and be recognizable. So the goal of the PSL has been to be as complete as possible and a superset of the root zone.
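
For illustration, a minimal sketch of the lookup rule Gerv describes, using a tiny hypothetical rule set; a real implementation would load the full Public Suffix List and also handle its wildcard and exception rules.

```python
# Sketch of the PSL lookup rule: the longest matching rule wins, and a suffix
# with no rule at all is treated as "flat" (registration happens immediately
# below it), per the original algorithm. RULES is illustrative only.
RULES = {"com", "uk", "co.uk"}


def registered_domain(hostname: str) -> str:
    labels = hostname.lower().split(".")
    # Walk from the longest suffix of the name to the shortest, so the first
    # match is the longest matching rule.
    for i in range(len(labels)):
        candidate = ".".join(labels[i:])
        if candidate in RULES:
            # Registered domain = one label below the public suffix.
            return ".".join(labels[max(i - 1, 0):])
    # No rule matched: treat the TLD as a flat suffix.
    return ".".join(labels[-2:])


print(registered_domain("www.example.co.uk"))     # example.co.uk
print(registered_domain("foo.bar.example.com"))   # example.com
print(registered_domain("host.example.newgtld"))  # example.newgtld (unlisted: flat)
```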

Ryan: Thanks, Gerv. So, omissions from the PSL, such as a variety of ccTLDs, are you saying are a bug in the PSL that we can quickly resolve?

Gerv: So, if there are ccTLDs that are not in the PSL, then yes, absolutely, that’s a bug.

Ryan: Then, from the Chrome side, I’d strongly encourage the PSL.

Ben: Good. Let’s keep working on this. This is very important and seems like a solvable problem.

8. Pre-certificates and Certificate Transparency

Ben: There was an email from Rob Stradling about some language in the CA/B Forum guidelines or requirements that made it more difficult for a CA to issue a pre-certificate. Because we’re running behind on time, we’ll cross off working group updates from the agenda.

Jeremy: I agreed with Rob Stradling’s email this morning that the pre-certificate and the certificate are the same certificate. There is the issue of serial numbers, because if you revoke the pre-certificate, then it would appear on the CRL. But since it is only the poison extension that is removed, I think they should have the same serial number.

Ben: Is there a quick way to a proposed solution, or is it a non-issue?

Mads: We need to talk about signing a certificate without these timestamps, and signing it as the final certificate. This would be two different objects that we sign. So I think both of these kinds of certificates have to be defined as certificates, according to the BRs.

Ryan: The pre-certificate that is associated with the certificate should have the exact same contents, with the exception of the critical poison extension, which prevents the pre-certificate from being used by any client that implements RFC 5280 or RFC 2459, and the presence of the non-critical CT extension, which any compliant implementation can safely ignore. That’s the distinction between the two and how they can be treated by the client. The issue is what to do about this poison CT extension. One, is it allowed according to the BRs to be inserted? I think so, because we say “don’t sign extensions you don’t understand,” and any CA can understand this CT extension. The other part is whether the duplicate subject and serial number signatures represent an issue, and I’m very eager to hear what other CAs and auditors feel on this point. For BR compliance, it is not as if the pre-certificates are going to contain fields that are not compliant with the BRs. They are going to contain the exact same fields as the actual certificate, modulo this critical poison extension.
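
For illustration, a minimal sketch of the distinction Ryan describes, assuming the pyca/cryptography Python library and the RFC 6962 extension OIDs (1.3.6.1.4.1.11129.2.4.3 for the critical poison extension, 1.3.6.1.4.1.11129.2.4.2 for the embedded SCT list); the classification function is hypothetical.

```python
# Sketch: telling an RFC 6962 pre-certificate apart from its final certificate.
# Per the discussion, the two differ only in the critical poison extension
# (pre-cert) versus the non-critical embedded SCT list (final cert); everything
# else, including subject and serial number, is identical.
from cryptography import x509

PRECERT_POISON = x509.ObjectIdentifier("1.3.6.1.4.1.11129.2.4.3")    # critical
EMBEDDED_SCT_LIST = x509.ObjectIdentifier("1.3.6.1.4.1.11129.2.4.2")  # non-critical


def classify(cert: x509.Certificate) -> str:
    oids = {ext.oid for ext in cert.extensions}
    if PRECERT_POISON in oids:
        # The critical poison extension makes RFC 5280 clients reject the
        # object, so it cannot be used to authenticate a server.
        return "pre-certificate"
    if EMBEDDED_SCT_LIST in oids:
        return "final certificate with embedded SCTs"
    return "certificate without embedded SCTs"
```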

Phill: I think if you are signing a certificate, it should be a certificate. Anything that needs to read CT only needs CT code, so these objects don’t need to be X.509 certificates. They could be in JSON and they would still work. We could define a different format, or we could use the CSR format, and make sure there is a clean separation between the two sets. I don’t think we’re locked in by deployed code.

Ryan: So Phill, I don’t know if you’ve followed the background of CT; we can take your suggestions to the mailing list, and I’m happy to explain the objectives of having something auditable. The goal of CT is to have a cryptographically provable audit log. That is the goal of the SCT and the pre-cert working together. The SCT – the Signed Certificate Timestamp – is, depending on which implementation of CT you look at (the one that we originally proposed or the one that is drafted as the IETF ID), either a cryptographic proof that something has been logged, or a cryptographic proof of a promise that a certificate will be logged. Its current implementation is as a promise. The goal is that, for a given certificate presenting an SCT, you can have a cryptographic proof that the certificate has been logged. The way these implementations of CT work is that they strip out the SCT extension from the certificate, they add the poison extension, and then they check whether the cryptographic hash of that certificate has been signed by the log. The goal here is that using CT minimizes the amount of data that has to be presented by the server to simply this SCT extension, because everything else can be inferred or obtained from the certificate that the server has presented. That is what the pre-cert enables. If you look at alternative schemes, like those that mention JSON, you essentially have to either load the certificate and provide this whole additional proof, which may be cryptographically weaker, or you have to send essentially a different structure and duplicate much of the content of the certificate, which, from the perspective of the SSL performance working group, works against the goal of making these handshakes smaller, not larger. Those are the balance and trade-offs, which we can certainly take to the list to discuss alternatives, but that is the motivation of the pre-cert. It allows you to look at the certificate, reverse the transformation, and determine whether the certificate has been logged, regardless of how you obtained the SCT.

Wayne: I would be interested in giving the CA the option of holding issuance of the certificate until the log updates and you have confirmation. The new proposal removes that obstacle that some people complained about, but I would be very excited to have both choices.

Ryan: I certainly understand and would like to discuss that. It would require revising the ID’s explanation of how it works, but it would be more a slight tweak to the extension definitions than a critical rewrite. I should also mention that there are three different mechanisms to deliver the SCT, only one of which requires that pre-signing: delivery of the SCT within the certificate itself, which requires the pre-cert so that you can also verify that the certificate without the SCT was indeed what was logged. There is also the ability to deliver the SCT over a TLS extension or through an OCSP-stapled response as an OCSP extension. So there are a couple of options, but currently our view is that delivering it within the certificate itself is the most deployable of all of these, because using a TLS extension requires updating servers and using an OCSP extension requires that the server support OCSP stapling, which is still in a very nascent phase, although Firefox is actively working towards making that part of their program requirements, so perhaps we’ll get there before then. But even if you block issuance until the certificate has been logged and you have the Merkle Tree signed tree-hash, you are still going to have something without that proof that needs to be signed by the CA, because if you want to provide any proof within the certificate, you have to have a certificate without that proof that has been seen by the log.

Wayne: Just to get back to the original point, I think we just need a ballot here that explicitly changes the BRs to allow pre-certificates.

9. Working Group Updates – Skipped due to lack of time

10. Any Other Business – Potential SHA1 Ballot

Rick: When Microsoft announced that they were deprecating SHA1, it was suggested that we craft a ballot to adopt that as a CAB Forum standard as well. It might be easier to explain to customers if we say it is an industrywide CAB Forum requirement, and not just a Microsoft requirement. So is anyone interested in a ballot that we can review and would be willing to endorse?

Doug: I’d need to talk to Ryan first, but I’m interested because I’ve had the same questions myself, and there are issues for the industry with respect to CAs, hierarchies, and changes for end users; if it is mandated then it would be a little easier.

Wayne: I did suggest that before Microsoft came out with their SHA2 guidance, and I’m not opposed to doing that as long as it doesn’t lock in some future date if Microsoft reevaluates the status of SHA1 in 2015. In my mind Microsoft is a pretty formidable force in the industry, and they’ve already said what the rules are, so having the rules documented somewhere else doesn’t carry a lot more weight than having Microsoft say them.

Rick: They did say they would reevaluate it, but then it might be sooner. Well, maybe I’ll take it up and try to write some language for a ballot and perhaps modify it with your comments.

Rich: Do we have any clear idea of how many users this will affect with XP SP2 or other systems before proceeding with a ballot?

Rick: I don’t know if anyone has numbers. Richard from WoSign in China said they have pretty substantial numbers in China. Microsoft is retiring XP in April of next year. A Japanese colleague has indicated there are still a fair number of Japanese feature phones that don’t support SHA2 and have general features on them. In the US and Europe I haven’t heard of anyone uncovering any significant population of software that doesn’t support SHA2.

Rich: I’m just not sure that the CAB Forum should take on the job of advancing that agenda – certainly we shouldn’t attempt to impede it either – but I think before we do a ballot we should have a better idea of not just XP but any other systems, especially given the concern about setting standards that are supposed to be worldwide. So I am not against this, but we should probably at least be cautious before proceeding and try to figure out what the impact may or may not be.

Rick: I understand, and what drives me is that this is a security issue rather than a convenience issue. We should advance it based on security, not based on the convenience of customers, because if we were to find out a month from now about a very effective collision attack against SHA1, then everyone would be inconvenienced. Those concerns would probably go out the window if we were faced with a formidable attack.

Rich: I absolutely agree with that, but I guess I just don’t think we’re there yet. This comes with the caveat that I am not a technical guru and Rob is much better qualified to look at things like this, but I have read some of the links and articles that have been circulated, and it seems like, at least at this point, it’s really a theoretical security concern, at least today and for the immediate future.

Ben: We might want to have something that we could give to everyone and circulate to a broader audience with a longer period for consideration, and then we could get more feedback from the public, so that might be another approach.

11. Next phone call – Thurs. Jan. 2

12. Meeting Adjourned
