[cabf_validation] Draft Minutes of the Validation Subcommittee call on Jan 16, 2020
sleevi at google.com
Wed Jan 22 10:46:47 MST 2020
# Draft Minutes of the Validation Subcommittee call on Jan 16, 2020
Attendees: Joanna Fox, Dean Coclin, Daniela Hood, Clint Wilson, Mike Reilly, Wayne
Thayer, Doug Beattie, Corey Bonnell, Janet Hines, Li-Chun Chen, Dimitris
Zacharopoulos, Rich Smith, Ryan Sleevi
Agenda:
* SC25 Update (Doug)
* Onion ballot (Wayne)
* Recap of GitHub issues filed by Ryan and draft ballots (Ryan)
* F2F Planning (Ryan)
Wayne read the Antitrust Statement.
## SC25 Status:
Doug provided an update on SC25. One issue was that it was sent to the
validation list instead of the servercert list. Another was that the redline
was not an immutable link, so that also needs to be corrected. Corey provided
some additional updates. Ryan offered to help if there were any issues getting
that drafted. Wayne mentioned having trouble getting immutable links prepared.
Dimitris mentioned there is an instructions page on the wiki, which also
describes how to get the commit hash, though it could do with some additional
detail.
## Onion Proposal:
Wayne circulated a draft on the validation list to incorporate the issuance
of .onion certificates into the BRs. To refresh: several years ago (2015?),
we added an appendix on how to issue a certificate containing a Tor .onion
address. Since then, the Tor address spec has been updated to address
the concerns that originally led us to require EV and the extension.
Several Tor domains are not able to obtain EV certificates for various
reasons, and folks reached out to see if it was possible to update the
language to allow issuance of DV. Wayne circulated a proposal based on
Fotis’ original proposal. Since then, there have not been many comments,
other than from Let’s Encrypt. Ryan mentioned he’ll do another pass on the proposal, but
believes Google will endorse. He shared examples of existing users of the
.onion certificates, such as Facebook and NY Times, having operational
challenges deploying certificates due to the burden of replacing the
certificates. Wayne was going to send another email seeking a second
endorser.
## Subject Name Ballot
Ryan shared that he opened Issue 154 on GitHub, explaining the two sets
of tables provided in that issue. The aim was to look at things
currently trusted and with BR audits, while also examining those that are TLS
capable but may not have audits. The focus was on the first set, since that
represents things intentionally used for TLS, but the second set is
provided for completeness for people to look at.
The most popular subject field was the organizationalUnit, followed by
locality information, including state or province. The locality information
was largely dominated by a single CA/member. Another OID was serialNumber,
although several of those certificates were subsequently clarified by the
issuer as not intended for TLS, even though they were capable of TLS, and
are in the process of being replaced. The last one of substantial note was
the organizationIdentifier field.
Several CAs included e-mail addresses in the CA certificate, which was
questionable to begin with, and a few included postal code. One CA made use
of an OID for LEI.
The table includes a breakdown of the subject field of the certificate,
along with the O field of the issuer or, for CA certificates issued so long
ago they lack an O field, the commonName. The intent of the ballot is to
provide a time-limited exception with a sunset, and then have a discussion
about what fields to continue. For example, for some CAs including locality
information, their locations have changed as they’ve gone through mergers
and acquisitions and are no longer correct. Same with e-mail addresses,
including some that bounce. This ballot doesn’t immediately address these
situations; it will allow them with a SHOULD NOT/MAY, to discourage them
but recognize things like cross-certification may need them.
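The per-attribute breakdown described above can be sketched as a simple tally. This is an illustrative sketch only; the subject lists below are made-up placeholders, not the real data from the Issue 154 tables:

```python
from collections import Counter

# Hypothetical, pre-parsed subject attribute types for four CA
# certificates (illustrative only; the real data lives in the
# Issue 154 tables on GitHub).
subjects = [
    ["commonName", "organizationName", "countryName", "organizationalUnitName"],
    ["commonName", "organizationName", "countryName", "localityName"],
    ["commonName", "organizationName", "countryName", "serialNumber"],
    ["commonName", "organizationName", "countryName", "organizationIdentifier"],
]

# Count how many CA certificates carry each attribute type.
tally = Counter(attr for subject in subjects for attr in subject)
for attr, count in tally.most_common():
    print(f"{attr}: {count}")
```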
For the set of CAs without BR audits, the numbers are very different,
because that situation is much more of a wild west.
Dimitris stated that most of the European CAs are forced to use the
organizationIdentifier in the subject due to eIDAS Annex IV (b). All of
the DG CONNECT consultants have convinced the supervisory bodies that this
is the field it needs to appear in. Dimitris stated he knows this for a
fact because HARICA had not included an organizationIdentifier, and
received a yellow-card from their supervisory body requiring that they add
it for their QWACs.
Ryan stated that while this ballot would allow it, the statement about
being required by regulation is not correct, but that’s a discussion we can
continue after the ballot. There is a lot more fluidity in the nuance between
what supervisory bodies (SBs) require and what the conformity assessment
bodies (CABs) implementing the standards require. There’s no
question that organizationIdentifier is being used, though, and this ballot
would allow it to continue.
There are a few attributes where the CA has already stated they don’t plan
to issue more certificates with that attribute, so they may not be needed,
but we’ll go ahead and allow these fields and continue discussion about
what should be allowed.
Wayne mentioned he shared the link to GitHub, and pointed out that the
tables start out collapsed and you need to click to expand them.
## .arpa Validation
[Ed Note: To make it easier for those reading the notes; on the call Ryan
kept saying in.arpa/in6-arpa. The correct names are in-addr.arpa / ip6.arpa
and have been corrected below to make it more understandable]
Ryan mentioned that another issue to discuss was Issue 153. This came
up during the discussion of some lints for ZLint, and Ryan wanted to bring
it to the validation list to discuss. The question is about how .arpa
addresses should be validated.
The background was that there was a CA that issued a wildcard certificate
for an IPv6 address; *.something.ip6.arpa. For those not familiar with
in-addr.arpa and ip6.arpa addresses, they provide a mapping back from the
IP expressed as a hostname, in the reverse, back to the IP. For example,
4.3.2.1.in-addr.arpa maps to 1.2.3.4.
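This reverse mapping is mechanical; Python’s standard ipaddress module computes it directly. The addresses below are documentation examples, not ones from the discussion:

```python
import ipaddress

# The PTR name for an IPv4 address reverses the octets under in-addr.arpa.
ip4 = ipaddress.ip_address("192.0.2.7")
print(ip4.reverse_pointer)  # 7.2.0.192.in-addr.arpa

# For IPv6, the name reverses every hex nibble of the address under ip6.arpa.
ip6 = ipaddress.ip_address("2001:db8::1")
print(ip6.reverse_pointer)
```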
Domains validated in this space are functionally being used as IP
addresses, because there’s this identity function. One of the questions was
about wildcards in in-addr.arpa and ip6.arpa, where we don’t allow
wildcards in IP addresses. The second question was about how in-addr.arpa and
ip6.arpa are administered. .arpa is one of the special-use domain names
maintained by IETF/IANA and has its own policies. Within .arpa,
in-addr.arpa and ip6.arpa have special meaning, and the subdomains are
delegated to the regional internet registries, the entities responsible for
delegating and assigning IP addresses. Examples are ARIN or RIPE. The
question was whether or not these are registry controlled, which seems to be
supported by the RFCs, and that means these can’t have wildcards.
A suggested solution is that we call out in-addr.arpa or ip6.arpa as
requiring they be treated as IP addresses and use a validation process
similar to BR section 3.2.2.5. This isn’t exactly right, because of how IP addresses
can be validated. Another possibility would be to forbid the use of these
addresses, and say use an IPAddress subjectAltName, and don’t expect these
names within a certificate.
While folks on the call probably hadn’t had a chance to review this yet,
Ryan wanted to bring it up on the call to figure out what questions folks
might have for discussion on the list.
Dimitris mentioned he was a little confused. When IP addresses are added to
certificates, CAs have to use the subjectAltName extension, and use a
specific iPAddress type. That type has the IP addresses, not the reverse
form in-addr.arpa. Ryan confirmed that’s correct, and the issue is that
this is a DNS name that can be used to express an IP address mapping. Ryan
explained a CA had issued a wildcard certificate, effectively
*.2.3.4.in-addr.arpa, which would effectively allow validating any IP from
the corresponding /24 block.
Dimitris stated he didn’t think we should allow something like this. He
wasn’t sure if this was too aggressive, but it seems like a risk to allow a
certificate to resolve to any number of IP addresses you might or might not
control. Ryan agreed, and the issue was that the BRs are not clear on the
status quo. One option was to forbid wildcards, another to treat them like IP
addresses, and another simply to forbid in-addr.arpa/ip6.arpa entirely.
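To make the wildcard risk concrete: the labels present in such a name fix only the leading octets of the reversed address, so every address beneath them is covered. A minimal sketch (IPv4 only; the function name is hypothetical, and real linting would also need ip6.arpa and malformed-input handling):

```python
import ipaddress

def wildcard_reverse_coverage(name: str) -> ipaddress.IPv4Network:
    """For a wildcard reverse name like '*.2.3.4.in-addr.arpa', return the
    IPv4 block whose PTR names it matches (here 4.3.2.0/24)."""
    labels = name.lower().rstrip(".").split(".")
    if labels[0] != "*" or labels[-2:] != ["in-addr", "arpa"]:
        raise ValueError("not a wildcard in-addr.arpa name")
    octets = labels[1:-2]          # e.g. ['2', '3', '4']
    prefix_len = 8 * len(octets)   # each present octet fixes 8 bits
    ip_octets = list(reversed(octets)) + ["0"] * (4 - len(octets))
    return ipaddress.ip_network(f"{'.'.join(ip_octets)}/{prefix_len}")

print(wildcard_reverse_coverage("*.2.3.4.in-addr.arpa"))  # 4.3.2.0/24
```

A single such wildcard therefore spans 256 addresses, and a shorter name (e.g. `*.10.in-addr.arpa`) would span an entire /8.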
Ryan wanted to get feedback from folks, and stated that from an analysis of
publicly trusted certificates, this was not common. He mentioned that the
CA that had issued the certificate was, as best he could tell, the only one
doing this, had already had issues with .arpa, and that issuance within
.arpa is extremely rare.
Wayne wanted to know how many certificates are out there, and if it’s just
one CA that had issued several certificates, it seemed like the best step
would be to clarify that it’s not permitted until someone comes forward
with a use case that really makes sense.
Corey chimed in and stated it seemed like there were a significant number
of these certificates out there, sharing a crt.sh link for this. Ryan
asked if it was publicly trusted, and Corey clarified he saw DigiCert,
Sectigo, Cloudflare, and a few others. Wayne qualified that it was hundreds,
perhaps a few thousand. Corey stated he agreed with Dimitris and we should move
towards prohibiting these. We have several different RFCs clarifying the
intent was not for these to be used as hostnames.
Wayne stated the best starting position was to deny these, and get
clarification from CAs about why they should be permitted. Wayne
asked if Ryan was going to start a discussion on this. Ryan stated he was;
he wanted to bring it up on the call to see what sort of questions folks
had, to make sure to address them on the list. Wayne added it to the Trello
board to continue tracking.
Wayne asked if there were any other topics to discuss for the F2F. Ryan
brought up the browser alignment ballot, which tries to align the BRs with
browser policy and which lives in a branch. Mozilla just published
their Policy 2.7, which still needs to be integrated. Ryan indicated he
hadn’t heard anything from Microsoft regarding any planned winter updates
to their root store policy. Mike clarified in the chat that they didn’t
expect any changes being needed to the ballot.
Ryan recapped the changes in the ballot.
Wayne asked where we were at in terms of taking it to a ballot, versus
waiting for the update to the documents on GitHub. Ryan indicated the plan
was to land this behind Jos’ markdown cleanup ballot, and that he had also
been waiting for Mozilla Policy 2.7 to be published. 2.7 is out now and he
needs to update, but he’s not sure of the status of Jos’ ballot. The reason
for this sequencing is that this ballot overlaps with that other ballot.
Dimitris mentioned the infrastructure call was short, with just Dimitris and
Jos on it, but that he can go forward with updating the ballot; he just
needs to find the time, and it is pretty much ready.
Wayne asked if the idea was that we shouldn’t have any other ballots when
the cleanup ballot is going through. Ryan clarified that no, that wasn’t
the case, it was just because there was overlap between the ballots, and
wanted to avoid having to prepare variants of “If the markdown ballot
passes, this is what things look like” and “If the markdown ballot fails,
this is what things look like” for this ballot. If the markdown ballot
doesn’t happen soon, then the markdown ballot may need to address this
instead. It also overlapped with SC23/SC24, so it was nice not to have to
provide variants for that as well.
## F2F Meeting Planning
Wayne suggested we could begin with discussing the ballots currently in
process. A more meaty topic would be discussing default-deny and going
through the BRs and looking for issues to clarify. He suggested we could
spend some time in the F2F going through and clarifying this.
Ryan mentioned he was not optimistic this would be productive if CAs hadn’t
been looking through their CP/CPSes for potential issues. Another topic was the
validation of organization information for EV certificates. This was
dependent on CAs going through some of their data sources and disclosing
them, but progress might be made by going through them. Dean agreed that it’d
be useful, and it was one of the things brought up in Thessaloniki as a topic
for the F2F.
Wayne said it seemed like a reminder was needed. Ryan agreed, as so far,
only DigiCert has shared data about their validation sources. Wayne was
going to send out a reminder to the list, and asked if any other CAs could
step up to commit to sharing their data sources.
Doug committed GlobalSign to share some of their data sources before the
F2F.