Plenary Meeting – Day 1 (Wednesday, 12 June)
Attendees: Agnieszka Strelcow (Certum), Aleksandra Kapinos (Certum), Alex Wight (Cisco), Anna Evlogimenou (Athens Stock Exchange), Antonios Eleftheriadis (HARICA), Arno Fiedler (D-TRUST), Atsushi Inaba (GlobalSign), Ben Wilson (DigiCert), Benjamin Gabriel (DarkMatter), Bruce Morton (Entrust), Chen Xiaotong (SHECA), Clemens Wanko (ACAB’c), David Hsiu (KPMG Taiwan), Dean Coclin (DigiCert), Derek Bronson (Amazon Trust Services), Devon O’Brien (Google), Dimitris Zacharopoulos (HARICA), Don Sheehy (WebTrust), Doug Beattie (GlobalSign), Enrico Entschew (D-TRUST), Eva Van Steenberge (GlobalSign), Fotis Loukos (SSL.com), Frank Corday (SecureTrust), Geoff Keating (Apple), George Fergadis (HARICA), Huo Haitao (360), Ioannis Kostopoulos (HARICA), Irina Hedea (Deloitte), J.P. Hamilton (Cisco), Jeff Ward (WebTrust/BDO), Jeremy Rowley (DigiCert), John Balafas (Athens Stock Exchange), Jos Purvis (Cisco), Karina Sirota (Microsoft), Katerina Tsagkalidou (HARICA), Khalid Nasser Bin Kleab (NCDC), Kirk Hall (Entrust Datacard), Leo Grove (SSL.com), Mariusz Kondratowicz (Opera), Meshal Mohammad Alshahrani (NCDC), Michael Markevich (Opera), Michael Slaughter (Amazon Trust Services), Michal Malinowski (Certum), Mike Agrenius Kushner (PrimeKey), Mike Guenther (SwissSign), Mike Reilly (Microsoft), Nathalie Weiler (SwissSign), Nick Naziridis (SSL.com), Nikolaos Soumelidis (QMSCert), Peter Miskovic (Disig), PinJung Chiang (ChungHwa Telecom), Richard Wang (360 Group), Robin Alden (Sectigo), Ryan Hurst (Google Trust Services), Ryan Sleevi (Google), Scott Rea (DarkMatter), Sissel Hoel (Buypass), Stefan Lautenschlager (D-TRUST), Stephan Wolf (GLEIF), Steve Roylance (Ubisecure Oy), Tadahiko Ito (SECOM), Tim Callan (Sectigo), Tim Hollebeek (DigiCert), Tim Shirley (SecureTrust), Timo Schmitt (SwissSign), Tobias Josefowitz (Opera), Tom Zermeno (SSL.com), Tomas Gustavsson (PrimeKey), Tony Perez (GoDaddy), Trevoli Ponds-White (Amazon Trust Services), Vijay Kumar (eMudhra), Wayne Thayer (Mozilla), Wei Yicai (GDCA), Xiu Lei (GDCA), Xue Yingjun (State Cryptography Administration).
Approval of Minutes from previous teleconference
The minutes of the previous teleconference were not distributed for review.
The purpose of minutes and expectations from minute-takers
Presenter: Dimitris Zacharopoulos (HARICA)
Note Taker: Bruce Morton (Entrust Datacard)
In reviewing the previous minutes, some note-takers produced a transcript while others produced a summary. IPR Policy 8.3d needs to cover any Contribution made “in the process of developing a Draft Guideline for the purpose of incorporating such material into a Draft Guideline or a Final Guideline or Final Maintenance Guideline”.
- “The need to attribute specific ideas and suggestions to specific people” derives from the fact that our IP agreements depend on attributing Contributions on the basis of the minutes.
- There is much discussion on procedural issues which is not a contribution, so none of that has to be memorialized in granular detail.
- People need to be held accountable for their positions. If the Forum has continued to do something because so-and-so proposed it, or does not do another thing because so-and-so opposed it, then they ought to be held accountable.
So we implicitly agree that minutes are a summary and not full transcript, and the question is just the level of detail that needs to be captured.
- Minutes are not supposed to be transcripts, but should include dialogue that describes positions and differing viewpoints; this is helpful to readers and necessary for IPR purposes.
- Minute takers should be knowledgeable about the IPR Policy. They need to document in more “transcript-like” detail when they detect IP risks.
- If participating Members detect a potential IP risk in a discussion, they should raise a warning to the minute taker
- The quality of minutes will greatly improve if the note takers request access to and use the recordings.
- Ryan Sleevi, Google: Concerned about the last slide being biased towards more IP. We have a public@ mailing list, and a default use of that along with public voting, to ensure transparency of positions and accountability. Believes we should be biased more towards transcripts, to capture positions and controversies, as well as to reduce the burden on minute takers of being aware of which contributions may represent IP risks. We can look at other SDOs, like IETF, W3C, OASIS; a number of groups use things like “live minutes”, for example W3C using IRC. These sit somewhere between a transcript and a summary: every time someone new is speaking there’s an indication of who’s speaking, and every few sentences from the speaker you get a summary. Others can offer corrections or clarifications in real time to help capture things. Admittedly, this process is non-ideal for non-native English speakers, due to its real-time nature, but hopefully it allows others to help.
- Dimitris: Not intending to be biased towards IP. Wanting to make sure that IP matters are captured fairly exactly, while providing summaries of some of the other discussions.
- Jos Purvis: In terms of technology, would prefer to stay away from live-chat like solutions and more towards a document management solution, so folks can go page by page. A different matter to think about is why do we not have the public at the meetings? As relayed by a former member of the Forum, one of the reasons is that it’s difficult to go on the record on certain matters, especially if those are immediately public. The real-time engagement and minute process allows for more frank discussions about possibly being willing to support such-and-such, without having to worry about being seen as speaking for the company. The more we pin down individuals for commenting on behalf of their company on certain positions, this may stifle some of the discussion due to needing to get preclearance. A need to balance transparency and open collaboration.
- Ryan Sleevi: Not terribly thrilled about that distinction. Public voting was an example where some organizations would not necessarily support X if they knew their vote would be public. The challenge is that the public is impacted by these decisions, but aren’t really able to be represented here. Further, customers of a CA wouldn’t be able to know if the CA was invoking them as an example for or against something, “taking their name in vain”. The decisions to have public mailing lists and public voting were to try to bias towards transparency, while acknowledging there are some challenges, such as regulatory restrictions on public comments or the like.
- Jos Purvis: Agree there’s challenges. If we bias towards transparency, we are somewhat obligating companies to not comment on certain issues, if public position for company is X. May make it hard to discuss changes, if discussing a change may be seen as committing the company to implement the change.
- Ryan Sleevi: Benefit is that minutes go through review, so such companies can correct minutes, by redacting or revising. Same way we allow members to ask things not to be minuted. We have minute review not just for matters of IP, but also to allow members to correct or clarify things for public consumption.
- Dimitris: Can we provide guidance for minute takers? Perhaps a few sentences about the basic expectations for minute takers, even if hard to achieve? Possible ideas mentioned were to default to detailed, transcript-type minutes, and that it’s OK to ask for a summarization if needed.
- Ryan Sleevi: From other SDOs, a few suggestions: Try to capture each time a new speaker is speaking, which can be tricky if they don’t identify themselves. Doesn’t need to be word-for-word verbatim, but try to capture each point the speaker is making, which is admittedly a bit tricky to nail down.
- Jos Purvis: A big benefit of identifying is that when you’re going through the recordings, can be incredibly hard to identify folks since you can’t generally see them. For F2F recordings, identify before speaking.
How to improve F2F minute review and publication timeline
Presenter: Dimitris Zacharopoulos (HARICA)
Note Taker: Doug Beattie (GlobalSign)
Challenge: Today it takes too long to write, review and post minutes, so let’s look at some options.
Dimitris went through the presentation, which consisted of two proposals:
- Review when each slot is ready, approve in batches, mark remaining slots as pending
- While this has a shorter timeframe and public disclosure, it places more burden on the reviewers to keep reviewing batches of minutes.
- Engage two minute takers for each slot to speed up delivery; however, this takes more effort, and it’s not easy to find volunteers.
Jos Purvis (Cisco): F2F is actually a set of different meetings, can we approve them independently? The overall meeting is actually a set of meetings including: Forum, Server, Code Signing, etc. This might facilitate timeline to approve.
Dimitris Zacharopoulos (HARICA): We still have 26 slots to document, and I’ve seen gaps in each, so this might not help get the notes out more quickly. We still have delayed notes for one or two sections, and this will still mean multiple reviews and a longer timeline.
Jos: If minutes for the Forum aren’t complete, then you can’t approve the minutes in total; because of the carryover between discussions, it’s hard to approve sections independently, at least that is what I’ve heard.
Dimitris: The biggest concern is number of reviews that are needed. Reviewers prefer to review the meeting minutes once, not multiple times.
Dimitris: We need to do something about this [trade-off between what reviewers want, time it takes to write the minutes, and in getting the information released publicly]. Perhaps we may need to review the bulk of the minutes, then have a couple small amendments to the minutes to be reviewed. Getting the minutes released publicly is important for transparency. Eventually we need to find a solution, even if it comes to a majority vote.
Kirk Hall (Entrust Datacard): Maybe the Chair should set a schedule based on teleconferences and publish accordingly. If new sections come in later, then approve and release them. Suggested a Bylaw change to permit shorter review periods, or something else to speed up the review and release process.
Dimitris: Let’s set a minimum date for review to push for notes to be done quickly and efficiently. We will always approve during a teleconference or face-to-face; no change to that. If we have 90% of the slots ready for approval, we put those on the agenda for approval, identify the missing sections, and approve them at a subsequent meeting. When we publish, we’d mark the remaining slots as pending; as those are approved, they would be added to the official minutes.
Dimitris: If there are no objections, will put forward that approach (incremental approvals).
Ben Wilson (DigiCert): In the event that there are missing sections, we could also re-assign them and have the new minute taker review the recordings and write them up.
Report from Code Signing WG
Presenter: Dean Coclin (Digicert)
Note Taker: Robin Alden (Sectigo)
We held a Code Signing Working Group meeting yesterday afternoon.
The ballot for inclusion of the Code Signing Guidelines has passed.
We will now start a 60-day IP review period, because this is a full guideline.
We have started looking at a working list of changes to be made to improve the document in the future.
Timestamping: In scope? – Maybe we cover it in a separate document.
Reprofile to RFC 3647 format?
We hope this Code Signing document will be adopted as a CABF doc.
Report from Forum Infrastructure WG
Presenter: Jos Purvis (Cisco)
Note Taker: Wayne Thayer (Mozilla)
Jos Purvis of Cisco summarized the Infrastructure WG meeting:
- Discussed the wiki migration. It is now complete.
- Website migration to WordPress is in progress and likely to be transparent to everyone when it happens
- Mailing list migration is on hold due to issues migrating to a new IP address. When ready to migrate, members will be informed.
- There was a good discussion on member permissions and representation. Conclusion is that we’ll be sending out lists of who is included on lists and wiki and ask companies to make corrections.
- There was a discussion on documents and their canonical versions. Will be having a discussion later on the usage of GitHub.
- Dimitris asked about changes to the document creation process that have been initiated by Jos. Jos said the new process is ready but requires numbering changes to guidelines. This will require a ballot. Once passed, we can begin using the new improved process.
- Wayne asked about conversion of the Infrastructure WG to a Forum subcommittee. Jos said that we agreed to make this change via ballot and anyone who has time to create one is encouraged to do so.
Creation of Additional Groups – Secure Mail
Presenters: Tim Hollebeek (Digicert), Ben Wilson (Digicert)
Note Taker: Tom Zermeno (SSL.com)
Presentation: Tim initially reported that there had not been much progress on the S/MIME working group, but that it would be his #1 priority now that SC17 was complete. He then opened the floor for questions.
- Dimitris: Our previous discussions about the charter had some differences of opinion and we didn’t make much progress. Can we use the F2F to try to resolve some of those differences?
- Dean Coclin (recapping differences): In Cupertino, talked about the charter. One of the issues was the scope of the charter, and the inclusion of validation of identities and not just domains. Ryan had suggested limiting the charter to just domains. Dean’s compromise solution was to define in the charter that domain validation should be complete before taking on identity, but adding identity.
- Ryan Sleevi: Two areas of debate about charter scope. One challenge in the scope was whether we include the possibility of discussing other forms of identity, such as natural or corporate identity, or exclude it from the charter. Another challenge in the scoping was whether or not it’s limited to S/MIME, or given the similarities between identity and document signing and other cases, whether those other cases were included, which had been raised by some of the European members where they may use similar processes, especially with respect to validating client identity. Suggestion from Cupertino was to exclude identity wholesale, once we’ve got the S/MIME portion of domain/e-mail validation nailed down. The other identity cases are different consumers.
- Tim Hollebeek: Yeah, it’s likely we’ll eventually have WGs to tackle some of those, because they’re important. I don’t think anyone has proposed they should be in scope.
- Ryan Sleevi: Past discussion shows that there were some members in the past who were keen to have them in scope, as a single item, which is part of the motivation for keeping them out of scope of the charter. Another reason is because we know some folks want to tackle these issues, we haven’t decided whether it should be in one WG or multiple WGs. Our bylaws conversations acknowledged that we may want to have WGs to be able to build on each other’s documents, like a common client identity WG to describe validation, which an S/MIME WG can build on for S/MIME. We don’t need to solve that now, however, which is why it’s nice to punt from the charter, until we can decide how we want to tackle that space, knowing that it possibly overlaps with independent WGs.
- Tim Hollebeek: Perhaps we should have a session to revisit how WGs can use common documents, like code signing and timestamping.
- Dean Coclin: Not sure if the compromise proposal was outlined at the F2F or came after. Does that work for folks?
- Doug Beattie: I think we all agree we want to do domain validation first. What’s involved in modifying the charter later, if we just charter this to run first?
- Dimitris: I think one of the biggest concerns I’ve heard is how do we define consumer? If the only consumer is e-mail, it’s fairly easy to define, if we extend it to include identity, it’s tougher.
- Ryan Sleevi: To Doug’s question, to change the charter, it’s a Forum-level ballot, which means possibly two weeks. If we put all of this in the charter, I don’t know what IP risks might exist for members, and whether that might discourage participation because everyone participating for domain would take on risk for identity. We can defer that until after we’ve got the domain settled. To Dean’s question, if we defined it that the charter said we wouldn’t take it on until after domain validation is completed, that might work, but not sure what impact that would have towards IP risks for members. To Dimitris: Agree that it’s tricky to nail down scope. If the charter just wants to tackle S/MIME, nominally easier. To tackle other identity, much harder. One way to address this would be to have the charter define the EKUs that are in scope or other attributes of the certificate, like S/MIME capabilities or document signing or id-kp-clientAuth.
- Dean Coclin: Why would we put EKUs in the charter?
- Ryan Sleevi: It’s how we avoid overlap between WGs. Code Signing WG defined its scope based on EKUs. Servercert may not, but may need to update charter to clarify.
- Dean Coclin: So is the suggestion we would NOT put those EKUs in?
- Tim Hollebeek: Ryan jogged my memory, that there had been folks wanting to treat client identity in the S/MIME WG. I think that’s a bad idea. If there’s a strong desire from folks to tackle the client identity and document signing, we could spin up WGs in parallel. Recommendation is to just tackle S/MIME for now.
- Ryan Sleevi: We had a lot of discussion from the past meetings about what people want, for or against. What we need is someone to just update a charter that takes a position for or against some of these points, which just gets us forward momentum to resolving this.
- Tim Hollebeek: So is Dean’s proposed path viable?
- Ryan Sleevi: It’s much easier for us, at least, to keep identity out of scope of the charter and to revisit later, and much easier if just limited to e-mail, from an IP perspective. I suspect everyone will have to talk to their counsel regardless of the charter, but makes it easier to limit the scope early and revisit later.
A quick summary:
- Ryan Sleevi (Google) brought up some concerns about IP, the scope of the working group, and Google’s ability to remain in the group based upon the scope.
- Ryan, Dean, and Tim discussed EKUs as a means to identify the consumer and scope the discussions so that multiple working groups did not overlap (S/MIME, Document Signing, Identity, etc).
- It was recognized that European CAs may desire a broader scope for the certificates.
- Eventually, all parties in the conversation came to the conclusion that it would behoove the Forum to scope the working group charter to domain validation, first, before adding other functionality once that portion was locked-down.
Report from Quantum Cryptography liaison(s)
Presenters: Tim Hollebeek (Digicert), Tadahiko Ito (Secom)
Note Taker: Jos Purvis (Cisco)
Tim presented the updates from various outside bodies on quantum crypto work.
The first NIST standardization conference was held in Florida in Spring 2018, presenting all of the NIST Post-Quantum candidates. There’s been a lot of discussion of these since then on the mailing lists. There is a second meeting in August in Santa Barbara, and there is starting to be solid work on taking apart algorithms. Tim noted that it’s probably 2 years before a working standard emerges, and NIST estimates 3-4 years before a workable public standard is released.
Tim presented a link from DigiCert and ANSI X9 study group on when a workable Quantum Computer will emerge. He linked a blog entry he wrote on why making predictions around this is particularly difficult.
- Counting qubits is popular but not particularly useful as they’re not cross-comparable.
- US National Academy of Science (from their report): Growth in QC likely to be based on economic utility of QC and the discoveries therein, as happened with traditional computing
- ETSI Quantum-Safe Cryptography Working Group says things may go faster if traditional chip fabrication technologies can be used; slower if not.
- How to factor 2048-bit RSA integers in 8 hours using 20 million noisy qubits (Link on arxiv.org)
- Cryptographic algorithm transitions are measured in decades, which creates problems. Tim noted that the WebPKI has the advantage of relatively short-lived certificates.
- Very preliminary work going at the IETF on how this transition would actually work (this will be discussed at IETF-Montreal in July)
- Lots of interest in this within CABF; Tim asked about whether we wanted a mailer for discussion on this topic.
Alex Wight, Cisco: Are there front-runners right now?
Tim: It’s still a very crowded pool; NIST keeps saying this won’t be like AES or SHA-3 with a single winner, they’ll pick multiple in different areas. There are some emerging back-runners, but anyone without a weakness is pretty much tied for first.
Alex: The ones I’ve looked at create a state machine to track usage of the private key.
Tim: Some do and some don’t. Most of the NIST ones are stateless; most of the hash-based ones are stateful (which are the simpler ones, better understood, and more usable today).
Dean Coclin, DigiCert: How many algorithms are there competing?
Tim: Still in the 30s. The number of fatal flaws pointed out is somewhere in the double-digits. We’re starting to see dropouts, but it’s probably another year or two before the big horse race.
Alex: Will you be creating a mailer on this?
Tim: Unless anyone has a good reason not to, it seems like a good idea.
Tim: Is there a precedent for a mailer without a subcommittee?
Ryan Sleevi, Google: The suggestion would be to create a subcommittee to cover this, to prevent IPR confusion. We can create it at the Forum level or below (it probably doesn’t matter too much), but I suspect the problems in this space may be specific to consuming applications. The trade-offs will be different depending on the usage; e.g., an email message can be larger than a web certificate. Tim: I think we’ll need a subcommittee at some point, but let’s create a Forum-level one for now to avoid fracturing: create a subcommittee at the Forum with a disclaimer of no Forum-level IPRs or docs.
Ryan Sleevi: Our new bylaws don’t require that disclosure for Forum-level ones.
Tim: Sounds good! Yay!
Instructions for creating ballots and challenges for moving canonical versions of all Guidelines to GitHub
Presenter: Dimitris Zacharopoulos (HARICA)
Note Taker: Peter Miskovic (Disig)
- Presentation: Instructions for creating ballots and challenges for moving canonical versions of all Guidelines to GitHub
Issue: Creating redlines for ballots
- Free option for ballot proposers
- Recommendation to use GitHub
- show changes compared to the latest “canonical” version of the Guideline
- When the redline is introduced in the ballot e-mail, it must be immutable to changes
- Either as an attachment
- For GitHub, point to the latest “commit”
- Ben Wilson (DigiCert): A commit, is that a branch, or a repository, or is it something like an update to the official version?
- Dimitris Zacharopoulos (HARICA): Updating the official GitHub version is the last step. When we vote, the ballot proposer has two options. The current instructions from Wayne are that you use your own repository. So you create a clone of the “master” repository, then you add the changes to your own repository. The redline is the comparison between your repository and the cabforum master branch.
- Wayne Thayer (Mozilla): In GitHub terminology, you’re either going to commit to your own repository or the forum repository, but in a branch. You’re not going to commit to the master of the cabforum repo. What Dimitris is highlighting is that if you reference a branch in a ballot, then the branch can change over time, meaning the ballot is changing over time, but commits are immutable, and we want to make sure we’re voting on something immutable.
- Tim Hollebeek (DigiCert): I see it done both ways, in personal branches and branches off the official repository. You can reference commits in branches, even if it’s not your own repository. I would prefer not to do things in my personal clone, and maybe this is due to a lack of GitHub knowledge, but it seems easier for folks to open pull requests off branches that aren’t in my personal clone. This makes it easier to collaborate on ballots, if people can open pull requests.
- Ryan Sleevi (Google): It’s the same experience. Folks can open pull requests on either repository. For folks not familiar with GitHub terminology, we have commits, which are immutable references to the content using SHA-1 or SHA-256, and we have pull requests. If we create pull requests, we have the ability to discuss inline on specific words and phrases, which is useful for discussing. The downside is that a pull request can change and evolve over time as new commits are added. However, even within a pull request, you can refer to specific commits within that pull request. My own preference is to have the cabforum repository fairly clean, so that it only contains CABForum work product and official material: the version history and the document evolution. When it comes to proposals, it’s lightweight to fork the repository, work on changes yourself, collaborate, and ultimately create a pull request back to the official repository. My suggestion is, when it comes time to ballot, to include both: the commit being proposed, but also a pull request URL for inline discussion.
- Tim Hollebeek (DigiCert): For sure, you can use pull requests to comment. My specific request was more about trying to find a way to collaborate, going more from “I’d like to see you change X, Y, and Z” to being able to say “I took the commit you made, here’s my proposed modifications, click a button to say yes”.
- Ryan Sleevi (Google): The only way we could simplify that editing, if using the official repository, would be to give everyone editing privileges on the official repository, as the person opening the pull request needs to share edit privileges for that branch. Going back to Ben’s question, I think Ben is one of the few folks who can commit directly to the cabforum repository and create branches, so I think that may be motivating some of the confusion. For folks who don’t have access, they create a “fork” into their own repository, make a branch (or not), make changes, and can then open a pull request and refer to a specific commit within their branch/pull request for discussion.
Dimitris Zacharopoulos (HARICA): Discussion on GitHub is good, but it should not replace the mailing lists. However, the Bylaws are clear we should be doing the discussion on the public list. When we go to voting, we need to have a redline that is immutable. All the branches and comments on GitHub could disappear, but what we’re voting on needs to be immutable.
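The fork-and-commit workflow described above can be sketched in shell commands. This is illustrative only: repository, branch, and file names are invented for the sketch, and a throwaway local repository stands in for a GitHub fork of the cabforum repository so the commands can actually run. The key point it demonstrates is the last step: a ballot should cite the immutable commit hash, not a branch name, since branches can move after the ballot e-mail is sent.

```shell
set -e
# Stand-in for "fork cabforum/documents and clone your fork":
tmp=$(mktemp -d)
cd "$tmp"
git init -q ballot-demo && cd ballot-demo
git config user.email "demo@example.com"
git config user.name "Demo"

# Import the current guideline text (placeholder content):
echo "original guideline text" > BR.md
git add BR.md && git commit -qm "Import guideline"

# Propose changes on a branch; the branch name is mutable...
git checkout -qb ballot-scXX
echo "proposed redline text" >> BR.md
git commit -qam "Ballot SC-XX: proposed changes"

# ...so the ballot e-mail should cite the commit hash, which is immutable:
commit=$(git rev-parse HEAD)
echo "$commit"
```

In the real workflow, the branch would also be pushed and opened as a pull request against the official repository for inline discussion, with the ballot referencing both the pull request URL and the specific commit.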
Issue: Updating the “canonical version”
- The Chair or Vice-Chair responsible to update the final Guideline
- GitHub redline makes the process super easy
- The Chair/Vice-Chair creates a “pull/merge request”
- This pull/merge request is public
- Reviewed by ballot proposer/endorsers
- Submitted by Chair or Vice-Chair and Approved by another Member
Issue: Editorial changes
- The Chair/Vice-Chair updates ToC, table with relevant dates, possible typos, via a “pull/merge request”
- The pull/merge request is public and anyone can report divergence
- Ryan Sleevi (Google): Do you have draft text for the Bylaws to permit this? We don’t have this permitted today. We discussed in Cupertino about wanting to explore this, but we need some sort of change. We’d discussed this in the governance working group and went through a fair bit of it. From a process point, we know folks are keen to explore this, but we haven’t figured out how to actually define the process.
- Dimitris Zacharopoulos (HARICA): All this discussion is to get input from everybody so we can update the Bylaws. Right now the Bylaws don’t allow the canonical version to be on GitHub.
- Tim Hollebeek (DigiCert): One of the benefits of ballots being public is that everyone can review them for typos or issues. If we’re going to allow fixing of typos, which I’m supportive of, then one suggestion is to require a public post, saying “here are the typos I found while merging, FYI”.
- Ryan Sleevi (Google): Just to refresh folks for an example of typographical issues that can have meaningful impact. During the governance reform, we identified some examples. One of the examples was something we discussed earlier today, when you have multiple conditions of X and Y or Z. The punctuation and typography can impact whether that’s combining “X and Y” or “Z”, or whether that’s “X” and “Y or Z”. Those have impact, even to the degree of IP obligations, so things as subtle as a comma or semicolon can impact things.
- Jos Purvis (Cisco): I like the idea of being able to post “We’re making a typographical update”. I think we have precedent with other updates, where we include a sunset date so that any member who thinks it would materially change the ballot can object, and that change can’t be made. You can do it like the minutes: we can raise it on the next teleconference and approve.
- Dimitris Zacharopoulos (HARICA): To be clear, only four people would be able to make these sorts of changes: The Chairs and Vice-Chairs of the WGs and the Forum.
- Ben Wilson (DigiCert): It is necessary to also update the tables with relevant dates in the ballots. Should the editing also include removing old dates that have passed?
- Dimitris Zacharopoulos (HARICA): I proposed this, but a few people may have forgotten it.
- Ryan Sleevi (Google): This also came up in governance. I’m not sure we want to remove entries, but we can talk about that separately. When it comes to flexibility afforded for editing, what might the Bylaws say? A proposal floating during governance was to include the table of enforcement dates and the table of changes to allow the chair and vice-chair to update them independently of the ballots, also highlighting that these tables are informative and non-normative.
- Dimitris Zacharopoulos (HARICA): Most of the fixes Tim has made recently have been fixing incorrect references.
- Tim Hollebeek (DigiCert): That’s maybe only a third of the changes. That one is interesting, because while it’s meant to be an editorial change, it functionally is a substantive change, because many of the original references were nonsensical. Maybe half of the “minor” changes are functionally substantive, because they’re changing the requirements in a way that makes them make sense.
- Jos Purvis (Cisco): If that’s the flexibility we want, then by providing that objection process, you could object if you disagree with the change. That is, you don’t have to object just because something changes, but because you think the change goes in the wrong direction.
- Dimitris Zacharopoulos (HARICA): As Ryan mentioned, it’s hard to describe the flexibility in the Bylaws.
- Tim Hollebeek (DigiCert): And that’s why the spring cleaning has been done as ballots. Correcting them as they come up is probably desirable, and I wish we had more unanimous consent procedures for making smaller changes.
- Dimitris Zacharopoulos (HARICA): Perhaps there is a way to capture this in the Bylaws to allow the editor the flexibility.
- Ryan Sleevi (Google): An issue that came up in the governance WG when this was discussed was something others have mentioned, which is that a change in references may trigger a change in the IP obligations. The example for the governance WG was section 3.2.2.4 of the Baseline Requirements, which describes how to validate a domain name. If we imagine a scenario where we had that text, but with incorrect references, such that one of its validation methods was not required because a typo pointed the reference at a different subsection, then correcting the reference would change the IP obligations by making that method required to implement. The reason we have ballots is to allow folks to assess that IP risk and vote on it. Maybe we want to give the flexibility, but that would likely raise the risk of someone being out of the office for a week and coming back to find new IP commitments.
Issue: Creating an official redline
- The Chair/Vice-Chair creates an official redline comparing changes to the previous Guideline
- PDF, DOCX, HTML versions are automagically created
- PDF is published on the public web site
- DOCX is uploaded to the wiki (optional)?
Issue: Next steps
- Formalize the process
- Tests with automatically produced red-lines
Server Certificate WG Plenary
Attendees: Adriano Santoni (Actalis), Agnieszka Strelcow (Certum), Aleksandra Kapinos (Certum), Alex Wight (Cisco), Anna Evlogimenou (Athens Stock Exchange), Antonios Eleftheriadis (HARICA), Arno Fiedler (D-TRUST), Atsushi Inaba (GlobalSign), Ben Wilson (DigiCert), Benjamin Gabriel (DarkMatter), Bruce Morton (Entrust), Chen Xiaotong (SHECA), Clemens Wanko (ACAB’c), David Hsiu (KPMG Taiwan), Dean Coclin (DigiCert), Derek Bronson (Amazon Trust Services), Devon O’Brien (Google), Dimitris Zacharopoulos (HARICA), Don Sheehy (WebTrust), Doug Beattie (GlobalSign), Enrico Entschew (D-TRUST), Eva Van Steenberge (GlobalSign), Fotis Loukos (SSL.com), Frank Corday (SecureTrust), Geoff Keating (Apple), George Fergadis (HARICA), Huo Haitao (360), Ioannis Kostopoulos (HARICA), Irina Hedea (Deloitte), J.P. Hamilton (Cisco), Jeff Ward (WebTrust/BDO), Jeremy Rowley (DigiCert), John Balafas (Athens Stock Exchange), Jos Purvis (Cisco), Karina Sirota (Microsoft), Katerina Tsagkalidou (HARICA), Khalid Nasser Bin Kleab (NCDC), Kirk Hall (Entrust Datacard), Leo Grove (SSL.com), Mariusz Kondratowicz (Opera), Meshal Mohammad Alshahrani (NCDC), Michael Markevich (Opera), Michael Slaughter (Amazon Trust Services), Michal Malinowski (Certum), Mike Agrenius Kushner (PrimeKey), Mike Guenther (SwissSign), Mike Reilly (Microsoft), Nathalie Weiler (SwissSign), Nick Naziridis (SSL.com), Nikolaos Soumelidis (QMSCert), Peter Miskovic (Disig), PinJung Chiang (ChungHwa Telecom), Richard Wang (360 Group), Robin Alden (Sectigo), Ryan Hurst (Google Trust Services), Ryan Sleevi (Google), Scott Rea (DarkMatter), Sissel Hoel (Buypass), Stefan Lautenschlager (D-TRUST), Stephan Wolf (GLEIF), Steve Roylance (Ubisecure Oy), Tadahiko Ito (SECOM), Tim Callan (Sectigo), Tim Hollebeek (DigiCert), Tim Shirley (SecureTrust), Timo Schmitt (SwissSign), Tobias Josefowitz (Opera), Tom Zermeno (SSL.com), Tomas Gustavsson (PrimeKey), Tony Perez (GoDaddy), Trevoli Ponds-White (Amazon Trust Services),
Vijay Kumar (eMudhra), Wayne Thayer (Mozilla), Wei Yicai (GDCA), Xiu Lei (GDCA), Xue Yingjun (State Cryptography Administration).
Approval of SCWG Minutes from last teleconference
360 Root Program Update
Presenter: Huo Haitao (Halton) (360)
Note Taker: Enrico Entschew (D-Trust)
- Presentation: 360 Root Program Update
Halton’s presentation consisted of three parts:
- 360 browser update since March 2019
- 360 root store update
- Plan for the near future
360 browser update since March 2019:
360 has two browser products: Secure Browser and Extreme Browser. Current releases are based on Chromium 69. The Secure Browser is available for Windows and Linux and the Extreme Browser is available for Windows and MacOS.
Security related changes for the browsers
1. TLS 1.3 official edition support
- Backported BoringSSL changes from Chromium 72; enabled by default
- Downgrade protection feature enabled
2. CVE fixes backport
- 360 backports high-risk CVE fixes from Chromium, e.g.:
- CVE-2019-5786: Use-after-free in FileReader.
- CVE-2018-20065: Handling of URI actions in PDFium in Google Chrome prior to 71.0.3578.80 allowed a remote attacker to initiate potentially unsafe navigations without a user gesture via a crafted PDF file.
3. (New) CRLSets support
- Like Google Chrome, 360 maintains a global CRLSet in China
- The aim is to block problematic certificates in emergency situations, since customers in China cannot reach Google services to fetch Chrome’s CRLSet
- Currently, the list is maintained manually by an admin.
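The CRLSet mechanism described above amounts to a set-membership check run before trusting a certificate. A minimal sketch follows; the pair format and entries are illustrative assumptions, not Chromium’s or 360’s actual serialization:

```python
# Hypothetical CRLSet: a set of (issuer SPKI hash, serial number) pairs.
# The entries here are made up for illustration.
crlset = {
    ("spki:3a9f", "01f3"),
}

def is_blocked(issuer_spki_hash: str, serial_hex: str) -> bool:
    """Return True if the certificate appears in the emergency block list."""
    return (issuer_spki_hash, serial_hex) in crlset
```

Because the check is a simple lookup against a centrally pushed list, it can block a problematic certificate without waiting for CRL or OCSP infrastructure to respond.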
4. (New) Cert Error enhancement
- Comprehensive messages to help non-technical people understand the risks of certificate errors before clicking ignore/proceed.
- Supports 9 common errors, including NET::ERR_CERT_DATE_INVALID, NET::ERR_CERT_AUTHORITY_INVALID, etc.
Status of error types from 360 browser side (May 2019)
- WEAK_SIGNATURE_ALGORITHM: SHA1
- COMMON_NAME_INVALID: various causes: extensions, proxies, antivirus/firewalls, server misconfiguration
- AUTHORITY_INVALID: self-signed certificates
- CERTIFICATE_TRANSPARENCY_REQUIRED: certificate not present in CT logs
Top five URLs with fake certs (May 2019)
- www.51test.net, v.qq.com, v.qq.com, mini.easday.com, www.baidu.com
- The errors are visible to the browser; further investigation is needed into why they happen.
- One guess is that malware installs roots on the user’s system (PC or phone), signs certificates for those websites, installs them, and monitors the traffic
360 root store update:
13 CAs with 53 roots had joined the 360 CA program by June 2019. Sectigo covers almost half of SSL-protected connections, based on the CA certificates contained in the 360 root store; DigiCert is second.
More in-depth analysis reveals that more than 90% of SSL connections are invalid. After removing ad-related extensions, the share of valid SSL connections increases to 37%; 53% of those are covered by CA roots in the 360 root store.
New to the 360 root store are: SSL.com, Certum, Sectigo, Google, SHECA
Introduction to UU, an example of an app that installs a self-signed root on the customer’s system to redirect HTTPS with fake certificates, e.g. for *.google.com. The app can decrypt user traffic when the user visits websites.
Plan for the near future:
- Within the next 3 months the 360 browsers will be updated to the latest Chromium base (Chromium 76). This will be the Extreme Browser 12 release. In the future there will be a single browser branch supporting all platforms: Windows/MacOS/Linux
- Extreme Browser 11 becomes Secure Browser 11
- Secure Browser 10 for Linux public release (can be downloaded from the official website)
2. CRLSets auto-crawl support, using the CRL URLs embedded in certificates, will be added soon.
3. 360 browsers will show a warning on TLS 1.0/1.1 websites to help IT administrators update their TLS cipher suites.
4. TLS 1.3 support and backports of important CVE fixes
Q: Wayne Thayer (Mozilla) asked which CRLs flow into the CRLSets. A: Intermediate, OV, and DV certs.
Q: Wayne asked if every CRL from a CA in the program will be imported into the CRLSet. A: Yes. 360 will use a crawl mechanism to keep the data as fresh as possible.
Q: Wayne asked how often the CRLSet data on clients in the field will be updated. A: When a new version is available on the server, clients get the new data on the fly.
Q: Does the client fetch the CRLSet data while validating a certificate? A: Yes.
Q: Wayne asked about the CA that was rejected due to its RSA 2048 key. Does 360 have a policy on root certificates containing RSA 2048 keys that expire after a certain date? A: 360 has no such requirement and follows the BRs. In this specific case the CA is new and not trusted on any platform or by any other browser. If others trust the CA, 360 will review the case again.
Apple Root Program Update
Presenter: Geoff Keating (Apple)
Note Taker: Mike Reilly (Microsoft)
- New major OSs
- iOS 13 (and iPadOS 13)
- macOS Catalina (10.15)
- watchOS 6
- tvOS 13
- TLS certs in new OSs
- More details at https://support.apple.com/en-us/HT210176
- For system CAs: immediately
- For user added CAs (e.g. Enterprises): starting 1 July
- EKUs now required
- serverAuth required for all TLS certs
- Algorithms restricted
- No more SHA-1 allowed
- No RSA Keys < 2048 bits allowed
- Policy restriction supported
- Can now restrict CAs to particular certificate policies
- Based on type of use, not cert content
- General X.509 basic policy not affected
- Useful for CAs shutting down services but needing to keep timestamping valid
- Common name ignored
- commonName completely ignored from now on
- Host names & IP addresses now must be in subjectAltName
- Some years before CAs can rely on this
- Ryan Hurst (Google) asked: How does Apple envision CAs producing a cert without a commonName? Geoff Keating: You can produce one if you like and use it if you like. In 10.13 and later, handling of a critical SAN extension has been fixed.
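Ignoring commonName means host-name matching runs purely over the subjectAltName dNSNames. A simplified sketch of that check (real verifiers, per RFC 6125, additionally restrict wildcards to the left-most label):

```python
def host_matches_san(hostname: str, san_dns_names: list) -> bool:
    """Match a hostname against SAN dNSNames only; commonName is ignored.
    A '*' label matches exactly one label (simplified wildcard rule)."""
    host_labels = hostname.lower().split(".")
    for name in san_dns_names:
        labels = name.lower().split(".")
        if len(labels) != len(host_labels):
            continue
        if all(p == "*" or p == h for p, h in zip(labels, host_labels)):
            return True
    return False
```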
- Validity Limited
- For system CAs: enforce BR limits
- For all CAs (including enterprise CAs): 825 days limit for certs issued after July 1
- Computed by notAfter – notBefore (with slop of about 25 hours)
- Question from Leo Grove (SSL.com) on enterprise certs issued with lifespans longer than 825 days: will they be trusted? Geoff: Not at all.
- Jos Purvis (Cisco) asked: So, for clarity, a 5-year cert issued off a private CA would not be trusted. Why? Geoff: Yes, it would not be trusted. However, we reserve the right to change this as it’s still in beta. We found that it doesn’t affect many people. Jos: It does affect many people, including Cisco. Geoff: We may have to change this policy depending on what breaks.
- Doug Beattie (GlobalSign): We have some customers using SHA-1, 5-year certs, etc. Some private customers use “any EKU,” SHA-1, no date limits. Enterprise CAs don’t need to follow the BRs, so why would iOS enforce these and break enterprises? Geoff: The SHA-1 requirement will not change.
- Ryan Sleevi (Google): Chrome applies the same enforcement to both public and private PKI, but allows some enterprise policies that let administrators make changes as needed for private PKI. commonName can be abused, and it’s good to see Apple is now going to ignore it. Chrome may look at similar policy updates in the future.
- Jos Purvis (Cisco) added the main concern is really around the 825 day limit which for example, would apply to closed FEDRAMP enclaves which are not open to the outside world. Trying to do automated cert issue in these scenarios could cause issues. Geoff: felt it would not require automated updates and could be done manually. Geoff also mentioned that all these policies can be bypassed if the certificate is directly trusted (e.g. Self Signed certs) with the exception of the no SHA-1 policy.
- Alex Wight (Cisco): When have expiration dates ever saved anyone from an incident? Ryan Hurst (Google) disagreed. The discussion of this disagreement was taken offline.
- Ryan Hurst (Google): If we directly trust (enterprise CAs) does cert expiry requirement apply? Geoff: No as it’s being manually (directly) trusted.
- Ryan Sleevi (Google): Confirming that if a cert is added as a trust anchor directly then these policies don’t apply. Two categories for certs: either system added or administratively (directly) added. Geoff: correct, directly added certs will not have this policy enforced but after July 1st system added certs will need to meet the 825 day validity policy limit.
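The validity computation above (notAfter minus notBefore, capped at 825 days with “slop of about 25 hours”) can be sketched as follows; the exact tolerance Apple enforces is an assumption based on that remark:

```python
from datetime import datetime, timedelta

# 825 days plus roughly 25 hours of slop, per the discussion above.
MAX_VALIDITY = timedelta(days=825, hours=25)

def within_apple_limit(not_before: datetime, not_after: datetime) -> bool:
    """True if the certificate's validity period fits the 825-day limit."""
    return (not_after - not_before) <= MAX_VALIDITY
```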
- Bad DER
- Bad DER in non-critical extensions now causes certificates to be rejected
- Example: name constraints is marked non-critical, and does not parse
- Ryan Sleevi (Google): Does this apply to all extensions? Geoff: Only those that are understood as part of the certificate. It’s not based on the critical bit being set. If the verifier doesn’t know what the extension is meant to be, it skips over it
- Robin Alden (Sectigo): is there public info on live examples of this? Geoff: all were enterprise certs but not publicly trusted certs from public CAs. Only found with local and enterprise CAs.
- Wayne Thayer (Mozilla): Any stats available on how much this happens and the impact? Geoff: yes, but not publicly available. To the best of his knowledge nothing seen thus far impacts a larger number of customers.
Cisco Root Program Update
Presenter: Jos Purvis (Cisco)
Note taker: Alex Wight (Cisco)
Presentation: Cisco CABF 47 Update
Things have not changed much since Cupertino. Internal team reorg, team personnel shifts, new priorities for Cisco.
Cisco Product Security Baseline (PSB) requirements were adjusted in last 6 mos to make the consumption/usage of trust store bundles more ubiquitous and normalized across Cisco’s product portfolio. Now, once Cisco trusts your CA, you’re trusted across more and more Cisco products.
Core Bundle – added AWS root CAs; in conversations with Google to add Google’s cloud CA. Removing VeriSign G3 in the next release in Q3 CY19. On track to shift the Intersect Bundle from hand-curated to requiring CCADB membership starting in late Q4 CY19.
Google Root Program Update
Presenter: Devon O’Brien (Google Chrome)
Note Taker: Robin Alden (Sectigo)
Devon’s presentation is at https://drive.google.com/open?id=1TcLbJlbHhe0Mff57TaZXsghtXdFb5mGB
A reminder about TLS 1.0 and 1.1 deprecation, which we discussed at Cupertino; it will hit in early 2020.
CT log additions and removals (shards)
- Addition of 2022 and 2023 shards.
- All of the 2018 shards have been removed.
Published UI and UX research by the Chrome UI security team:
- Fixing HTTPS Misconfigurations at Scale: An Experiment with Security Notifications
- The Web’s Identity Crisis: Understanding the Effectiveness of Website Identity Indicators
EV UI may be moving to page info in Chrome 77. The Canary (alpha) release shows the experience, although this may change before release.
Microsoft Root Program Update
Presenter: Mike Reilly (Microsoft)
Note Taker: Tim Hollebeek (Digicert)
- Presentation: Microsoft Root Program Update
Ben: The type of change/notes column is pretty narrow.
Karina: I’m assuming it would expand to fit if the content is longer.
Dimitris: Is there a way for members to register to get notifications of updates?
Karina: We’re looking into how to provide that functionality.
Robin: Does Microsoft intend to remove the EV indicator?
Mike: I run the root store, not Edge. We’re working with that team; those things are TBD.
Mozilla Root Program Update
Presenter: Wayne Thayer (Mozilla)
Note Taker: Ryan Sleevi (Google Chrome)
An “Audit Letter Validation (ALV)” button will soon be added to intermediate certificate records.
- For intermediate certs marked “Audits Same as Parent”, CCADB will look up the cert hierarchy to find the parent cert that has the audit statements. Then ALV will be run to ensure that the intermediate cert is indeed in those audit statements that are applicable according to the “Derived Trust Bits” field.
- Derived Trust Bits are determined from the cert’s EKU if present, otherwise they are based on the applicable root certs’ trust bits. (cross-signed roots also taken into account)
- We plan to automate running ALV on intermediate certs, and provide public-facing reports of the results.
CAs are responsible for the audits of their subordinate CAs.
- This will help us improve audit compliance for intermediate certs by ensuring that they are either in the scope of the root’s audit or covered by their own currently-valid audit.
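The “Derived Trust Bits” rule above can be sketched as a small function; representing trust bits as sets of strings is an assumption made for illustration:

```python
from typing import List, Optional, Set

def derived_trust_bits(eku: Optional[Set[str]],
                       root_bits: List[Set[str]]) -> Set[str]:
    """Use the intermediate's EKU when present; otherwise take the union
    of the trust bits of the applicable roots (cross-signed roots included)."""
    if eku:
        return set(eku)
    result = set()
    for bits in root_bits:
        result |= bits
    return result
```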
Audit Case Workflow: https://ccadb.org/cas/updates#audit-case-workflow
- WebTrust: Enter the WebTrust Seal URL into the audit statement link field, and CCADB will automatically map it to the report URL.
- ETSI: Provide the URLs to the audit statements on your auditor’s website.
- If neither of those options is possible, then you can still use Bugzilla.
To improve the success rate of Audit Letter Validation (ALV), please have your auditors use the following format guidelines in all future audit statements.
- Accepted date formats (month names in English):
- Month DD, YYYY example: May 7, 2016
- DD Month YYYY example: 7 May 2016
- YYYY-MM-DD example: 2016-05-07
- No extra text within the date, such as “7th” or “the”
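The three accepted formats map directly onto `strptime` patterns. A sketch of how an ALV-style parser might try them in order, assuming an English locale:

```python
from datetime import datetime

# The three accepted audit-statement date formats listed above.
AUDIT_DATE_FORMATS = ("%B %d, %Y", "%d %B %Y", "%Y-%m-%d")

def parse_audit_date(text: str) -> datetime:
    """Try each accepted format in turn; raise if none matches."""
    for fmt in AUDIT_DATE_FORMATS:
        try:
            return datetime.strptime(text.strip(), fmt)
        except ValueError:
            continue
    raise ValueError("unrecognized audit date: " + text)
```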
- SHA256 Thumbprint
- No colons, no spaces, and no linefeeds
- Uppercase letters
- Should be encoded in the document (PDF) as “selectable” text, not an image
- Mozilla’s root store policy requires “SHA256 fingerprint of each root and intermediate certificate that was in scope”
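A normalizer implementing the thumbprint guidelines above might look like this sketch:

```python
import re

def normalize_sha256_fingerprint(raw: str) -> str:
    """Remove colons, spaces, and linefeeds; uppercase the hex; reject
    anything that is not exactly 64 hex characters afterwards."""
    cleaned = re.sub(r"[\s:]+", "", raw).upper()
    if not re.fullmatch(r"[0-9A-F]{64}", cleaned):
        raise ValueError("not a SHA256 fingerprint: " + raw)
    return cleaned
```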
Automated Test Website Validation
A new button called “Test Websites Validation” has been added to Audit and Root Inclusion Cases. The validation gets run automatically when a test website is entered or changed. If you update your test website without changing the URL, then you can re-run the validation via the button.
- Currently the validation checks that the test website will get the correct result in NSS (i.e. valid, expired, revoked), and that the TLS cert chains up to the root cert in the Root Case.
- In the future the validation will also include lint testing.
Root Inclusion Requests
Thanks to those of you who are entering and updating your root inclusion requests directly in the CCADB.
- Please remember to also update your Bugzilla Bug to indicate whenever you are ready for your information to be reviewed again. We have not yet implemented tools/integration to automate this interaction between Bugzilla and CCADB.
Mozilla Policy Update
I am working on a significant Root Store Policy update covering sixteen issues in total. I strongly encourage CAs to follow along on the mozilla.dev.security.policy list and to contribute to the discussion. It is especially important for CAs to identify policy changes that will be difficult for them to implement, or to implement in the required timeline. Unless otherwise stated, Mozilla expects CAs to comply with new policies within 1-2 months of the effective date of the new version.
Here is a rundown of the most significant proposed changes:
Beginning on 1-April, 2020, end-entity certificates MUST include an EKU extension containing KeyPurposeId(s) describing the intended usage(s) of the certificate. This requirement is driven by the issues we’ve had with non-TLS certificates that are technically capable of being used for TLS. Some CAs have argued that certificates not intended for TLS usage are not required to comply with TLS policies.
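The proposal above reduces to a simple compliance predicate. A sketch follows; the cut-over semantics (comparing the issuance date against 1 April 2020) are my reading of the proposal, not final policy text:

```python
from datetime import date

EFFECTIVE_DATE = date(2020, 4, 1)  # proposed effective date from the text

def violates_eku_requirement(issued: date, has_eku: bool) -> bool:
    """True if an end-entity certificate issued on or after the effective
    date lacks an EKU extension."""
    return issued >= EFFECTIVE_DATE and not has_eku
```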
CP/CPS versions dated after 30 September 2019 can’t contain blank sections and must – in accordance with RFC 3647 – use “No Stipulation” to mean that no requirements are imposed. That term cannot be used to mean that the section is “Not Applicable”. For example, “No Stipulation” in section 3.2.2.6 “Wildcard Domain Validation” means that the policy allows wildcard certificates to be issued.
Section 8 “Operational Changes” will apply to unconstrained subordinate CA certificates. With this change, any new or existing unconstrained subordinate CAs that are sold or transferred to a third party must go through a public discussion before issuing certificates.
We’ve seen a number of instances in which a CA has multiple policy documents and there is no clear way to determine which policies apply to which certificates. With this change, CAs must provide a way to clearly determine which CP/CPS applies to each root and intermediate certificate. This may require changes to CAs’ policy documents.
Mozilla already has a “required practice” that forbids delegation of email validation to 3rd parties for S/MIME certificates. With this update, I have proposed that we forbid delegation of verification of the domain component in our policy. This issue is still under discussion, and because it may have a significant impact on CAs’ email validation practices, I encourage everyone to monitor and contribute to this discussion.
We’re also planning to add specific S/MIME revocation requirements to policy instead of the existing unclear requirement for S/MIME certificates to follow the BR 4.9.1 revocation requirements.
We still need to discuss a proposal to require newly included roots to meet all current requirements, even if the requirement wasn’t in place at the time the root was created. This would, for instance, forbid roots without BR audits or with negative serial numbers from being added to the Mozilla program. CAs would instead be expected to generate new roots.
Other changes include:
- Clarify the Mozilla-specific requirements in ECDSA curve-hash pairs in section 5.1 (this may be delayed pending an analysis of the impact of the change on existing certificates)
- Add the P-521 exclusion in section 5.1 of the Mozilla policy to section 2.3 where we list exceptions to the BRs.
- Change references to “PITRA” in section 8 to “Point-in-Time Audit”, which is what we meant all along.
- Update required versions of audit criteria in section 3.1
- Formally require incident reporting
I am compiling all of these changes in the 2.7 branch on GitHub. My original target was for the updates to take effect on July 1st, but I currently expect them to take effect on September 1st.
Update on Intermediate Preloading and CRLite
Our Intermediate Preloading feature consists of preloading all intermediate CAs known to the Mozilla Root Program into users’ profiles. This feature is intended to resolve missing intermediate errors without the privacy compromise of AIA-fetching, and to ensure that Firefox only trusts intermediates which have been disclosed by CAs. We have landed this feature in Nightly, our experimental version of Firefox. It’s been temporarily disabled while we investigate potential performance risks. We’re targeting an official release a little bit later in the year.
We’re implementing an idea that comes from academia, namely CRLite, to push all end-entity revocation information to clients. The idea makes clever use of existing information about the certificate ecosystem that comes from CT logs and probabilistic data structures to efficiently and effectively push this information to clients. We’re in the final phases of landing code for our prototype. In comparison to the academic paper, we have reduced file sizes as well as reduced revocation checking times. This technique allows clients to do revocation checking in a fast and private way.
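CRLite’s space savings come from probabilistic set structures. The real design uses a cascade of filters built from CT data; as a rough intuition only, here is a toy Bloom filter (all parameters arbitrary):

```python
import hashlib

class Bloom:
    """Toy Bloom filter: k hash positions per item over an m-bit array."""

    def __init__(self, m: int = 1024, k: int = 3):
        self.m, self.k, self.bits = m, k, 0

    def _positions(self, item: bytes):
        for i in range(self.k):
            digest = hashlib.sha256(bytes([i]) + item).digest()
            yield int.from_bytes(digest[:4], "big") % self.m

    def add(self, item: bytes) -> None:
        for pos in self._positions(item):
            self.bits |= 1 << pos

    def __contains__(self, item: bytes) -> bool:
        # May return a false positive, never a false negative.
        return all(self.bits >> pos & 1 for pos in self._positions(item))
```

A membership query touches only k bit positions, which is what lets a client answer “is this certificate revoked?” without downloading full CRLs; the cascade construction then eliminates false positives by checking each filter level against the known certificate population.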
TLS 1.0 and 1.1 Deprecation
As was announced last year, Apple, Google, Microsoft, and Mozilla are coordinating to disable TLS 1.0 and 1.1 less than a year from now, in March 2020. TLS 1.0 still accounts for roughly 8000 of the top 1 million websites in Firefox. We could use CAs help in getting the word out about this change. One suggestion is for CAs to notify their customers whose servers don’t yet support TLS 1.2 during the renewal process. This would be a great service to those customers and the internet as a whole.
Q & A
Jeremy Rowley (DigiCert): Regarding the proposed changes to Section 8, is it expected to apply to S/MIME?
Wayne Thayer: Currently, yes.
Ryan Hurst (Google): You mentioned Section 8 policies applying to existing intermediate CAs. Are those also expected to apply to…?
Wayne Thayer: Yes
Jeremy Rowley (DigiCert): What would these proposed changes mean for cross-signed CAs that might already have roots in Mozilla’s program?
Wayne Thayer: I’d have to double-check Section 8 and the proposed language. I believe we have a path for when the receiving CA is already participating in our program.
Enrico Entschew (D-Trust): When will the required updates to the CP/CPS regarding multiple documents take effect?
Wayne Thayer: I’ll be coming back to that question.
Jeremy Rowley (DigiCert): Some CAs treat the requirement not to delegate e-mail validation to a third party fairly strictly. Other CAs do not consistently follow this requirement. Should it be an incident report? Will there be some sort of amnesty phase?
Wayne Thayer: The answer is “Yes”. What I mean is that because this is presently a “Forbidden Practice,” CAs should be treating it as an incident. However, because some ambiguity exists between ‘required practice’ and ‘forbidden practice’ and the policy documents, it’s not been as consistently enforced. My intent is to get to a point where we have only the policy document listing the required practices. CAs should be looking at Required/Forbidden practices and treating them as such. However, because it’s not presently in place in the policy, it may not be enforced to the same degree a policy requirement gets enforced.
Ben Wilson (DigiCert): Is it possible to use the list of sites affected by TLS 1.0/1.1 deprecation to highlight the CAs, to help the CAs reach out to their customers?
Wayne Thayer: Yes, I can try and get that information to you.
Don Sheehy (WebTrust/CPA Canada): You mentioned audit report authentication and going to the seal to make sure the report is authentic. What do you do in the scenario where there is a qualified audit and thus no seal?
Wayne Thayer: That goes to the third option – post the attachment to the Bugzilla Bug. Then Kathleen reaches out to the auditor to verify the report. That’s the step we’d like to remove in general, because it’s busy work.
Guest Speaker – Use cases for digital certificates with embedded LEIs – current state and potential next steps
Presenter: Stephan Wolf (GLEIF)
- Presentation: Use cases for digital certificates with embedded LEIs – current state and potential next steps
Plenary Meeting – Day 2 (Thursday, 13 June)
Server Certificate Working Group
Presenter: Arno Fiedler (ETSI)
Note Taker: Clemens Wanko (ACAB’c / TUV Austria)
- Presentation: ETSI Update
There were updates in the following areas:
- ETSI ESI has adopted SR 119 403-3 (extended audit rules for PTC) as requested by Mozilla; the official version is published.
- ETSI ESI is still discussing the comments on EN 319 403 (Audit Rules), no quick win at last meeting, new round ongoing, discussing TSP Key Lifecycle; matching ISO 17065 and Audit decision.
- ETSI has set up a new work item to update EN 319 411-1 (certificate policy requirements) with current BR/EVG links.
- A new ETSI secretariat email address for communicating with the CA/Browser Forum will be defined.
- Information on available standards and current activities:
Presenter: Clemens Wanko (ACAB’c / TUV Austria)
Note Taker: Arno Fiedler (ETSI)
Presentation: ACAB’c Update
New secretary: Camille Gerbert; email@example.com; +353 (0) 876748511; firstname.lastname@example.org
- Membership numbers are increasing
- Members certify more than one third of ETSI/eIDAS CAs
- Members certify CAs in more than half of EU countries
Services for auditors and CA
- working documents (e.g. CA/B Forum Audit Attestation Template)
- position papers & guidance on standard interpretation
- experience exchange
- contacts with relevant stakeholders:
- CA/B Forum
- EU Commission
- ETSI update contribution (…403, …403-2 and …403-3)
- Position paper (ongoing)
- Certification scheme (open source)
- CA/TSP event in 2020
Special Challenges and concerns for Certification Authorities located in Asia
Presenter: Vijay Kumar (eMudhra)
Minute Taker: Stefan Lautenschlager (D-TRUST)
- Atsushi Inaba, Globalsign, Japan
- Chen Xiaotong, SHECA.com, China
- Richard Wang, 360 Group, China
- Tadahiko Ito, SECOM, Japan
- Vijay Kumar, eMudhra, India
- Wei Yicai, GDCA, China
- Xiu Lei, GDCA, China
- DarkMatter, UAE
- ChungHwa Telecom., Taiwan / Chinese Taipei
- NCDC, Saudi Arabia
- And Other Asian CAs
- Presentation: Special Challenges and concerns for Certification Authorities located in Asia
Issue 1 of 6: Some CAs use PrintableString, and cannot encode Unicode characters
Establish awareness of using UTF8String for Subject values.
PrintableString does not meet “O” value requirements.
No action required.
Issue 2 of 6: State/Locality name translation from Japanese to English has multiple forms
Tokyo has more than 10 English representations; the QGIS has only the Japanese representation.
Should the CA maintain a standard translation for city and state, or is there an official English source such as UN/LOCODE (a UNECE initiative)?
ISO 3166 only goes down to state level, does not include city names.
Ryan Sleevi (Google) suggested beginning by looking at CAs’ current practice, and recommended establishing transparency by disclosing that practice, e.g. in the CP/CPS or an appendix, and establishing best practice that takes the end-user experience into account.
Issue 3 of 6: Organization name translation concerns
Organizations request a specific (historic) spelling, different from the official present-day transliteration.
Should the CA stick to the local language as in the QGIS, or is the CA allowed to decide on a reasonably acceptable translation?
Jeremy Rowley (DigiCert): The EV Guidelines contain instructions; OV is challenging.
Dimitris Zacharopoulos, Chair: review the Guidelines for translation and identify gaps.
Issue 4 of 6: Ambiguity in translation from Asian Languages to English
Customers demand certificates in English for a global audience, or the CA insists on approving only the English form.
Same action as issue 3
Issue 5 of 6: Public Suffix List (PSL) needs more clarity
Local governments register static domains under the PSL (e.g. some-state.lg.jp) and then create live domains under them.
Comment from Ryan Sleevi (Google): Country-code top-level domains (ccTLDs) are under the sovereignty of the country they belong to; ICANN policy is not mandatory. The ccTLD operator is responsible for subdomains. There is a documented process to update the PSL; Gerv had collected the information. ICANN and IANA point to the operator of the ccTLD, and the policy comes from there. This issue has been messy for two decades.
Issue 6 of 6: Disputed Territories country code, which is not part of ISO 3166 specifications
E.g.: Republic of Crimea
Jeff Ward (WebTrust/BDO): Ballot 88 has instructions [https://cabforum.org/2012/09/12/ballot-88-br_9_2_4_errata-iso3166/] along these lines:
Is it a country that is a member of the UN, or a territory recognized by at least 2 nations?
If so, take the code and record its use; otherwise use only XX.
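The decision rule summarized above reduces to a simple predicate. Modeling the recognition test (UN member, or recognized by at least two nations) as a boolean input is my simplification of what is really a manual policy determination:

```python
def subject_country_value(iso_code: str, recognized: bool) -> str:
    """Use the ISO 3166 code when the territory is a UN member or
    recognized by at least two nations; otherwise record "XX"."""
    return iso_code if recognized else "XX"
```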
Report from Network Security Subcommittee
Presenter: Ben Wilson (Digicert)
Minute Taker: Robin Alden (Sectigo)
Ben recapped that on Tuesday he had given an introduction to the subgroups of the Network Security Subcommittee:
- Threat modelling
- Pain points
- Document structure
- Authentication & access control
Report from Validation Subcommittee
Presenter: Tim Hollebeek (Digicert)
Minute Taker: Wayne Thayer (Mozilla)
Tim Hollebeek summarized the Validation Subcommittee meeting:
- Did a quick review of the past 3 months – ballot SC17 passed
- Continuing work stemming from the validation summit – methods 6, 12, and 10 remain to be updated
- The goal is to complete validation summit work by the next F2F meeting
- Dean Coclin presented four ideas to improve EV:
- Define list of approved EV sources
- Include trademark/wordmark in certs
- Allow LEIs in certs
- Add CAA checks/respect CAA for cert type
- Received feedback on these ideas; the next step is to take it to the subcommittee
- Refer to Tim and Dean’s slides for further information on the subcommittee meeting
WebTrust Update
Presenters: Jeff Ward (BDO) & Don Sheehy (CPA Canada)
Minute Taker: Kirk Hall (Entrust Datacard)
- Presentation: WebTrust Update
1. “WETSI” – WebTrust and ETSI are working together on common issues
- Continuing discussions following the Berlin meeting
- WebTrust Seal vs ETSI certification understanding
- Terminology – moving to common language
- Continuing issues faced
- Potential for working together
2. Current Status of Updated WebTrust Documents:
a) WebTrust Baseline + NS v2.4
- Effective for periods beginning on or after June 1, 2019
- Updated SSL Baseline Audit Criteria to conform to SSL Baseline Requirements v1.6.2 and Network and Certificate System Security Requirements v1.2
- Principle 1, Criterion 5 – The CA’s CP and CPS must now follow the RFC 3647 format; RFC 2527 has been sunset.
- Principle 2, Criterion 2.14 – New criterion added to address certificates with underscore characters. Criteria 2.14-2.16 renumbered to 2.15-2.17.
- Principle 2, Criterion 4.6 – Re-validations cannot use methods 3.2.2.4.1 and 3.2.2.4.5 as of 1 August 2018
- Principle 2, Criteria 5.2, 5.3 and 5.4 – Updated revocation criteria and timelines
- Principle 4 – Updates made to conform to CA/B Forum Ballot SC3
b) WebTrust for CA v2.2
- Effective for periods beginning on or after June 1, 2019
- Minor updates made to conform to ISO 21188:2018
c) WebTrust for Extended Validation v1.6.8
- Effective for periods beginning on or after June 1, 2019
- Principle 1, Criterion 4 – RFC 3647 requirement with the sunsetting of RFC 2527
- Principle 2, Criteria 5.2-5.4 – Updated revocation requirements based on changes to the BRs
No changes made to WebTrust for Extended Validation Code Signing, or Publicly Trusted Code Signing
d) WebTrust for RA v1.0
- Effective for periods beginning on or after April 30, 2019
- Provides a framework for third-party assurance providers to assess the adequacy and effectiveness of the controls employed by a Registration Authority (RA) that performs a portion or all of the registration-related functions for a Certification Authority (CA) on an outsourced basis.
- Audit guidance for registration functions conducted entirely by the CA itself is covered in the document WebTrust Principles and Criteria for Certification Authorities.
3. Reporting Requirements and Sample Reports
a) Reporting requirements are illustrated in the matrix at https://www.cpacanada.ca/en/business-and-accounting-resources/audit-and-assurance/overview-of-webtrust-services/principles-and-criteria
b) Sample reports have been developed under each standard since the W4CA program began – current ones are at https://www.cpacanada.ca/en/business-and-accounting-resources/audit-and-assurance/overview-of-webtrust-services/practitioner-qualification-and-guidance
4. WebTrust Reports Available – Full Lifecycle
a) Root Key Generation Ceremony Report (“birth certificate”)
b) New – Key Protection (provides assurance that once a key is created, and up to the point it is moved into production, it was properly safeguarded)
c) Point in Time (as-of date for testing the design and implementation of controls)
d) Period of Time (same as Point in Time, but also tests transactions over a period of 2–12 months to ensure controls are operating effectively)
e) New – Key Transportation, Migration & Destruction (under development)
5. Current Status of Other WebTrust Task Force Projects
a) Practitioner guidance for auditors:
- Under development, covering public and private CAs
- Versions for US, Canada and international
- Will provide examples of tools and approaches as best practices
- Latest draft reviewed at the May 2019 meeting – expected release by end of 2019
6. SOC 2 Like Reporting
a) Shell has been developed – a period-of-time report has been developed; a point-in-time report does not have a Section 4
- Section 1 – Overall audit results (opinion)
- Section 2 – Management assertion
- Section 3 – Description criteria (includes system description)
- Section 4 – Detailed testing performed and results thereof
- Section 5 – Unaudited management comments
In essence, asking for reports that have detail similar to an AICPA SOC 2 report (SOC 2 reports are issued on a restricted-distribution basis by the audit profession for service organizations). Expected completion late 2019.
b) Section 1 – Audit Report – Summary
- Draft 1 – about 5 pages long for the US version
- Reporting on description criteria for the CA system and suitability of design and effectiveness of controls over the reporting period
- Sets out management and auditor responsibilities
- Sets out inherent limitations
- References tests of controls
- Provides opinion
- Sets out restricted use
c) Section 2 – Management Assertion – Summary
- Developed using WebTrust for CA and SOC 2
- Required for all engagements
d) Section 3 – System Description – Summary
- Information that is contained in the CP/CPS will not be detailed in the System Description – rather it will have a general reference
- Draft 1 at present, based on a comparison of RFC 3647, SOC 2 and SOC for Cybersecurity
e) Description Criteria Details
- DC 1: The nature of the entity’s business and operations, including the principal products or services the entity sells or provides and the methods by which they are distributed
- DC 2: The principal service commitments and system requirements. This will include uptime commitments for business resumption, principal types of sensitive information created, collected, transmitted, used, or stored by the entity, and others deemed important by the entity or significant third-party users.
- DC 3: The components of the system used to provide the services, including: a. Infrastructure b. Software c. People d. Procedures e. Data
- DC 4: For identified system incidents that (a) were the result of controls that were not suitably designed or operating effectively or (b) otherwise resulted in a significant failure in the achievement of one or more of those service commitments and system requirements, as of the date of the description (for a type 1) or during the period of time covered by the description (for a type 2), as applicable: a. Nature of each incident, b. Timing surrounding the incident, c. Extent (or effect) of the incident and its disposition
- DC 5: Any specific applicable trust services criterion that is not relevant to the system and the reasons it is not relevant
- DC 6: The process for managing risk of the PKI operations in terms of both security and service integrity
f) Section 4 – Audit Testing and Results – Summary
- Provides a general intro to tests of controls and results
- Provides information as to types of testing conducted
- Details (by criteria) for all applicable WebTrust for CA and Baseline and Network Security criteria
- Sample controls are being developed for Baseline + NS (already in place for WebTrust for CA)
- This section’s template is about 180 pages long
g) Section 5 – Unaudited Management Comments – Summary
- Expected to detail management’s plan to deal with outstanding Bugzilla or other issues, as well as exceptions found in detailed testing
7. Current Status of WTF Projects – Lifecycle Reports
a) Regular reports – based on various event scenarios
b) Event reports for key generation, key protection, transport, migration and destruction, to cover all expected events cradle to grave
c) Types of Reports:
- Scenario 1: New CA, key generation and immediate start to operations
- Scenario 2: New CA, key generation and delayed start to operations (CA certificate not signed)
- Scenario 3: New CA, key generation and immediate start to operations with some parked keys for future use
- Scenario 4: Existing CA, additional key generation for a new CA during the period
- Scenario 5: Existing CA, additional key generation during the period with some parked keys for future use
8. Other Event Reports
a) Event reports – completed: key generation, key protection
b) Event reports – in process (being reviewed): transport, migration and destruction, to cover all expected events cradle to grave
9. Enhancement of CPA Canada Processes
a) CPA Canada is revamping processes with an aim to strengthen the program and add more rigor. The changes include:
- Replacement of Webtrust.org with CPA Canada – https://www.cpacanada.ca/en/business-and-accounting-resources/audit-and-assurance/standards-other-than-cas/publications/overview-of-webtrust-services
- Webtrust.org no longer supports current security protocols
- New pages reside on the CPA Canada secure website – newer, modern look and feel
- Redirection of old webpages to corresponding new pages – mapping is complete, enabling all traffic to be directed to the new webpages automatically
- New link will be www.cpacanada.ca/webtrust
- Can also go to the CA/B Forum’s website and follow the link to WebTrust information
b) More detailed application and process considerations for auditors, including international – separation of the practitioner enrollment application from the trademark agreement sets the stage for process automation
c) Seal management:
- New Seal Deployment document is under development
- Improved rigor on expired seals – new seal expiration document under development
d) Collaboration with browsers:
- CPA Canada and browsers are working together to establish an automated process to feed seal and audit report IDs to browsers
- CPA Canada will notify browsers in the event a seal expires or is revoked
e) CPA Canada Reporting Structure/Roles:
- Gord Beal – WebTrust falls into the Guidance and Support activities of CPA Canada
- Janet Treasure – seal system and licensing responsibility
- Bryan Walker – licensing advisor
- Don Sheehy – Task Force and CA/B Forum liaison
- Jeff Ward – Chair of the WebTrust Task Force and primary contact
- All Task Force members provide WebTrust services to clients; volunteers are supported by additional technical associates and the CPA Canada liaison, but report to CPA Canada
Update on London Protocol
Presenters: Kirk Hall (Entrust Datacard)
Minute Taker: Kirk Hall (Entrust Datacard)
Presentation: Update on London Protocol
Update on London Protocol – Kirk Hall, June 13, 2019
1. Recap of London Protocol
- Named after the London CA/B Forum meeting in June 2018, where the Protocol was announced
- Project of the CA Security Council (CASC)
- Seven current participants: Buypass, D-Trust, Entrust Datacard, GlobalSign, GoDaddy, Sectigo, SecureTrust
- Any CA can participate – not just CASC members
2. Objective of London Protocol
- Reinforces the distinction of Identity Websites (OV and EV) by making them even more secure for users than websites encrypted by DV (domain validated) certificates.
- Our philosophy: user security is best when done in depth, with multiple layers and parties involved (browsers, CAs, security applications/anti-phishing services), each providing their own contribution to fill the gaps in user security left by the others. No one security method or provider covers all user threats 100% of the time.
3. Objective of London Protocol (continued): The London Protocol is not any one thing, but a framework that tests new ideas to improve identity assurance and user security, and then shares the results with the larger community. Today we are working on four services, but there could be others as well:
a) Anti-phishing solution
b) Flag list system
c) Identity collision system
d) Transparency (as to data sources used for EV validation of SubjectDN data) – in development
This information can then be utilized by:
- Users / machines, as to the type of website they are visiting
- Anti-phishing engines, which already leverage this information in their security algorithms
- Others – such as browser UIs
4. London Protocol Part 1 – Anti-Phishing Solution
Objective – minimize the possibility of phishing activity on websites encrypted by OV (organization validated) and EV (extended validation) certificates. These sites are already safer for users (much less phishing than DV sites), and we want to make them as close to absolutely safe as possible. It may be possible for CAs to extend this to DV certs in the future if warranted – but:
a) Many DV sites are actually phishing sites, so notice from the issuing CA about phishing content on their sites won’t actually accomplish anything
b) The CA may not have good contact information for the DV customer (other than an email address)
c) Some CAs who issue DV certs don’t even have an email address to reach the customer – no points of contact at all!
d) Sites Deemed Dangerous by GSB – October 2018 (getting bad)
e) Sites Deemed Dangerous by GSB – March 2018 (GETTING WORSE!)
5. Efficacy of browser filters – Browser Filters are great – but they don’t eliminate all phishing sites. They are not a complete solution for user security. (See table of browser filter response times at zero-hour, 2 days, in PowerPoint presentation)
6. Incidence of encrypted phishing by cert type – see the tables of incidence of phishing by cert type in the PowerPoint presentation for September 2018, February 2019, and May 2019. Results as of May 2019: EV certs 0% incidence (versus 0.35% presence in the overall cert population), OV certs 9.5% incidence (versus 16.6% presence), DV certs 90.5% incidence (versus 83.1% presence). The reason for finding OV phishing certs is discussed below. Conclusion: websites that use EV identity certificates have almost no phishing; OV certs also have almost no phishing except for shared certificates; DV certs have most of the phishing. Identity websites are safer for users.
7. Increase in OV Phishing
Almost all OV phishing sites are issued to organizations that do not control the content of the sites in their OV cert, e.g. hosters. For example, 995 out of 1,128 of these phishing sites (88%) using OV were issued off the “CloudFlare Inc ECC CA-2” issuing CA to the Subject Organization = “Cloudflare Inc”. After talking with a few CAs, it seems the philosophy is that OV can be issued to either the content owner of the site or the operator. This philosophy is not shared by all CAs, but it is not a BR violation. The position is that “OV for shared certs is better than a DV certificate because at least an end user has a point of contact if there is an issue.” (Does Cloudflare actually respond to any user complaints?) Of course, the other side of the argument is that an end consumer could be confused if this OV data is relied upon as it relates to the site’s content. Should we address this issue in the Forum?
Potential options to address this OV shared cert issue:
a) Ignore this issue
b) Match the site content with the organization identity in the certificate? How?
(i) Self-declaration upfront – make the requestor declare whether they control the webpage content, and flag it in the certificate, and/or
(ii) Require active or passive monitoring by the cert holder (Subscriber) and/or the issuing CA – TBD
8. Methodology for Phishing Detection
a) The Phishing Detection Service currently relies on phishing data feeds from the following sources: OpenPhish, PhishTank, ADMINUSLabs, Blueliv, Anti-Phishing Working Group (APWG), Aslab. Others may be added – ready to add other lists. Can we get a list of phishing data directly from Microsoft, Google, others – PhishLabs?
b) Confirms the suspect URL against Google Safe Browsing (GSB) – so there is no disagreement that it is a phishing site. We would like to include lists that have a high accuracy rate.
c) Attempts to collect screenshots, certificate data, and other statistics to share with the issuing CA. Looking to include confirmed phishing, plus other lists, in a flag list system. This service is being worked on by GoDaddy.
What happens after a customer phishing site is detected and confirmed?
Step 1: Participating CAs are notified when a customer OV or EV site using their cert is flagged for phishing.
Step 2: The issuing CA contacts the customer and provides details – URL(s) of phishing content, screenshots, nature of phishing content. If the site is using a shared certificate with multiple SANs or independent pages, the customer is told which SANs or pages were flagged for phishing.
Step 3: The CA works with the customer to help remove the phishing content and protect the site. If the customer will not remediate, the CA can consider other steps, up to revocation – each CA decides.
Step 4: The service continues to monitor a phishing website for 30 days (and send notices to the CA) until the CA clears the website’s status on the phishing list.
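The notify-and-monitor flow in the steps above can be sketched in a few lines. This is an illustrative sketch only, not the actual London Protocol service; the `notify_ca` and `still_phishing` callbacks, the field names, and the daily-check shape are assumptions for illustration.

```python
# Illustrative sketch of the notify-and-monitor loop (Steps 1-4 above).
# `notify_ca` and `still_phishing` are hypothetical callbacks a real
# service would supply (CA notification channel, phishing-feed lookup).
from datetime import date, timedelta

MONITOR_DAYS = 30  # per the Protocol, monitoring continues for 30 days


def daily_check(site, today, notify_ca, still_phishing):
    """Advance one monitored site's state by one daily check."""
    if site["status"] != "monitoring":
        return site
    if still_phishing(site["url"]):
        notify_ca(site["ca"], site["url"])   # Steps 1-2: tell the issuing CA
        site["last_seen"] = today            # phishing still present: reset clock
    elif today - site["last_seen"] > timedelta(days=MONITOR_DAYS):
        site["status"] = "cleared"           # Step 4: content gone for 30+ days
    return site


# One flagged OV site, checked on two later days:
site = {"url": "https://example.test", "ca": "ExampleCA",
        "status": "monitoring", "last_seen": date(2019, 6, 1)}
daily_check(site, date(2019, 6, 2), lambda ca, url: None, lambda url: True)
daily_check(site, date(2019, 7, 10), lambda ca, url: None, lambda url: False)
print(site["status"])  # cleared
```

The design point is simply that the 30-day window restarts whenever phishing content reappears, so a site only drops off the list after a sustained clean period.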
9. Summary – London Protocol Part 1 – Anti-Phishing Solution
We think this is the first process by which website owners are proactively notified directly by their Certification Authorities that their sites have been compromised with phishing content, and assisted with recommendations for how to remove the phishing content and strengthen the site. This is monitored throughout the entire lifetime of the certificate. The customers we have contacted so far have been very grateful for our outreach.
10. What is the Flag List? (London Prot. Part 2)
- Provides a list of organizations or common names
- Proactive advisory for additional scrutiny – not a blacklist
- Can be used by all CAs as a shared source to search for High Risk Certificate Request flags for further investigation before issuing
- Flags are automatically updated from trusted sources; OFAC is ready to go
- CAs will be able to edit entries when they identify an issue, with a reason
- Entries expire over time, depending on the source
11. Flag List Policy – Seriousness Levels 1–3 (Rev 1 list members)
Level 3 – High Attention: OFAC List (updated daily)
Level 2 – Medium Attention: Phishbank (updated daily)
Level 1 – Low Attention: Certificate Authority reported issues; Alexa top 100 (updated monthly)
Certificate Authority procedure if “flagged”:
- Increase scrutiny on the company/domain
- Review seriousness and list origin
- CA decides whether to proceed
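The flag list described above is essentially a keyed store with a seriousness level and a source-dependent expiry. A minimal sketch follows; the source names match the slide, but the retention periods and field names are illustrative assumptions, not the Protocol's actual values.

```python
# Minimal sketch of the flag list: each entry carries a seriousness level
# and an expiry that depends on its source. TTLs here are assumptions.
from datetime import date, timedelta

SOURCE_LEVELS = {"OFAC": 3, "Phishbank": 2, "CA-reported": 1, "Alexa-100": 1}
SOURCE_TTL_DAYS = {"OFAC": 1, "Phishbank": 1, "CA-reported": 90, "Alexa-100": 30}


def add_flag(flag_list, name, source, today):
    """Record a flagged organization/common name from a trusted source."""
    flag_list[name.lower()] = {
        "source": source,
        "level": SOURCE_LEVELS[source],
        "expires": today + timedelta(days=SOURCE_TTL_DAYS[source]),
    }


def check_request(flag_list, name, today):
    """Return the flag entry if `name` is flagged and unexpired, else None.

    A hit is advisory only: the CA reviews the seriousness and list
    origin, then decides whether to proceed with issuance.
    """
    entry = flag_list.get(name.lower())
    if entry and entry["expires"] >= today:
        return entry
    return None


flags = {}
add_flag(flags, "Blocked Org", "OFAC", date(2019, 6, 12))
print(check_request(flags, "blocked org", date(2019, 6, 12))["level"])  # 3
```

A daily-updated source (like the OFAC feed) naturally gets a short TTL so stale entries drop out unless the next refresh re-adds them.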
12. Collision Notification API (London Prot. Part 3) Data store of certificates that have already been issued Searchable by organization or common name Optional country or country and region/state Strict matching We are building regional matching and format checking. Work in progress. Looking for partners to collaborate in build process.
13. Transparency (London Prot. Part 4 – NEW) Transparency (as to data sources used for EV validation of SubjectDN data) – in development. Will include: Data store of certificates that have already been issued Searchable by organization or common name Optional country or country and region/state Strict matching We are building regional matching and format checking. Work in progress. Looking for partners to collaborate in building process.
14. Fine Print / Who’s In Antitrust Laws; Withdrawal by CAs: The participating CAs will comply with all applicable antitrust laws, including the limitations specified by the Antitrust Notification read aloud prior to CA/Browser Forum meetings. Participating CAs may withdraw from this Protocol at any time upon notice to the other participating CAs. Current participants: Buypass, D-Trust, Entrust Datacard, GlobalSign, GoDaddy, Sectigo, SecureTrust. This voluntary Protocol is open to all CAs.
Special Challenges and concerns for Certification Authorities located in Europe
Presenters: Enrico Entschew (D-Trust)
Minute Taker: Peter Miskovic (Disig)
1st issue: Language
A. Language issue
- Not everything in the BRs and EVGs is always clear to a non-English speaker
- The meaning can be different from what the reader thinks
- Enrico started his presentation by mentioning language issues. He said it is difficult for non-English-speaking CAs to understand and interpret the requirements accurately (“they are not crystal clear for non-English speakers”). It would be nice if we could somehow “mark” the points that are unclear and can be interpreted in different ways.
- Dimitris added that this is a challenge for every CA, even those with native English speakers, but the challenge is bigger for non-English-speaking CAs. One possibility discussed was to document possible alternative interpretations based on the existing text of the requirements, then bring them to the Forum so we can have an improvement process. If the issue is in the validation section, it would first be addressed in the Validation Subcommittee; if it is in any other section, probably the larger group.
- Enrico also mentioned that he (and surely others) has difficulty following and understanding complex discussions at live meetings (F2F meetings and teleconferences). He recommended using simpler English as much as possible, just to get the message across. This is not just a European issue but likely applies to all non-English-speaking members.
- Dimitris added that it would be great if after long and complex discussions there was a quick summary in the end with bullet-points to capture the gist, but he realises that this is hard for verbal discussions. Even in long e-mail threads it would be great if we could capture the points in a condensed way.
- Robin: In an ideal world, all contributions would have a concise and clear description, but is that really achievable? If it’s not achievable, perhaps it would make sense to have a person responsible for producing such a “simplified” version.
- Dimitris responded that he would normally expect the minute-taker to capture the arguments and discussion and document them in simple language, but the minute-taker must be able to follow the long and complex discussion.
2nd issue: Qualified website certificate
- Over 40 trust service providers issue QWACs
- A QWAC is an SSL certificate
- Audit criteria are defined by ETSI but oriented on the EV Guidelines
- QWAC as a new type alongside DV, OV, and EV TLS/SSL certificates, so the browser can recognize them
Dimitris Zacharopoulos (HARICA) added:
- browsers should not give QWACs special recognition
- they carry a different set of requirements, from eIDAS and the BRs
- maybe the right solution would be a new class in the BRs, with its own policy identifier for QWACs and clear rules
- Enrico described the concern with QWACs. Over 40 TSPs issue such certificates. They’re SSL certificates with nearly the same criteria as the EV Guidelines, or the same. There’s a desire to see a QWAC not as an EV, OV, or DV certificate, but as something in between, so that browsers can recognize it.
- Dimitris mentioned he wasn’t sure if the concern was about recognition by browsers or special treatment. However, CAs are in a tough position, where they have to follow the laws of eIDAS, which describes QWACs and corresponding ETSI requirements, and the Baseline Requirements and EV guidelines, and have to juggle both requirements. There was a desire to explore perhaps making QWACs their own class within the BRs and EVGs, with their own identifiers, so they can align expectations and requirements. Currently, CAs don’t know if they should assert QWACs using the OV or EV policy OID.
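The policy-OID dilemma Dimitris describes can be made concrete: certificates carry policy OIDs, and a QWAC today must assert a CA/B Forum reserved OID (OV or EV) in addition to the ETSI one. A small sketch using the commonly cited identifiers (the QCP-w OID is from ETSI EN 319 411-2; treat the mapping and the `classify` helper as illustrative):

```python
# Sketch of the policy-OID overlap: CA/B Forum reserved OIDs for DV/OV/EV
# alongside the ETSI QCP-w identifier used for QWACs. The `classify`
# helper is illustrative, not part of any standard.
CABF_POLICY_OIDS = {
    "2.23.140.1.2.1": "DV",
    "2.23.140.1.2.2": "OV",
    "2.23.140.1.1":   "EV",
}
QCP_W_OID = "0.4.0.194112.1.4"  # ETSI EN 319 411-2 policy for QWACs


def classify(policy_oids):
    """Label a certificate by the policy OIDs it asserts."""
    labels = {CABF_POLICY_OIDS[o] for o in policy_oids if o in CABF_POLICY_OIDS}
    if QCP_W_OID in policy_oids:
        labels.add("QWAC")
    return labels


# A QWAC issued under the EV policy asserts both identifiers:
print(sorted(classify(["2.23.140.1.1", QCP_W_OID])))  # ['EV', 'QWAC']
```

The ambiguity in the discussion is exactly that a certificate asserting only the QCP-w OID falls outside the Forum's DV/OV/EV classes, so CAs don't know which Forum OID to pair it with.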
- Ryan Sleevi (Google), summarizing: eIDAS is a technology-neutral framework that isn’t directly prescriptive on the specific technologies. From the Regulation’s perspective, a QWAC doesn’t have to be a TLS certificate; a QWAC is merely a profile for certificates to meet the requirements of Annex IV of the Regulation, but nothing in the Regulation fundamentally requires that they be used or usable as TLS server certificates. However, the ETSI EN 319 41x series of specifications describes a standard for implementing QWACs, and it was ETSI that made the decision to have QWACs overlap with TLS as part of the ETSI specifications. CAs concerned about the complexities of complying with both the eIDAS Regulation and the BRs and EVGs when implementing QWACs may find an alternative solution is to get involved in ETSI, and work within ETSI to remove the id-kp-serverAuth EKU from the ETSI specifications. This would allow provisioning QWACs and TLS website certificates independently. Discussion in the Validation WG earlier in the week looked at ways that you could have independent QWACs and TLS certificates. Since the eIDAS Regulation is technology neutral, the complexity is largely a consequence of ETSI ESI decisions, and you don’t have to fundamentally change the Regulation to resolve it.
- The suggestion here, to combine a definition of QWACs into the BRs and EVGs, has one of two results. Either it’s trying to replace ETSI ESI in providing a technical profile of QWACs, or it’s setting up a conflict between the ETSI ESI documents and the CA/B Forum documents, particularly when things get out of sync. The PSD2 discussions highlighted the risks when things get out of sync, and the pain that can cause CAs. So recognizing QWACs in the BRs/EVGs is really one of two things: providing an alternative definition for how to issue QWACs, or duplicating ETSI ESI’s definition of QWACs into the BRs/EVGs and keeping them in sync.
- Dimitris (HARICA): We’re already in a situation where ETSI ESI is duplicating various requirements of the BRGs into the ETSI documents, and also referencing specific versions.
- Ryan Sleevi (Google): As discussions with ETSI and WebTrust can attest, the duplication that goes into either WebTrust or ETSI documents creates challenges. We see how when the BRGs/EVGs change, there are challenges in getting the ETSI and WebTrust documents updated, and that can create pain and confusion for CAs. Trying to recognize QWACs in the BRGs/EVGs is basically creating the same problem, in the reverse, where the CABForum would need to track the changes to the ETSI ESI documents or risk creating conflicts in the BRGs/EVGs. This is why I asked if folks had thought about removing id-kp-serverAuth from ETSI ESI.
- Dimitris (HARICA): I’m not sure if this was ever a goal when writing the specification or regulations. The goal was to harmonize. We don’t really want two parallel worlds that do “more or less” the same validation methods and technical specifications. We have some divergence, but they aren’t major.
- Ryan Sleevi (Google): The Regulation is clear that it desires to take a technology-neutral approach, leveraging industry best practices. There are many approaches that could achieve the objectives of QWACs building on these same ideas and frameworks, as we saw with LEIs in earlier discussions, without having to conflate technologies. I know it’s controversial, but there are benefits from decoupling them. For example, if browsers adopted DANE, either as a supplement or replacement to the Web PKI, there’s no ability to provide a QWAC, because DANE is potentially a replacement for those TLS certificates and the domain authentication part. Yet if you look at separating QWACs and TLS certificates, you could have both independently work with relying parties. It’s not that there’s no value in QWACs; it’s just that there’s no fundamental technological reason they need to simultaneously be TLS certificates. There are a number of other ways to express that relationship, which could allow broader use of QWACs.
- Clemens Wanko (ACAB’c): On a more generic level, a question for the browsers: do you think it’s possible for the CA/B Forum, looking at QWACs, to come to a solution where the Forum accepts QWACs as they’re defined, and they’re accepted into browser root stores on the basis that they’ve been looked at under the European framework? If the answer is yes, we should discuss how to make that happen.
- Ryan Sleevi (Google): From Google’s perspective, there’s not really any path forward for QWACs, with the audit regime and profile specified by ETSI and implemented by eIDAS, to supplant the browser root store program requirements and supervision. Complying with ETSI and eIDAS alone is not going to be sufficient to be recognized. If TSPs want to comply with eIDAS and issue QWACs, as well as issue TLS certificates, then for now and the indefinite future those TSPs would need to comply with the browser root program requirements independent of the eIDAS framework. My hope is that we can find ways to streamline the experience, such that if a TSP goes through the ETSI and eIDAS process, they can implement a set of controls that makes it easier to go through the browser process and get recognized. This is why the discussion about id-kp-serverAuth is important: I don’t see a path where simply being a QWAC would be sufficient to be trusted. I think it’s important to ensure that the CA/Browser Forum continues to be a place to discuss and develop a profile for interoperable certificates. I don’t think having two SDOs, ETSI and the CA/B Forum, both defining the same thing is desirable. I want to find a way to let them both develop standards relevant for their communities, without having to worry about overlap and conflict. To summarize: Would a TSP being recognized as capable of issuing QWACs lead to automatic inclusion? No. Would complying with the ETSI ESI profile of QWACs be sufficient, without also complying with the browser root program requirements and the CA/B Forum requirements? No.
- Clemens Wanko (ACAB’c): Thanks, that’s a clear statement. Do the other browsers have the same perspective? What do the CAs say?
- Wayne Thayer (Mozilla): Speaking for Mozilla, absolutely. There are a host of reasons why I think it’s unwise and impractical to think of a browser taking a QWAC as, in and of itself, fulfilling all the obligations of a browser root store program and recognizing it as a trusted certificate. There are a lot of reasons for that that may not be practical to discuss now. When I think of the overall framework that Mozilla has implemented, it differs significantly in its objectives and goals. I think there are characteristics of eIDAS that are fundamentally different and make it incompatible. I support the idea of making sure they’re not incompatible, but I absolutely agree that automatic recognition without further browser program vetting is not going to happen for Mozilla.
- Mike Reilly (Microsoft): Microsoft would be in the same position.
- Dimitris (HARICA): For clarity purposes, when the European CAs met, this was never on the agenda. We’re well aware of the challenges with this. Our discussion was what Ryan earlier highlighted: we have two sets of rules, and we need to make sure both are compatible with each other. The idea was to try and align to one set of requirements.
- JP Hamilton (Cisco): Cisco is not ready to commit to an answer for this question.
Any Other Business
Demo of dokuwiki
Presenters: Jos Purvis (Cisco)
Minute Taker: Dimitris Zacharopoulos (Harica)
Jos did a quick demo on how to handle attachments and media files in the new dokuwiki. He also presented how users have different access privileges based on access “tags”, depending on their WG membership. For example, only members that have declared participation in the Code Signing Working Group have access to the pages of that Working Group.
It is possible to move pages around but it is very likely that existing references will break so this should be avoided as much as possible. We would have to search any old references and replace them with the new. The wiki can “self-heal” its own links if one migrates content using the wiki tools. If we detect broken links, we should consider how to deal with that, perhaps in a future Infrastructure Working Group meeting.
Haitao (360) mentioned that when someone double-clicks to select a word, the page goes into “edit mode”. Perhaps we should disable this.
Members indicated that this feature should be turned off.
Mariusz (Opera) mentioned that some photo pages load each and every picture, making them unresponsive. This applies only to pages with photos that are stored in the dokuwiki, not to external links.
Jos suggested that members use external sites to store pictures and other large content, which uses up storage from the sponsored resources kindly offered to the Forum.
Dimitris suggested that there might be an upload limit (say, 2 MB) which might prevent people from uploading large files. Jos mentioned that some documents and presentations might be larger than that, and Dimitris said that most public presentations are uploaded to the public web site (WordPress) and can be linked from the wiki. Uploading presentations to the public web site without any page linking to them makes them “invisible” to the public but “visible” to the Members with access to the wiki when the draft minutes are created. This way we save half the storage, because we don’t need to upload the same presentation files twice to two different servers. Jos agreed, and other members raised no objections to using this method going forward.
Formation of Governance Subcommittee
Minute Taker: Jos Purvis (Cisco)
Dimitris introduced the concept of creating a Governance subcommittee. Now that the Forum permits subcommittees at the Forum level, it would be good to have a committee focused on the maintenance of the Bylaws, IPR requirements, and other Forum-level documents.
Ben Wilson from DigiCert noted that when we wrapped up the Governance working group for the Governance Reform work, we noted that it would be good to keep around a group to deal with governance of the Forum, for instance for Bylaws changes and the like. This was proposed as a Working Group, but members objected because it wouldn’t have a limited duration or narrow scope, as it would be maintaining an ongoing list of items. Ben clarified that this new subcommittee would not be for governing the Forum, but instead for organizing the work of the Forum – addressing issues, and keeping the Forum moving forward. With the completion of GovReform v2, it is now possible to create this as a subcommittee of the Forum itself: no IPR entanglements, no guidelines, much like the Forum Infrastructure Working Group.
Dimitris presented a list of governance-related issues he has been tracking for some time that has been circulated on the management list that he felt would be a good list of issues for the subcommittee to begin working on. His doc (in Google Docs) includes who raised the issue, the specific concerns mentioned, etc.
Dimitris asked if Ben had plans to ballot such a subcommittee. Ben responded that he had a Forum charter ballot for a Bylaws Working Group that he would modify to create a Governance Subcommittee, revise, and then submit as a ballot.
Ryan Sleevi from Google noted that some of the items in Dimitris’ list have been addressed (or may have been) during the various Governance reform ballots, although not all of them had been. He asked for a refresh of the document to clear out anything that’s been addressed and have that fresher list be used to focus the charter for any such new group, although he said that those didn’t necessarily have to be used to specifically limit the work of the group. He requested that the group focus on balancing their work between too many small ballots and a large ‘boil the ocean’ ballot, and potentially select a date by which the first set of those items could be submitted to the Forum as a ballot to keep the work from dragging out for way too long. He also noted that the current Bylaws changes mean that Forum subcommittees cannot touch IPR matters, so that declaration wasn’t necessary in the charter of the subcommittee. Finally, he clarified that his desire around moving the work more specifically and faster was to get Google’s legal counsel involved in the work, which is difficult when the process takes too long or moves too slowly.
Ben suggested an ongoing list of issues be kept for such a subcommittee. Dimitris offered to share the doc he has on the management list to refresh it. Ryan agreed that this was a good idea.
Ryan noted that the Governance Reform subcommittee mailing list must be public, per the Bylaws, just to set expectations. (The previous incarnation of Gov work had been private and then went public later.)
Arrangements for Next Meeting
The next F2F meeting will take place November 5–7 in Guangzhou, China, hosted by GDCA.