Attendees: Alex Wight (Cisco), Ben Wilson (Digicert), Richard Wang (Wosign), Kirk Hall (Entrust), Dean Coclin (Entrust), Dimitris Zacharopoulos (Harica), Jeremy Rowley (Digicert), Tim Hollebeek (Trustwave), Doug Beattie (Globalsign), Rick Andrews (Symantec), Atsushi Inaba (Globalsign), Billy VanCannon (Trustwave), Tyler Myers (GoDaddy), Bruce Morton (Entrust), Neil Dunbar (Trustcor), Chris Bailey (Entrust), Ivan Ristic (Hardenize), Jody Cloutier (Microsoft), Peter Miskovic (Disig), Jeff Ward (BDO-WebTrust), Sissel Hoel (Buypass), Mads Henriksveen (Buypass), Anlei Hu (CNNIC), Yin An (CNNIC), Andrew Whalley (Google), Ryan Sleevi (Google), Jos Purvis (Cisco), JP Hamilton (Cisco), Josh Aas (Let’s Encrypt), Yi Zhang (CFCA), Erwaan Abalea (Docusign), Franck Leroy (Certinomis), Nick Pope (Thales/ETSI), Clemens Wanko (TUViT), Arno Fiedler (D-Trust), Robin Alden (Comodo), Inigo Barreira (Izenpe), Irena Hedea (Deloitte Luxembourg), Moudrick Dadashov (SSC), Phillipe Bouchet. On the phone: JC Jones (Mozilla), Gerv Markham (Mozilla), Li-Chun Chen (Chungwa Telecom)
Note Taker: Jeremy Rowley
Google’s CT policy was updated in May 2016. The update clarifies ambiguities about log operations; these are not material changes. They are expectations for complying with CT generally, not an EV-specific policy. A CA using pre-certs must still have one SCT from a valid log and two SCTs from logs that were qualified or pending qualification at the time of certificate issuance, at least one of which must be from a Google log and one from a non-Google log.
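As a rough illustration of the log-diversity requirement above (my reading of the minutes, not Chrome’s actual policy code; the field names and schema are invented for the sketch):

```python
# Hypothetical sketch of the SCT diversity check described above: among
# SCTs from logs that were qualified (or pending qualification) at
# issuance time, at least one must come from a Google-operated log and
# at least one from a non-Google log.

def meets_ct_policy(scts):
    """scts: list of dicts with 'operator' and 'status' keys (invented schema)."""
    eligible = [s for s in scts if s["status"] in ("qualified", "pending")]
    has_google = any(s["operator"] == "google" for s in eligible)
    has_non_google = any(s["operator"] != "google" for s in eligible)
    return len(eligible) >= 2 and has_google and has_non_google
```

Any further details of the policy (such as how many SCTs longer-lived certificates require) are outside this sketch.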
Two logs were removed: WoSign and Certly; both had availability issues. Chrome will also stop trusting Izenpe’s log as of May 30, as that log reused a key for STHs.
Chrome does not have plans to support name redaction in CT. There were concerns about how redaction will work in practice.
A new UI refresh is scheduled for Chrome 52-53. This is a significant refresh to harmonize with other Google properties and will change how the lock icon is displayed.
Chrome is working on an Expect-OCSP flag. Expect-OCSP is similar to Must-Staple, except that it addresses the number of users who would get a worse experience if OCSP stapling were strictly required. It should help encourage fixing the variety of bugs still present in server software that make Google hesitant to rely on Must-Staple. Data shows that fewer than 10% of servers actually serve stapled OCSP responses.
Expect-CT is similar to Expect-OCSP. Expect-CT will give server operators a chance to test CT before CT is required.
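Expect-CT was still being designed at the time of these minutes; for orientation only, the header as eventually specified takes a form like the following hypothetical report-only nginx deployment (the report endpoint is invented):

```nginx
# Hypothetical Expect-CT deployment: report CT compliance failures
# without enforcing (no "enforce" token), using a made-up report URL.
add_header Expect-CT 'max-age=86400, report-uri="https://ct-reports.example.com/report"';
```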
Chrome launched its revised security panel, which consolidates security information previously found in several other places.
Chrome removed DH cipher suites and TLS version fallback. If a server mishandles TLS negotiation, Chrome will no longer retry with a weaker protocol version. ALPN is favored over SPDY; SPDY will continue working but will fall back to HTTP/1.1 instead of HTTP/2.
Chrome no longer permits a site to access the user’s location over HTTP; the server must use HTTPS. Camera and microphone access over HTTP were removed a while ago. All new features with privacy or security implications require TLS.
Keygen will fully disappear soon.
Added support for X25519 curve.
Note Taker: Billy VanCannon
Root Removals: Roots are now all tracked in Salesforce and in public data. Everyone should be able to find them themselves.
SHA-1 Support: In Firefox 43, in December last year, we turned off support for SHA-1 certificates with a notBefore date of 1st January 2016 or later. No CA should be issuing such certificates. However, it turns out that many MITM middleboxes do issue them, and the result was breakage – Firefox no longer worked with secure sites, and couldn’t download an update as those updates were also served over HTTPS. So we had to revert the change.
After gathering some telemetry, we have some confidence that locally-installed, non-public roots are the main source of SHA-1 usage (by four orders of magnitude). So we have changed the default policy to only allow SHA-1 signatures by non-default roots; SHA-1 certificates from publicly-trusted CAs issued after 2015-12-31 will be rejected. This change will ship in Firefox 48, currently scheduled for 2nd August.
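The default policy just described can be sketched as follows (an illustrative model with invented names, not Mozilla’s NSS code):

```python
from datetime import date

# Illustrative sketch of the policy above: SHA-1 certificates chaining to
# publicly trusted (built-in) roots are rejected if issued after
# 2015-12-31; certificates chaining to locally installed, non-default
# roots (e.g. MITM middleboxes) are still allowed.

CUTOFF = date(2015, 12, 31)

def accept_sha1_cert(not_before, chains_to_builtin_root):
    if not chains_to_builtin_root:
        return True  # non-default root: SHA-1 still permitted
    return not_before <= CUTOFF
```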
We are still intending to disable SHA-1 in SSL entirely around the beginning of 2017. We have not yet precisely scheduled the change, but it will probably be in Firefox 51, currently scheduled for 24th January 2017.
If you want to experience a post-SHA-1 world now, you can go to about:config and set security.pki.sha1_enforcement_level to 1 (Forbidden).
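The same pref can be applied persistently by dropping a user.js file into the Firefox profile directory (standard Firefox pref mechanics; the pref name and value are those given above):

```
// user.js in the profile directory
// 1 = "Forbidden": reject all SHA-1 certificate signatures
user_pref("security.pki.sha1_enforcement_level", 1);
```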
Recent research has shown that in order to remove risks due to SHA-1, CAs need to not be using it for anything signed by a CA key trusted by our program. It’s not enough just to stop issuing SHA-1 certs. We don’t have firm policy around this, but it’s an area we’re actively interested in, and CAs should be looking at moving away from SHA-1 in all parts of their operations.
Short-lived Certs: The 10-day threshold for short-lived certs shipped in Firefox 45 on 7th March. I’ve heard no feedback from CAs, and our telemetry shows no usage. If anyone has tried it out, please let us know.
Must-Staple: Must-Staple shipped in Firefox 45 on 7th March. I’ve heard no feedback from CAs. If anyone has tried it out, please let us know.
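For anyone who wants to try it out: a certificate requests OCSP Must-Staple via the TLS Feature extension (RFC 7633, status_request = 5). One common way to ask for it in a CSR is an OpenSSL config fragment like the following (the raw-OID form works on OpenSSL builds that predate the `tlsfeature` keyword):

```
# openssl.cnf fragment: TLS Feature extension (OID 1.3.6.1.5.5.7.1.24)
# with the DER encoding of SEQUENCE { INTEGER 5 }, i.e. status_request
[ v3_req ]
1.3.6.1.5.5.7.1.24 = DER:30:03:02:01:05
```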
Partial Support for Windows Cert Store: We are adding support to Firefox on Windows so it recognises certs which have been manually added to the Windows cert store by administrators. This is for ease of administration in enterprises. (Billy emailed this question later: Does it include any cert added to the Windows store, i.e. root, intermediates, client, etc.? Mozilla responded: The idea is to add only CAs that are trust anchors for TLS server auth certificates (so roots, basically).)
Intermediate Certs: The deadline for all CAs to submit their non-constrained intermediate certs into Salesforce is June 30th (although some CAs have committed to an earlier date). Please make sure you hit that date; it’s important to us, and missing that deadline may be treated as a non-compliance incident. Rob Stradling’s report suggests that only just over a third of intermediate certs known to crt.sh as needing disclosure have been formally disclosed so far, so there is still some way to go.
We do now have a mass import capability for CAs who need it – link in the notes.
There has been some discussion about what certs need to be disclosed. We are looking for disclosure of all instances of any intermediate cert chaining via any path to a root we include that is technically capable of issuing SSL certs and which is not name constrained, either itself or in all its chains. If you want a good rule of thumb, consider if Mozilla used the Salesforce data to create a whitelist of nonconstrained intermediate certs that Firefox would allow. If not disclosing a cert would cause you problems in this scenario, you definitely need to disclose it. This is not an exhaustive test, but it is a useful one.
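The rule of thumb above can be restated as a small predicate (a hypothetical helper with invented parameter names, not Mozilla’s Salesforce schema):

```python
# Hypothetical helper illustrating the disclosure rule of thumb above:
# an intermediate needs disclosure if it chains via any path to an
# included root, is technically capable of issuing SSL certificates,
# and is not name-constrained (itself or in all of its chains).

def needs_disclosure(chains_to_included_root, can_issue_ssl,
                     name_constrained_in_all_chains):
    return (chains_to_included_root
            and can_issue_ssl
            and not name_constrained_in_all_chains)
```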
CA Communication: Thank you to all CAs for replying to the most recent CA Communication in March. The information is being rolled into our program management plans.
Root Store Community: We are proceeding with the plan for multiple root stores to share Salesforce in a Root Store Community. We’ll let the root stores involved speak about their experience and plans. We hope to have the major root store operators on board by the end of July.
Policy Revision Process (version 2.3): There has sadly been little change on this front – there are still a large number of items that need to be discussed. Kathleen is hoping to get back to this soon, but is currently fully loaded with the regular root inclusion/update process and the Salesforce work. If you want to help her, join in with the public discussions of root inclusion/change requests in mozilla.dev.security.policy.
URLs From The Above:
Firefox release schedule: https://wiki.mozilla.org/RapidRelease/Calendar
List of removed roots: https://wiki.mozilla.org/CA:RemovedCAcerts
SHA-1 usage telemetry: http://mzl.la/1TEYxYh
SHA-1 public disablement bug: https://bugzilla.mozilla.org/show_bug.cgi?id=1254667
Short-lived certs telemetry: http://mzl.la/24ObOJC
Windows cert store bug: https://bugzilla.mozilla.org/show_bug.cgi?id=1265113
Intermediate CA crt.sh report: https://crt.sh/mozilla-disclosures
Intermediate CA mass import: https://wiki.mozilla.org/CA:SalesforceCommunity:MassImport
Salesforce CA Community instructions: https://wiki.mozilla.org/CA:SalesforceCommunity
Salesforce Root Store Community information: https://wiki.mozilla.org/CA:SalesforceCommunity:RootStoreMembers
Subordinate CA reports: https://wiki.mozilla.org/CA:SubordinateCAcerts
Items being discussed for CA Policy 2.3: https://wiki.mozilla.org/CA:CertificatePolicyV2.3
Policy update process: https://wiki.mozilla.org/CA:CertPolicyUpdates
Note Taker: None – No Report
Note Taker: None – No Report
Note Taker: Doug Beattie
Removing roots and implications on Authenticode:
- Removing roots with Authenticode certificates issued under them has never really worked. When you revoke/remove a root, it disables all of the code ever signed under it, when the point was only to disallow future signings. This can have the unintended consequence of breaking Authenticode for millions of people.
Microsoft is introducing 2 new properties to help resolve this for Windows 10 and higher operating systems:
- Disallow Date – When a date is set for a specific root, Authenticode-signed objects signed after that date will fail, while signed objects with timestamps prior to that date will continue to work.
- Disallow EKU – When this is set for a root, none of the EKUs identified in the list will be trusted. For example, if this is set for ServerAuth, then none of the SSL certificates will be trusted.
- Microsoft plans to add “Disallow SHA1” as a new property. They will set Disallow SHA1 for all active roots on February 14, 2017 as part of an operating system update. When this happens IE/Edge will behave as though the certificate is expired or otherwise invalid.
- IE and Edge will take an update in June to remove the lock icon for SHA-1 (for internal CAs the lock icon will remain)
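My reading of the Disallow Date semantics above, as a sketch (an invented function, not Microsoft’s implementation): a signed object stays trusted only if its countersigned timestamp predates the root’s disallow date.

```python
from datetime import date

# Sketch of the "Disallow Date" behaviour described in the minutes:
# once a disallow date is set for a root, signatures timestamped before
# that date keep working, while later (or untimestamped) signatures fail.

def authenticode_trusted(timestamp, disallow_date):
    if disallow_date is None:  # no disallow date set for this root
        return True
    return timestamp is not None and timestamp < disallow_date
```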
Better testing of releases:
- Today MS allows enterprises to opt into obtaining early releases via the Fast Ring. This allows MS to “flight” these updates: opted-in enterprises receive the SST packages earlier than anyone else, which allows the updates to be tested more broadly before the general push.
MS will be using the same Salesforce system as Mozilla. If an audit is submitted, the other program will get the audit data as well. Google and Apple might also move to this. Each root store will have its own unique workflows and specific requirements, but with MS and Mozilla on the same Salesforce system, data needed by both will only need to be entered once.
April 2016 Microsoft Root Program Changes CRM:xxxxxxx:
- Jody clarified item 5 in the announcement. The current wording is: “New intermediate CA certificates under root certificates submitted for distribution by the Program must separate Server Authentication, S/MIME, Code Signing and Time Stamping uses. This means that a single intermediate issuing CA must not be used to issue both server authentication, S/MIME, and code signing certificates. A separate CA must be used for each use case. Please note that this requirement does not apply to roots enrolled in the Program prior to July 1, 2015. Any root enrolled in the program before this date must comply with this requirement by January 1, 2017.”
- The intent is that all existing Subordinate CAs do not need to comply and that all new subordinate CAs MUST comply. Jody acknowledged that his intent was not clear in this statement and that it would be revised.
Note Taker: Robin Alden
The Validation ballot will be circulated probably beginning next week.
It will go into effect 6 months from approval. Both the old and the new validation methods will be valid in the interim.
Kirk: So you could move tomorrow? Jeremy: You can move to the new methods at any time during that 6-month transition period.
OV, DV, and EV all affected.
This latest draft is not significantly different from previous drafts.
The issue of choosing Business Entity vs Private Organization was discussed. See the discussion on the group mail list or join the next call.
EV subject for state and country – we’re working on the language. There remains some confusion over the ‘if applicable’ language for state/country.
BR requirements will be harmonized with EV requirements for these fields. E.g. XX currently allowed in BRs but not EV.
SRVname ballot – PZB. He will recirculate. Interest in seeing that adopted to bootstrap the use and certification of SRVnames.
(IV certs – referenced Richard Wang’s email)
Jeremy working on a ballot to Expand the use of givenName and surname instead of organizationName. Will be discussed on the next call.
The calls are on Thursdays at 9am mountain time.
Note Taker: Tim Hollebeek
During the Working Group session we reviewed section 5.2 (Trusted Roles) and 5.3 (Personnel) of the BRs. The Working Group will continue to investigate and clarify Trusted Roles going forward.
A short discussion took place regarding the “CA” term used in the CA/B Forum documents and whether we should change to “TSP” or “CSP” to better reflect the operating organization. The confusion around the term “CA” arises because there are points in the CA/B Forum documents where the same term is used to indicate the X.509 certificate of a Root or Intermediate Certification Authority. The WebTrust auditors noted that such a change would affect a great number of documents that would need to be updated. It was decided that two reviewers would independently read the CA/B Forum documents and label how the term “CA” is used in each context. Where there is confusion, the language of the documents would be amended to indicate the correct usage of the term “CA”. Tim H and Dimitris volunteered for this, but others are welcome to assist as well.
It was also mentioned that a relevant document by Steve Roylance was circulated in the past and that it would be useful to re-circulate it.
The group also looked at 5.4 (Audit Logging). Many of the updates the Working Group have been working on have been based on the NIST Certificate Policy Guidelines and also the Network Security Requirements.
Since the Network Security Requirements are being inserted into the BRs piece by piece, it was agreed that the Network Security Requirements will be updated with any relevant changes as the appropriate sections are merged into the BRs.
The length of time for storing audit logs in section 5.5 was briefly discussed.
Note Taker: Tim Hollebeek
The CAB Forum Code Signing Working Group has been disbanded. ‘CAB Forum’ has been removed from the title of the last Code Signing Working Group draft. A paragraph has been added explaining the history of the document, including that it was not adopted by the full CAB Forum.
Note Taker: Ben Wilson
The Governance Reform Working Group met and explored the following issues:
• What are the problems that we are seeing and how do we solve those problems?
• What types of problems should the Forum be focused on?
• When it is focused on a particular issue, how does it make a decision?
• How is the Forum chartered and what does the Forum cover?
• How is the Forum governed?
• How are the rules made?
• What are the rules for creating and modifying standards?
We reviewed creating an umbrella organization with working groups as subsets of the organization, covering:
• Expanding the scope of Forum discussions and those allowed to participate;
• Logistics for meetings and operational organization;
• Having a single IPR policy with possible variations or RAND or RAND-Z, and scoping IPR obligations by working group participation like the W3C;
• Management organization and voting rights – representative or direct vote? Working group chartering disputes;
• Membership criteria; and
• Having a common set of requirements that are Forum-wide: things requiring industry-wide uniformity.
Note Taker: Arno Fiedler
Slides presented by Nick Pope, Vice Chairman: 3_Pope_ETSI_CABF_Bilbao-May2016
Clarifications: ETSI standards (GSM, DECT, PAdES, … and certification policies) are used worldwide.
The European Norm relevant to the CA/B Forum is EN 319 411-1; European-specific requirements for qualified services are separated out into EN 319 411-2.
ETSI provides precise audit criteria via checklists (395 specific items for OV/DV/EV)
Question by Jody: ETSI doesn’t require a full audit every year?
Nick’s Answer: The surveillance audit checks for any changes to the CA’s systems every year.
Jody: Microsoft still requires a full audit every year
Nick: That’s okay for the audit scheme; if you refer to 319 403, add a specific requirement for an annual audit to your own documents. ETSI EN 319 403 V2.2.2 (2015-08) on Trust Service Provider Conformity Assessment states in 7.4.6, Audit Frequency: “There shall be a period of no greater than two years for a full (re-)assessment audit unless otherwise required by the applicable legislation or commercial scheme applying the present document.
NOTE: A surveillance audit can be required by an entitled party at any time or by the conformity assessment body as defined by the surveillance programme according to clause 7.9.”
Dimitris: Our ETSI Auditor checked in the recent surveillance audit if there are any changes in the procedures of the TSP but also performed full re-assessment for critical controls.
Nick: Conformity assessment for audits is based on ISO 17065 for products, processes and services. The certification report identifies the responsible national body accrediting the auditor; ACAB’C will list and publish it.
Mads: How will EN 319 411-1 be maintained?
Nick: It’s done by the ETSI ESI Committee. We had previously agreed to a maximum of 9 months in total, but with the new EN procedures it is expected that this will be quicker.
Ben: When does the transition period to the eIDAS norms start? Clemens: For qualified services it is defined by the eIDAS regulation; we recommend using the same transition scheme in the CA/B Forum context.
Inigo: As reported in Scottsdale, new TS 102 042 audits for new applicants are only possible until 30.06.2016; existing, already-audited TSPs can use TS 102 042 audits until 01.07.2017, but no longer.
Note Taker: Andrew Whalley
ACAB’C: The Accredited Conformity Assessment Bodies’ Council http://www.acab-c.com/
Presentation by Clemens Wanko
Clemens started off by saying they are the good guys – they like to support our interests and they don’t want us to reduce our security requirements 🙂
Topics for the session:
- Introducing ACAB’C (with apologies for the name!)
- Talking about auditing – the audit results that are provided. How to check the quality of the audit reports that you receive and how to work out who is a qualified auditor or not.
Start of presentation 4a_Bouchet_ACABc._intro
- Be a platform for ISO 17065 CABs. Member CABs are obliged to sign a code of conduct (based on the ETSI standard).
- Publish a list of accredited CABs (though since it’s a voluntary scheme, it won’t necessarily be a complete list).
- Not restricted to Europe – anybody who meets the standards can join.
- They are going to produce a standards mark.
Two categories of members: conformity assessment bodies (accredited bodies) and supporting parties (interested parties). The CA/B Forum could become a supporting party.
For either, please send an email if you’re interested in joining. (firstname.lastname@example.org)
Members of the board:
Philippe Bouchet: LSTI Chairman
Clemens Wanko: TÜViT Vice Chairman Board
Armelle Trotin: LSTI General secretary
End of the first presentation.
ACAB’C wants to start a conversation and communication with the CA/B Forum. Clemens thanked the forum for inviting them to speak, and hopes to be able to make it regularly to CA/B events and continue the discussion.
Question: Are you a formal organisation? Answer: A formal club according to French law, incorporated in France; they can get an OV cert.
Question: What does it mean for the CA/B Forum to be a supporting party? Answer: The idea is to establish a contact. Doesn’t need to mean the body has to be a member. Could be an individual, could be setting up information sharing.
Question: When will the list go public? Already up on http://www.acab-c.com/accredited-bodies/ – only two at the moment.
Jody and Dean noted that it’s very tricky to work out if an auditor is accredited. Looking forward to getting a full list.
Start of presentation 4b_Wanko_CAB-Forum_Akkreditation_and_audit_intro
This is about “where does the quality come from”, what are the assessment results and how can you work out if the auditor is a qualified auditor or not?
One receives an audit attestation document, containing:
- Auditor and accreditation
- Auditee and audited PKI
- Dates and validity
You receive the audit attestation, and get a list of ISO/ETSI standards. The question is how can one cross check?
The name of the accreditation body is listed on the letter, along with their logo. Go to http://www.european-accreditation.org/ea-members and ensure the accreditation body is listed.
Question: What about an auditor who isn’t an EA member? Answer: That’s not a thing – you have to be accredited to the ISO standard.
Question: But what about a body that’s accredited to the ISO standard but isn’t in Europe? A: That’s where ACAB’C comes in! They could become a member.
Noted: if you’re a US auditing company, there’s nobody to do the accreditation; you must have a place of business in an EU country to be accredited under that country’s scheme. But an EU auditing company can audit a non-EU company. Internationally there’s a body called the IAF, roughly the equivalent of the EA. They don’t currently recognize ETSI, but they do recognise 17065. It would require some more work to set up, but the framework is there.
Question: I thought that the CAB performs the 403 assessment? Answer: No the CAB undergoes the assessment, it’s performed by the EA accredited institution.
Note: EA and IAF are just collaborations of national accreditation bodies. Outside Europe there are bodies that can accredit auditors within their nation to 17065, but there isn’t currently anybody outside Europe who has the ability to assess against both 17065 and 403.
Summary from Ryan: When you see a 411 audit from a CAB, the essential criterion is to make sure the CAB itself has been assessed to both 17065 and 403. The EA provides that, because it provides the assessment for both. The IAF only provides 17065; however, there may be national assessment bodies within the IAF that can do both, in which case the CAB can do 411.
There is another structure above EA which is IAF – the International Accreditation Forum. You can use it in the same way as you do with EA. There’s a page (http://www.iaf.nu//articles/IAF_MEMBERS_SIGNATORIES/4) which lists the countries and the organisations in each one.
Note: The auditors themselves undergo a full assessment every year.
Question: What is the assessment that qualifies auditors to audit against a specific standard? How can a conformity assessment body add a new thing to audit against? Answer: There’s a list in the “auditor and accreditation” section of the Audit Attestation (see slide 4) that says what they are capable of auditing. The procedure for adding to this list is: the auditor approaches the accreditation body, which checks that the necessary policies and procedures are in place to accredit against the new standard. It then performs an eye-witness audit to check how the auditor audits and whether they meet all the accreditation requirements stated in the new standard. That’s guaranteed by the accreditation scheme.
And now some more background – what’s happening behind the scenes: The audit process and what’s produced by the audit process.
Audit frequency and coverage is defined in the methodology document 319 403 and says:
- “The audit frequency must not be greater than two years for a full re-assessment audit”, but it goes on to say “unless otherwise required by the commercial scheme applying the present document, or by industry requirement”. So if you don’t want two years but one year, that’s easy, and we shall be happy to give a corresponding statement in the audit attestation.
Surveillance audits cover changes “as indicated by the CA”, so the auditors are relying on the CA not cheating. On top of the changes reported by the CA, the auditor adds other points: they follow the full lifecycle from request to revocation and are confident that any change in procedure would be detected.
Question: What words would browsers look for in the attestation so they know it’s a full audit? Answer: The words “performed a full audit as required by ETSI” would indicate it’s not a surveillance audit.
In Germany, the surveillance audit procedure has been working well; changes are worked through and classified as either security-critical or not, so things like changing a paper form don’t need to be looked at. The auditors can’t say they have never been cheated, but the threat of going out of business if CAs don’t properly report things seems to be working pretty well.
The CA (TSP) current operation is always covered by an audit.
SLIDE 14: Audit process (walking through the slides)
Two phases of the actual audit, plus a documentation phase:
- Documentation review: sit back in your office and read through the security policy. It must be detailed enough to show that it fulfils each ETSI requirement.
- On site: verify the implementation of security measures. This takes several days – a week or even more – including technical and penetration testing.
- Results in documentation: the audit results plus additional content.
Technical audit in more detail
- Technical process; IT network; trustworthy systems audit; organisation & organisational procedures
Note that penetration tests can be outsourced to 3rd parties. What auditors want to see is not only the vulnerability report but the follow up showing what they’ve done with the results.
Trustworthy systems: HSMs – identification and lifecycle. Often the problem is identifying the FIPS 140 module in the rack.
Showing the certification flow, ending up on the eIDAS Qualified Trust Services Status List (TSL) if it’s a relevant audit.
Question: Does ETSI or Webtrust have a separate audit for RAs only?
Answer: ETSI do it – the CA has overall responsibility but can out source. The RA would have to do their own security policy description, their own security policy and that would be audited separately. There are lots of companies acting only as RAs.
Note: The audit certificate can only be issued to the CA. But the report underlying the certificate can be issued to the CA the RA or whoever.
If Dean is a CA, but Kirk is an RA, and we want to have that arrangement audited: the auditor comes and audits Kirk, and produces a report but not a certificate. That report goes to Kirk, who would use it to convince the CA that he is doing a proper job. Then, when the CA is being audited, the auditor checks the interface that connects the CA and the RA, and that Kirk is connecting to the CA through the interface in the proper way; they would then add Kirk to the CA’s certificate. Kirk would never have a certificate, but does have an assessment. In real life, the RA wants to have something to use in their marketing material, so they often point at the certificate they’re included in so they can say they are ETSI conformant.
SLIDE 25: Audit Documentation.
411-2 (eIDAS qualified) references 411-1 for most parts, which in turn references 319 401. There are also direct links to the Baseline Requirements and EV Guidelines, so whenever those are changed they are automatically referenced.
The ETSI certificate is for marketing and can be shown to anybody. The audit attestation contains more detail and is for browsers – the community that wants to derive trust from the CA.
Audit attestation letter was developed with Jody. They are happy to change it if anybody has any feedback.
Then there’s a certificate. Note that the logo is controlled, and the registration number is used to cross-check back to the EA webpage. The same is true for LSTI or any other accredited body.
Question: How long does it take to do a full ETSI audit? Answer: Consider a CA that issues website certificates. First they have to audit all the RAs and the core of the CA, including issuance and revocation, and then do the two stages referred to above. Stage 1 is usually about 6 weeks. Stage 2 (on site) takes between three days and one week.
Summary: CABs operate under EU accreditation, which allows a simple cross-check. Audit frequency: full/surveillance, or full/full plus changes.
End of presentation.
The Forum showed their appreciation.
Question: Is there a problem getting auditors certified under this scheme? Recall there were some problems last year – where are we now? Answer: The Christmas-shopping problem: everybody’s leaving it to the last minute. The accreditation bodies in the member states are still working on their accreditation standards. France has it all done; TÜViT is working on it. In other European countries the situation is a little worse. France is in the lead, Germany is very far along, and the others are mostly behind.
17065 is in place for most of the CABs; it’s 403 that’s the problem. But for many, if you’re in the middle of going through 403 you should have a high level of confidence that you’ll get it, as long as you have 17065. So look for the 17065 accreditation, because the 403 might not be there yet.
Iñigo mentioned that if anybody has any problems they should get in touch with him and ACAB’C and they can point them in the right direction.
Question from Dean: Would you like ACAB’C to be an associate member, like ETSI? General consensus is that it would be useful to have ACAB’C as a formal member.
Dean asked for any objections: ACAB’C is admitted as an associate member, once they have completed the IPR agreement. Welcome!
Question from Iñigo – is Microsoft willing to change the ETSI audit policy to allow it outside of Europe?
Jody: Let’s see how it works in practice; they are willing to revisit. Three concerns:
- Full audit and not a surveillance audit
- Ability to backtrack from auditor to who accredited them
- Details about the attestation letter
The second and third seem to have been resolved, and there’s a plan to resolve the first, but they want to see it in practice before making up their minds; still, they are encouraged by what they’ve heard today. The Microsoft contract trumps everything – and it requires an annual audit.
Question: Is there a move afoot in France to require QWCS? Will they replace RGS certs?
Answer: Nothing yet. Maybe banking and financial
Note Taker: Kirk Hall
WebTrust for CA Update (Don Sheehy / Jeff Ward)
Don Sheehy of Deloitte and Jeff Ward of BDO presented the following WebTrust report.
- Current Status of Projects (ongoing) – Don Sheehy
WebTrust for CA 2.x – being updated, minor changes now
EV WebTrust – updates are being developed in concert with CA/B Forum version 1.5.8; a new release will be available very soon.
WebTrust Baseline and Network Security – WebTrust is updating the last version based on RFC 3647 changes, updates by CA-Browser Forum, and increased inclusion of RFC 5280, and will be released as version 2.1. The updated draft should be available by July or August.
EV Code Signing – no current update
WebTrust for RAs (Registration Authorities) – WebTrust is reviewing its first significant draft, but wants to know what the CA-Browser Forum wants to do with EV authentication performed by independent RAs – should RAs be included in the EV WebTrust audit report?
In response to a question, Don stated that for those using external RAs, a separate WebTrust audit for the external RA, looking at a subset of controls, is normally provided to the main WebTrust auditor, but the main WebTrust report doesn’t specifically refer to the external RA WebTrust audits. (The CA’s auditors will not provide a successful audit report unless they have been provided with a successful audit report for each external RA.) Jody noted that some external RA audit reports (and even some CA audit reports) come as late as six months after the end of the prior audit period, and asked if WebTrust can make this happen faster; he noted that the new audit report should be produced within 90 days. Kirk suggested that WebTrust always turn off the WebTrust seal after 90 days to force CAs and auditors to complete their new audits within 90 days.
WebTrust Code Signing – the audit standards are under development
Practitioner Guidance for Auditors – this document is under development
- Updates to WebTrust Web page – Jeff Ward
The list of authorized WebTrust practitioners is being updated (removing SOC and SysTrust references and auditors)
The web page is being changed to take out references to the SOC 3 seal
The ultimate plan is to revamp the page as part of CPA Canada
- Some new and old issues (Don Sheehy and Jeff Ward)
If a CA has a publicly trusted subroot (i.e., one chained to a publicly trusted root that is enabled for SSL) but only issues non-SSL certs from the subroot (S/MIME and client authentication only), it appears the Baseline Requirements would not apply. Is a Network Security-only audit report required? When asked, Microsoft, Google, and Mozilla responded yes, because the sub-CA is part of a trusted system and could issue SSL certificates.
The question of RFC 5280 inclusion in audit standards (e.g., auditing to confirm that a CA conforms to all parts of RFC 5280) – WebTrust is still waiting for guidance from the Forum. Will compliance with all parts of 5280 be added as a specific requirement to the Baseline Requirements?
Cloud questions are starting to surface – WebTrust auditors are getting phone calls from the Forum (e.g., can OCSP responders be outsourced?). Are cloud providers required to be covered by the WebTrust audit? WebTrust is looking for input from Forum members. What controls would be needed for the cloud service providers?
The attest/assurance language standards for auditors are changing in US and Canada
WebTrust is changing the name of the Task Force to “WebTrust/PKI Assurance Task Force”
Who is involved in setting and maintaining WebTrust standards? Here is a list:
Gord Beal, Bryan Walker, Brian Loney, Lori Anastacio
Gord Beal – WebTrust falls into Guidance and Support activities of CPA Canada
Bryan Walker – Task Force support, seal system responsibility, licensing advisor
Brian Loney – seal billings and legal support
All Task Force members provide WebTrust services to clients. Volunteers are supported by additional technical associates and a CPA Canada liaison, and report to CPA Canada.
Task Force Members and Technical Support Volunteers
Don Sheehy (Chair), Deloitte; Jeff Ward (Vice Chair), BDO; Reema Anand, KPMG; David Roque, EY; Daniel Adam, Deloitte; Tim Crawford, BDO; Zain Shabbir, KPMG; Donoghue Clarke, EY; Robert Ikeoda
Note Taker: Dimitris Zacharopoulos
The Network Security requirements need to be revised and updated, as they have not been updated since 2013. Discussion took place on whether we should keep a separate document or incorporate the net-sec requirements into the BRs. Don mentioned that one CA did not issue SSL certificates but still wanted to be audited against the net-sec guidelines. A number of members raised the question, "What is the actual goal?" One answer was to perform a sanity check of the net-sec requirements as they currently stand.
The Policy WG is trying to do two things at the same time:
- move the net-sec guidelines into the relevant sections of the BRs, which are already in RFC 3647 format
- revise/improve the net-sec guidelines with adjusted language from the Second Draft of NISTIR 7924
Ryan proposed converting the net-sec document into RFC 3647 format so that it aligns with the BRs, which are already in that format. We need to prioritize our objectives. Currently the Policy WG is both moving and updating information related to network security. The WG decided to bring changes to ballot section by section.
Ryan raised a concern that changing one section could break references from other sections. Tim H. replied that the current changes do not affect any references.
There was concern about issues like the use of antivirus or virtualization, which are not properly addressed in the net-sec requirements. The WG could prioritize dealing with this before the rest of its topics.
The Vulnerability Detection section of the net-sec guidelines mentions protection against viruses and malicious software. The word "viruses" seems redundant given the use of the term "malicious". There was discussion of replacing "malicious" with "unauthorized" (and adding the "not necessarily antivirus" statement suggested by Josh Aas). Don requested that the group write down its expectations for this control so that auditors can adjust accordingly.
Neil said that you need some assurance of how security is attested on a system; there needs to be an auditable path.
4c (i) should be changed because it is not applicable. The "Critical Vulnerability" definition should also change. Ryan would like to see critical vulnerabilities audited in the next audit period. When a critical vulnerability is identified by the CA/B Forum, there should be a ballot.
Peter Bowen mentioned a problem with the definition of “secure zones”. Ben will take this discussion back to the Policy WG.
Note Taker: Ryan Sleevi
Rick gave the background context for this present discussion, which is that during the Meeting 37 in Scottsdale, there was a request for more details about the process for checking CAA records. Rick’s goal was to make sure there’s a consistent understanding, so that people wouldn’t suggest they misunderstood the RFC.
Rick provided slides that gave his understanding of the process by which each Fully Qualified Domain Name would be checked. If any of the domains fail to match the CAA policy, the CA can block issuance, potentially sending out a notice to the iodef property. Ryan pointed out that the RFC is somewhat ambiguous about handling the iodef URL. The example raised was what a CA should do when an unrecognized or unsupported scheme appears (such as ldap:// or file://), or what to do if the URL is malformed or inconsistent, such as paths in an HTTP/HTTPS URI.
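The per-FQDN check and the iodef ambiguity described above can be sketched as follows – a minimal illustration only, not any CA's actual implementation; the helper names are invented:

```python
from urllib.parse import urlparse

def caa_search_domains(fqdn):
    """Per RFC 6844, the CA looks for CAA records at the FQDN and then
    climbs the DNS tree toward the root until a record set is found,
    e.g. www.sub.example.com -> sub.example.com -> example.com -> com."""
    labels = fqdn.rstrip(".").split(".")
    return [".".join(labels[i:]) for i in range(len(labels))]

def iodef_scheme_supported(iodef_url):
    """RFC 6844 describes mailto: and web-based (http/https) iodef
    reporting; other schemes such as ldap: or file: are the ambiguous
    cases raised in the discussion."""
    return urlparse(iodef_url).scheme.lower() in ("mailto", "http", "https")

print(caa_search_domains("internal.example.com"))
# → ['internal.example.com', 'example.com', 'com']
```

A real implementation would also need actual DNS resolution, CNAME/DNAME handling, and a defined policy for lookup failures and timeouts – exactly the open points in this discussion.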
After explaining this, Rick suggested that the process for dealing with a CAA failure could be to get on the phone with the customer and have a conversation about the request, as that would allow the CA to issue the certificate even if the customer was unable to update their DNS (such as when configuration changes take time). Rick asked if there was disagreement, and Neil pointed out that in order to have a conversation about the request, it relies on some previous authorization by the domain holder, since it doesn't make sense to call up the applicant when the CAA record explicitly states the CA shouldn't be issuing. Robin saw CAA less as blocking issuance and more as treating the request as a "High Risk" request. Ryan expressed a desire for CAA to block issuance – otherwise, it doesn't really provide value to subscribers.
Rick felt that Google’s case – desiring all certificates be authorized by a specific team – may not be representative of other organizations’ desires. Dean then brought up the discussion of CAA by Netcraft, and asked for clarification about how the ‘brand’ flags Netcraft mentioned would work. Ryan explained how CAA allows for issuer-specific properties, and how an issuer like Symantec could indicate custom flags, like brand=Thawte, to further indicate how they handle and process CAA records. Rick provided some history about how Symantec developed its CAA policies – ultimately settling on the use of symantec.com as the issuer name for CAA. Neil questioned about how “symantec.com” was decided, and Ryan pointed out that, at present, nothing would prevent Neil from stating in his CP/CPS that his CA will issue certificates if he sees a CAA record for “symantec.com” – and that is one of the things a CAA policy would hopefully address.
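The record types under discussion might look like this in a zone file – an illustrative sketch only; example.com is a placeholder, and the brand parameter is the hypothetical issuer-specific flag mentioned above (RFC 6844 leaves such parameters for each issuer to define):

```
; illustrative CAA records (values are placeholders)
example.com.  IN  CAA  0 issue "symantec.com"                 ; only this issuer may issue
example.com.  IN  CAA  0 issue "symantec.com; brand=thawte"   ; hypothetical issuer-specific parameter
example.com.  IN  CAA  0 iodef "mailto:security@example.com"  ; where to report violations
```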
Mads asked for clarification about who updates the CAA record, and it was explained that the domain holder, not the CA, does. He then asked more about deployment, and whether anyone was using it, and Ryan explained that it's a chicken-and-egg problem: in the absence of policies requiring CAs to respect CAA, there's no reason for customers to publish CAA records.
Rick then circled back to whether anyone objected about making CAA failures a hard block, and Tim said he felt concerned. If there was a phase-in over 18 months to 24 months, he’d feel better. Ryan asked what steps might be taken to address Tim’s concerns to make the transition period shorter – such as six months. Chris Bailey was concerned that CAA was somewhat subjective in how a CA interprets it, and Ryan explained that the goal of a ballot would be to ensure consistent processing without the subjectivity, such as requiring the use of a domain the CA owns. Rick wasn’t supportive of a six month transition, because we’ve been making this transition incrementally for a long time.
Rick suggested that he would try to codify some aspects of the presentation into a ballot that required CAs support CAA, but to allow handling failures as exceptions to be defined. Ryan asked for details about what a reasonably objective exception handling process would look like – offering a strawman of requiring EV validation for any certs that failed CAA handling. Kirk and Bruce felt that was too onerous – why require EV validation when a DV cert failed? Ryan put forward another proposal – require contacting the domain holder based on WHOIS information. Rick said he thought it sounded reasonable, but wanted to get more feedback.
There was then a discussion, based on Symantec’s experience, of how to handle failures to resolve DNS names, such as internal server names. Chris Bailey suggested it should be OK to issue in this case, while Jeremy pointed out that was the question being posed – whether a DNS failure should be seen as OK to issue, or should block issuance. Robin mentioned similar issues with latency and availability of DNS servers causing transient failures. How long should timeouts be? Should there be retries? Andrew mentioned the case of internal server names, such that a CAA record may exist for example.com, but if the CA allowed the failure to find a CAA record for internal.example.com to be a sign it could issue, it would end up ignoring the CAA policy set.
Kirk wanted to know more about whether CAA was providing any value, and whether anyone was denying certificates on the basis of CAA. Josh Aas explained a bit about how Let's Encrypt processes CAA records, a bug they had, and also that they have denied domains because of CAA policies set. Rick had trouble thinking of examples where Symantec had, but Robin indicated he was aware of domains with CAA records set because they didn't want Comodo to issue.
At the end, Neil pointed out that Rick’s presentation failed to include the aspects of RFC 6844 regarding the handling of CNAMEs and DNAMEs, which Rick indicated he was going to update the proposal for.
There was no clear decision about CAA, or even consensus of understanding, but Rick was going to put forward a draft ballot to take the next step of requiring CAs to support CAA.
Day 2 – Thursday, May 26
Note Taker: Neil
As mentioned on the regular Forum call of May 12, 2016, Dimitris submitted an email to the list describing a method for including iPAddress SANs in compliance with the BRs, a distillation of the discussions to date.
Ben read this proposal to the group; Dean suggested that we edit this document to address any concerns.
Ryan expressed concern on the language on the guidance, wishing to change the recommendation to a requirement. Dimitris agreed.
The new document will be incorporated into the Forum website, in the resources section, entitled "Guidance for Certificates with IP Addresses".
Dean: On the last call we discussed the notices for the IPR review period. We have ballots over the last year for which we haven’t sent out an IPR notice. A couple of approaches have been suggested. One is to treat the notice of ballot results as the IPR notice. However, that might not be the cleanest method.
Robin: Was a previous capture of activity done?
Ryan: Previously Ben had been sending notices, but those were not coordinated with the passage of the requirement, and according to the IPR requirements, the Guideline is not final until the notice has been sent and 30 days have passed. If we take the view that we need to have been sending IPR notices separately, then that would mean that the last version of the Guidelines is whatever we had two years ago.
Ben: Agreed with this – that if a notice hasn’t been sent out, it’s not a final guideline, but with the idea that it can be cured by sending out a catch-up notice. However, the best practice would be to send the notice with the ballot so that it says the guideline is final 30 days after the ballot is passed. In the past, I would look at all of the ballots and group BR and EV guidelines and provide notice separately that way for BR and EV.
Dean: Ryan what did you mean when you previously said that it would be problematic because it would affect current audits?
Ryan: So if you are trying to catch up, that means that none of the ballots were in effect. What does that mean for audits that were done during this period? What does that mean for audits that begin during the notification period? If we take this approach, then we need to provide guidance on what that means. If we do a catch-up ballot, then the last version of a final guideline was version 1.2 in 2014. That means that there was no 1.3, etc. So, can an auditor claim adherence to those subsequent versions? And if in their CPSs the CAs say that they adhere to the latest version of the CABF guidelines, then what if you take the view that those were not in effect? The CA would argue that its non-compliance was OK. Then is it a qualifying audit?
Ben: One approach is to say that we have two different worlds here – one for CA compliance purposes (our requirements and what is expected) and the other for IPR Policy compliance purposes. With that approach, the requirement was final for compliance purposes in accordance with the ballot, but not final for purposes of IPR rights. It's final in the IPR world when we complete the IPR rights notice and review procedure.
Ryan: I think that is problematic because let's say you were required to do something that is encumbered. If a CA with IPR was exploiting this and all CAs were complying with the requirement, that creates a problem. That's why we have the IPR Policy – to make sure that people don't implement stuff until we get the air cleared and the disclosures are out there. While this may be a worst case scenario, this is the risk environment, and most of this risk is borne by the CAs in the room.
Dean: If we were to take the first proposal, which is to take the ballot as the notice, then we need to provide more clarity.
Ryan: Right, and if we look at the document lifecycle process work flow, and how much of it we follow, and where it talks about the adoption ballot and the final ballot, maybe that means we should actually be doing two ballots, but we should look at it to see how it fits with the draft guideline and final guideline and the IPR Policy.
Dean: I like the first approach if people can agree on that, maybe we ballot that suggestion. It’s cleaner. Instead of doing a catch-up ballot, which has all kinds of implications, and then use the ballot results notice as the IPR trigger in the future.
Jeremy: There is a risk with that if someone disagrees with the first approach being sufficient notice for past ballots.
Ben: Another thing is that a member is estopped from bringing an IPR claim under these circumstances because they sat by, they had signed the IPR agreement and knew what the IPR policy said and knew what their disclosure obligations were and knew that other members would implement the requirements.
Kirk: As to the auditors and the browsers, we don’t have control over what auditors put in their audit criteria or what browsers put in their requirements either, so I do not see a problem with the audits out there that have already been done. We were audited to the stated requirements and browsers have seemed to accept it. The simplest way to handle this is to send out the notice and submit your disclosures and when we’re done we’re caught up, and now all of the rules are finally permanent. And there aren’t questions of whether we were audited to the right standards because that is a decision for the auditors and browsers to make.
Ryan: We do need to do the catch-up and then explain how this works going forward. We need to send out the ballot results and that starts the 30 days. We need to send an integrated draft of the guideline, and that starts the 30 days, and let’s clarify that for the future, and when you do that you should provide some degree of guidance and explain what happens as a result, and we need to, as a Forum, have an agreement of what it means to take that route, so that there is no ambiguity.
Robin: Is the problem that we’re in an undefined situation at the moment? We are not clear, until we state whether we have complied with the IPR Policy. As soon as we crystallize our opinion about the IPR procedure, it either is complete, and there is that risk that Jeremy pointed out, or it is not complete, and there is a risk for that, meaning we have to complete it. So, we have no other alternative but to complete the process.
Dean: That is clear, but what isn’t clear is, what happens to the requirements that have already been approved and implemented in the audit?
Ryan: Not just implemented in the audit, but let’s say that there is a requirement for dual party control in version 1.17, and that’s what is audited, and they have complied with what the auditor thinks is the requirement. As we discussed in Scottsdale, when is a CA expected to comply with a ballot, and what if, during the process of an audit they find something not considered to be compliant? Let’s say it doesn’t meet the audit criteria, but it meets the CPS, and the CPS says, “we follow the latest version of the Baseline Requirements, which is version 1.15”, so they find a violation, but this violation is not a violation of our requirements. I am supportive of making this as clear as possible with the catch-up, but if we go that route, what guidance can we provide if a CA has not been following these? Are we saying that browsers expected CAs to follow them? In the catchup ballot we should provide what the state should look like.
Ben: If we do the catchup ballot, what if I write up an explanation of our expectations? That we were expecting compliance in the interim?
Ryan: Yes. The IPR Agreement is essentially a contract among member companies and the Forum is not an entity so it cannot decide that interpretation, so the best that we can do is create a commonly held expression / commercially reasonable interpretation of intent.
Kirk: It is not going to be legally enforceable, so why should we put it in there? It is enforceable by auditors and browsers, but it’s not otherwise enforceable.
Jeremy: That’s the point of having a contract.
Kirk: So you’re saying a ballot and a contract?
Kirk: I agree, even though I doubt anyone is going to use this loophole. I don’t see what good it might do other than to embarrass someone.
Ryan: There are two parts to this. One is the contract that Jeremy and Jody have referred to, and having some sort of signature addresses the legal aspect of this. And then there are more than just the audit criteria. There are statements in the CP and CPS about compliance with the most recent version of the guidelines. We need to provide guidance to the auditors and to the general community about what those types of statements mean. Take for instance CAA, which passed during this period. If someone doesn’t say what they are doing about CAA in their CPS, are they doing the right thing or the wrong thing? The common understanding is that you were doing the wrong thing because we passed this a while ago and there is a common understanding that this is what you should have been doing. Being clear about this isn’t about the legal interpretation. It is not an explicit auditable item that a WebTrust auditor is going to audit on, but if during a review of the CPS, in the auditor’s discretion, we want it to be clear what the expectation is. We don’t want any ambiguity on this potential loophole. The goal is simply to provide clarification. I wasn’t looking at this from a legal perspective, but only to provide guidance in the audit sphere.
Iñigo: The standards bodies don’t change their criteria every time the CA/B Forum changes a requirement.
Dean: But a browser can come to you and say, “you say you are following the latest version …”
Iñigo: But the auditors are not going to look at what the state is at the moment. So auditors and standards bodies take the document, let’s say, and freeze it on October 1, and apply all of the changes that have been made since the last time. They make a list and then make all of the changes. Then they publish the new version of the standard for auditors to implement. So there are many changes that are not yet in the audit criteria.
Dean: So Ben has offered to draft something up.
Kirk: And I would expect to see that, if others see a value in putting that language together.
Ryan: It’s not enforceable from an IP perspective, I agree with you there, without a signed document. My desire and goals are to provide guidance for expectations from an audit perspective.
Dean: I’ll draft a list of ballots that are applicable here.
Ryan: That catches us up, but how do you want it to look like in the future?
Dean: We’ll put IPR notice language at the bottom of the results.
Ryan: Just to be clear, we’ll announce the results and provide the draft that includes the integrated language, because that is required by the IPR Policy, and give notice that this begins the 30-day notice in which to state exclusions.
Dean: That might be problematic because the bylaws state we are supposed to announce the results within one calendar day of the close of the ballot, and having the draft ready may not be feasible.
Ben: As we’ve said in the past, we should prepare a redlined version to accompany each ballot, and we can also reference the GitHub, if we have a process where people submitting ballots will do a redline version, that might satisfy IPR Policy requirements.
Ryan: It would be good to get proponents of ballots to put them up on GitHub. They could be put up in the week during the review period. When the ballot passes, then we could “commit” the change and generate a PDF to accompany the results email. The IPR policy says the Chair must circulate a “complete” draft of the draft Forum Guideline.
Dean: The goal would be to have one notice.
Note Taker: Robin Alden
Issuance of server certificates with SHA-1 signatures for legacy applications.
Dean: I was contacted by payment processors having problems migrating from SHA-1 to SHA-2.
Many were able to solve their issues by chaining to roots that various CAs had removed from browser trust stores.
The problem is that older payment terminals don’t work with SHA-2.
The scope of the problem is large. Some global statistics:
- 10.25 billion debit and credit cards
- 24 million card-accepting merchants
- 50 million point-of-sale terminals
The association that represents the payment processors polled its members, who estimated that about 5% of all transactions will fail because of the SHA-2 problem – a $750 billion loss, roughly half of Spain's GDP.
A lot of those were solved with the removed root solution, but several payment processors have a deadline in the July/August timescale.
There are many terminals affected, and they cannot all be migrated to SHA-2 in time. Many are being updated, but it takes time, and some terminals are not used every day. For example, a gas station will probably have a terminal that works OK, while a terminal used for an auction at a university or school may not.
Technical details – we are familiar with the technical details of the issue.
Discussed on the list: browsers wanted more information on each specific payment processor. Peter Bowen had suggested that the replacement certificate reuse the public key already certified in the certificate it replaces, which mitigates some of the threat from issuing a new SHA-1 certificate.
Could we make a change whereby we allow those types of certificates to be issued for payment processors – where browsers are not involved? These are used to secure transactions between payment terminals and servers.
I think that may address the browsers' concerns, for a short period of time. We would make it increasingly difficult for payment processors to renew these certificates so it doesn't go beyond 2017.
Jody: 2020 is when the last operating system ends support for SHA-1.
Ryan: Java?
Jody: We don't want it to go to 2020. I hear Ryan's proposal that we want friction in the renewal process. I agree.
Ryan: How do we balance the cost?
Jody: Dean is saying add the friction; reissuing for a year increases friction and puts the onus on CAs to track this. What kind of pressure is the CA putting on Worldpay? We are sympathetic, but we want to put the onus onto the payment provider to upgrade. We are willing to help, but the payment processors need to help themselves and show good faith. Payment processors have a plan to get this done in the next 12 months, right? We don't want to be here again next year. I've heard they are willing to step up and do what it takes.
Dean: They have no choice. They feel the sense of urgency. They are doing as much as they can.
Andrew: How many certificates are we talking about?
Dean: I'm guessing a couple per processor. Perhaps 25 to 50 overall.
Andrew: That is a human-manageable number. I'm a bit wary of creating an automatic system (e.g. signing an existing key) which could generate a large number of such certificates. For fewer than 50 we could make a cost-benefit analysis per certificate. Some cases will be good, a few may not be good.
Jody: One certificate affecting tens of thousands of terminals – sure. I don't think we're going to say no. I'm sceptical we're going to get to a point of saying no. Having more data is not a reason to make a different decision; it won't get us to a different place. We are talking about core business processes that are going to break.
Ryan: I know I'm painting a target on my back, but I have a problem with trust. Yesterday I was dealing with miscommunication issues from certain CAs. I think we need public friction on it.
Jody: Public disclosures are problematic because they disclose security vulnerabilities. If we're worried about accountability, let's have the CAs report to the four companies that are impacted.
Ryan: We don't believe that is OK. We want to see public cryptanalysis done on the data to be signed, such as by Marc Stevens. There's another issue: this is not just a payment industry problem – there are also cable boxes in Japan. Are they sensitive too? Where does the boundary exist?
There does need to be a public assurance of something going on.
Jody : I hear you. The Payment Industry Council could make a public declaration of the problem. Having WorldPay make a disclosure (and another and another) probably isn’t helpful.
What is the underlying problem and vulnerability?
Ryan: It's more than that. Google is here in the Forum. Google and Microsoft can quickly respond to threats. Payment terminals (and Android) cannot respond quickly. If things go wrong, it's harder to mitigate on those systems.
Andrew: If you have the exact bitstream of what is to be signed, research suggests that it's possible to evaluate whether it is likely to be an attempt to generate a collision.
Jody: Do you think there is something unique to each SHA-1 certificate?
Ryan: Yes – researcher Marc Stevens (who analyzed Flame and the rogue MD5 CA certificate) has examined the publicly known MD5 signatures, half of which were new signatures. SHA-1 has the same Merkle–Damgård structure as MD5, so the attacks are anticipated to be similar. Likely targets of things to be signed in an attack have been identified.
Jody: so Dean could submit 50 certificates for analysis?
Ryan: Yes – submit the tbsCertificate and the hash algorithm to be used; you can run that through a counter-cryptanalysis and measure whether it's likely to be compromised. Maybe a suggestion comes back to change the date field.
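The artifact under discussion can be illustrated with a short sketch. The byte string standing in for a DER-encoded tbsCertificate is made up; a real submission would be the exact bitstream the CA intends to sign:

```python
import hashlib

# Made-up stand-in for a DER-encoded tbsCertificate; a real submission
# would be the exact bytes the CA intends to sign.
tbs_certificate = b"\x30\x82\x01\x0a" + b"example-tbs-contents"

# The CA's signature covers hash(tbsCertificate); for the legacy
# payment certificates discussed here, the hash algorithm is SHA-1.
digest_to_sign = hashlib.sha1(tbs_certificate).digest()
print(len(digest_to_sign))  # SHA-1 digests are 20 bytes

# Any change to the bitstream (e.g. adjusting the date field, as Ryan
# suggests) produces a different digest, so the counter-cryptanalysis
# must run over the final bytes to be signed.
adjusted = tbs_certificate.replace(b"example", b"adjusted")
assert hashlib.sha1(adjusted).digest() != digest_to_sign
```

This is why the review has to happen on the exact bitstream before signing: the analysis is only meaningful for the bytes the signature will actually cover.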
Jody: Who decides?
Ryan: Marc's counter-cryptanalysis algorithm and library already exist. Dean wants to reissue: put the certificates out for a public review of one or two weeks, and people will analyze them and identify possible threats.
Jody: When 15 out of 50 comes back, what then?
Ryan: Minimally, browsers choose. If 15 out of 50 come back indicating a likely collision, then something isn’t right and maybe they all need changing.
Jody: Dean, are you OK with that?
Dean: why don’t you just choose for me?
Ryan: Google doesn't want to be the sole arbiter.
Question: What is it that needs to be submitted? Andrew: It needs to be exactly the bitstream to be signed, e.g. the tbsCertificate.
Rick: The ‘start date’ is set at signing, not sooner.
Jody: It seems pretty reasonable to me. There are only a small number of these certificates.
Doug: It might not be easy to have your CA sign an arbitrary piece of data. It might be hard.
Jody: It's only a 25-to-50-certificate issue. We're trying to solve a narrow problem. It's hard. That's OK.
Rick: Anticipating a two-week review – start date plus two weeks. If all goes well, that's what we'll sign. The signing time may no longer correspond to the start date in the certificate.
Ryan and Jody: (no objection)
Jody: If all we're suggesting is that we do the cryptanalysis, that sounds reasonable.
Ryan: We talked with Marc Stevens; there is some new information. Andrew: If you have an existing key that's five years old, the chance of collision blocks being present is pretty slim – even better if we know it was generated five years ago. The CA should declare: here's the key, and here are the reasons it's really important to issue this certificate; do the cryptanalysis, then issue. That sounds most likely to reduce risk as much as we reasonably can.
Ryan: With my evil hat on – let's say I'm a government who has subverted a CA. How do we know this key was pre-existing, and not just that the CA was subverted to claim the certificate was five years old? Even if it's in CT, how do you know the CT logs weren't subverted? My assumption is that these certificates are unlikely to pre-exist in CT logs.
Rick: My understanding is that these certificates would have been picked up by a crawler.
Ryan: No, we crawl, we don't scan. Someone else might have fed it in. We want to find a solution that doesn't rely on that. Even a pre-existing key falls back to "trust us, we're doing OK". Nation-state adversaries have weaponizable stuff.
Even a new key may be OK with cryptanalysis.
Jody: who gets to say yes / no?
Andrew: One way is this: I assume any CA that issues a SHA-1 certificate (from a trusted root) would get a qualified audit. If those who rely on audits know about the certificates, they could accept the qualifications.
Jody: Let's say the counter-cryptanalysis shows a problem, but for whatever reason the CA decides to still issue the certificate – who decides?
Ryan: Same as today. If a CA issues SHA-1 today, they take the consequences.
Jody: If the cryptanalysis comes back with a problem, who states no-go? Who says yes or no?
Andrew: It would be down to each individual browser to decide whether to block a CA as a result of issuance. We have a recent example of an X.509 version 1 certificate being issued; we put it on a CRL instead of blocking the CA. Same here – no hard-and-fast rule; the browser (trust store) decides.
Ryan: Like with WorldPay, browsers are going to respond. But this Forum is the CA/Browser Forum. What is Android going to do? What is OpenSSL going to do? The recommendation would be that the CA doesn't sign it.
Jody: we’re trying to examine the worst cases now in advance of coming across it.
Dean: The tbsCertificate includes public key, domain, issuer, etc.? A: Yes.
Jody: We're not concerned about the vulnerability in its use. We are looking at the issuing process. Andrew: (agrees)
Jody: I would like shorter review.
Dean: We're talking about certificates issued last year and the year before. Not many, not much before that, expiring July/August 2016, needing reissuance for a year.
Jody: This isn't as onerous. To recap: submit the tbsCertificate, do the cryptanalysis, scores come back, decide to issue.
Dean: How does submission for cryptanalysis work?
Andrew & Ryan: Email to email@example.com? It has to be public. If it wasn't a public process we would be saying trust (e.g.) Google. We don't want this solution to depend on everyone trusting Google.
Dean: Are the right people monitoring the mail-list to perform the work?
Andrew: It would be down to each browser to say y/n. Either the browser can make sure the right people are looking or the CA could do the same, the CA might pay suitably qualified people to do it.
Ryan: Affirmative or negative counter-cryptanalysis. There's got to be enough time to run the counter-cryptanalysis.
Andrew: On the openness side, I would be interested in seeing more than just the certificate, primarily to measure the cost-benefit. Is this in support of 10,000 terminals, or 1 terminal? Having a more formal process allows the story to be that browsers have put forward a mitigation strategy and the payment industry used an established process, rather than 'Mozilla caves'.
Ryan: If we imagine this world (above), what happens if it's 100,000 certs? We want to find the right balance. A CA said key ceremony. That may work for a week or so, but if it runs for months it will get automated.
Dean: Payment industry, cable boxes – are we opening the door for Cloudflare or Chinese sites?
Ryan: Do we clarify this is just for the payment industry, not cable boxes? It's very unlikely there will be any failures of the counter-cryptanalysis scoring process – unless there is a nation-state adversary at play.
Andrew: Vanishingly small.
Dean: Summary: it would be possible for a CA to issue a SHA-1 certificate for the payment industry by doing the following: create a tbsCertificate, send it for counter-cryptanalysis for 1 or 2 weeks (TBD), receive back the info. The CA decides whether or not to issue, and gets a qualified audit next time around.
Ryan: Keep the CCA analysis, record and present to browsers to show the risk is effectively mitigated, that the risk was astonishingly low.
Dean: So we get a qualified audit. My boss will ask: is that OK? Jody: It's not, but you don't have a choice if you're issuing in violation of the BRs.
Ben: If you disclose to your auditor up front then the auditor may regard it as not worthy of a qualification.
Jody: We face the chance of getting a qualification for an audit of a Microsoft CA next time around. We face that.
Ryan: You have to say 'here's what we did in contravention of the BRs', whether that's in the management letter or the audit letter – we don't care.
Dean: Why don’t we ballot to make this a non-qualified audit? Jody: That stops this being a painful process. It reduces friction. We don’t like it.
Kirk: ballot could say, for next 6 months you can do that, but not afterwards. It would be self-extinguishing.
Andrew: That brings in the 'who decides' problem. If all but one say it's OK, but one browser says it's too risky...
Ryan: Part of the challenge is that if we start to normalize the process, the danger increases. What if a nation-state adversary blocks a CA's submission to the mail list so no objection is received, and then the CA issues? Is that OK? No.
Dean: JC – are you in agreement with what you’ve heard? JC: Generally, yes. We don’t want this to be easy.
Andrew: What happens with Cloudflare, with lots of domains? Suggest that a CABF CA propose, not a subscriber. The CA makes the first 'is it important enough' decision.
Dean: The suggestion was: here's the use case, here's the tbsCertificate, get the counter-cryptanalysis verdict, record it and submit it with the audit.
Jody: yes, you’ll take the hit for WorldPay, not for Moudrick’s rug shop.
Dean: We’ll take this back to the Payment Processors, thanks.
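The submission step in Dean's summary hinges on sending just the tbsCertificate, which is the first element of the outer Certificate SEQUENCE (per RFC 5280), so it can be carved out of the DER with a minimal ASN.1 walk. A rough Python sketch, assuming a well-formed DER input with single-byte tags (function names are illustrative, not any established API):

```python
def der_header(buf, offset):
    """Parse a DER tag and length at `offset`; return (content_start, content_end)."""
    offset += 1                       # skip the tag byte (assumes single-byte tags)
    length = buf[offset]
    offset += 1
    if length & 0x80:                 # long form: low 7 bits = number of length octets
        n = length & 0x7F
        length = int.from_bytes(buf[offset:offset + n], "big")
        offset += n
    return offset, offset + length

def extract_tbs(cert_der):
    """Return the raw tbsCertificate bytes, header included.

    Certificate ::= SEQUENCE { tbsCertificate, signatureAlgorithm, signatureValue },
    so the tbsCertificate is the first TLV inside the outer SEQUENCE.
    """
    inner_start, _ = der_header(cert_der, 0)        # step inside the outer SEQUENCE
    _, tbs_end = der_header(cert_der, inner_start)  # span of the first child TLV
    return cert_der[inner_start:tbs_end]
```

In practice a CA would use a real ASN.1 library, but the point is that the bytes under discussion are exactly what the CA will later sign, so the counter-cryptanalysis covers the final certificate contents.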
== Prepare for Fall Elections == Note Taker: Moudrick
Dean: The Bylaws say that the chair and vice-chair serve a two-year term. The last ballot for me as chair closed on October 15; however, the one prior to that closed on October 22. So my point is that we have to prepare for the fall meeting, which is October 18-20.
Kirk: I think your term goes to October 22.
Dean: Just reminding everybody about the elections. Again, it's a two-year term, and 60 days prior to the expiration of the current chair's term we'll announce to the mailing list that nominations are open for the offices of chair and vice-chair. We'll probably have an automatic nomination for the chair position; the Bylaws also allow other candidates, but I don't recommend that anyone with no experience as chair or vice-chair be nominated. The Bylaws say the nomination period for chair will last at least 1 week and no longer than 4 weeks, and then nominations for vice-chair will be open for at least 1 week and no longer than 4 weeks.
If only 1 person is nominated for a spot, we'll vote just to confirm that person. If more than 1 person is nominated, we'll have an election ballot. The votes will go not to the mailing list but to our two independent ratifiers, the WebTrust and ETSI representatives. Those two will compare the election results and announce the outcome.
Kirk: If you want to nominate someone else, please check with them first and communicate that to Dean. I don't mind if some people strongly want to send it to the management list; that's fine. But if we choose to send it to Dean, then Dean will post to the list that person X is nominated.
Dean: Start thinking about who you are going to nominate as chair or vice-chair; we are 3 months away. I'm not eligible to run in this election. Right after the nomination for chair is closed, we'll open the nomination for vice-chair, and two weeks seems perfectly reasonable.
Note Taker: Rick
Mads presented these slides: Role-of-Identity-in-TLS-certificates
He began by asking who should be represented in the Organization field of a TLS certificate? Clarification may be needed. In previous discussions, arguments were made to include many different entities.
Mads summarized Ryan’s previous comments from November 2015:
- Recognize we don’t have consensus yet for what the O field should present as
- Recognize that the Validation Working Group proposals provide many wonderful security benefits, and we shouldn't let them get hung up on resolving item 1
- Take a pass at the BRs, in their entirety, to find places where the language may be inconsistent with respect to the (unresolved) status quo, and update that language to reflect the present reality
- Longer term, if this is a topic members are passionate about, which I think we have evidence that some CAs are, work to build consensus as to those goals
So who should be represented in the O field? There are many possibilities:
- None – Mads quoted Ryan as having said “identity is not important”; Ryan said that what he said was “identity is not important for browsers”
- A (well-)defined set of entities satisfying some requirements
- All entities that are allowed according to the current BR/EVG:
- Kirk, content author and logical operator of kirk.example.com
- Example.com, provider of hosting services
- CDN Corp, a CDN that provides SSL/TLS front end services
- Marketing Inc, the firm responsible for designing and maintaining the website on behalf of Kirk
- Payments LLC, the payment processing firm responsible for handling orders and financial details (this is not allowed by current requirements)
- DNS Org, the company that operates the DNS services on behalf of Kirk
- Mail Corp, the organization that handles the MX records that point to the mail server for kirk.example.com
- Other (other entities authorized by domain contact to use the domain)
In Phoenix, we agreed on these possibilities:
- Ownership: The Applicant is the owner of the Domain, the Domain Registrant
- Control: The Applicant controls the Domain
- Authorization: The Applicant is authorized to use the Domain.
A robust discussion ensued.
Ryan asked how CAs would know who has control, like in the CDN case? The Internet’s architecture doesn’t lend itself to knowing some of the details. Robin said that there’s ownership, control, and authorization. When the CA has been asked to issue a cert with details in it, which method does it use? Should there be a ‘title’ to a domain so that Kirk can delegate control over the domain to CDN but keep the title to himself?
Kirk noted that he can give Dean the ability to sign a mortgage doc on his behalf. Dean signs as himself, on behalf of Kirk.
Ryan asked if the ‘agency’ in the digital space is associated with control over the content, DNS record, or private key? He said that the most difficult part is determining what do we want to represent to relying parties. If we talk about web pages, say a shopping page, you don’t just have one cert. The site kirk.example.com might have Google analytics which is represented by its own cert; some content could be served by a CDN. There’s a legal parallel – signing a doc with a bunch of blank pages in between, with the understanding that other parties will fill those in. How would we present that set of docs to the user? What do we want certificates to represent? We have to consider how the web works. Code signing is an example where in one model, you might want all resources in the manifest to be signed. If you serve many resources from many places, what do you represent to the relying party?
Doug asked why payment providers are not allowed to be represented in the cert. Ryan replied that they can’t demonstrate control or authorization.
The next steps could be to
- Decide who should be in the O field
- Define different categories of entities (domain owner, content owner, etc.)
- Define acceptable methods for verification for each category (e.g., by ownership, by control using method A, B or C)
Moudrick said that he’d like to see a generic solution for how to represent CABF-specific categories. We’re limited by old notation. Can we add a generic extension structure, take traditional notation but explain details? Like a QC statement.
Ryan said that there’s a critical first step to consider if we’re going to add new information to the O field – what do you think will happen when it’s there, how will it be used? How will it influence or alter behavior? Until you know how it will be used, it doesn’t make sense to think about other parts.
Cisco: who is concerned about this? The answer was: we are. He asked if users are really concerned about this? It’s possible that the average user doesn’t care. Is the info available elsewhere? Binding too much info into cert becomes a burden. Adoption rates might go down. We could try to use a level of indirection so the cert can stay the same when other aspects change, for example when some content is moved to a CDN.
Ryan said that if you think users care about it, what do you expect users to do when they view the info in the cert? How would we communicate it to the user? Who is the important party in the UI?
Robin said that such info can’t be used by a Google browser because the Google browser doesn’t show it, but others might. Ryan said that we need to understand the challenges browsers have. You have this field in the cert, but you have 20 different sources of info needed to show the page each with their own cert. How will you show that all to the user?
Tim said that Ryan’s argument is that the user is unsophisticated and they can’t understand this. He added that it’s true there are many parties involved; we’ll never get users to understand CDNs, etc. I think the most important thing is that the DNS is just a short list of strings; it doesn’t represent much info. The most important task is to provide a link to a top level entity.
Ryan asked if we should tie all this info to the domain? He agreed that users care about the identity of the domain, but how do they use that?
Tim said that Bank of America uses both bankofamerica.com and bankofamerica1.com. Who controls the domain is of interest to the user. He noticed that Bank of America sends out emails with links to bankofamerica1.com, and they look like phishing emails.
Ryan said that the user should check the domain binding before clicking on the link. We don't want it to be "game over". Do we want browsers to check if this is the same identity as last month? If the browser doesn't do it, the user would have to keep looking at the url bar to see if it changes over time as she clicks links. No one will want to do that.
Ben asked: if CAs are vetting all this detailed information for ourselves and not for end users, shouldn't we leave things as they are now? We don't want CAs to put any unconfirmed info into a cert. If someone does want to dig into more detail, they have to read the guidelines. I don't know if going in this direction is helpful.
Ryan said that we've been talking about web browsers, but this info might be useful in server-to-server mutual authentication. The utility of the info depends on the protocol. I'm not dismissing that there may be value in putting detail in certs for non-browser uses. If that's the goal, great; but if the goal is for browsers, we must figure out what users want and how we get there.
Cisco: who cares about this info? Robin said that we’re sure we can strike out some things that they don’t care about. Cisco rep asked how many users look at who issued the cert? He does sometimes, but he admits that he’s a geek. Ryan noted that CAs disclaim liability if the relying party doesn’t look at their CPS.
Cisco: what about the auditing firm? Should they be in the certificate too? Kirk said that there was an expectation that the website owner owned the domain, but the world has changed. Who is going to use this info? Like CAA, until you build it, you don't know who will use it or how it will be used. Moudrick suggested a set of OIDs to represent all this info. That might not be hard to implement, but it wouldn't affect browsers. Ryan said that the goal of CAA was to prevent misissuance. We started with a problem statement, then figured out a technological fix. For this, let's start with a problem statement. He also disagreed that browsers don't care about this. Adding more info into a certificate and making it bigger can have negative consequences for the TLS congestion window, etc. He said that some users in India may take 30 seconds to load a page. He expressed his opinion that this would do more harm than good. He asked again what problem we are trying to solve.
Chris mentioned that in Ivan’s presentation, he said that no one cared about security until 2008. What should users care about? Ryan agreed and asked what do we want users to care about? Moudrick said that users want at least a single contact point. Ryan asked for what?
Cisco: users care who they are connected to. Is it amazon.com? They don’t care about CDNs. When he looks at the details in the cert, he sees the info that the CA validated when they issued the cert. He might see intermediaries, but he doesn’t care as long as they’re operating on behalf of amazon.com.
Ryan said he's not disagreeing, but why? He said that the reason he cares about BofA is that he doesn't want his BofA password to go to someone else. He doesn't want to buy something that he can't return, but how does that influence his behavior? Back to BofA: are certs the answer? Or do I use a password manager to ensure I don't send my BofA password to someone else?
Cisco: my browser can't really know that I meant to connect to amazon.com. It doesn't know what entity the user had in mind. An unsophisticated user wants the browser to throw errors if they're not at bankofamerica.com, but browsers don't know the user's intent. Dean asked if Mozilla or other browsers had an opinion. None had at this time.
Richard said let’s get away from the technical detail – as a visitor I care about whether the website is authentic; is it secure to send my credit card info to. Only EV SSL can guarantee this. Users don’t have to click to look at the cert – EV shows it in the browser chrome. Jody countered that real users only look for the lock icon.
Tyler said he tried to make his wife look at the browser bar, but she doesn't care. She looks for green or a lock. Jody said that we don't understand the problem we're trying to solve. Ryan agreed, and said that until we understand the problem, we shouldn't work on a solution. Maybe certs are the answer, maybe not. Jody said that this is a solution in search of a problem.
Richard said that he cares about this, but Jody countered that you care because you're educated. Most users don't care. What's the problem you're trying to solve? Several folks chimed in to say: identity and education. Jody said that we can solve reputation, but Ryan asked if we want browsers to show reputation along with a url? Jody said that reputation is not based on the O field. As an industry, we need to get users to care more about identity. Richard said, "I guess I'm representing a CA position. Many US Government certs use DV; that's unacceptable."
JC said he can’t represent Mozilla, but he can represent the security team at Mozilla. They think of SSL identity tied to integrity and confidentiality. Stronger forms of validation make things more difficult. We don’t want to have only strong identity when dealing with crypto; we want a gradient so we can protect everyone. Dean asked what good is encryption without identity? (Richard’s position) Kirk said that browsers never displayed identity until EV came along. What Mads is talking about is can we improve the identity info in the cert. Ryan asked why should they care about the detail?
Jody said that the average user cares about identity, but they don't know how to tell. The detail is available, but based on what I've seen, no real people care.
Dean said that we can show you some studies that say otherwise. Jody said that we need to educate people on how to assess security of a site. Ryan said education was good but even if users are educated, what are we asking them to do? Click the lock icon every time? Jody said that the solution is reputation. Kirk asked why browsers are making users click each time to see this info?
Josh said that there's a fundamental disconnect. Users do care if they're talking to United Airlines or (not United Airlines). They don't care if they're talking to a CDN or whoever operating on behalf of United. I won't behave differently if I'm talking to Akamai on behalf of United. Kirk asked whether users would care if they saw they were talking to CloudFlare instead of United. Josh said that if CloudFlare was presented, users would think it was a phishing site.
Jody said that Microsoft is investing in reputation, not in browser UI. He said that they’d rather use SmartScreen.
Ryan agreed with Kirk and Josh; whatever business transaction I’m doing, I want to know I’m dealing with United Airlines and not someone else. Let’s say we had all this detail in the cert. It’s not good enough to look at one cert. We don’t have parallels in the digital world to the physical world. Andrew said that we don’t want to guarantee that you’re talking to only one entity.
Kirk said that it took him six clicks to find out the O info for United Airlines. He asked why he has to click so much. Ryan said that showing the info solves what problem? Who am I dealing with? On one page or every page? Tyler said that once he gets into the site, he's no longer looking at the cert as he clicks from page to page. Dimitris said that if I'm a legitimate company, I'll make sure you stay with me from click to click. Ryan noted that Kirk said that if the cert changes, users will care. Should we get rid of the url bar and just show the cert info? Mobile Safari does that. The goal is that users will notice when info changes. Why would they notice? They want to be sure that the entire transaction is with United Airlines. Kirk said that he's not trying to change behavior, just give users more info so they might make a better decision. Ryan said that California's law mandating public display of the use of chemicals known to be carcinogenic doesn't change behavior.
Tyler noted that sometimes users call GoDaddy because that’s who they saw in the cert info. JC asked if we’re looking for the notion of who the user is talking to, isn’t that the URL bar? Dean said that if you go to a search engine and click on some result and go to some url you’ve never heard of, what do you do? You want reputation info. Jeremy asked why can’t the cert carry or point to reputation info? Dean said that would be defense in depth. He asked: don’t all these things combine to help users? Jody said no, because users aren’t sophisticated. Ryan said that studies show that more info doesn’t lead to better choices. Users stop caring.
Dean proposed that search engines could make the decision. EV info could contribute to making that decision. Ryan asked if we want search engines to show meaningful info about the entity behind the link, are certs the best way to do that? Josh said that if you want to add extra info to the O field, it may be very useful in other contexts like code signing, but not the web.
Kirk said that we're talking about whether we should normalize info. Mads said that we have to do additional vetting; we want to understand what we should care about. That's why I started the discussion. Kirk suggested that the Validation Working Group should keep thinking about this. Browsers could ignore it. Andrew asked how to formulate a research question about this. In the UK, there's an advice bureau. Things get interesting in the failure cases.
Kirk said that you can’t just rely on reputation. Ryan said: say you walk into Walmart and you’re greeted by a greeter who gives you a list that shows who the building cleaning staff is, how they were vetted, who runs security, etc. What problem does that solve? We can throw info at users, but how is it meaningful? Kirk said that if he visits the Sunglass Shack and he sees that it is in Ukraine, he might care about that.
Chris said that to pay a tax bill, he had to go to a url that looked like a phishing site. He looked at the cert but it was a DV cert. He went to Google to see if there was some info there. He spent several minutes looking, but if he had seen EV info he would have been assured. He doesn’t want reputation; he just wants to know if he was at the right place.
Ryan said that this is useful as a starting point. How do we expect people to behave? Chris said that a lot of research that’s been done has been based on users. What should users look for? Inconsistency of the UI has been a problem. If you had a simple, less hidden UI that would help.
Ryan said that's the method, not the goal. Chris said that it would have made his life easier. Ryan asked if that's the best way to achieve it. Before we shove more info at the user, we need to understand the goal. Neil said that a well-established identity feeds into reputation. Narrowing the info down could feed into a reputation engine. Kirk reiterated his suggestion to have the Validation Working Group continue to work on this.
Note Taker: Alex
Went over the history and where we are today. Went over the straw poll results. Potential to have CAs police the use of EV* on sites where user content can be uploaded.
Solicited opinions from the members who haven’t chimed in yet.
WoSign supports wildcard. Cisco's opinion is that we are against it.
SSC (Moudrick) said Against
No comments from the remaining members
Note Taker: Alex
Proposal: the notBefore value must be within X hours of signing time. 2-5 days is about as far off as clocks seem to be.
Ryan: 30 days was yelled out – Adrian's slides mention this. How would we shorten the delta over time?
Chris: We create certs real-time
Robin: they truncate back to the start of the current day.
- Potential to take to ballot
Symantec: Was the question posed because people backdate to get past BR rules?
Andrew: no, we can’t really put any policy around notBefore date. If we had a guarantee that auditors are checking up on such a rule, then we could use the notBefore date in a more accurate way
Ryan: It avoids the need for a whitelist and provides a greater level of assurance than currently exists.
Dean: Is it auditable?
Ryan: yeah, comparing issuance logs with cert notBefore. Public CT allows detection because of timestamp on log and notBefore in cert.
Andrew: no requirement that someone log to CT log within X time of issuance. Would be tricky to codify in policy. Could use CT to raise awareness (eyebrows).
Ben: Only end-entity certs?
Ryan: Yeah. Ballot would specifically address intermediate CAs.
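The audit check Ryan describes – comparing issuance logs (or CT log timestamps) against each certificate's notBefore – could be sketched as follows. The 48-hour tolerance is illustrative only, picked from the 2-5 days of clock skew mentioned above; the actual X would be set by ballot:

```python
from datetime import datetime, timedelta, timezone

# Illustrative tolerance; the real X would come from the ballot.
MAX_BACKDATE = timedelta(hours=48)

def notbefore_ok(not_before, logged_issuance, max_delta=MAX_BACKDATE):
    """True if the cert's notBefore is no more than max_delta before the
    issuance time recorded in the CA's issuance log (or a CT log's SCT)."""
    return logged_issuance - not_before <= max_delta
```

An auditor, or a CT monitor using SCT timestamps as the issuance time, could run this over every issued certificate and flag outliers for review – which is the "raise eyebrows" use of CT Andrew mentions, even without codifying a logging deadline in policy.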
Note Taker: Alex
Ryan: It’s been another year, are people supportive of requiring IPv6 on public-facing CA infra?
Rick: We're not quite there yet. Some customers require a fixed set of IP addresses so they can configure firewalls for CRL download. Our provider won't give a fixed set of IP addresses. We need to convince customers that they don't need a block of IPs.
Ryan: Wouldn’t IPv4 work for those customers?
Rick: Not sure, we’ll have to let you know.
Bruce: From Entrust there will be a need for a DNS change. Entrust is prepared for it, but hesitant.
Ryan: Need 6 months assuming the ballot passes? Trying to get a sense of timeframe.
Chris: Can we propose ballot and take back to Ops teams?
Ryan: We did that, a pre-ballot in March of 2015. Cable providers want to do revocation checking and AIA chasing on IPv6.
Ryan will circulate another pre-ballot, likely Feb 2017.
Dates set: October 18-20, 2016.