Why and how of this summary

Not everyone could attend the CC conference, and those that did could not attend all the talks (simply because there were 3 tracks in parallel). I am making these summaries available to a larger public in an attempt to make some of this "lost" insight available again.

There is also a local cache of the presentations available.

As with all these summaries, there is a personal opinion colouring this summary, i.e. mine. I love feedback if you think I missed or misunderstood something, even more than hearing you agree. After all, it is in the disagreement that most of the learning can be found. I'll update this summary as appropriate. And although I have worn and still wear many hats at different times, I am not speaking for others.

This year I had no talks, quite shocking for me after having given 1-2 talks every year. Contrary to what many suspected, this was not because I did not submit proposals, but because they were not selected. A pity, as I had some strong and funny presentations, but alas! I heard informally that this year the number of proposals was much higher than normal, so mine may simply have been crowded out. Certainly the level of the presentations I saw was high.

Supply chain concerns

The main new topic of this conference was the "supply chain" security threats/integrity/assurance/... concerns. These concerns originate with the US governmental bodies, but exactly what they are is unclear. This vagueness is a major obstacle in the discussions on whether the CC covers this, should cover this, or what is missing. We don't really know what the perceived problems are (and yet some people were already claiming the CC doesn't cover them).

Just like at the beginning of the discussions on recognition of secure development processes versus showing the TOEs are secure during ICCC10 in Tromsø, we started out with significantly different ways of looking at this topic between the "European CC-is-working-for-us smartcard" group and the "US CC-doesn't-seem-to-fit-us large software/non-smartcard hardware" group. Of course this is an unfair and inaccurate simplification of the great persons and companies involved, but I am putting it here to recognize that we start out with different viewpoints and that we should be aware of how that negatively influences our communication.

A personal example of this colouring in practice: as I was on the panel discussion on this topic, I tried to prepare for this "supply chain thingy they worry about" fairly. However, my practice of the CC is for a large part on the smartcard side. So I had a hard time understanding what "supply chain ..." was, considering the claim that it was not covered by the CC. In my default view the development, production and distribution of smartcards is fully covered by ALC_DVS, ALC_DEL, AGD_PRE, and in some cases ADV (3rd party components in the TSF). This is the result of the consensus in this domain, encoded in the guidance documents and the smartcard hardware PP (PP-0035).

However, this is the view in the context of EAL4+AVA_VAN.5 and the smartcard development and production environment. High security environments are the norm there (discussion is ongoing to fix this in a document), i.e. dedicated development environments with an alarm, cameras, access control, escorting of visitors etc. That this extends to all aspects is logical for those in the field, but not mandated (as clearly) by the CC itself.

An aside, on the required level of protection of sites

I only briefly mentioned this during the panel discussion, and somewhat more explicitly in the various informal discussions:

An approach I use with great results is to consider the configuration items (the design, the TOE in its various development and production stages, documentation) in terms of required confidentiality and integrity/authenticity. The integrity/authenticity covers undetected (hostile) changes to these items, the threat of "an attacker puts a backdoor in the source code". Typically this is covered by a combination of detection and prevention, such as code review prior to commit to the source tree and access control on the source tree. Integrity/authenticity is nearly always needed for a configuration item, but the level of protection is generally lower than for confidentiality. Tamper seals and alarms work to detect that something happened; they do not prevent it.

Confidentiality is needed for a more limited set of configuration items. A convenient way to look at this is to see what rating for "knowledge of the TOE" is needed in the vulnerability analysis w.r.t. that configuration item. This results in a more clearly defined minimum level of protection needed: a public manual does not have to be kept in a vault, but a cryptographic master key must be protected such that a thief cannot access it before getting caught (it is not sufficient to detect that the thief has read the key; he could pass it on during his one phone call).

In this way, the question of the strength of the site security can be linked to its relevance to the security of the TOE and the EAL (really: the AVA_VAN level). This is not the only way, nor is it mandated by the CC itself, but it does work well in practice to address the always hard question of "what is enough security for this particular site?".
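To make this concrete, below is a small sketch of how I mentally classify configuration items. It is purely my own illustration and not something defined in the CC or the guidance documents; all item names, ratings and measures are invented.

from __future__ import annotations
from dataclasses import dataclass

# Purely illustrative sketch (not part of the CC or any guidance document):
# classify configuration items by the "knowledge of the TOE" rating relevant
# for their disclosure and by whether integrity/authenticity must be protected,
# then derive a minimum set of site measures. All names and levels are invented.

KNOWLEDGE_LEVELS = ["public", "restricted", "sensitive", "critical"]

@dataclass
class ConfigItem:
    name: str
    knowledge_of_toe: str   # drives the confidentiality requirement
    integrity_needed: bool  # undetected (hostile) change must be detected/prevented

def required_site_measures(item: ConfigItem) -> list[str]:
    """Derive an illustrative minimum set of site measures for one item."""
    measures = []
    if item.integrity_needed:
        # Integrity/authenticity: detection (review, seals, logging) plus
        # access control is often enough.
        measures += ["access control on storage", "review/detection of changes"]
    level = KNOWLEDGE_LEVELS.index(item.knowledge_of_toe)
    if level >= KNOWLEDGE_LEVELS.index("restricted"):
        measures.append("need-to-know distribution")
    if level >= KNOWLEDGE_LEVELS.index("sensitive"):
        measures.append("controlled site (alarm, cameras, escorted visitors)")
    if level >= KNOWLEDGE_LEVELS.index("critical"):
        # e.g. a cryptographic master key: prevention of disclosure, not just detection
        measures.append("prevention of disclosure (vault/HSM)")
    return measures

for item in [
    ConfigItem("public user manual", "public", integrity_needed=True),
    ConfigItem("source code", "sensitive", integrity_needed=True),
    ConfigItem("cryptographic master key", "critical", integrity_needed=True),
]:
    print(item.name, "->", required_site_measures(item))

The point of the sketch is only that the protection a site must offer follows from the most demanding configuration item it handles, which is exactly the link to the AVA_VAN level described above.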

Keynote speeches

The keynote by Andreas Szakal of IBM went into these supply chain issues from their point of view and mentioned different approaches outside the CC, with specific attention to the Open Group Trusted Technology Forum.

David MacFarlane of RIM (best known for their BlackBerries) gave an interesting keynote on smartphones and their impact on corporate IT networks. Smartphones are a personal life-style choice for many people, and at the same time employees expect them to integrate with the corporate networks. This is at odds with the traditional "IT department decides and provides" model. The user also interacts with the device in a shorter, more contextual way, making more advanced user decisions on security difficult.

Another point is that recognition of the certificates is not acceptance of the certified products, i.e. there is still the risk of fragmentation, making CC evaluations less interesting. See directly below for more on this.

Updates from CCRA

The CCRA Management Committee recognizes that there is a difference between recognizing the evaluation of PPs and the respective governments recommending the use of such PPs. The CCRA means that a CC certified PP is "valid" within the CCRA members in the sense that they accept that it is a CC certified PP. It does not automatically mean that the underlying government agencies accept products evaluated against those PPs.

The background of this remark is that the CBs (certification bodies) have a core connection to the national communication security agencies (NCSAs), sometimes explicitly (BSI and DCSSI for example), sometimes less obviously (NSCIB's technical oversight is done by the NLNCSA). There is an implicit expectation that acceptance of the CC certification status under the CCRA also means that the PP is sanctioned for use by the local government; however, this is not necessarily the case. It is true that the NCSAs are responsible for national sensitive and classified information security, but they are rarely the only party in this for a specific country. As a result the NCSAs as a whole have difficulty setting such policy so explicitly and publicly, and the certifiers within those NCSAs rarely have the power to force this issue or speak on behalf of the NCSA.

Some countries are already making approved-PP lists (Sweden presented their initiative for pre-approved PPs for use in Sweden in the past, NIAP has something similar), but this is not centralized. Given the sheer complexity of getting all 26 CCRA members' governmental bodies to agree on this, I don't think this is something that is going to be solved formally soon. Informally however this already has the attention of the schemes. From what I hear, in the processes for many of the recent major PPs attention was paid to involving the international community too.

Related to this is the aspect of cryptographic evaluations. Here also the NCSAs' political sensitivities come into play. That is the reason for the "we don't say anything about cryptographic algorithms" exception in the CC: promoting a specific algorithm as strong (especially non-standard ones) is lose-lose for them. They lose if they approve an algorithm that later is shown to have flaws (puts their nation at risk, loss of face, suspicions of trying to implant a backdoor). They also lose if they disapprove an algorithm as that shows how much expertise they have in breaking them, one of the most highly classified pieces of knowledge (think Enigma).

That said, there are some practical and public guidelines available from the different governments. A single location where these are clearly summarized, especially regarding which approaches from other countries are accepted in a given country, would be useful.

The biometrics guidance should be available for trial use by December. I am looking forward to reading it, it has been a long time in the making.

Panel discussion

Michael Grimm from Microsoft talked about the supply chain issues from the point of view of cloud computing. Now "cloud computing" is already impressively hard to define, and the current approaches are more geared towards defining what services are provided and what procedures surround them. After all, a running application could be moved across continents to a completely different underlying hosting system, transparently. With the CC's hardcoded need for precise identification of the TSF parts, this does not seem to match well. However, in my view such aspects translate more to the ALC assurance aspects than to ADV, and site certification is an approach that might be used here to provide some assurance. Or, if you will forgive my blasphemy, ISO 27001.

Michael provided some input on what people consider to be "supply chain issues" (sometimes a bit far-fetched or misapplied; roughly ordered in decreasing relevance, in my opinion):

  • Tampered products
  • Counterfeit products (in CC we would call this masquerading in ALC_DEL/AGD_PRE)
  • Disclosure of data (i.e. reverse engineering of the TOE because you can buy them second hand)
  • Not designed/produced by distrusted countries/companies/products/processes/individuals
  • Product quality
  • Business continuity
  • Proper labour conditions
  • Recycling (for environmental reasons)
  • ...
I feel it is a good starting point for getting a grasp of what people out there consider "supply chain issues".

Gene Keeling from Cisco also had a talk on supply chain issues. There is a solid overlap with many of the other talks, which I will not repeat here. Gene also pointed out the need to get a common understanding of what the perceived threats are, but added that he wanted to gradually increase the security there. Interestingly, from Gene's point of view, counterfeit products are also a supply chain issue.

Aside: Site certification

Gene's talk and other discussions showed me that the site certification process is not really known outside the smartcard community. It really is very straightforward: site certification applies ALC to a site performing a generalized step in a life-cycle and issues a certificate for that. In that way, you can say and check that a site correctly does version management, but more importantly also provides sufficient confidentiality and integrity protection for the configuration items it receives, operates on, generates and outputs again. So one can do a site certification on a software developer site to show that it can keep the source code secret and unchanged, keep the right version etc., in a TOE-unspecific but process-specific way.

The Site Certification process uses the ST structure to capture what is evaluated and the input/output procedures for that site (needed to do the "composition" of the site into an evaluation), and that is about all there is to it, really. Validity is harmonized at up to 2 years, or until the process / site has changed.

In my opinion, this process is by far the easiest to use if we want to apply only the process assurance (ALC) to a site as a way to address "supply chain issues". All the implementation and acceptance is in place; it is nothing more than doing ALC and being a bit careful in describing what a site does.
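For readers who have never seen it, here is a toy sketch of the kind of information such a certificate captures and how a TOE evaluation could check whether it can reuse one. The field names and the reuse check are my own invention for illustration; they do not follow the actual Site Certification supporting document.

from __future__ import annotations
from dataclasses import dataclass
from datetime import date

# Toy illustration only; field names are mine, not the supporting document's.
@dataclass
class SiteCertificate:
    site: str
    lifecycle_step: str           # e.g. "software development", "wafer production"
    alc_components: set[str]      # ALC components covered by the certificate
    input_procedures: list[str]   # how configuration items enter the site
    output_procedures: list[str]  # how they leave again (delivery onwards)
    issued: date
    valid_years: int = 2          # validity harmonized at up to 2 years

    def usable_for(self, needed_alc: set[str], today: date) -> bool:
        """Rough reuse check: not expired and covers the needed ALC components.
        (In reality a change to the process/site also ends the validity.)"""
        not_expired = (today - self.issued).days < self.valid_years * 365
        return not_expired and needed_alc <= self.alc_components

cert = SiteCertificate(
    site="Example software development site",
    lifecycle_step="software development",
    alc_components={"ALC_CMC.4", "ALC_CMS.4", "ALC_DVS.1", "ALC_LCD.1"},
    input_procedures=["signed requirement packages in"],
    output_procedures=["signed source releases out to the integrator"],
    issued=date(2011, 1, 1),
)
print(cert.usable_for({"ALC_DVS.1", "ALC_CMC.4"}, date(2012, 6, 1)))  # True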

Outside the comfort zone, or inside the uncomfort zone

This year there were again some great presentations calling attention to where we are stuck in our comfort zone, or putting the finger on painful practical aspects.

Pointing out the uncomfortable...

By far the most often referred-to presentation of the conference was from Gerald Krummeck of atsec: "fighting the bean-counters". Gerald made the humorous point that lists, intended as tools, tend to be seen by "the bean-counters" as mandatory, no-brains-used checklists to be followed exactly, standing in the way of real analysis and thinking by the evaluator. A very recognizable example is the use of the CEM guidance as a mandatory checklist that needs to be addressed paragraph by paragraph. Gerald had an example of "EAL4" not being sufficient for ASE_REQ.2-2 (feel free to look it up; I got this "wrong" too by saying it meets item d, but "EAL4" is not "an individual component in a security requirements package"). As Dirk-Jan Out repeated during the question session, this is not the intention with which the guidance was written. I recognized Gerald's frustration and warning, as did many of the practitioners present. I only hope that enough of the "bean-counters" will take it to heart too.

Uncomfortable...

Quang Trinh of SAIC gave his view on the US approach of encoding the expected assurance steps (here: the testing) into the PPs. Whereas Quang liked this and argued for more specific test cases and tools in the PPs, I am of the opinion that there is something seriously wrong with the labs and oversight if such detailed lists are needed or useful. At best this turns CC evaluation into FIPS-style conformance testing (as Miguel put it so nicely). At worst, it means that the evaluators will not use their brains at all (see above about Gerald's warning). Two questions always run through my mind when I see such proposals:

  1. Larger picture: Why do you think that an evaluator who cannot think of these tests himself (after all, you felt the need to mandate them for him) can actually perform those tests correctly?
  2. Private picture: Why would you want to do such brain-killingly boring work??
But then again, I am of the opinion that the subjective work done by the evaluators is the interesting part, not the objective automated parts...

Outside the comfort zone

Tony Boswell from SiVenture gave an interesting presentation on how to do medium assurance on ARM's TrustZone architecture. As usual he had an interesting point when he said that it is tempting to stay with the high assurance (EAL4+AVA_VAN.5) of smartcards just because we know it, even though it does not fit the use case of this technology (some security for a trusted user interface on a phone, or support for DRM). His point is worth keeping in mind; I too catch myself sometimes automatically thinking from the smartcard-common view.

Tony pointed out that the TrustZone developers and users have already formed a "natural technical" community with a good common understanding of the technology and requirements. I liked the view that technical communities don't need this whole formal process at all, just a common need to address the security.

Real communities walk the talk...

Brian Smithson of Ricoh gave a presentation on the progress of the hardcopy device community (updated version and paper). As I said before, I am very impressed by the work this community has done from scratch (see also his presentations at ICCC8 and ICCC11). They have made two PPs under the IEEE P2600 working group: 2600.1 (EAL3+ALC_FLR.2) for the "government user" (also adopted as the "U.S. Government Protection Profile for Hardcopy Devices in Basic Robustness Environments"), and 2600.2 (EAL2+ALC_FLR.2) for the "corporate user". NIAP has taken the 2600.2 PP, augmented with some SFRs from 2600.1, to become the "U.S. Government Protection Profile for Hardcopy Devices Version 1.0", a good achievement.

Brian frankly discussed his view that the trend towards very specific SFRs and SARs with tailored activities does not fit their approach (nor does this trend fit my view). Their PPs have been written with high-level SFRs, so it is unlikely that the PPs will need updating soon. If clearer interpretation is needed, supporting documents will possibly be developed. Sounds familiar, doesn't it?

The hardcopy device community has evolved in parallel and without the influence of previous experiences, and ended up in exactly the situation that the smartcard community found itself in some time ago. I see this as a reinforcement that both communities have taken a path that, at least for them, works (and I hope others can use these examples to make their growth easier).

Smart meters and smartgrid

Eugene Polulyakh of BKP Security gave a presentation on the smart metering domain. The NIST smartgrid guidelines were quickly glossed over, showing that they can be mapped to SFRs. Similarly there was a quick overview of the BSI smart meter PP. As an introduction for laymen it was too complex; for someone mildly informed like me it was too little.

Dr Helge Kreutzmann of BSI described in his presentation the BSI PP on smart meters. This PP mostly focusses on protecting the privacy of the meter users and avoiding that the meter can be used as an entry point into the user's home network, but it also addresses billing manipulation and influencing the switchable loads. Of these I worry most about the switchable loads in the broader view, but the privacy gets the most attention. I am not so clear on why measurements over 15-minute or longer time frames are really so sensitive that they need to be protected better than is common for a paycard's data and its PIN, but ok.

This PP also has differentiated attack potential levels. The PP encodes this a bit unclearly (I came to three possible interpretations: AVA_VAN.2 level, 15 points, and 25 points with a pre-filled table), so I asked:

  • Internal attackers with physical access: AVA_VAN.2 level
  • External attackers with remote access: AVA_VAN.5 level
Personally I find these kinds of "AVA_VAN.5-quite a bit" PPs somewhat misleading, for two reasons: it requires careful CC-savvy reading to know that there is no protection at that level against the local attacker (this I dislike the most), and it suggests that we know how to test to AVA_VAN.5 level for networks. I have no idea what an AVA_VAN.5 attack on the network even looks like, seeing that the most advanced software attack I've seen was at most 3 man-months. That is a long way from the 6 months by experts needed for 25 points. How is it ensured that the labs can actually do this?

Markus Bartsch of TÜViT gave a presentation on the broader view. He suggests that more smart metering and industrial technology is going to move to CC evaluations. I can only hope so; some security there is much needed.

Various

Nariki Kai from IPA showed in his presentation an ISMS benchmarking system that allows companies to benchmark themselves against other companies' ISO 27001:2005 status on the 33 common items of the 133 controls. The hope, and their push in the Asian arena, is that it will help developers improve their site security. By benchmarking in this way the awareness of the weaknesses is increased, and, as expected in the Japanese culture, the weaknesses will be addressed over time. He expressed the worry that in other cultures this might lead to embellished results instead (a worry I share; I am also not sure that the need to address the weaknesses is so clear for others).

My impression was that this is a practical bottom-up approach to help developers who want to catch up to the common level. Its value in a CC evaluation itself seems limited; self-benchmarking is even less applicable to the CC than a full ISO 27001 certification, which is already not widely accepted.

Siti Fatimah Abidin of the Malaysian CC evaluation lab MySEF had a presentation on how their EAL1/2 evaluations varied in duration and what influenced this. The execution phase (where the real evaluation work is performed) of their EAL1 evaluations varied between 1 and 7 months, with most running in the 3-4 month range. Fatimah drew as learning points from this that face-to-face meetings were effective in reducing the time of the evaluation, as were evaluations where knowledge of the TOE type could be re-used. Fatimah also gracefully pointed out that one internal improvement, combining the SFR-related tracings (for ASE, AGD, ADV and ATE), originated from the intermediate evaluator course I had given them some time ago.

Dr Haohao Song of the Ministry of Public Security of China gave an insight into the application of the CC in China. The short of it is that the 1983 TCSEC was adapted for the Chinese situation in the national standard GB17859-1999, including some aspects specific to China. The CC v2.x has been translated to Chinese and is now the relevant standard. Some of the China-specific aspects from the previous step were included, and some assurance aspects in the EALs were changed. Unfortunately the details of what changed were not available (I had hoped we could learn some possible improvements from that).

Marcus Streets of Thales, speaking on behalf of ETSI WG17, described the work done in making the new European digital signature PPs. Not much to remark on this, except that it was time that the old ITSEC PP proposal was replaced. The PPs will fall under the European Norms with the prefix EN 366, and the supporting documents under EN 166. Part 2-3, the signature creation device SSCD PP, is at BSI at this moment and certification is expected in the next month. Parts 4-5 will follow in 2012-2013.

Igor Furgel from T-Systems had a presentation on his view of ADV_ARC. He sees everything in ARC as an aspect of domain separation. ARC has some aspects that look very similar and often overlap in practice, but I generally see self-protection as the main focus of the work. Not a very relevant difference of opinion, even if we could discuss it over beers long into the night.

Karin Greimel from NXP had a presentation on a formal SPM for a smartcard. As a formal methods guy, I liked the talk a lot, but frankly Karin's best moment was her answer to the question "what changed in the TOE?". She immediately saw the double bind in it (was the TOE incorrect, or was the exercise useless?). Her answer was that the TOE was correct and that this approach found ambiguities in the functional specification. This is in my opinion the added value of formal methods between the SFRs and the FSP (which is what happened here). Only when the formal methods go deeper into the design do changes to the actual TOE come into view.

Michael Dulucq of Serma gave a presentation on the introduction of security into the aviation safety world. In contrast to other approaches I've seen, this project tried to bring the lessons of the CC into the safety world, without holding on to the CC itself. As a result the approach leverages the excellent configuration management and design review structures already in place, and tries to add the security view. The difficulty in this approach is the different views on threat likelihoods. In the safety worldview, risks have a probability because it is a random chance of nature that they occur. One sensitive call in a 10,000-function API, for example, is in that view only a 1-in-10,000 chance. In the security view, the attacker always chooses the weak point, with chance 1. This (and the introduction of out-of-the-box thinking) is the challenge that is really there. Michael knows this well and it showed.
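As a back-of-the-envelope illustration of that difference (the numbers are invented by me, purely for illustration):

# Invented numbers, only to illustrate the safety vs. security view above.
calls = 1_000              # random calls made by a non-malicious user
p_per_call = 1 / 10_000    # one sensitive call in a 10,000-function API

# Safety view: chance that random use ever hits the sensitive call.
p_safety = 1 - (1 - p_per_call) ** calls

# Security view: the attacker picks the weak point, so the chance is 1.
p_security = 1.0

print(f"safety view:   {p_safety:.3f}")    # ~0.095
print(f"security view: {p_security:.3f}")  # 1.000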

An interesting discussion was also about the boundary of the security. The entertainment system, for example, should not be able to impact the flight systems, simply because they are not connected (or are not supposed to be). However, as Michael pointed out, a message on the screens saying "there is a bomb on this plane" would have disastrous effects in an airplane.

Missed and should have missed

This year I selected the talks that I expected would provide me some deeper insights (and tried to avoid hopping between tracks). As a result, I did not attend many talks by people I know well (as their topics and views were already mostly known to me).

Could have missed

Erin Conner of EWA-Canada gave an ok presentation (real version) on SCAP. SCAP is a protocol for checking the configuration of an IT product (and in the future, setting it) from the MITRE CWE/CVE/... area. The link to the CC was rather weak; the idea is that it could be used to check, or put, the TOE in its evaluated configuration. More interestingly, one could use this to change the security settings throughout the whole enterprise if, say, a worm was rampaging around. Although that idea sounded a bit interesting, at the time of the presentation there were a whopping 8 validated SCAP profiles (all Windows OS or Internet Explorer) and 28 that should work. I for one am not putting this high on my reading pile^Wmountain. Although it was good of Erin to bring up SCAP as something to consider, I feel it could have been said in 5-10 minutes. So I am putting it in this category, as I would rather have heard Simon Milford's talk, which I am sure was insightful and fun as usual.

Should have missed

Courtney Cavness of atsec gave a presentation arguing that, to cover third-party developers, a new assurance package could be defined. This assurance package takes a set of ALC assurance requirements that assumes the specific case of a third-party developer of an EAL4 TOE and adds some specific tweaks (like a mandatory criminal background check). There seem to be some useful ideas lurking deep in the proposal, but I fail to see the value of this approach over the existing mandatory application of ALC to the whole TSF (including third-party developers!).

Conclusion and future

The next ICCC

The next ICCC will be in Paris, France. I for one am looking forward to the technical community discussions over the enjoyment of a glass of wine.

Conclusion

I immensely enjoyed talking to everyone between the talks, afterwards, and deep into the night. Thank you all for the wonderful time!

And a special personal thanks to the company of ladies and gentlemen of Monday, Friday and Saturday!

Local ICCC12 presentation cache until the website is updated