Why and how of this summary

Not everyone could attend the CC conference, and those that did could not attend all the talks (simply because there were 3 tracks in parallel). Last year I started making these summaries available to a wider audience in an attempt to make some of this "lost" insight available again.

As with all these summaries, there is a personal opinion colouring this summary, i.e. mine. I love feedback if you think I missed or misunderstood something, even more than hearing you agree. After all, it is in disagreement that most of the learning is found.

To get it out of the way, my own presentation intended to show what I think is the biggest risk to our CC brand: the gaps between what we evaluate and what the end-users expect to have ensured by the "magic CC stamp". We should be especially careful with shifting things to the environment (even though we know users do not follow the manuals) and with tricks involving the TOE scope and SFRs, lest we damage the excellent CC brand value too much.

ICCC11 atmosphere

This conference had an old and a new feeling to it. The old feeling for me was the real community that the CC domain forms. It was a feast of meeting people again, exchanging information (and gossip), and discussing until late in the night about CC, inter-cultural aspects, and just how much fun it is to work together with all these interesting people. (By the way, last year Miguel Bañón from Epoche & Espri took pictures of people active in the CC domain, resulting in a "mugshot" book. I hope next year he'll repeat that and it will have a larger selection.)

The new feeling was one of movement from complaining about the CC to actually working with it, especially in the software domain. I don't know the reason for this; maybe last year's shock of policy changes affecting these domains has faded away and has caused mostly the developers to rally to counter it? In any case, there were several initiatives addressing areas that were considered painful in recent years. The OS community, for example, has with help from atsec created an OS PP addressing many of the difficulties associated with the older conditional access and labelled security PPs (more later).

CC changes, "Technical communities"

David Martin coined the term "technical communities" for the concept of broad groups of developers, end-users, evaluators and certifiers coming together to create a consensus for a specific application domain on what is evaluated (typically resulting in a Protection Profile) and how specific aspects of the evaluation are applied. In the smartcard community this has grown over a long time with significant effort by all parties involved, resulting in the CC guidance documents, the smartcard hardware PP PP-0035, and well-functioning working groups like ISCI WG1 and JHAS (and in a way also JTEMS). It seemed to me that in the software domain the OS, enterprise security and network security areas have more or less independently started similar groups. The first results of these groups are PPs that more closely match their needs, and I saw all of them recognising that the interpretation (mostly during evaluation and certification) of these PPs and technologies will likely result in more guidance. Some of this guidance is already incorporated in the PPs (the OS PP is an example). Frankly, the application notes of a PP are not my preferred place for such guidance (because they make re-use and updating of the guidance harder), but it is a practical first step nonetheless.

The activities of these "grass roots" communities are what pushes the practical application of the CC forward. Although David Martin formulated it more politically on behalf of the CCDB, essentially nearly all of the working groups that have been looking at improvements to the CC, originally under the name CCv4, have stagnated at various stages. It seems that a specific technology or product domain is needed to keep the focus and speed in these groups, and that was not sufficiently present. I had the impression that the predictive assurance group, looking at how to keep the certification valid in the face of small patches, was closest to producing working documents and procedures, but what the next step will be is unclear at the moment.

I think the lesson we should draw (and already have drawn) from this is clear: developers hitting difficulties with making sane, comparable requirements and interpretations should come together and build this "technical community", this consensus group, with the evaluators and certifiers, and preferably also with the knowledgeable end-users. Specific domains make for more focussed discussions so we do not get bogged down in vagaries, and developers pushing it ensures the need for a workable and timely approach is not forgotten.

Personal contact

As discussed in the panel discussion, making a community starts with building a basis of trust between the people taking part in that community. In person, this is much easier (the smartcard, JTEMS and OS PP communities started like this), although the Enterprise-level Security PP community apparently was virtual from the start.

I have said it before with great appreciation: one can see the effect of the personal trust and respect very well in the smartcard community. Horizontal competitors (developer A versus developer B, lab X versus lab Y, and in a way even the CBs) and vertical parties with sometimes different goals (developer, lab, CB, end-user) work together to create a level playing field in the requirements. This is on the basis of trust and respect, built during the official work but also in the informal contacts outside these situations. Especially during these informal contacts, I would say.

The meaning of life, the universe and everything.

SFRs mean everything and nothing...

Another recurring theme for me is the description of the security properties we are evaluating against. Since CCv3.0 this is only the SFRs. The SFRs have not significantly changed since the Trusted Computing times (with a brief attempt in CCv3.0 that failed) and they still don't always explicitly express the security properties that we want to evaluate. My favourite example, FCS_COP, came up in a discussion after Monique Bakker's presentation on smartcard composition (actually it was unrelated to her presentation per se). What do we mean when we say FCS_COP? (A small sketch after the list below illustrates how far apart these readings are.)
  • "Thou shall do encryption exactly according to the AES spec"?
  • "Thou shall do something that looks like encryption and do not compromise the confidentiality of the crypto key used"?
  • "Thou shall do encryption that ensures no faulty encrypted text is ever outputted"?
  • "Thou shall always have encryption available"?
  • ...
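To illustrate how far apart these readings are, here is a minimal sketch (my own, not from the conference, assuming the Python cryptography package is available). A known-answer test against the FIPS-197 AES vector checks the first reading, while the much weaker "the output is key-dependent and differs from the input" check is passed even by a toy XOR construction:

```python
# Minimal sketch (my own illustration): two readings of "FCS_COP: AES encryption".
from cryptography.hazmat.primitives.ciphers import Cipher, algorithms, modes

# FIPS-197 Appendix C.1 AES-128 known-answer vector.
KEY = bytes.fromhex("000102030405060708090a0b0c0d0e0f")
PT  = bytes.fromhex("00112233445566778899aabbccddeeff")
CT  = bytes.fromhex("69c4e0d86a7b0430d8cdb78070b4c55a")

def real_aes_encrypt(key: bytes, block: bytes) -> bytes:
    """Encrypt one block with a reference AES implementation."""
    enc = Cipher(algorithms.AES(key), modes.ECB()).encryptor()
    return enc.update(block) + enc.finalize()

def looks_like_encryption(key: bytes, block: bytes) -> bytes:
    """A toy keyed transformation: key-dependent and unreadable, but certainly not AES."""
    return bytes(b ^ k for b, k in zip(block, key))

# Reading 1: "exactly according to the AES spec" -> known-answer test.
assert real_aes_encrypt(KEY, PT) == CT
assert looks_like_encryption(KEY, PT) != CT   # the toy cipher fails this reading

# Reading 2: "something that looks like encryption" -> a much weaker check,
# which both implementations pass.
for f in (real_aes_encrypt, looks_like_encryption):
    assert f(KEY, PT) != PT                   # output is key-dependent and differs from input
```

Both functions would satisfy an evaluator who only applies the second reading; only a shared understanding of which property FCS_COP is meant to express tells us which check to perform.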
Dirk-Jan Out of Brightsight started off track B with the ambiguity of SFRs as his point. He showed that the common PR argument we use, that the SFRs are unambiguous and clear, is not quite true, by trying to model a wire (say, the simple version of a VPN link). Where in earlier CC conferences I think DJ would have been burned as a heretic, now the reactions were rather mild and approving. It seems the almost fanatical one-way-interpretation school has started to fade away and the more mature multi-interpretation thinking has taken its place. I for one am glad for this change.

The "solution" to the SFR problem is still too hard I think. I see no way to make a set of requirements that objectively describe what a good and at the same time internationally in the same way understood security requirements are. A context, a community, and lots of discussion will be required to ensure this.

... and we are not so clear about assurance either

Tony Boswell of SiVenture talked, entertainingly as always, a bit about the SFR modelling problem and more about the meaning of assurance. One comment Tony made stuck in my mind: it seems like we are using these EALs too strictly. It almost seems we started with EAL4 and just created the rest by diluting it (after all, he asked, why is it that all EAL levels can so neatly be described by only adding a few bolded words?). I agree with the feeling. I am known to quickly summarize EAL1 as the "well, let's get them working in CC" level, EAL2 as the "blackbox testing" level, EAL4 as the "whitebox testing" level, the +AVA_VAN augmentations as the interesting ones, and EAL5 and higher as being there more for marketing reasons.

Another idea Tony suggested was to do a sensitivity analysis on the evaluated configuration settings. We know end-users often do not use the TOE in its evaluated configuration but expect the TOE to be "secure" nonetheless (this is, in short, half of my presentation too). Tony suggested we explore just how sensitive the TOE is to those kinds of mistakes in following the guidance: is it still safe or not? An intriguing question, if dangerous in the amount of work it might create.

Re-use

Sarra Mestiri from Oberthur showed how they combine CC, EMVco, GSMA and other certifications using the smartcard-standard composition approach. An overview of the specific re-use arguments has been made (my first impression is that it is straightforward to do so and useful to document this once and for all). As they are looking at an open JavaCard platform (i.e. new Java applets can be loaded in the field), this composition is possible provided the JavaCard platform itself is tested well, even in the face of hostile applets. A proposal document for the minimal test cases of such a hostile applet should be available by the end of this year (2010) from JHAS. This sounds like an interesting document to read, even if it is likely to be available only under the JHAS restrictions.

Eric Winterton from Booz Allen Hamilton described his experiences with the ACO class when applying it to a scope extension of an existing product evaluation. Not surprisingly, he hit a lot of difficulties there, as ACO is designed explicitly to apply only to system-like composition, i.e. combining two certified TOEs that are relatively independent. His TOEs were intertwined, one building on the other. The smartcard-standard composition or just a free-form delta evaluation would have been much more efficient here. (In general: if ACO is your answer, you most likely have not understood your composition question. Composition in general is hard, and ACO only covers the most basic situation, which generally is not interesting to actually do.)

Knowledge distribution

Zarina Musa of MySEF, the CyberSecurity Malaysia evaluation lab, held a presentation describing how they distribute their knowledge within the lab. After the course by the certification body on CC fundamentals, the evaluators are trained on the job using a mentor-trainee construction. For attack and product knowledge improvement, individual evaluators take courses and share the knowledge within the group afterwards. External trainers are also contracted (Zarina was kind enough to mention that I was the one providing their intermediate CC evaluator training this year).

Yasuyoshi Uemura from ECSec described the Japanese smartcard community. A big consortium of Japanese developers, labs and end-users, called IC System Security, has been formed. The full consortium meets as a "Round Table" to discuss topics that have an impact on security. A more specific, smaller subgroup named the Japan Consortium (ICSS-JC) has been formed to discuss specific attacks and analysis methods, comparable to the JHAS and JTEMS working groups. The ICSS-JC currently meets about once a month (interestingly, in terms of community building, this is also the rate at which Eurosmart/JHAS/JTEMS met when they started).

Naohisa Ichihara from NTT Data summarized how NTT Data acquires and internally distributes knowledge. An interesting overview (although already known to me). In true Japanese cultural manner, I think Ichihara-san understated his role in this process quite a bit.

Various

Jean-Yves Bernard of the Thales ITSEF made a plea for a more precise definition of semiformal language. The presentation did not really propose such a definition, but described more what the ADV evidence is used for (clearer tracing, clarity in how it is distributed, avoiding the tracing of many requirements to one interface and then to many subsystems, etc.). While I recognize the unwanted aspects, I must say I do not encounter these problems, or at least they are not related to the formality level of the documentation: both informal and formal specs can be hard to trace.

Rob Huisman of the Dutch CB and Dirk-Jan Out of Brightsight (the Dutch lab) had a joint presentation on minimal (paper-)weight reporting between the evaluation lab and the certifiers. The minimal paper reporting required by the CCRA consists of the verdicts for the families and the final ETR. The real core is how the certifiers gain their trust that the evaluators have done their work properly. This is implemented by providing a full ASE report and by holding two evaluation meetings between the evaluators and certifiers. In these meetings, the evaluators summarize in presentations the knowledge they gained from the evaluation evidence, and the certifiers challenge this with questions.

I believe Rob and DJ when they say this approach fulfils the letter of the CCRA. The spirit of the CCRA, the quality control, is ensured here for a large part because both parties, the evaluators and the certifiers, are highly experienced and quality conscious. That fact is hard to document, which was exactly the question/comment from the US CB on this: "how do you show this during the VPA?" (the VPA is the process where existing schemes are checked by their peers every 4-5 years).

Michael Grimm of Microsoft described the approach of using CC for cloud computing. It took some time for the concept of cloud computing to become clear in the general field, and now the work of applying the CC to it will need to start. From Michael's tone I gather that, like the definition problem of "cloud computing", finding a consensus on the security properties that need to be evaluated is going to be challenging (in the "please let someone else do that" sense). It is good to hear that there is a push for it.

The last presentation I saw was Anders Staaf of Combitech explaining how Sweden is promoting the use of CC for government use. The Swedish government has made it mandatory to deploy evaluated products in the critical infrastructure (telecom, networks, SCADA, civil government, emergency response, etc.) if critical information is handled, successful attacks on it would cause severe to catastrophic damage, and a PP has been selected. A database of selected PPs will be created. Currently highest on the list are mobile devices and first boundary devices (firewalls, virus scanners, etc.). It will be interesting to see how this initiative will work out in Sweden, compared to the US/Japan/... initiatives.

Vulnerability Analysis

Jose Emilio Rico of Epoche & Espri had a talk about making vulnerability analysis continuous during the evaluation. This is a common discussion theme separating labs that come from the penetration testing mindset and those that come from the auditing mindset. Of course the whole point of the CC evaluation is to feed into the vulnerability analysis; only the place and the way it is done often differ. Looking from a pentest-centric view, Jose argued for pulling the start of the vulnerability analysis forward in the project plan and for early access to a working system (or prototype), using it to resolve ambiguities in the understanding of the TOE.

In my experience, for unknown TOE types (think: something new to the lab, general software, web applications) this is a good strategy to reduce the calendar time consumed by the evaluation. For known TOE types (think: yet another smartcard hardware platform) it generally is not, as the detailed attacks are expensive and the decisions to perform them need to be well considered ("trying" a 3-6 week DPA attack on a vague understanding early in the evaluation will eat away budget inefficiently).

Miguel Bañón from Epoche & Espri had a presentation on the return on investment for evaluations. I could only see the last part of his presentation, but the conclusions were as interesting as they were surprising to Miguel and me: he found that the number of vulnerabilities found in their projects scaled linearly with the effort spent. So where he (and I) expected some asymptotic behaviour, a natural limit where more effort does not really discover more vulnerabilities, this limit is in fact not reached or does not exist. One explanation I can think of is that Miguel and his lab are more active in the software domain, where I would expect the TOEs to have enough bugs to discover. In the smartcard domain, where I did most of my work, the discovery of ~1-2 potential problems during an evaluation is more common. These generally turn out to be ~0.5 real problems to be solved in the TOE or the guidance. The impression I have there is that we are in a mature domain where it is just the slightly different view that uncovers the vulnerabilities.

Biometrics

Belen Fernandez-Saavedra of the TestingLab of the University Carlos III of Madrid described proposed changes to the Biometric Evaluation Methodology (BEM). I must say I had a hard time spotting differences from the way I interpreted the BEM for projects that involved biometrics, but an update of the BEM to CCv3.1 would be appreciated. When I asked, they indicated that the improvements will likely find their way, via the Spanish CB as lead nation, into a new and improved Biometric Evaluation Methodology. Wolfgang Killmann asked the question I wanted to ask: with error rates of 1 in 10,000 or 1 in 1,000,000, how are you ever going to get the appropriate multiple of these amounts in valid biometric test subjects? Apparently, this is still an unsolved problem...
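To give a feel for the numbers (my own back-of-the-envelope, not from the talk): by the usual "rule of three", observing zero errors in n independent trials only supports a 95% upper confidence bound of roughly 3/n on the error rate, so demonstrating the claimed rates already requires on the order of millions of independent impostor attempts:

```latex
% Rule-of-three estimate (my own sketch, not from the talk):
% zero errors in n independent trials -> 95% upper bound on the error rate of ~3/n.
\[
  \hat{p}_{95\%} \approx \frac{3}{n} \le 10^{-6}
  \quad\Longrightarrow\quad
  n \gtrsim 3 \times 10^{6} \ \text{independent impostor attempts}
\]
```

And that still assumes the attempts are independent, which with a limited pool of test subjects they rarely are.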

Related was a talk by Frank Grefarth of BSI on Biometric Spoof Detection. BSI had a trial project for biometric fingerprint sensors. During the process they discovered that at that time (mid-2009 to early 2010) all existing fingerprint sensors were vulnerable to low attack potential spoofing attacks: at least one of the small set of common attacks would reliably work. As a result, even EAL1 evaluations (with basic attack potential) would fail. However, development of better, more spoof-resistant systems was under way and looked promising. To encourage the field to stay within the CC evaluation methodology and to show the way to higher assurance, two PPs were designed. One (FSDPP_OSP, BSI-CC-PP-0062-2010) uses policies to encode the spoof detection; the more advanced one (FSDPP, BSI-CC-PP-0063-2010) uses threats to do so. The interpretation is that the policy-encoded PP (FSDPP_OSP) means that only functional testing is applied (as there are no threats), whereas the threat-encoded PP (FSDPP) does mean vulnerability analysis is applied. Purely from a CC-technical viewpoint this is rather creative thinking (there is no official link between threats and AVA_VAN), but as argued earlier, part of the practice of the CC is its interpretation by a community. This at least makes the differences clear and paints a path to future use of the CC by this TOE type.

As a part of this project, a document describing the common attack methods for making fake fingers will be published, called "fake-toolbox". This is going to be an interesting resource for evaluation labs to base their testing on. Speaking from personal experience, it is real fun to make a fake finger from rubber and have a sensor accept it...

Smartcards, side-channel analysis

Monique Bakker of Brightsight had two talks from her smartcard background. The first talk described how successful DPA attacks on contactless smartcards are performed at Brightsight. As a (former) insider, I enjoyed seeing the presentation of these attacks. It was telling to see Monique, from her experience in doing these attacks, casually point to two parts of the waveforms and conclude that these are the areas where "clearly" the leakage occurs. From teaching I know that for the general public this "clearly" is anything but clear. It shows the steep learning curve of these attacks quite nicely.
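For readers who have never seen such an attack, here is a minimal toy sketch (my own illustration, not Brightsight's tooling) of the correlation-based DPA idea on simulated traces: the traces leak the Hamming weight of (plaintext byte XOR key byte) at one sample point, and correlating a simple leakage model against every sample for every key guess makes both the leaking sample and the correct key byte stand out.

```python
# Toy correlation DPA (my own sketch): recover a key byte from simulated power traces.
import numpy as np

rng = np.random.default_rng(0)
SECRET_KEY_BYTE = 0x3A
N_TRACES, N_SAMPLES, LEAK_SAMPLE = 2000, 100, 42

hw = np.vectorize(lambda x: bin(int(x)).count("1"))   # Hamming weight

# Simulated measurement campaign: random plaintext bytes, noisy traces,
# one sample carrying a small Hamming-weight leak of (plaintext XOR key).
plaintexts = rng.integers(0, 256, N_TRACES)
traces = rng.normal(0.0, 1.0, (N_TRACES, N_SAMPLES))
traces[:, LEAK_SAMPLE] += 0.5 * hw(plaintexts ^ SECRET_KEY_BYTE)

# Attack: for every key guess, correlate the modelled leakage with every sample point.
best_corr = np.zeros(256)
centered_traces = traces - traces.mean(axis=0)
norms = np.linalg.norm(centered_traces, axis=0)
for guess in range(256):
    model = hw(plaintexts ^ guess).astype(float)
    m = model - model.mean()
    corr = (m @ centered_traces) / (np.linalg.norm(m) * norms)
    best_corr[guess] = np.abs(corr).max()

recovered = int(best_corr.argmax())
print(f"recovered key byte 0x{recovered:02x}, correct: {recovered == SECRET_KEY_BYTE}")
```

On a real card the leakage model, the alignment of the traces, and the sheer number of traces needed are exactly where the "clearly" and the steep learning curve come in.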

The second talk investigated the smartcard composition requirements and where Monique saw re-use opportunities between ASE_COMP and ADV_COMP on the one hand, and the documentation she normally sees in ADV_ARC and the programmer guidance on the other. Interestingly, this triggered a good discussion with Igor Furgel (one of the authors of the original composition document). My summary of the exchange is that where Monique has seen evaluation evidence containing very re-usable security architecture descriptions, Igor had not, and he therefore wrote into the requirements exactly what he thought was missing. I can only conclude that Monique's re-use, given her input, is indeed correct. I want to repeat that I liked the interaction this triggered; it is a good way to exchange ideas and come to a common, better understanding.

Jean-Yves Bernard of the Thales ITSEF held a talk about the need to look at the tools used, using the JavaCard bytecode validator as an example. In the smartcard community we know that tools like compilers can very sneakily optimize away security mechanisms present in the source code. Whereas I agree that the CC puts little focus on the tools used, I think there is a very good reason for that: evaluating the tools themselves is practically impossible, and we compensate for that by testing for the dangerous optimizations we know about.

A representative of TÜBITAK/UEKAE (the Turkish evaluation lab) presented a proposal to collapse the identification/exploitation phases used in the smartcard vulnerability analysis into just one and fudge the attack points to match. I have to say I was rather taken aback by the speaker's apparent lack of understanding of the reasons for keeping the (originally CCv2.x) rating. One reason is that the smartcard community wants to distinguish between attacks that are easy to repeat and those that are hard to repeat. Software attacks generally have zero or near-zero repeat costs, hence the collapse of the general rating approach to only one phase. Hardware attacks often do carry significant repeat costs, and this information needs to be available to the risk-analysis people further down the chain.

The other reason for keeping the old rating mechanism is that it encodes the consensus built over the years and over hundreds of evaluations in the domain. I felt rather offended by the lack of knowledge and sensitivity in this area, and I am afraid it showed in my question to the speaker.

Wolfgang Killmann of T-Systems summarized the guidance for vulnerability analysis on elliptic curve implementations. This work, sponsored by BSI, has gathered the open knowledge of the field (with undoubtedly great help from Tanja Lange of the Technical University of Eindhoven) into a rather comprehensive list of attacks to consider. From my experience in gathering attack knowledge and knowing the people involved, I can tell this constitutes a huge body of knowledge. Should you be considering ECC vulnerability analysis, this is definitely a document to check.

What I wished I had skipped...

There are always talks that I feel are a waste of my time. I've thought hard about whether I should be explicit about these talks, as this is a very personal impression and can sound rather harsh. On the other hand, I think we should not keep accepting these talks at the ICCC. For one thing, they are keeping out other, potentially more interesting talks. And for another, from all of these speakers I expected much better, based on their previous presentations.

Michael Nash had a talk about his experiences building a UK Ministry of Defence logistics system. Sold under the title "Using the Common Criteria in Practice", he spent his 30 minutes talking about the system in all its dull details. He managed to avoid talking in any detail about the evaluation aspects, where there did seem to be interesting problems (like: how do you keep a system that changes every 6 months certified if your evaluation takes more than 6 months?). Now truth be told, I rarely agree with Michael's talks at the ICCCs, but they were never boring and always added something to the discussion. This time I walked out on his talk when the question time came, a first for me. What a waste of speaking time.

Jose Francisco Ruiz Fuelda had a talk interestingly called "Evaluating a watermelon: mitigating the threats through the operational environment". Drawn by the title (and the hope it would discuss the dangers of the assumptions/objectives for the environment), I had high hopes for this presentation. Unfortunately the point was completely lost in the unused symbol of the watermelon. The best summary I can make of it is that threats can be countered by objectives for the TOE and for the environment.

Alvaro Ortega Chamorro had a talk on side channel attacks in CC, FIPS-140, EMV and PCI. Now I am from these fields (except for FIPS-140), so I was rather hoping for a deeper or new insight. Unfortunately the summary of the 30 minutes is that CC, EMVco and PCI use the JHAS attack potential document, and FIPS-140 does not really describe what needs to be done. Others in the room and I had hoped for much more content.

... and what I wished I had seen

The first round was hard to choose between. A1 had the smartcard speakers, C1 cryptography (and Ahmad Dahari Jarno from MyCC), but B1 had Dirk-Jan Out's talk and he had promised it would be interesting (and it was), so...

Brian Smithson of Ricoh had a talk on the PP development for hardcopy devices. Their presentation at ICCC8 surprised me immensely: in the midst of the CCv3.0 and CCv3.1 discussions, this developer-only group had managed to understand and implement a good PP, with all the community building surrounding it. From the slides I can see that the presentation on the lessons learned must have been very interesting as well.

Peter van Swieten of Brightsight had a presentation on using tools to generate design evidence. We have discussed this between us in the past and I would have loved to see where it took him. Also partly because I think he referred to Doxygen, made by my dear friend Dimitri van Heesch.

Anil Ardic of the Turkish CB had a talk at the same time I held mine. Based on the discussions the two of us had, I think there is quite a bit of overlap between the talks.

The next ICCC

The next ICCC will be in Kuala Lumpur, Malaysia. I've been there a few times over the last years; it is a really big city. For those of you looking for culture and sightseeing, if you are more or less limited to the city itself, I suggest the Petronas Towers, the Islamic Arts Museum, and the Batu Caves. For those of you going out for shopping, the city has plenty of shopping malls ranging from general to high-end. The intellectual property laws have been enforced much more strongly in the last years, so now most software/DVDs/luxury items do seem legit.

Seeing that I got my PADI Open Water Diver certification just after ICCC11, I am most likely going to try to add some diving to my ICCC trip there...

Conclusion

The Common Criteria community as a whole seems to have shifted from "(the interpretation in certain schemes of) the CC is hurting us, so the CC is bad and must be done over completely" to the "it is not perfect, but together we can make it work for us" attitude the smartcard community has been leading with for years. I like that more than all the other changes.

And as soft as it might sound, I think we can be proud of the community of individuals we have built. It is a real delight to talk to everyone in between the talks, afterwards, and deep into the night. Thank you all for the wonderful time.


"Mugshot" picture of Wouter from Miguel's book


How technical communities really get to be such a well... ehmm... oiled machine...

My presentation

Local ICCC11 cache until the website is updated