Why and how of this summary

Not everyone could attend the CC conference, and those that did could not attend all the talks (simply because there were 3 tracks in parallel). I am making these summaries available to a larger public in an attempt to make some of this "lost" insight available again.

There is also a local cache of the presentations available.

As with all these summaries, there is a personal opinion coloring this summary, i.e. mine. I love feedback if you think I missed or misunderstood something, even more than hearing you agree. After all, it is in the disagreement that most of the learning can be found. I'll update this summary as appropriate. And although I have worn and still wear many hats at different times, I am not speaking for others.

≥10 years of CC for me

While thinking of topics for the talks this year, I suddenly realized I've been in the CC world for ≥10 years. Somehow I still consider myself one of the young ones in the field, as I haven't worked with the Orange Book (I wasn't even literate then) or even CC version 1. Yet after ≥10 intensive years I could not avoid feeling more a part of the old guard in the CC field.

This prompted me to share even more of my experience. This year I had hoped to do this in three ways: a presentation on training people in CC, a presentation on the different views the different roles have and the avoidable misunderstandings that can come from this, and a book on the processes that occur in a CC evaluation. Unfortunately my roles presentation was not selected, even though I think it would have been really valuable: I see quite a bit of avoidable miscommunication creating inefficiencies and frustrations in evaluation processes. Maybe for the next ICCC?

Teaching CC: Lessons learned

My presentation was about the learning path novices to the CC go through, the different ways one can train them, and the effect each has on recall and productivity. I pointed out four major training methods:
  • Throwing the novice into the standard and letting them swim or drown.
  • Following the standard from part 1 to part 3 and the CEM.
  • Overview picture first, then the standard.
  • Hands-on, directly practical work (learning the CC implicitly).
The last two are the most productive in my experience: the knowledge sticks best. Hands-on training has the advantage of bringing people immediately to a productive place and hence to acquiring experience. (Due to time constraints I could not go into the quality safeguards that accompany this approach. The trick is to use a method that automatically forces novices to escalate: when they do not understand something, they cannot make it fit the method. Plus of course senior support.)

After the initial training, further growth is mostly in skills, and the growth in these skills is determined by the experience gained. Performing in multiple roles (consultant and evaluator in most cases, or trainer) is the fastest way to acquire diverse experience and hence growth. As with most skills, I've found that the skills essential for good evaluation processes can be taught. Towards the more senior roles, these skills are mostly interpersonal: communication, reasoning, negotiation. The technical knowledge and skills generally only need a little training support in my experience (enthusiasm drives most of us to learn this stuff already).

There is one core attitude that is essential for evaluators and certifiers: keeping your integrity while being soft on the people. I found that this core attitude needs to be present to start with, and can then grow by providing skills to balance strictness on the principles with softness towards the people. For example, being aware of the common interaction games that occur between evaluator and developer, or certifier and evaluator, allows the student to keep his integrity clean while working with the dynamic. This is also described in my book.

Common Criteria for professionals: evaluation processes

I have spent most of my working life performing Common Criteria (CC) evaluations and certifications, and helping, coaching and training developers, evaluators and certifiers to perform evaluations efficiently. Over time I have given a lot of advice in many forms, and now with the ICCC I decided it was time to bring the parts around the processes together in this book.

This book takes a step back from the technicalities of the CC and goes into the processes associated with CC evaluations: the business view on the process, and the interpersonal view on the play between the roles of the end-user, the developer, the evaluator and the certifier.

As with the CC, these topics are very broad and interwoven with each other. I have gathered them in chapters around common threads in my experience. This way the chapters can be read stand-alone while showing the bigger picture. My book is available in PDF format from lulu.com for an ICCC and first edition price of €14.99. A preview is available at lulu.com or here. I very much welcome feedback.

Big shifts(?)

The biggest news at the ICCC was a policy shift around mutual recognition. The intention is that mutual recognition up to and including EAL4 will only be available for products evaluated against a community PP (cPP). Other evaluations will only be recognized up to EAL2. All pre-existing agreements still apply, so for example the SOGIS-MRA still allows for mutual acceptance of much higher EALs for, say, smart cards.

This policy shift raised quite a bit of emotion and many different views. I have several ways of looking at this shift:

"This devalues the mutual recognition!"

EAL4 evaluations especially are both valuable in added assurance and expensive to perform, and mutual recognition of these evaluations was an important item for developers. Developers who had moved up to EAL4 evaluations and educated their users about the value of this are now caught in a situation where, formally, their certificate is not valid in other countries.

Partly I think we are caught in how we marketed the mutual recognition agreement here: the CCRA MRA was only a gentlemen's agreement between the CCRA governments not to force developers into new evaluations at EAL4 and below. We always described this CCRA MRA as "valid all over the world", but it actually wasn't. Companies aren't bound by it and can accept certificates as they want (or not). Governments are also not limited by it: if they want, they can also accept, say, EAL5 evaluations (this is formalized in Europe in the SOGIS MRA, and I know of >EAL4 evaluations accepted for specific TOEs). So I'm tempted not to focus on this aspect.

"This just encodes what is already true (for the US)"

The US scheme had already decreased the maximum EAL it would accept for evaluations within the scheme to EAL2, hence it was already not possible to do an EAL3 or EAL4 in the US. There was an option to do an EAL4 in one of the other CCRA certificate-issuing schemes and import it into the US that way, but there were often technical details that could interfere with this. When EAL4 was required, most of the time it was EAL4 plus augmentations not falling under the mutual recognition. Be it the robustness requirements, specific assurance activities defined in the PP, FIPS 140 requirements or an EAL4+AVA_VAN.5, technically these were already outside the mutual recognition scope. And as always, there was the escape clause that this was "national security" and wasn't bound by the CCRA anyway.

Note that I do think that the policy shift of the US scheme, started ~2 years ago, to accept only ≤EAL2 evaluations effectively kills off the expertise in their labs. If all that is done is to read the manual and the guidance, and look for publicly known vulnerabilities, there is no need for solid expertise in attacks in the labs anymore. I am saddened by this process, even if I can see how the various unhealthy interactions between all players in the scheme have led to this point.

"This doesn't affect the ≥EAL4 communities"

A bit of Europe-centric thinking crept into my mind here: the only ≥EAL4 communities I currently see active are the smart card and POI (payment terminal) communities. Those already have CC mutual recognition under the SOGIS MRA, and recognition by the other worldwide parties such as EMVCo doesn't depend on the CCRA MRA. This thinking is closely linked to the next view:

"This addresses the mistrust of the EAL2-EAL4 quality level, and promotes technical communities to solve it"

One can argue that the reason for downgrading the blanket mutual recognition from EAL4 to EAL2 is doubt about the quality of other nations' EAL4 evaluations. After all, to bring a community PP to >EAL2 (and ≤EAL4) there needs to be "a means to align the quality of the evaluations", i.e. there needs to be a technical community taking care that the evaluations are done at an equal-ish level. I do think that if there is such mistrust, it should be addressed at CCRA level (formally through the shadow and PVA mechanisms, or realistically just in the CCRA MC discussions). Once trust starts to break down, only communication about it can save it.

Renewed attention on and extra dependency on technical communities

I like the idea that the technical communities not only come together to make the community Protection Profile, but also stay together to make the evaluations go smoothly and at an equal-ish level. This is, however, quite an investment in energy, time and money from all parties. Until recently the only community that seemed to be active in that regard was the smart card community, joined more recently by the payment terminal community. These are the only ones I know of that actually have the developers, evaluators and certifiers meeting and creating consensus on the evaluation level and aspects. They invest the time and energy to meet ~6 times a year in the working groups, draft and edit the documents, and also the money to have the PPs certified and such.

Now an inspiring number of community initiatives could go this way. We have the older communities such as the multi-functional printer and OS PP communities, and newer ones such as the supply chain security community, yet I've not heard of any of these communities creating consensus, let alone guidance documents, on evaluation practice. As Tony Boswell remarked in the last talk of the conference, we seem to be missing an opportunity if we focus only on the voting and governance structures of such communities, at the cost of losing sight of the energy and enjoyment (as well as the risk and cost reduction) that comes from working together as passionate security experts.

"Government pace..."

The changes will not actually happen that fast. This is an intention of the CCRA MC to change the CCRA Mutual Recognition Arrangement, a formal agreement between all the CCRA participating governments. Changing such an international agreement is ... complex and time-consuming, even when everyone agrees on the direction. Until the moment the new or changed CCRA MRA goes into effect, this is essentially only a US scheme policy change, be it one with big impact on developers with US customers requiring EAL4.

Careful on the communication

Generally I lean towards clear communication even if this is politically incorrect. Still, I agree with the opinion of others such as @tsec that the communication here needs to be careful. Over the whole lifetime of the CC we've used the CCRA MRA as the argument for why the CC is the internationally recognized standard. This has become a large component in the perceived value of the CC brand, and I ask everyone to be careful in communicating this proposed policy shift.

Presentations

I do not intend these summaries to be a review or rating of the talks, as some of you remarked, although I am clear about my opinion on the quality of the presentations. I consider it the responsibility of the speaker to add something to the field, not just to spend the 30 minutes on nothing for a discount on the conference fees. With ~50 people sitting in the room, an empty 30-minute talk collectively is a full day of life thrown away. I won't be silent on that. Our time is worth a good effort from the speaker, and the speaker is worth good feedback.

This year I have not clustered the presentations; there were no real themes to cluster around and I hope that this lessens the rating feel of this summary.

"Common Criteria and Secure Development: A New Proposal"

In "Common Criteria and Secure Development: A New Proposal" Steven Lipner of Microsoft repeated his argument for a more process-centric, rather than product-centric, approach to be allowed into the CC. His main argument is that secure development processes such as Microsoft's SDLC provide much more of the actual product security than the evaluation does, hence he wants developer efforts in these fields to be recognized in the CC.

Now I am always torn between two views on this topic. I have great respect for how Microsoft, led by Steven, has made an enormous shift by applying SDLC, and it shows in the increased resistance against simple buffer overflow attacks and such. As Steven pointed out from his experience in the Microsoft security response centre, about 2 in every 200 vulnerabilities they handled were faults in the actual security features (i.e. the SFRs).

Yet here my other view kicks in: the fact that the TSF is so big and complex that all those millions of lines of code can break the SFRs is exactly the reason why, in my view, the product evaluation should be difficult to pass! The point is not to show that progress is being made on security, but to show that security is at a sufficient level. And there the process evaluation runs into difficulties: it is really hard to show that the process, even if it is perfectly followed, actually guarantees that the product will be secure.

I think this is what Steven experienced that led him to say that their experiences with process evaluation have been expensive, unpredictable and not that effective. He had hoped that the ISO 27034-1:2011 standard on management processes for secure development would help. I am not so sure about that: it seems to me that goes even one step further from the end goal. Instead of showing the TOE is secure, it goes towards showing that the management process of the development process of the TOE somehow makes the TOE secure. Yet these processes do improve the security of the product, so I have the impression we are missing good arguments here to make it work. I like that Steven keeps faith in his view even if I do not see how it fits just yet.

"Virtualization and the Common Criteria"

This talk sounded like it would be an interesting case of applying the CC to mobile applications or the cloud. However, I found it to be a rather empty talk on a trial project to allow access to medical data using mobile devices with a UICC. What was evaluated, against what, what the problems were, what the CC issues or even goals were: all was unclear. The speaker did not even know what was evaluated against what (luckily some of the audience did: the UICC hardware was evaluated against PP-0035). I was underwhelmed by the content of this talk.

"Certification of development sites of smart card manufacturers"

In "Certification of development sites of smart card manufacturers" Christian Krause of BSI and Michael Vogel of G&D described their experience with the site certification process. At the moment 14 site certificates have been issued, all in the smart card domain. The sites started as simple third-party production sites, and now G&D showed a site certification of their own development site.

A site certification of their own development site at first glance seems like just additional cost and effort, as re-use and funding issues do not seem to be present. G&D was, however, expecting evaluations in 3 different schemes with multiple labs, and this could result in having to spend many days on audits and alignment problems. Formally, site certificates aren't yet automatically mutually recognized (under the formal arguments that a site certification is not a product evaluation and often includes ALC_DVS.2, which is not part of EAL4), but between the smart card schemes they are easily accepted. (Background: the site certification process came from ISCI WG1, one of the working groups of the smart card community, hence this is a consensus between those smart card schemes.)

Besides the goal of reducing audit days, G&D also saw this as an opportunity to show non-CC customers that their site was secure. All in all their experience was good, and they showed a few tricks to sidestep the pitfalls of changing tool versions and of disclosing site details to others. I liked this talk for the progress it showed in this area and for how G&D used this strategically to reduce audit-day costs as well as for promotion outside the CC domain.

"Feedback on the application of ALC requirements in Open Source projects"

In "Feedback on the application of ALC requirements in Open Source projects" Christophe Blad of Oppida described his experience with applying ALC to open source projects. As these are open source projects, the confidentiality of the configuration items (especially the source code) is not relevant, only their integrity.

The projects covered used Subversion to control integrity. Commit access to the Subversion server was only given in steps: first an agreement with legal barbs had to be signed, then commit rights were given to a small, non-security-relevant part (technically this will have been part of the TSF regardless, as we are talking about a library without internal boundaries), and only then to the whole source. Full code reviews on submissions are (claimed to be) performed.

Christophe considered the integrity and access control to have passed evaluation muster. I agree that at low attack potentials there seems to be no reason why an open source project with a solid version management system and checks on commits cannot pass, but there is a lurking problem when the attack potential goes up: the ALC requirements assume a model of trustworthy developers. How this applies to a potentially malicious submitter of patches is unclear. I don't see a clear argument why an AVA_VAN.4 attacker submitting a backdoor as incremental "programming mistakes" will be countered by code reviews. There are a lot of ways to sneak in hostile code; even an off-by-one error deep in the multiplication routines can lead to remote backdoors (a seemingly honest mistake in the RSA reference library many moons ago left a lot of SSL implementations vulnerable to remotely exploitable buffer overflows).
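To illustrate what such an "honest mistake" can look like (a hypothetical sketch of my own, not the actual historical bug): a single `<=` where a `<` belongs is easy to wave through code review, yet it writes one byte past the declared buffer length.

```c
#include <stddef.h>

/* Hypothetical illustration: copy a received block into a caller's
 * buffer. The "<=" should be "<" -- a one-character slip that reads
 * like an honest mistake, yet writes one byte past dst_len. */
void copy_block(char *dst, size_t dst_len,
                const char *src, size_t src_len) {
    for (size_t i = 0; i <= dst_len && i < src_len; i++)
        dst[i] = src[i];
}
```

A reviewer scanning dozens of such patches has little chance of flagging this as hostile rather than sloppy, which is exactly why I doubt code review alone counters an AVA_VAN.4 attacker.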

"US NIAP Protection Profiles: Progress and Lessons Learned"

In "US NIAP Protection Profiles: Progress and Lessons Learned" Ken Elliott of The Aerospace Corporation described how the US NIAP drive towards "Tailored Assurance Packages" (TAP) was showing up in their new PPs, mostly the Network Device PP (NDPP).

My short summary of the background: NIAP has been driving for much more explicit descriptions of the expected assurance activities for several years now. The TAP/NDPP describes explicit steps to be taken, up to and including the type of tests expected or the documentation expected in the ST TOE Summary Specification (which is used as the publication mechanism). I see this approach as a distinctly American cultural view on setting evaluation requirements: make the assurance steps very explicit to ensure that there is no discussion of them during the evaluation (discussion possibly even conducted via lawyers). I personally like the idea of making assurance steps clearer, and at the same time get all kinds of questions when I see how specific the additional guidance is (be it in these TAP/NDPP documents or in, say, FIPS 140):

  • If an evaluator cannot be trusted to think of these tests himself, why do you think he can perform them adequately?
  • If the TOE passes that exact test, but would not pass one very similar to it, what happens? Is this outside the scope of testing (i.e. liability for the lab)? Is this inside (but then the list isn't complete regardless)?
  • Doesn't this have the risk of hugely over-specifying and thus restricting slightly different approaches?
  • Finally, who would actually enjoy this kind of work? Isn't this throwing away the fun and the value in security evaluations: the evaluators using their minds to think about why they cannot break the TOE?
I think part of this is a cultural difference with what I perceive as the European view: get the intention clear foremost, leaving the implementation details to the ones implementing it in the TOE or the evaluation.

Anyway, in the talk Ken described the approach of using the first evaluation of a TOE against this PP as a shake-out period: finding the places where the PP is unclear or too specific, and fixing these. The first evaluation for that reason gets more attention from the scheme. I liked hearing that the scheme was conscious of the disruption and risk of thrashing these changes pose to other evaluations, and paced the changes with that in mind. In general the direction was towards more objective requirements (not totally objective, I was glad to hear). I liked the insight into the process, and the remarks on the speed of change and the processes gave me a lot more trust in the NIAP processes.

"Assurance activities: Ensuring Triumph, Avoiding Tragedy"

In "Assurance activities: Ensuring Triumph, Avoiding Tragedy" Tony Boswell of SiVenture (and now, to my and others' surprise, part of Cisco via its acquisition of NDS) confessed that the title of his talk came from a mindset influenced by a day of management meetings. Tony saw the assurance activities as a way to smoothly tailor assurance to go deeper only in certain areas, re-using best practices by referencing external standards; in short, crystallizing the know-how for a specific TOE type. I am not convinced this is the proper path myself, and am surprised not to be in agreement with Tony's arguments, as I usually am.

"ISCI contribution to Common Criteria methodology improvement"

In "ISCI contribution to Common Criteria methodology improvement" Alain Boudou, on behalf of Eurosmart/ISCI ("The voice of the smart card community"), sounded almost apologetic that the smart card community had "only 3" contributions to show at the ICCC this year: "Reuse of evidences and evaluation results" by Carolina Lavatelli, "Minimum Site Requirements for the smart secure device supply chain" by J. Noller and W. Gutau, and his own talk. The humility of the smart card community amuses me: these talks represent a lot of consensus building on requirements and evaluation practice, trial use and improvement, and in the end committing all that to guidance that is followed by everyone in the smart card community. This is the image I have whenever there is talk of "technical communities" that should make community PPs and guidance and such.

Now that site certification and site requirements are mostly tackled, the focus is shifting towards semi-open smart card products, i.e. smart cards that allow loading of applications by entities that are not the holders of the assets already on the smart card. Think of loading bank applications onto a UICC SIM card, where the bank and the telecom provider do not necessarily share the same view on their assets and who holds them. Besides these policy and trust issues, there are also technical issues to address. The most important one is composition in the face of potentially hostile or negligent applications. In the smart card domain, with the AVA_VAN.5 attack potential (and the JIL/JHAS modifications to the rating that make the in-scope attacks even more powerful), not knowing the details of the applications' implementations adds quite a bit of additional work for the TOE developer as well as the evaluators. I find this an interesting field myself.

"Certification of a loader integrated in a Secure Microcontroller: Strategic stakes"

In "Certification of a loader integrated in a Secure Microcontroller: Strategic stakes" Christiane Droulers of STMicroelectronics described their experience of evaluating a smart card that allows the OS to be loaded into flash. The smart card PP PP-0035 was already written to allow this as a possible implementation in its life-cycle model; the PP does not, however, have pre-existing ways of encoding it. There are two ways to add this: implicitly including the loader in the TOE scope, or making it explicit in SFRs. STMicroelectronics chose to encode it in additional SFRs, which in my opinion is the approach clearest to the users. There was a suggestion to turn these SFRs into a package for others to re-use, which sounds useful to me.

The SFRs used encode authentication for the loading of the OS, integrity and confidentiality during loading, and making the loading mechanism unavailable afterwards. There are some smart-card-specific aspects to this that might be useful to explain. Nearly always the confidentiality of the object code of the applications is considered an asset to protect, partly for IPR reasons but mostly to meet the AVA_VAN.5 high attack potential: confidentiality of the TOE is important to reach the required 31 points.

Similarly, making sure the loading mechanism is not available in the field is important for the vulnerability analysis of the hardware too: if attackers can load their own code onto the hardware platform, they can load applications that make attacks much, much easier (this is called an "open sample"). If open samples are available, showing resistance against the AVA_VAN.5 high attack potential is very difficult, hence the requirement to make sure the mechanism is not available in the field. Note that in the PP-0035 life-cycle the entity in phases 5 and 6 of the life-cycle model, the card manufacturer, is considered to be trusted. In the approach of STMicroelectronics this trust is a bit stronger, and in my opinion still valid. I liked this talk.

"Technical challenges and solutions in Sony FeliCa contactless smartcard EAL6+ evaluation"

In "Technical challenges and solutions in Sony FeliCa contactless smartcard EAL6+ evaluation" Hideo Yoshimi-san of Sony described their approach for (one of) the first EAL6+ evaluations of a smart card product. As I was the commercial certifier on this project, I won't comment further on this presentation, except that I was very happy to see this presented.

"Complex Systems Security Assessment based on CC methodology"

In this talk Michael Dulucq of SERMA and Laurent Piebois of Airbus described how they took concepts from the CC to the airplane industry. As Airbus already had a modified waterfall development model similar to the one implicitly assumed by the CC, what was mostly missing was the vulnerability analysis view: instead of looking at resistance against random faults for safety, in security the attacker chooses the fault to his advantage. Or: considering that these TOEs (the airplanes) are in service for 30+ years, the chance of an attack occurring is best set to 100%. (Michael gave a more technical talk on this at ICCC12.)

Now the sheer size of an airplane in terms of systems (we are talking thousands of IT products), many of them running customized COTS OSes and being updated during the lifetime of the airplane, makes a formal CC evaluation impractical. So they sought to take the good ideas from the CC, incorporate them into their process, and increase security that way. I like that approach: it's practical and points to the added value of the CC within an already established, mature development approach. So they focussed on the things AVA adds. Michael pointed out that at >AVA_VAN.3 the input for the vulnerability analysis does not change, only the expertise required of the lab (i.e. the attack potential increases). So they focus more on making sure the security aspects are documented, allowing a lab to check later. In this approach it immediately becomes clear that quite some parts of the whole airplane simply cannot be shown to withstand AVA_VAN.3. Not necessarily because they are vulnerable, but because it is hard to show they are not (complex COTS OSes, legacy devices, ...). Their approach was to first identify what could be shown for which part of the system, and take it from there. I liked it and am looking forward to hearing more about this.

"Cloud security and Common Criteria"

In "Cloud security and Common Criteria" Ashit Vora of Cisco looked at what the CC could offer for cloud computing. Cloud computing has many different definitions and views; the aspect Ashit brought forward was the "multitenant" aspect: on a cloud, customers share resources with other customers. Doubts about security, especially with others on the same cloud, were mentioned as the number two reason for not adopting the cloud (at 32%), just under the number one: costs (at 34%). This issue becomes relevant especially for public or hybrid clouds.

Ashit suggested dividing cloud security by technology: use existing PPs for the known technologies, write new ones for functionality like virtualization and data-center-specific switches, and in the future add PPs for virtual security devices and such. This then runs into the composition of these PPs (which, as Albert Zenkoff pointed out, is just an engineering problem, following the example of composition as done in the smart card domain).

I'm not sure there is strong customer demand for truly certified secure clouds, much as I would like there to be. So although I hope such an effort will be made by a, dare I say it, technical community for clouds, I'm not expecting this to happen soon.

"Innovation and the Common Criteria"

This talk by Intel and Cisco seemed to want to argue that "innovative features" were a reason to bypass the earlier limitations on >EAL2 evaluations in the United States. What an innovative feature is, why it would qualify for special treatment (other than "stifling innovation"), how that would work, why this is an item for international discussion: I have no idea. I do understand the frustration of not being able to do EAL4 evaluations under the US scheme, though.

"Test Vehicle for Java Card"

In "Test Vehicle for Java Card" Toru Hashimoto-san of IPA described their project to make a test vehicle for Java Card attacks and evaluations, i.e. a smart card designed to withstand attacks just well enough to be interesting but not well enough to meet full AVA_VAN.5 requirements. Such a test vehicle is very useful for testing the skills and expertise of new labs, for aligning attack levels and ratings between labs, and for research and discussion. I liked the development of such a test vehicle in and under the Japanese smart card initiatives; unfortunately it is not available outside this initiative at the moment.

"Commercial Product Assurance - an update"

In "Commercial Product Assurance - an update" Simon Milford of SiVenture described the UK government evaluation scheme "Commercial Product Assurance" (CPA). As David Martin said, this scheme was started after some embarrassing data leaks had occurred, and at ministerial level it was decided to do something about the lack of evaluated means to prevent such data leaks right now. The CPA has "security characteristics" profiles that compare roughly to PPs with tailored assurance. Where possible these are aligned with collaborative PPs (at the moment the USB and virtualisation device PPs); where not, work is under way to align the CPA security characteristics with the US government requirements (such bilateral processes are common, especially between the US and UK).

Compared to the CC, the CPA evaluations are roughly at EAL2. Simon did not express a preference for either, nor did he see the CPA as a bypass of the CC. David did mention that the original reason for the CPA was to provide a shopping list of good-enough products, so a future where these were all CC-evaluated against community PPs is possible.

I liked this insight into the CPA, its goals and underlying ideas.

"CC compositional certification for MILS virtualization platforms"

In "CC compositional certification for MILS virtualization platforms" Werner Stephan of DFKI talked about distributing the security properties over the modules, so that the composition covers all of the original SFRs and future re-use is easier.

It sounded a lot like the way mature evaluation evidence is structured in my experience; at least, I have been approaching CC evaluation evidence and vulnerability analysis in this way for years (see also my ICCC10 talk about "Project Berke"). The difficulty when applying composition this way is how to formalize the security properties that a module provides and requires. This is easy in theory, but gets tricky quickly: there will be many implicit requirements such as "the module is only called through its interfaces". Werner's approach used natural language to capture this, which in my view will ultimately require a lexicon and manual checking.

Things really get complex when doing proper composition. Two composition mechanisms exist in the CC: CAP and CCDB-2007-09-001 (the one the smart card community created and uses). CAP is limited to enhanced-basic attack potential, a limitation the talk seemingly wanted to address; however, even after my questions about that, I still don't know how they wanted to address it, or even why they think it is possible. In my view the limitation to enhanced-basic attack potential is sound, because the composed TOE often requires things from the underlying platform TOE that were not considered security properties in the underlying platform's evaluation. The typical example I mentioned in my question applies: the hardware platform's evaluation does not thoroughly verify that it executes, say, a proper ADD or COMPARE instruction, even though the composed TOE will depend on those two to make bounds checks or security decisions. Once you look at emergent properties exploited by attack methods such as side channel analysis and perturbation, capturing this is well beyond the current state of the art.
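To make the dependency concrete, here is a minimal sketch (a hypothetical illustration of the argument, not anything from the talk): a composed TOE's bounds check silently trusts the platform's comparison primitive, and an uncaught fault in that primitive breaks the composed security decision.

```python
def make_bounds_check(platform_less_than):
    """Build a composed-TOE bounds check on top of a platform-supplied
    compare primitive (a stand-in for the hardware COMPARE instruction)."""
    def in_bounds(index, length):
        # Security decision: accept only 0 <= index < length.
        return platform_less_than(index, length) and not platform_less_than(index, 0)
    return in_bounds

correct_lt = lambda a, b: a < b    # well-behaved platform compare
faulty_lt = lambda a, b: a <= b    # off-by-one fault (e.g. under perturbation)

good_check = make_bounds_check(correct_lt)
bad_check = make_bounds_check(faulty_lt)

assert good_check(9, 10) and not good_check(10, 10)
# The faulty platform accepts index == length: a one-element overread
# that the composed TOE's own evaluation would never have exercised.
assert bad_check(10, 10)
```

The point of the sketch is that `in_bounds` never states its dependency on the compare being correct; that property was simply assumed, never evaluated, in the "platform".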

An area I like, though in my view the talk stopped just before it became interesting.

"Security evaluation of Communication Protocols in Common Criteria"

In "Security evaluation of Communication Protocols in Common Criteria" Georges Bossert of AMOSSYS described the Netzob tool. This tool can learn protocols by applying bio-informatics algorithms, allows manual tweaking, and can then generate traffic based on the learned model (a probabilistic finite-state machine with deterministic time delays). I loved the sound of this (it has been a long, unfortunately unrealized, wish of mine to build such a tool myself). It definitely seems a tool to have available when testing network devices (be careful though: the algorithms used need a fair amount of diverse example traffic to operate correctly).
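As a hedged illustration of the kind of model described (my own toy sketch, not Netzob's actual API or algorithms), a probabilistic finite-state machine can generate protocol traffic like this:

```python
import random

class ProbabilisticFSM:
    """Toy probabilistic finite-state machine emitting protocol messages.

    A learned model of the kind described would also carry deterministic
    time delays per transition; those are omitted here for brevity.
    """

    def __init__(self, transitions, start):
        # transitions: state -> list of (probability, message, next_state)
        self.transitions = transitions
        self.state = start

    def step(self, rng):
        r, acc = rng.random(), 0.0
        for prob, message, nxt in self.transitions[self.state]:
            acc += prob
            if r <= acc:
                self.state = nxt
                return message
        # floating-point slack: fall back to the last transition
        self.state = nxt
        return message

# Toy handshake protocol: SYN -> SYN/ACK -> (ACK 90% of the time, RST 10%)
model = ProbabilisticFSM({
    "closed": [(1.0, "SYN", "syn_sent")],
    "syn_sent": [(1.0, "SYN/ACK", "established")],
    "established": [(0.9, "ACK", "closed"), (0.1, "RST", "closed")],
}, start="closed")

rng = random.Random(0)  # fixed seed for reproducible generated traffic
trace = [model.step(rng) for _ in range(6)]
print(trace)
```

The real value of such a tool is of course in the learning step (inferring the states and message formats from captured traffic); the generation step above is the easy half.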

I did not know of this tool; it tickles my inner formal computing science guy, and I like seeing a bit of hardcore computing science brought into the CC evaluation domain.

"Technical Communities: Theory and Practice"

In "Technical Communities: Theory and Practice" Brian Smithson of Ricoh gave the last talk of the conference, a slot I've seen him in before (at ICCC8 in Rome). Brian had interesting points on the technical communities driven by NIAP (and a bit on the others). The technical communities come together first to draft a common set of functional requirements for a security product, from the technology side. Yet the value of the assets this security product is going to protect is assigned by the end consumer. Brian argued that developers might be in a much better position to draft such a requirements document, as they already gather and summarize their customers' requirements and are already in the position of having to please those customers while balancing the costs of the evaluations. So he suggested that industry, with its goal-driven approach, lead the way in technical communities.

I agree with the part about the functional requirements being described by the developers, with oversight by the government agencies later. However, I have found that evaluation labs generally have a much better view of the evaluation process side: which requirements are easy and which are hard for developers to demonstrate, which requirements are unclear and cause discussions with certifiers, and so on. The certifiers in turn need to be involved, both because of their dual role as accreditors in the government and so that they feel part of the consensus.

So I agree that industry is probably best equipped to get these technical communities started and keep them running, but this should not come at the cost of building a community and generating consensus. Tony Boswell formulated this much more politically than I do (and I probably misremember it now): we seem to be missing an opportunity if we focus only on the voting and governance structures of such communities, losing sight of the energy and enjoyment (as well as the risk and cost reduction) that come from working together as passionate security experts.

Unfortunately missed

I unfortunately missed Peter van Swieten's talk (not on the CD, copy is here) on the definition of TSFI used within the Dutch scheme. This definition of TSFI, chosen from among the different definitions in the CC, cuts down on the amount of low-added-value paperwork in the evaluation. As I understand it, the presentation was well received, and I got a few requests for the document. Although it is a stable document, at the moment it is formally still an unissued draft.

Conclusion and future

The big-ticket item of the CCRA MRA policy shift now doesn't really feel that threatening, assuming that the communication around this discussion is done with some consideration for the image of the CC outside our community. For the rest we seem to be mostly busy with applying the CC in the various technology areas, creating communities around these areas, and just getting things evaluated. I like this feeling of maturity and productivity, even if it seems a bit dull not to be talking about new gizmos in the standard such as ADV_ARC.

The next ICCC

The next ICCC will be in September in the United States, in the "capital city area", i.e. in or near Washington, Baltimore and Philadelphia. Announcements will be made via the portal's ICCC tab.

Gala dinner and certificate issuance

This year the gala dinner was less formal, with food stands and activities throughout the evening. I liked how that allowed for mingling with different groups and chatting with many people. I do hope this trend continues at future ICCCs.

The certificate issuing was done per country this year, making the process quicker and frankly less boring for the people who did not receive certificates, possibly at the cost of making it less weighty for the people who did (especially those receiving their first or an otherwise hard-won certificate). I haven't heard of any recipients feeling short-changed, though.

Thank you

I immensely enjoyed talking to everyone between the talks, afterwards, and deep into the night. Thank you all for the wonderful time!

Local ICCC13 presentation cache