

ICCC10 experiences

It has been a good CC conference again. I thought I'd write up some thoughts and observations I had, roughly in the order in which they first came up in the conference (but I'll pull forward other items where useful for the topics). As always, I am only human and will very likely have misunderstood or misrepresented some of the talks. Feel free to contact me to explain where I went wrong (or not).

Location

Travelling to Tromsø really hit home something that I knew intellectually but did not really appreciate emotionally: Norway is a huge country! Going to Tromsø is about as far as going to the south of Italy for me. The standard map projections really distort the perception of the distances. Tromsø, surrounded by stunning mountains, did provide a good backdrop for the conference, adding some quiet nature to the hectic lives we normally have.

Different problems for different (sized) TOE-types

For me, one recurring theme in this conference was the different situations that the huge software product companies and the more established smartcard/embedded domain find themselves in w.r.t. CC evaluations. Right from the opening keynotes, both Microsoft and Oracle expressed their difficulties in ensuring that all vulnerabilities or vulnerability causes could be shown to be adequately addressed at the time of evaluation and/or shipping.

Obviously they have been aware of this and have instituted processes inside their companies to improve security even in the face of constrained resources. Microsoft's Security Development Lifecycle, championed by Steve Lipner, is an impressive example of this (compare Windows 2000 to Vista; the improvement is remarkable). Nevertheless, the sheer size of the products (thousands of developers working for years), the multitude of underlying platforms and feature sets, and the long-running version support mean, almost as a law of physics (IMHO more a law of statistics), that the harsh end result is that there simply are not enough resources available to show with the appropriate assurance that there are no vulnerabilities (within the attack potential).

This results in the CC brand being damaged, because the public sees a CC-certified product being hacked anyway. There was strong lobbying from the big software companies, which almost seemed to say that we should just accept this as a community, but I prefer to interpret it as a lobby for some recognition of the improvement processes, so that these can be used to show their efforts internally and externally and get acknowledgment for them. Although I find it hard to come up with a way to measure this that is useful to the end-user (something like "shown 80% vulnerability free" does not really inspire trust, I think), I do believe that externally visible measurements will (via the push of marketing that also pushes up the EALs) lead to better products. Certainly the huge efforts of these big companies should be supported, even though the end result is not yet where it is in more mature areas like the smartcard domain.

On the other side, the smartcard domain was mentioned over and over again as the shining example of how a domain can reach the intended state, where products and evaluation methods are at such a level that high assurance at high attack potentials is achieved as a matter of routine. This is not to say that this level was reached effortlessly: creating the community has taken impressive political will to co-operate among parties with both "horizontal competition" (developers competing on products, evaluation labs competing for the developers, etc.) and "vertical competition" (developers wanting the evaluators to reduce cost/time/risk, certifiers wanting to increase the assurance). Discussions on vulnerability discovery techniques and ratings of such vulnerabilities require an atmosphere of trust that is hard to build and easily broken. With that trust and respect in the group, it was not so surprising to see the smartcard community, all the business competition notwithstanding, still hanging out together as a very friendly group, drinking (horribly expensive) Norwegian beers and laughing a lot. I certainly had a wonderful time.

Although mentioned from the start by David Martin and repeated in the discussion panel (also by yours truly), in various presentations and in the summary, this does not mean that the smartcard community can now relax: even maintaining this situation requires effort. Tony Boswell reminded us of this as astutely as always, repeating it in his talk "Building successful communities to interpret and apply CC". The way forward here is to identify small technology/product domain areas and form communities there. Making a common Protection Profile, common to the developers, evaluators, certifiers and hopefully also the end-users, is typically the starting point.

US scheme/government policy shift

Also in the discussion panel and throughout the whole conference, the impact of the policy change in the US (NIAP) was often discussed. It is a bit hard for me to understand this once-again-new policy direction and the impact it will have. It certainly seemed to seriously upset the more CC-mature developers in the US, with all sorts of ugly rumors/opinions going around. From outside the US community, it does seem that the sudden shifts in policy are destabilizing the complete basis of trust in the US evaluation community. I can't imagine that it will help the already difficult struggle of aligning what the US government wants as a procurer and accreditor (setting the PPs), what it does as a service provider (as the CB), and what can actually reasonably be delivered by the developers and evaluation labs. In a way it comes back to community building again, and frankly I am glad that in Europe the consensus community is much better established.

Future of the Common Criteria

The work on improving the Common Criteria (under the codename "CC v4") is ongoing, but the general summary is that the sheer scope of the improvement tasks (meaningful reports, predictive assurance, tools, skills and interaction, if I remember correctly) is overwhelming the working groups. I have not attended all the working group talks, but many seem to have re-aligned themselves, which was discussed just days before the ICCC. Opening up the working groups (for example by opening up the wikis intended to be used for this) to add insight from beyond the certification bodies has (as a result?) also been delayed. These processes will take a long time before they bear fruit, I think.

One area that seems to have made significant progress is what used to be the tool support working group. On behalf of the Spanish scheme, Miguel Bañón introduced the work done so far in the "Developer Tools and Techniques, Part I" talk, which essentially is to hook up with the MITRE work on CVE/CWE/CAPEC/etc. Robert Martin from MITRE gave a passionate overview of the work done under the MITRE flag in the "Developer Tools and Techniques, Part II" talk. Besides the very impressive community building that has been performed in that domain of software vulnerability discovery, what I found particularly clever about their approach was the explicit decision not to aim for one perfect taxonomy view of the domain, but to allow many different and non-unique views on the datasets. With 40,000+ listed individual vulnerabilities (CVE), 400+ listed common weaknesses (CWE) and all the other data, this is in my humble opinion a brilliant example of solving the "perfect is the enemy of good" problem. Informal consensus in the CCDB seems to be that linking up to this work by MITRE is going to go ahead, although the exact details are sketchy at the moment. I expect that some form of recognition of the CVE/CWE/CAPEC/etc repositories as being "fit and preferred for use in (software) vulnerability analysis / Security Architecture (ADV_ARC) work" is going to be the way forward, with additional guidance documentation formalizing this. At least that is the tried and proven approach that the smartcard community has taken with their ~18 classes of common weaknesses and the associated analysis aspects. Some serious work needs to be done to glue the two worlds together, but the hardest part, the political will, seems to be in place. One example of work to be done is how to mix these repositories into the attack potential calculations (interestingly enough, without knowing about these developments I had already started to do that as part of Your Creative Solutions' "Project Berke" work).
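As a small illustration of the "many non-unique views" idea, here is a minimal Python sketch; the entries and tags are made up for illustration, and the real CWE views are of course far richer:

```python
# Minimal sketch of the "many non-unique views" idea: the same weakness
# entries appear in several overlapping views instead of being forced
# into one perfect taxonomy. Entries and tags are made up for illustration.
weaknesses = {
    "CWE-89":  {"name": "SQL Injection", "tags": {"injection", "web", "input-validation"}},
    "CWE-120": {"name": "Classic Buffer Overflow", "tags": {"memory", "input-validation"}},
    "CWE-798": {"name": "Hard-coded Credentials", "tags": {"authentication", "design"}},
}

def view(tag):
    """One of many possible views: all weaknesses carrying a given tag."""
    return {cwe: w["name"] for cwe, w in weaknesses.items() if tag in w["tags"]}

# CWE-89 and CWE-120 both show up in the "input-validation" view, and
# CWE-89 again in the "web" view: views overlap, and that is fine.
print(view("input-validation"))
print(view("web"))
```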

As an aside, Miguel, speaking personally, made a delightful analogy between the CC and the Kama Sutra, concluding that CC work should not be just dull repetitive work (the repetition of calligraphy in the Kama Sutra) but also passionate fun. This was in his view the core success value of his lab Epoche & Esprit (also very productive at the CC conference by the way, they had 8-9 talks!) and he wished us all to share it. Although I myself as a northerner am a bit more reserved, I do share that passion and thank him for reminding us of it.

Vulnerability analysis (or: "Project Berke")

I was happy to be able to share some of my ideas on vulnerability analysis (when done by the evaluator) and security architecture work (when done by the developer as the "internal vulnerability analysis") in the "Vulnerability Analysis: Simplicity is the ultimate sophistication" talk. The trigger for this work was a (non-CC) vulnerability analysis I was doing under high pressure ("someone said our product can be hacked and the national bank also, help!"). Even though I consider myself a level-headed expert in such analysis work, I was struck by the realization of just how easy it is to forget to explicitly consider all attack methods. To support the analyst, with all his human limitations, in this challenging work, Your Creative Solutions is working on a tool. The project codename "Berke" was drawn from my youngest White Shepherd dog and the traits he shares with good (in)vulnerability analysts: interest in technology and the drive to break it (and cute looks :-) ). In building the tool, the "stupid machine" mechanically applying the rules made all kinds of strange aspects of the attack potential calculations stand out more clearly as a domain for human-added reasoning; a toy version of that mechanical calculation is sketched below. It has also made me see more clearly the differences in the assurance gained within the vulnerability analysis by means of the various testing approaches.
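To give a flavor of what "mechanically applying the rules" means: the factor names below follow the CEM-style attack potential rating, but the point values and thresholds are illustrative placeholders rather than the official CEM tables, and "Berke" itself is of course more elaborate than this sketch.

```python
# Toy version of the mechanical attack potential calculation a tool can
# automate: sum per-factor points, then map the total to a rating.
# Point values and thresholds are illustrative, not the official CEM tables.
FACTOR_POINTS = {
    "elapsed_time": {"<=1 day": 0, "<=1 week": 1, "<=1 month": 4, ">6 months": 19},
    "expertise": {"layman": 0, "proficient": 3, "expert": 6},
    "knowledge_of_toe": {"public": 0, "restricted": 3, "sensitive": 7},
    "window_of_opportunity": {"unnecessary": 0, "easy": 1, "difficult": 10},
    "equipment": {"standard": 0, "specialised": 4, "bespoke": 7},
}

# Highest threshold that the total reaches wins (descending order).
RATING_THRESHOLDS = [(25, "beyond High"), (21, "High"), (14, "Moderate"),
                     (10, "Enhanced-Basic"), (0, "Basic")]

def attack_potential(attack):
    """Sum the points for each rated factor and map to a rating label."""
    total = sum(FACTOR_POINTS[factor][value] for factor, value in attack.items())
    for threshold, rating in RATING_THRESHOLDS:
        if total >= threshold:
            return total, rating

print(attack_potential({
    "elapsed_time": "<=1 month", "expertise": "expert",
    "knowledge_of_toe": "restricted", "window_of_opportunity": "easy",
    "equipment": "specialised",
}))  # (18, 'Moderate') with these illustrative numbers
```

The stupid-machine effect: once a tool applies rules like these without mercy, every factor you cannot justify cleanly (is the knowledge really "restricted"? restricted from whom?) is pushed back to the human analyst as an explicit reasoning task.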

To keep the talk within the time, I had to rush through the various testing approaches a bit. I hope it was clear that analyze-then-pound-on-the-weakest-spot, for example analyzing code by exhaustive simulation of possible perturbations to find the weakest spots and then pounding on those weakest spots with the automated perturbation setups (or good code analysis and tailored fuzzing in the software domain), can be a very cost-effective and high-assurance approach. (I think the talk "Taking white hats to the laundry: how to strengthen the testing in CC" by Apostol Vasilev was also about this, but frankly I'm not so sure I understood his point.)

All in all I think that I am again returning to a sentiment that has been repeated in nearly every ICCC I've been to: the vulnerability analysis is the place where the assurance gained by the evaluators in the other parts translates to the one assurance that the end-user is looking for, the statement "we are convinced there is no vulnerability within reach of the defined attacker (assuming you as an end-user followed the manual)". For this to hold, the vulnerability analysis above all other assurance areas should be of the highest and most verifiable quality.

And as my answer to Tony's question highlighted, in my opinion documenting the complete reasoning (especially the attack methods that are considered not applicable) is an area where the practice can still be improved quite a bit.

Something else that I had to gloss over due to time constraints also came up during the questions: the quality of the vulnerability analysis of course also depends on the quality and completeness of the bag of attack methods the analyst has, as well as his understanding and creativity. This is both a knowledge and a skill management issue. Knowledge management can be supported by the use of attack method repositories, be it in the form of documents with common attacks (for example the JHAS documents in the smartcard domain, or the MITRE CAPEC, OWASP and similar lists) or more integrated tools like "Berke". The understanding and skill of the analyst will never be replaced by a tool here; this remains the hard and extremely rewarding domain of the person. Based on the slides of "Vulnerability Analysis Taxonomy: Achieving completeness in a systematic way" by Javier Jesús Tallón, I think he made a similar point. In any case, as always I am extremely grateful for the questions (this is for me personally a large part of the reward for sharing my ideas).

10th anniversary

The 10th anniversary of the ICCC was celebrated in the Arctic Cathedral, with haunting music and dance from local performers, reminiscent of the northern lights and the all-dark winters of the arctic.

The loss of Mats Ohlin, for many years the chair of the CCRA MC, put a slight damper on the whole celebration though.

Formal methods and RNGs

Formal methods and RNGs are specific evaluation methodology domains which I really enjoy (yes I am aware that this is not a common view). This year there were but a few talks on them and as always I had to choose between them and more practically relevant talks. I did get to go to two of them.

There was an informative talk "Formal security policy model for a system with dynamic information flow" by Jens H. Rypestøl (who was also on the panel, BTW), describing the application of formal methods to a classified/unclassified switching situation. I enjoyed hearing his remark that obviously Bell-La Padula does not apply as the labeling on some ports is dynamic; it showed ingrained insight that might be expected in the academic world but is unfortunately all too frequently absent in the commercial world.

Wolfgang Killmann summarized the new random number generator evaluation methodology developed for the German scheme (to replace the AIS20 DRNG and AIS31 TRNG methodologies). In addition to updated requirements on deterministic and physical true RNGs, requirements for non-physical true RNGs (for example /dev/random on Linux/Unix-like systems) and hybrid RNGs (nearly all high-end RNGs in cryptographic software solutions, such as Peter Gutmann's design) will be present. Interestingly, the entropy estimators common in non-physical true RNGs and hybrid RNGs will apparently be subject to requirements similar to the total failure tests of AIS31 (a good addition, as these entropy estimators are often very poorly documented and there are quite a few examples where they overestimate the entropy by orders of magnitude). More explicit evaluation and certification methodology should be included, which should ease some of the problems with implicit requirements and ambiguities that have made TRNGs a very serious project-timeline risk factor in evaluations under the German scheme. Unfortunately the draft requirement is still under official review at BSI, and although Wolfgang optimistically expected it to be available December 2009, I'm not holding my breath waiting for BSI to officially release it (I saw the almost-final drafts a year ago!). Intellectually it is a pity: with all their serious pitfalls and difficulties in practical application, AIS20 and AIS31 are still the best RNG evaluation standards for cryptographic use out there, and an update would be most welcome. In practice, however, I imagine that many developers and evaluation labs in the German scheme will enjoy the delay, as it allows re-use of existing documentation.
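To make the overestimation point concrete, here is a minimal Python sketch (my own illustration, not from the talk) of why a naive frequency-based entropy estimator can be fooled: a plain byte counter is completely predictable, yet its byte frequencies are uniform, so the estimator reports nearly the maximum possible entropy.

```python
# A fully predictable byte counter fools a naive frequency-based
# Shannon entropy estimate: uniform byte frequencies => ~8 bits/byte
# reported, while the real entropy of the source is essentially zero.
from collections import Counter
from math import log2

def naive_shannon_bits_per_byte(data):
    """Shannon entropy estimate based only on single-byte frequencies."""
    counts = Counter(data)
    n = len(data)
    return -sum(c / n * log2(c / n) for c in counts.values())

predictable = bytes(i % 256 for i in range(4096))  # 0,1,2,...,255,0,1,...
print(naive_shannon_bits_per_byte(predictable))    # 8.0, yet zero real entropy
```

Real estimators look at more than single-byte frequencies, of course, but the same blindness to structure reappears at whatever order of statistics they stop at, which is why requirements analogous to total failure tests are such a welcome addition.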

Appropriate assurance

As always there were many talks on streamlining the assurance, from indications of the direction it should go (for example "Effective evaluations outside the EAL framework: Vertical Assurance Packages & -Profiles" by Jose E. Rico) to solutions already deployed in specific areas (for example "Dedicated EAL: The payment terminal experience" by Carolina Lavatelli). Tony Boswell summarized in his talk "Appropriate Assurance: Fitting like a glove, not a tent" the wish to tailor the assurance, and some approaches to doing so.

The trouble with the whole discussion is, in my opinion, not so much how we encode graduated assurance in the CC language (one ST with multiple levels, several overlapping STs, minimalistic business-assets-only and leaving it to the lab, etc.) or how we market new evaluation assurance packages alongside the EALs. Certainly there are some small devils in the details, but we can work those out. I think the fundamental problem is that we do not have a common view on which parts of the whole process add what in "assurance value". This is an area where we all seem to have the one true view, which only when we try to explain it to others turns out to be both different from the others' views and not that clear to ourselves either. This is going to need some serious thought and much discussion if we are going to solve it (on the other hand, it is not hurting us that much yet...).

Undoubtedly it is a complex issue already. If you consider "good assurance" to mean "performing all appropriate work units accurately", then my presentation "Taming the complexity of the CC" contains some graphs generated by marking all dependencies of each work unit on other work units of an EAL4+ evaluation. Now this is just one (albeit experienced and opinionated ;-)) person's view on the dependencies, but it still shows quite clearly how complex the picture becomes. CC evaluations as a process really are that complex and interconnected. Interestingly, this was illustrated by accident when I gave the talk at the conference: the laptop/projector for some reason had extremely poor contrast, causing nearly all pictures to be unrecognizable blobs. As I tend to look into the room and only occasionally at the laptop screen, I only discovered this by an accidental glance at the projected screen three quarters into the presentation (that's what I get for trusting technology ;-) ). Surprisingly, half a dozen experienced CC people afterwards told me that they had not said anything because they thought the vague complex blob was my deliberate way of accurately representing the CC!
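For the curious, here is a tiny sketch of the kind of dependency marking behind those graphs; the work unit names and dependencies below are a hypothetical fragment for illustration, not my actual EAL4+ mapping.

```python
# Each work unit lists the work units its verdict depends on.
# Hypothetical fragment; a real EAL4+ mapping has hundreds of entries.
depends_on = {
    "AVA_VAN.3-1": ["ADV_ARC.1-1", "ADV_TDS.3-2", "AGD_OPE.1-1"],
    "ADV_ARC.1-1": ["ADV_TDS.3-2"],
    "ADV_TDS.3-2": ["ADV_FSP.4-1"],
    "AGD_OPE.1-1": ["ADV_FSP.4-1"],
    "ADV_FSP.4-1": [],
}

def all_prerequisites(unit, seen=None):
    """Transitively collect everything a work unit depends on."""
    seen = set() if seen is None else seen
    for dep in depends_on.get(unit, []):
        if dep not in seen:
            seen.add(dep)
            all_prerequisites(dep, seen)
    return seen

# Even this toy fragment fans out quickly; plot the full set of work
# units and you get the "complex blob" from the talk.
print(sorted(all_prerequisites("AVA_VAN.3-1")))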

Also part of the presentation was a reminder of the patterns that all parties involved tend to apply to solve the complexity for themselves. Taken together, these patterns have a strong tendency towards stagnation and ever-increasing work "just to be sure". If anything comes out of "CC v4", I hope we can really, explicitly move forward in reducing the effort. Merely not increasing it is not good enough at the standard-making level, because in the practice of applying the standard the work often increases, sometimes stays the same, and never really decreases (barring smart solutions with lots of discussion). In the step from CCv2 to CCv3, even though there were several different attempts to reduce the complexity, in the end not much really changed in workload (OK, one thing: we lost the double tracing of SFs and SFRs).

BS7799/ISO17799/ISO2700x and the Common Criteria

The major recurring topic that was missing at ICCC9 last year was the integration/annexation/co-operation of the CC and BS7799/ISO27001. This year it was back again with several talks. The talk "CC vs. ISO/IEC 27001:2005: How to use an ISO/IEC 27001:2005 Certified Information Security Management System (ISMS) in a CC Evaluation." by Jean-Yves Bernard had the most explicit proposal for re-use of an ISO27001 certification in the CC. He suggested that an ISO27001 certification resulting in a "correctly established" verdict proves that the procedures are designed and applied correctly (i.e. most of the paperwork and checking of ALC_DEL is covered), provided the CC assets in the development (mostly: TOE parts are to be kept confidential and unchanged) are considered assets to be protected within the ISO27001 ISMS scope.

As remarked from the audience, the trouble is that in practice we have all, as evaluators, been in the situation where the site security still had minor or major weaknesses even though ISO 27001 was applied. One can argue a lot about this: we all know that site security is at its best during the CC audit (a spot check of a day every two years), while the ISO 27001 approach drives it from the management down, which means it is maybe not always as good, but should be more evenly and deeply applied. On which is better, or whether either is even just good enough, we all have strong but differing opinions. In a way this is again the difficult problem of aligning vague assurance ideas, in this case between ISO 27001 and the CC. In the smartcard domain this has been seen to be insufficient too many times, which has led to site certification from within the CC context. My short summary is that the CC does it its own way (in my experience the 27001 documentation/processes typically can be re-used in the CC evaluation process, but the 27001 certification itself can not).

Achilles heel of the CC: objectives for the environment

Related to this discussion, there is an aspect of the CC where it is quite fragile. Grossly simplified: the CC assures that the product provides specific security functionality provided the user follows the manuals exactly to fulfill the objectives for the environment. BS7799/ISO27001 uses the products under the assumption that they provide the appropriate specific security functionality, and seeks to enhance the procedures and learn from mistakes. Combining these two views makes the fragility of the "CC certification stamp" clearer: CC evaluations depend critically on exactly the user behavior (reading the manuals) that is not always exhibited.

Within the CC domain, the solution is to reduce the objectives for the environment to the absolute minimum, as this makes the room for mistakes smaller. It does mean that the product has to become stronger and the evaluation surely not easier. One can see this difference quite clearly between the large software product and smartcard domains. The large software product PP/STs typically include unrealistic assumptions (and therefore objectives for the environment) such as A.PEER (connect only to benign systems), thereby excluding nearly all realistic usage. Compare this to the objectives for the environment for smartcard hardware, which say that the software must be coded according to the guidelines and the personalization performed safely, and then the product is assumed to be handed directly to the attacker. Coming back to my "Vulnerability Analysis" talk: I think playing clever tricks with the statement the CC gives, i.e. "we are convinced there is no vulnerability within reach of the defined attacker (assuming you as an end-user followed the manual)", by putting everything into that assumption, is damaging the CC as a brand (see also Albert Dorofeev's talk). With kind thanks to the person who sparked that discussion at my second talk!

Last but not least

At this point I'm left with the talks I attended that I could not weave into the story above, but were interesting nonetheless.

First is the talk "Optimizing ADV/AGD evidence for CC 3.1" by my former colleague Peter van Swieten from brightsight evaluation lab (I left, not Peter ;-) ). Peter promoted his approach of making the ADV/AGD documentation as minimal as possible, by documenting only what is strictly needed by the CC and by avoiding duplication of information. It was the first time that I've seen him present it as a completed picture supported with his own trial experience, and it will be interesting to see how it will hold up in the practice of development. I certainly wish him all the best.

I enjoyed the talk "Policies versus Threats: clarifying the Security Target" by Albert Dorofeev (update: Albert put the accompanying paper up, as well as the final version of the presentation) as it provoked in me some reconsideration of the roles of policies and threats. On the CC-technical level, both are equivalent and the choice for either one is a matter of taste (in fact, from the evaluator/certifier point of view, it is a moot point until they become SFRs for the TOE or objectives for the environment). As Albert pointed out, making lists of attacks for the lab to consider and putting them as threats in the security target is a waste of effort: an evaluation lab does not need to address a threat (the evaluator applies attack methods to break SFRs; threats have nothing to do with AVA_VAN), nor will a lab skip an attack method because it is not listed as a threat. Listing threats created by having a TOE is simply not useful (note that this is a view with very vocal supporters and attackers; I am on the supporters' side here).

Instead of an incomplete listing of bad things the TOE is supposed to protect against in the form of threats, Albert advocated making the complete list of things the TOE positively does in the form of policies. Obviously this depends on the lab doing their work correctly, but that is already supported by other means (CB oversight of labs, inter-CB shadowing/review, and technical guidance such as the JHAS working group in the smartcard domain). I like this approach as it solves what Marcus Ranum calls the "enumerating badness" problem in "The Six Dumbest Ideas in Computer Security". I am not entirely sure that you can't also achieve this by describing it as threats to business assets. I mean, if you use the same abstraction level, "P.Confidentiality - The TOE must provide means to protect the confidentiality of the stored assets" and "T.Confidentiality - An attacker with physical access to the TOE breaks the confidentiality of the stored assets" seem the same to me.

In any case, it provoked some re-assessment of my views, and that for me is a highly valued gift from Albert. As said before, I also liked his view that assumptions (or to be precise: objectives for the environment) are basically the risks the user takes. Making it more visible to end-users that there is risk shifting back to them again would be very high on my list of ST improvements for CCv5. It is probably too much to ask for, but I'd really like to rename "Objectives for the environment" to something like "This is what you as user MUST ALL ensure AT ALL TIMES. Breaking your part of the deal VOIDS ALL IMPLIED WARRANTIES". In blinking red letters. With an EULA-like OK button that will not allow you to proceed unless you claim to have understood this.

There were some talks about reverse engineering and its application to the CC, for example "Why source code when having[sic] binaries? Applying reverse engineering in Common Criteria evaluations below EAL4" by Trifon Giménez. It looked like in the PC software domain there is still quite a bit of discussion needed to reach consensus on how to integrate the results of reverse engineering into the attack potential rating. Central in the discussion seems to be how "knowledge of the TOE" is rated when there is no source code available (which some speakers for some reason link to ADV_IMP, but that refers to the evaluator having the source code, not the attacker).

I think I am missing a part of the discussion here, because in my consultancy/evaluation work I've always found that sensitive knowledge of the TOE could only realistically be claimed for software in situations where either the software is not provided to the attacker (for example webservices only accessible over a network connection) or it is encased in hardware specifically designed to protect the confidentiality of the software (typical in the smartcard and HSM domains). Reverse engineering tools like IDA Pro are so incredibly powerful (and/or the obfuscation so weak), especially in the hands of experienced hackers, that when the binary is available it is in my opinion more a question of how much time it takes to reverse engineer it and find the vulnerability in the mass of data than a question of source code availability. There are some exceptions, as always: I've seen situations where a particular logical flow error could realistically only be found with access to the design information, but they are rare, and therefore I think such cases should be explicitly argued as being rare. Note that this is an area with quite some interaction with other parts of the evaluation, like site security (ALC_DVS), delivery (ALC_DEL), and tools for compilation/obfuscation (ALC_TAT), and it becomes really self-referencing when the protection against reverse engineering of the TOE is part of the SFRs for the TOE (which is sort of the case in the smartcard domain). There be dragons here.

There were several talks on the experiences with various schemes. I attended Hamada-san's talk "Sony FeliCa: Smartcard CC Evaluation Experience with Five Schemes", which shared some of his experience in the domain. I liked the view that running a composite evaluation with the same evaluation lab that did the hardware evaluation is likely to give a deep result, and with another lab a fresh result. Certainly the knowledge transfer from the hardware evaluation to the composite evaluation, in the form of an ETR-lite and in practice often also some discussions between the two labs, is formally well established in the smartcard domain. Still, at any transition between people knowledge gets lost (and new fresh insights might be created), and this holds even more between labs (who are still competitors in their services). When there is a serious time gap between the evaluations, there is also the risk that people simply forget details (and/or forget which detail applies to which product). Hamada-san also highlighted the common business problem of a composite evaluation: who manages the evaluation, who pays for it, and how is everything glued together? It was good to see the viewpoint of a party that often is not the one composing the product in the production sense, but is the one interested, in the evaluation sense, in showing its customers that their assets are safe in the composed product.

The final talk I want to highlight is the one by Shanai Ardi called "Secure Software Development for Higher CC Evaluation Assurance Levels". The core of the talk was her work on the "Sustainable Software Security Process" (S3P), which seeks to help developers make (more) secure products by making available the deconstructed attack requirements and tracing their antithesis through the design. Common attacks are deconstructed into their steps, with the idea that the developer can then make sure that at least one of these steps is not possible (hence breaking the whole attack path), as the sketch below illustrates.
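A minimal sketch of that idea as I understood it; the step names and the mitigation are made up for illustration:

```python
# Deconstruct an attack into ordered steps; the whole attack path is
# broken as soon as any single step is made impossible.
attack_path = [
    "attacker supplies over-long input",
    "input reaches unchecked buffer",
    "overwritten return address is executed",
]

mitigated = {"input reaches unchecked buffer"}  # e.g. bounds checking added

def path_broken(steps, mitigations):
    """The attack fails if at least one of its steps is prevented."""
    return any(step in mitigations for step in steps)

print(path_broken(attack_path, mitigated))  # True: one broken step suffices
```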

I was struck by the overlap in thinking with both my work on the "Berke" project and MITRE's work on CWE/CAPEC. Apparently this idea is now ripe for the market in some way. In the S3P approach, breaking the attack paths is introduced as requirements in the process, so that developers can have more confidence during development that no vulnerabilities will be discovered during the evaluation. In a way this is similar to Microsoft's SDL (but more explicit and more focused on the attacks instead of the process) and MITRE's CWE/CAPEC work (which seeks to build repositories of the attack methods). It might also be in a good position to provide developers the externally visible indication of better security development effort that they are seeking.

I did not agree with the place in the CC process where she was hooking it in, though. As described above with the talk by Albert Dorofeev, I am strongly opposed to listing the threats the TOE introduces in the ST. Besides the obvious solution to those threats (do not buy the TOE!), I do not think such a listing will be useful to the end-user, especially if there is no way to ensure it is complete (MITRE CWE lists 400+ general weaknesses; do you really want to include them and more in the ST? Will you ever spot the one that was removed because it actually was a vulnerability in the TOE?). It would also introduce the nasty property that the SFRs that come out of these threats are themselves only secondary requirements (i.e. they are there because they are one of many ways to eventually break the business assets the user cares about), but in the CC we do not have that distinction officially. So breaking a secondary requirement (for example: random memory layout) will also officially count as a vulnerability that fails the TOE in the evaluation, even if there is no way to extend that attack step into breaking the real business assets.

In my not-so-humble opinion, the right place to put this is in the security architecture (ADV_ARC), to show that the common attacks to break the self-protection and non-bypassability of the TOE are addressed. Using the S3P process and its repository, the developer can then show how they addressed these attacks, guiding the evaluator through these difficult-to-prove aspects (especially in large software products). The evaluator can re-use this knowledge in his vulnerability analysis, hopefully with only minor verification tests here and there. In the smartcard domain this has informally been the de facto norm, and I see no reason why it should not apply to software also. I think this is the type of solution that would address the need for acknowledgment of the secure development efforts that the large software developers are asking for, and increase the real-world security of said products by really addressing these attacks across the broad spectrum. Making an extended assurance requirement to highlight that this process is used, e.g. ADV_S3P.1 "The developer has shown to have considered all attacks listed in S3P [assignment: version/date] on all externally visible interfaces of the TSF", would allow the marketing and push the quality up.

Some stats and next ICCC

There were about 258 participants. Of the 109 submissions for talks, 64 were selected (including the standby presentations). To be honest, when both my submissions were accepted I thought they had a hard time filling the schedule but apparently not.

The next ICCC will be in Turkey.

Conclusion

To conclude, I really enjoyed the conference and would like to thank SERTIT for organizing it so well. Besides the interesting talks, the social part was very enjoyable. Sitting in good company with a beer or black tea, good-humoredly heckling each other over the practice of CC, jokingly discussing setting up a lab, seriously trying to understand the culturally different approaches to managing your manager, and in general having stimulating exchanges of ideas was a joy. Thank you all!
Photo of Wouter

Photo of Berke, the inspiration

Photos of surroundings

Local ICCC10 cache until the website is updated