ERTOS L4 microkernel verified, or "I'd start a lab for that!" (Previously emailed to a private mailing list)
The verification of a microkernel is an impressive piece of work, but the usefulness is hard to estimate (and this comes from someone indoctrinated with formal methods at the Eindhoven University, home of formal methods god Edsger W. Dijkstra ;-) ).
What exactly is verified?

I'm having a hard time finding the formal spec they verified against, so I cannot see whether it covers the security features the kernel offers to applications (they suggest some form of access control mechanism, but what exactly does it deliver?).
From the papers it seems that they did not verify the step from the general security properties delivered to the applications down to the actual full API (for example, from "the content of a file is only readable to applications that hold the read right" down to every call that can touch that content). I could be overlooking it in one of their papers, but if I'm right, they have proved that the kernel implements the API correctly, not that the API cannot be misused to do something the user does not expect. This leaves the risk of API-level attacks open.
Why the weird posturing against the Common Criteria?

Their Common Criteria (CC) cost claims are, sorry, but there is no nice word for this, ludicrous. For one thing, they count their effort on only 7,500 lines of code but the CC effort on the full product, and then multiply by the "plucked out of the air" figure of $10,000/LOC. A fairer comparison would cover the full product (including the assembly and start-up code!) on both sides. And at 25-30 person-years of effort, their work easily cost $3M just in wages. Of course, much of that effort came from un- or underpaid students and graduates, and from university staff whose salaries are covered anyway, but that hardly scales to commercial evaluations ;-).
In CC EAL6-7 terms, their work is but a subset of the required work, even though it is the hardest part. Besides their formal proof, assurance is also needed on the tools used (such as the compiler), site security, version management, documentation, and, most importantly, the declaration of the security properties promised and their mapping to the actual spec (in CC terms: the ST and ADV_FSP+AGD_OPE).
Their claim implies that the remainder would cost $87M - $3M = $84M. I'm willing to set up a CC lab and run that evaluation project for a mere fraction (even down to a tenth) of that budget, funding up front :-). Seriously, the remaining CC work is similar to what was done for the first Linux evaluation, for which a total cost (evaluation and development) of ~$1M was claimed.
Frankly, I don't understand why they chose to contrast their work with the CC here. If anything, it is a significant cost-saver in any formalized evaluation methodology, of which the CC is the most prominent in their market. In other words, I would market it as "look, the hardest part is done; the rest is easy if you wish to do it".
Limits of (this) formal proof

A word of caution on the interpretation: this proof is only valid if their assumptions hold, in particular that the compiler and the hardware do not introduce *any* behaviour different from the formally defined behaviour of the C statements. Differences can quickly explode into an attack not covered by the formal proof. For the same reason, the proof obviously does not hold when the attacker has physical access to the product, as such attackers routinely push even 100% conformant hardware into non-conformant behaviour.
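To make this concrete, here is a toy Python sketch of my own (not from their proof) of how a gap between the formal semantics and the executed semantics can turn a proven property into an attack: a bounds check that is sound over mathematical integers passes on wrapped 32-bit arithmetic.

```python
# Toy illustration (not the seL4 proof): the same bounds check evaluated
# under idealized integer semantics vs. 32-bit two's-complement hardware.

MASK = 0xFFFFFFFF

def add_ideal(a, b):
    # formal model: mathematical integers, no overflow possible
    return a + b

def add_hw32(a, b):
    # what 32-bit two's-complement hardware actually computes
    r = (a + b) & MASK
    return r - 0x1_0000_0000 if r >= 0x8000_0000 else r

def in_bounds(add, base, offset, limit):
    # the property being checked: "base + offset stays below limit"
    return add(base, offset) < limit

base, offset, limit = 0x7FFFFFFF, 1, 0x7FFFFFFF
print(in_bounds(add_ideal, base, offset, limit))  # False: correctly rejected
print(in_bounds(add_hw32, base, offset, limit))   # True: wraps negative, check passes
```

A proof over the idealized semantics says the access is rejected; the hardware wraps the sum negative and lets it through. That is exactly the kind of difference that "explodes into an attack" outside the proof's assumptions.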
Stepping back, I think their work is impressive and shows that formal evaluation of real products of reasonable complexity is finally becoming realistic. There has been a comparable formal verification of a Java Card implementation by Gemplus, using a similar approach but embedded in a Common Criteria EAL4+ evaluation (I have seen presentations of this and talked to the people from Trusted Labs about it, but I can't seem to find that presentation anymore. Here is some information).
Be warned about formal methods as snake oil, by the way. I've seen horrible lies told with formal methods. Most involve using a proof method unsuited to the job: for example, searching for a violation of a security property, not finding one (while also not proving that the search would have found all violations), and concluding the protocol is safe because the computer did not find a flaw (on my first skim of the protocol, a gaping hole was immediately clear).
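To make that fallacy concrete, here is a toy sketch of my own (Euler's prime-generating polynomial stands in for a protocol property): a bounded search that finds no counterexample proves nothing about inputs past the bound.

```python
# Toy illustration of the flawed reasoning: a bounded counterexample search
# that finds nothing is NOT a proof. Euler's polynomial n*n + n + 41 stands
# in for "the security property": it is prime for every n below 40, then fails.

def is_prime(k):
    return k >= 2 and all(k % d for d in range(2, int(k ** 0.5) + 1))

def find_flaw(bound):
    # "model check" the property only for n < bound
    for n in range(bound):
        if not is_prime(n * n + n + 41):
            return n
    return None  # no flaw found WITHIN THE BOUND; says nothing beyond it

print(find_flaw(40))  # None -> the snake-oil conclusion: "proven safe"
print(find_flaw(41))  # 40   -> the flaw sat just past the search bound
```

The honest claim after the first run is "no flaw below 40", not "no flaw"; the dishonest version is the lie I keep seeing.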
And some formal-methods practitioners, especially those embedded in consultancy companies, do silly things like introducing a patently incorrect derivation rule. And don't get me started on the wrong underlying threat models.
Conclusion

Both formal methods in practice and the development processes that make verification possible still need work to become more efficient. The field is nearing the point of practicality for compact security products, but it is still cutting (and bleeding) edge.
Author: Wouter Slegers - Copyright Your Creative Solutions 2009 - All rights reserved