Cogent is an activity of the Trustworthy Systems project inside the Software Systems Research Group (SSRG).


Reducing the cost of verified systems software.


Fundamental to our approach to developing trustworthy systems is the use of a minimal trusted computing base (TCB) whose correctness is formally proved. We build such systems on top of the seL4 microkernel, for which we have developed proofs of implementation correctness and of enforcement of the isolation properties required for security and safety.

However, a real system's TCB is bigger than the microkernel. In particular, it contains operating system services, such as file systems, device drivers and network stacks. Implementations of such services tend to be significantly bigger than the microkernel. To achieve overall trustworthiness, these services must be verified too. As manual verification is costly and scales poorly, we are investigating ways to (partially) automate the verification of system services, using file systems as a first case study.

Approach: Cogeneration of code and proof

The core idea of Cogent is to specify a code module's functionality in a high-level language, and from that generate low-level (C) code, together with a proof of the correctness of the low-level code. That way the programmer can focus on the logic of the problem, i.e. the algorithms, without having to worry about error-prone tasks such as memory management or unwinding of state in the case of unsuccessful operations.

Importantly, the high-level language provides a more convenient and productive environment for proving the correctness of the overall operation of the service. This is similar to the (albeit manual) approach we took with seL4, where we did a two-step refinement proof: (1) from a high-level functional specification to a low-level executable specification, and (2) from the executable specification to C code. Cogent automates the second step.

We use file system implementations to drive the design and implementation of the Cogent framework. We can make a number of observations about file systems (at least some of which also hold for other systems code, such as network stacks):

  1. Modern operating systems have a layered architecture, where services are stacks of abstraction levels; file systems and network stacks are examples. Such a logical structure lends itself to a modular implementation, with component interfaces matching the abstraction levels. For verification, such modularity is a big advantage, as components with well-defined interfaces can be verified individually. Given our earlier finding that verification effort scales quadratically with code size, this is key to reducing overall verification cost.
  2. Due to the nature of FSs, their implementations tend not to have much sharing among data structures. This feature is well captured by linear type systems, which directly enable easy memory management and, as a consequence, establish a direct link between a pure high-level semantics and a stateful low-level semantics.
  3. The control logic is cluttered and obscured with error-handling code; in some modules, error handling makes up 80% of all code. Much of this is boilerplate (but nevertheless error-prone) and can also be abstracted away into a domain-specific language (DSL). This lets the implementer focus on the high-level logic.
  4. In-memory data structures for maintaining the file system information can largely be handled by suitable abstract data types (ADTs), which are separately implemented and verified.
  5. A large part of FS code complexity can be readily described using DSLs: code for serialising and de-serialising between rich, in-memory data structures and flat, on-medium representations, and code implementing the actual file-system logic.

The figure below shows the main components of our framework. Dark grey boxes are the tools we employ in this framework. Light grey boxes represent code or specifications that are automatically generated; white boxes represent semantics; block arrows represent the proofs that connect them (with green boxes marking the parts that are fully automated). Dotted boxes indicate components that the FS implementer must provide. While the correctness specification, proofs and implementation (dotted boxes and arrows in the figure) are specific to the FS being built, the ADT library is reusable across many different FSs.

[Figure: file system verification methodology]

As indicated in the diagram, the file system implementer must provide four distinct components:

  • a high-level specification of FS functionality (in Isabelle/HOL)
  • the proof that the Cogent-generated high-level specification corresponds to the functional correctness specification of the FS
  • the implementation of the FS in Cogent
  • the implementation of ADTs and interfaces to the OS in C, together with proofs that the implementation is correct with respect to the functional correctness specification (partially provided by us or other ADT authors).

The Cogent compiler takes Cogent source code as input and produces, from the bottom of the figure above: the C code, the semantics of the C code expressed in Isabelle/Simpl, the same expressed as a monadic functional program, a monomorphic deep embedding of the Cogent program in A-normal form, a polymorphic A-normal deep embedding of the same, an A-normal shallow embedding, and finally a 'neat' shallow embedding of the Cogent program that is syntactically close to the Cogent source code. Several theorems rely on the Cogent program being well-typed, which we prove automatically using type-inference information from the compiler. The solid arrows on the right-hand side of the figure represent refinement proofs. The only arrow that is not formally verified is the one crossing from C code into Isabelle/HOL at the bottom (the red arrow) — this is the C parser, a mature verification tool used in a number of large-scale verifications (e.g. seL4).

For well-typed Cogent programs, we prove a series of refinement relations. These state that every behaviour exhibited by the C code can also be exhibited by the Cogent code and, furthermore, that the C code is always well-defined: for example, the generated C code never dereferences a null pointer and never causes signed overflow. This also implies that the generated C code is type-safe and memory-safe, meaning it will never dereference an invalid pointer, nor dereference two aliasing pointers of incompatible types. From the generated proofs, we additionally get guarantees that the generated code handles all error cases, is free of memory leaks, and never frees a pointer twice.

A key component of the overall refinement proof comes directly from Cogent's linear type system. As linear types ensure that only one reference exists at a time to a heap object, a function that destructively updates an object may be given the type of a pure function, since destructive update is indistinguishable from a purely functional update in the absence of aliasing. This allows us to assign two different semantic interpretations to the same code: a purely functional value semantics, which is suitable for high-level verification using equational reasoning, and an imperative update semantics, including pointers and a mutable heap, which more closely resembles the semantics of the generated C. We prove once and for all that, for any well-typed program, the update semantics is a refinement of the value semantics.
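The two-semantics idea can be sketched in Rust (again as an analogue, not Cogent's actual mechanism): when a function takes its argument by move, no caller can tell whether it copied the value or updated it in place, so the two interpretations agree on all inputs. Both function names here are invented.

```rust
// "Value semantics": build a fresh vector, purely functionally.
fn inc_all_value(v: &[u32]) -> Vec<u32> {
    v.iter().map(|x| x + 1).collect()
}

// "Update semantics": mutate in place. Because `v` is moved in,
// no alias can observe the mutation, so the two functions are
// observationally equivalent -- the update version refines the value one.
fn inc_all_update(mut v: Vec<u32>) -> Vec<u32> {
    for x in v.iter_mut() {
        *x += 1;
    }
    v
}

fn main() {
    let v = vec![1, 2, 3];
    assert_eq!(inc_all_value(&v), inc_all_update(v));
    println!("update agrees with value");
}
```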

Research contributions

The project explores a new methodology for designing and formally verifying the correctness of systems code, based on a new language, Cogent. We use file systems as case studies. Specific contributions include:

  • specification of FS correctness
  • the use of the linearly typed Cogent language for real-world FS implementation
  • proofs of the correctness of FS operations, and how Cogent reduces the effort required
  • the design and formal development of the Cogent language and its certifying compiler
  • the fully automatic refinement tower between C code and Cogent's language specification.


To date we have implemented two different file systems in Cogent:

  • BilbyFS, a custom flash file system designed from scratch to be highly modular, to ease verification against a high-level functional specification one component at a time. BilbyFS strikes a balance between the simplicity of JFFS2 and the performance of UBIFS, the two most popular Linux flash file systems.
  • ext2fs, a standard Linux file system. Our present ext2fs implementation is an almost feature-complete implementation of revision 1: it passes the POSIX File System Test Suite, except for the ACL and symlink tests (as we have not implemented those features).

Both file system implementations can be deployed in Linux: they sit below Linux's virtual file system switch (VFS) module, using C stub code to provide the top-level entry points the VFS expects. We are also using them as native components on seL4.

The Cogent implementations of ext2fs and BilbyFS share a common library of ADTs that includes fixed-length arrays of words and structures, simple iterators for implementing for-loops, and Cogent stubs for accessing a range of Linux APIs such as the buffer cache and its native red-black tree implementation, plus iterators for each ADT.
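As a flavour of what such an ADT library provides, here is a hypothetical Rust rendering of two of the pieces mentioned above: a fixed-length word array, used in linear style (consumed and returned on update), and a fold-style iterator that stands in for a for-loop, since Cogent itself has no general recursion. The names `WordArray`, `set` and `fold` are ours, not Cogent's.

```rust
// Sketch of an ADT-library word array, under the assumptions above.
struct WordArray {
    words: Box<[u32]>, // length is fixed after creation
}

impl WordArray {
    fn new(len: usize) -> WordArray {
        WordArray { words: vec![0; len].into_boxed_slice() }
    }

    // Linear style: consume the array and return it, mirroring how
    // Cogent threads a uniquely owned object through an update.
    fn set(mut self, i: usize, w: u32) -> WordArray {
        self.words[i] = w;
        self
    }

    // The iterator as a fold: the loop body is a step function,
    // which is how a bounded for-loop is expressed without recursion.
    fn fold<A>(&self, init: A, step: impl Fn(A, u32) -> A) -> A {
        self.words.iter().fold(init, |acc, &w| step(acc, w))
    }
}

fn main() {
    let arr = WordArray::new(3).set(0, 10).set(2, 5);
    let sum = arr.fold(0u32, |acc, w| acc + w);
    println!("{}", sum); // 10 + 0 + 5 = 15
}
```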

For BilbyFS we have connected the low-level proofs to manual proofs of the correctness of two high-level operations, sync() and iget(), to demonstrate the ease of verifying high-level functionality. A more complete verification is future work.


Code and documentation for this project can be found on GitHub.


The Contributor License Agreement can be downloaded here. If you have larger changes or additions, it might be a good idea to get in contact with us, so we can help you get started.

Future work

  • Our present Cogent language is a base language: it is not Turing-complete, but it is also not particularly domain-specific. Given that FS code consists to a large extent of data serialisation and deserialisation, most of it boilerplate, we plan to layer a data-description language on top of Cogent to further automate verifiably correct FS implementation. We have designed a prototype of that language.
  • From the studies we have conducted to date, we have a rather clear picture of the pros and cons of the Cogent design, and will examine alternatives and improvements.
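The serialisation boilerplate that a data-description language would generate can be illustrated with a hand-written Rust sketch. The `Inode` layout below is invented for the example; the point is that deserialisation must re-check every structural assumption, which is exactly the repetitive, error-prone code worth generating from a declarative description.

```rust
use std::convert::TryInto; // in the prelude on recent editions; explicit for older ones

// A tiny in-memory structure with an invented on-medium layout:
// 4-byte little-endian size, then 2-byte little-endian link count.
struct Inode {
    size: u32,
    links: u16,
}

fn serialise(i: &Inode) -> Vec<u8> {
    let mut out = Vec::with_capacity(6);
    out.extend_from_slice(&i.size.to_le_bytes());
    out.extend_from_slice(&i.links.to_le_bytes());
    out
}

// Every length and field boundary must be validated by hand --
// the boilerplate a data-description DSL would emit for us.
fn deserialise(buf: &[u8]) -> Option<Inode> {
    if buf.len() < 6 {
        return None;
    }
    let size = u32::from_le_bytes(buf[0..4].try_into().ok()?);
    let links = u16::from_le_bytes(buf[4..6].try_into().ok()?);
    Some(Inode { size, links })
}

fn main() {
    let i = Inode { size: 4096, links: 2 };
    let back = deserialise(&serialise(&i)).unwrap();
    println!("{} {}", back.size, back.links); // prints 4096 2
}
```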


Gabriele Keller (project leader), gabriele.keller<at>





Publications

  • Liam O'Connor, Zilin Chen, Christine Rizkallah, Sidney Amani, Japheth Lim, Toby Murray, Yutaka Nagashima, Thomas Sewell and Gerwin Klein. Refinement through restraint: Bringing down the cost of verification. International Conference on Functional Programming, Nara, Japan, September 2016.
  • Christine Rizkallah, Japheth Lim, Yutaka Nagashima, Thomas Sewell, Zilin Chen, Liam O'Connor, Toby Murray, Gabriele Keller and Gerwin Klein. A framework for the automatic formal verification of refinement from Cogent to C. International Conference on Interactive Theorem Proving, Nancy, France, August 2016.
  • Sidney Amani, Alex Hixon, Zilin Chen, Christine Rizkallah, Peter Chubb, Liam O'Connor, Joel Beeren, Yutaka Nagashima, Japheth Lim, Thomas Sewell, Joseph Tuong, Gabriele Keller, Toby Murray, Gerwin Klein and Gernot Heiser. Cogent: verifying high-assurance file system implementations. International Conference on Architectural Support for Programming Languages and Operating Systems, pp. 175-188, Atlanta, GA, USA, April 2016.
  • Sidney Amani and Toby Murray. Specifying a realistic file system. Workshop on Models for Formal Analysis of Real Systems, pp. 1-9, Suva, Fiji, November 2015.
  • Liam O'Connor-Davis, Gabriele Keller, Sidney Amani, Toby Murray, Gerwin Klein, Zilin Chen and Christine Rizkallah. CDSL version 1: Simplifying verification with linear types. Technical report, NICTA, October 2014.
  • Gabriele Keller, Toby Murray, Sidney Amani, Liam O'Connor-Davis, Zilin Chen, Leonid Ryzhyk, Gerwin Klein and Gernot Heiser. File systems deserve verification too! Workshop on Programming Languages and Operating Systems (PLOS), pp. 1-7, Farmington, Pennsylvania, USA, November 2013. A revised version of this paper was published in Operating Systems Review, Volume 48, Issue 1, January 2014, pages 58-64.