in reverse chronological order. * denotes co-first authors.
-
Lilo: A Higher-Order, Relational Concurrent Separation Logic for Liveness
Proceedings of the ACM SIGPLAN Conference on Object-Oriented Programming, Systems, Languages, and Applications
(OOPSLA 2025)
Concurrent separation logic (CSL) has excelled in verifying safety properties across various applications, yet its application to liveness properties remains limited. While existing approaches like TaDA Live and Fair Operational Semantics (FOS) have made significant strides, they still face limitations. TaDA Live struggles to verify certain classes of programs, particularly concurrent objects with non-local linearization points, and lacks support for general liveness properties such as "good things happen infinitely often". On the other hand, FOS’s scalability is hindered by the absence of thread modular reasoning principles and modular specifications.
This paper introduces Lilo, a higher-order, relational CSL designed to overcome these limitations. Our core observation is that FOS lets us keep the primitives of our logic simple, which enables us to explore the design space with fewer restrictions. As a result, Lilo adapts various successful techniques from the literature. It supports reasoning about non-terminating programs via refinement proofs, and it provides Iris-style invariants and modular specifications to facilitate modular verification. To support higher-order reasoning without relying on step-indexing, we develop a technique called stratified propositions, inspired by Nola. In particular, we develop novel abstractions for liveness reasoning that bring these techniques together in a uniform way. We demonstrate Lilo’s scalability through case studies, including the first termination-guaranteeing modular verification of the elimination stack. Lilo and the examples in this paper are mechanized in Coq.
-
Archmage and CompCertCast: End-to-End Verification Supporting Integer-Pointer Casting
Proceedings of the ACM SIGPLAN Symposium on Principles of Programming Languages
(POPL 2025)
Although many approaches have been developed for formal memory models that support integer-pointer casts, previous approaches share the drawback that they are not designed for end-to-end verification: they fail to support some important source-level coding patterns, fail to justify some backend optimizations, or lack a source-level logic for program verification. This paper presents Archmage, a framework for integer-pointer casting designed for end-to-end verification, supporting a wide range of source-level coding patterns, backend optimizations, and a formal notion of out-of-memory. To facilitate end-to-end verification via Archmage, we also present two systems based on it: CompCertCast, an extension of CompCert with Archmage that brings a fully verified compilation chain to programs with integer-pointer casts, and Archmage logic, a source-level logic for reasoning about integer-pointer casts. We design CompCertCast so that the overhead of formally supporting integer-pointer casts is mitigated, and we illustrate the effectiveness of Archmage logic by verifying an xor-based linked-list implementation. Together, these systems form the first practical end-to-end verification chain for programs containing integer-pointer casts.
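As a flavor of the kind of source-level coding pattern involved, the sketch below (illustrative C only, not the paper's verified artifact; all names are made up) shows an xor linked list: each node stores the xor of its neighbors' addresses as an integer, so even plain traversal requires integer-pointer casts.

/* Illustrative xor linked list; names are invented for this sketch. */
#include <stddef.h>
#include <stdint.h>

struct xnode {
    int value;
    uintptr_t link;   /* (uintptr_t)prev ^ (uintptr_t)next */
};

/* Recover the next node from the xor-encoded link, given the previous one. */
static struct xnode *xstep(struct xnode *prev, struct xnode *cur) {
    return (struct xnode *)(cur->link ^ (uintptr_t)prev);
}

/* Visit every value, starting from head (whose link encodes NULL ^ next). */
static void xtraverse(struct xnode *head, void (*visit)(int)) {
    struct xnode *prev = NULL, *cur = head;
    while (cur) {
        visit(cur->value);
        struct xnode *next = xstep(prev, cur);
        prev = cur;
        cur = next;
    }
}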
-
Stuttering for Free
Proceedings of the ACM SIGPLAN Conference on Object-Oriented Programming, Systems, Languages, and Applications
(OOPSLA 2023)
One of the most common tools for proving behavioral refinements between transition systems is the method of simulation proofs, which has been explored extensively over the past several decades. Stuttering simulations are an extension of traditional simulations (used, for example, in CompCert) in which either the source or target of the simulation is permitted to “stutter” (stay in place) while the other side steps forward. In the interest of ensuring soundness, however, existing stuttering simulations restrict proofs to only perform a finite number of stuttering steps before making synchronous progress, a step of reasoning in which both sides of the simulation progress forward together. This restriction guarantees that a terminating program cannot be proven to simulate a non-terminating one.
In this paper, we observe that the requirement to eventually achieve synchronous progress is burdensome and, what’s more, unnecessary: it is possible to ensure soundness of stuttering simulations while only requiring asynchronous progress (progress on both sides of the simulation that may be achieved with only stuttering steps). Building on this observation, we develop a new simulation technique we call FreeSim (short for “freely-stuttering simulations”), mechanized in Coq, and we demonstrate its effectiveness on a range of interesting case studies. These include a simplification of the meta-theory of CompCert, as well as the DTrees library, which enriches the ITrees (Interaction Trees) library with dual non-determinism.
-
Fair Operational Semantics
Proceedings of the ACM SIGPLAN Conference on Programming Language Design and Implementation
(PLDI 2023)
Fairness properties, which state that a sequence of bad events cannot happen infinitely many times before a good event takes place, are often crucial in program verification. However, general methods for abstractly expressing various kinds of fairness properties and reasoning about them are relatively underdeveloped compared to those for safety properties. In this paper, we propose FOS (Fair Operational Semantics), a theory for expressing arbitrary notions of custom fairness as an operational semantics and reasoning about them. In particular, FOS provides a thread-local simulation relation, equipped with separation-logic-style resource algebras, that gives powerful reasoning principles both for proving and for exploiting fairness, and whose soundness is proven for every fairness notion definable in FOS. As an example, we use FOS to verify a ticket lock implementation and a client of it under relaxed-memory concurrency, which involves reasoning about notions of fairness capturing fairness of the thread scheduler, fairness of the ticket lock, and even fairness of relaxed memory. Finally, we formalize the theory of FOS in Coq.
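For reference, the sketch below gives a standard ticket lock in C11 atomics (illustrative only; the paper's development is carried out in Coq over its own semantics). The point is that progress for an acquiring thread hinges on several fairness assumptions at once: the scheduler must keep running the spinning thread, and earlier ticket holders must eventually release.

#include <stdatomic.h>

typedef struct {
    atomic_ulong next_ticket;   /* ticket dispenser */
    atomic_ulong now_serving;   /* ticket currently allowed in */
} ticket_lock;

static void ticket_lock_acquire(ticket_lock *l) {
    unsigned long my = atomic_fetch_add(&l->next_ticket, 1);
    /* Spin until it is this thread's turn. */
    while (atomic_load(&l->now_serving) != my)
        ;
}

static void ticket_lock_release(ticket_lock *l) {
    atomic_fetch_add(&l->now_serving, 1);
}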
-
Putting Weak Memory in Order via a Promising Intermediate Representation
Proceedings of the ACM SIGPLAN Conference on Programming Language Design and Implementation
(PLDI 2023)
We investigate the problem of developing an "in-order" shared-memory concurrency model for languages like C and C++, which executes instructions following their program order, and is thus more amenable to reasoning and verification compared to recent complex proposals with out-of-order execution. We demonstrate that it is possible to fully support non-atomic accesses in an in-order model in a way that validates all compiler optimizations that are performed in single-threaded code (including irrelevant load introduction). The key to doing so is to utilize the distinction between a source model (with catch-fire semantics) and an intermediate representation (IR) model (with undefined values for racy reads) and to formally establish the soundness of the mapping from source to IR. As for relaxed atomic accesses, an in-order model must forbid load-store reordering. We discuss the rather limited performance impact of this fact and present a pragmatic approach to this problem, which, in the long term, requires a new kind of hardware store instruction for implementing relaxed stores. The source and IR semantics proposed in this paper are based on recent versions of the promising semantics, and the correctness proofs of the mappings from the source to the IR and from the IR to Armv8 are mechanized in Coq. This work is the first to formally relate an in-order source model and an out-of-order IR model, with the goal of having an in-order source semantics without any performance overhead for non-atomics.
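The load-store reordering in question can be pictured with the classic load-buffering shape below (a hypothetical C11 sketch, not code from the paper): if the relaxed accesses may execute out of order, the outcome r1 == 1 && r2 == 1 is possible, whereas executing each thread in program order rules it out.

#include <stdatomic.h>

atomic_int x, y;   /* both initially 0 */

void thread_a(int *r1) {
    *r1 = atomic_load_explicit(&x, memory_order_relaxed);   /* load ...       */
    atomic_store_explicit(&y, 1, memory_order_relaxed);     /* ... then store */
}

void thread_b(int *r2) {
    *r2 = atomic_load_explicit(&y, memory_order_relaxed);
    atomic_store_explicit(&x, 1, memory_order_relaxed);
}

/* Reordering the load after the later store in either thread would allow
   both r1 and r2 to read 1; an in-order semantics keeps program order and
   forbids that outcome, at the hardware-mapping cost discussed above. */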
-
Conditional Contextual Refinement
Proceedings of the ACM SIGPLAN Symposium on Principles of Programming Languages
(POPL 2023)
Much work in formal verification of low-level systems is based on one of two approaches: refinement or separation logic. These two approaches have complementary benefits: refinement supports the use of programs as specifications, as well as transitive composition of proofs, whereas separation logic supports conditional specifications, as well as modular ownership reasoning about shared state. A number of verification frameworks employ these techniques in tandem, but in all such cases the benefits of the two techniques remain separate. For example, in frameworks that use relational separation logic to prove contextual refinement, the relational separation logic judgment does not support transitive composition of proofs, while the contextual refinement judgment does not support conditional specifications.
In this paper, we propose Conditional Contextual Refinement (or CCR, for short), the first verification system to not only combine refinement and separation logic in a single framework but also to truly marry them together into a unified mechanism enjoying all the benefits of refinement and separation logic simultaneously. Specifically, unlike in prior work, CCR’s refinement specifications are both conditional (with separation logic pre- and post-conditions) and transitively composable. We implement CCR in Coq and evaluate its effectiveness on a range of interesting examples.
-
Sequential Reasoning for Optimizing Compilers under Weak Memory Concurrency
Proceedings of the ACM SIGPLAN Conference on Programming Language Design and Implementation
(PLDI 2022)
We formally show that sequential reasoning is adequate and sufficient for establishing soundness of various compiler optimizations under weakly consistent shared-memory concurrency. Concretely, we introduce a sequential model and show that behavioral refinement in that model entails contextual refinement in the Promising Semantics model, extended with non-atomic accesses for non-racy code. This is the first work to achieve such a result for a full-fledged model with a variety of C11-style concurrency features. Central to our model is the lifting of the common data-race-freedom assumption, which allows us to validate irrelevant load introduction, a transformation that is commonly performed by compilers. As a proof of concept, we develop an optimizer for a toy concurrent language and certify it (in Coq) while relying solely on the sequential model. We believe that the proposed approach provides useful means for compiler developers and validators, as well as a solid foundation for the development of certified optimizing compilers for weakly consistent shared-memory concurrency.
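Irrelevant load introduction can be pictured with the hypothetical before/after pair below (names are illustrative, not from the paper): the compiler hoists a load out of a conditional, so the transformed program may read a location that the original never touches on some executions, which is exactly why a blanket data-race-freedom assumption gets in the way of justifying it.

int f_before(int cond, int *p) {
    int t = 0;
    if (cond)
        t = *p;          /* loaded only when cond holds */
    return t;
}

int f_after(int cond, int *p) {
    int tmp = *p;        /* introduced load: executed even when !cond */
    int t = 0;
    if (cond)
        t = tmp;
    return t;
}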
-
Modular Data-Race-Freedom Guarantees in the Promising Semantics
Proceedings of the ACM SIGPLAN Conference on Programming Language Design and Implementation
(PLDI 2021)
Local data-race-freedom guarantees, ensuring strong semantics for locations accessed by non-racy instructions, provide a fruitful methodology for modular reasoning in relaxed memory concurrency. We observe that standard compiler optimizations are in inherent conflict with such guarantees in general fully-relaxed memory models.
Nevertheless, for a certain strengthening of the promising model by Lee et al. that only excludes relaxed RMW-store reorderings, we establish multiple useful local data-race-freedom guarantees that enhance the programmability aspect of the model. We also demonstrate that the performance price of forbidding these reorderings is insignificant. To the best of our knowledge, these results are the first to identify a model that includes the standard concurrency constructs, supports the efficient mapping of relaxed reads and writes to plain hardware loads and stores, and yet validates several local data-race-freedom guarantees. To gain confidence, our results are fully mechanized in Coq.
-
Promising 2.0: Global Optimizations in Relaxed Memory Concurrency
Proceedings of the ACM SIGPLAN Conference on Programming Language Design and Implementation
(PLDI 2020)
For more than fifteen years, researchers have tried to support global optimizations in a usable semantics for a concurrent programming language, yet this task has proven to be very difficult because of (1) the infamous “out of thin air” problem, and (2) the subtle interaction between global and thread-local optimizations.
In this paper, we present a solution to this problem by redesigning a key component of the promising semantics (PS) of Kang et al. Our updated PS 2.0 model supports all the results known about the original PS model (i.e., thread-local optimizations, hardware mappings, DRF theorems), but additionally enables transformations based on global value-range analysis as well as register promotion (i.e., making accesses to a shared location local if the location is accessed by only one thread). PS 2.0 also resolves a problem with the compilation of relaxed RMWs to ARMv8, which required an unintended extra fence.
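Register promotion can be pictured with the hypothetical before/after pair below (illustrative C, not from the paper): assuming the shared location is only ever accessed by this one thread, its accesses inside the loop can be replaced by a local variable, and justifying that assumption is inherently a global, whole-program matter.

#include <stdatomic.h>

atomic_int counter;   /* assumption for this sketch: touched by one thread only */

int sum_before(int n) {
    for (int i = 0; i < n; i++)
        atomic_fetch_add_explicit(&counter, i, memory_order_relaxed);
    return atomic_load_explicit(&counter, memory_order_relaxed);
}

int sum_after(int n) {
    int local = atomic_load_explicit(&counter, memory_order_relaxed);
    for (int i = 0; i < n; i++)
        local += i;      /* promoted: no shared-memory traffic in the loop */
    atomic_store_explicit(&counter, local, memory_order_relaxed);
    return local;
}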
-
CompCertM: CompCert with C-Assembly Linking and Lightweight Modular Verification
Proceedings of the ACM SIGPLAN Symposium on Principles of Programming Languages
(POPL 2020)
Supporting multi-language linking, such as linking C and handwritten assembly modules, in the verified compiler CompCert requires a more compositional verification technique than the one used in CompCert, which supports only separate compilation. Two existing extensions that support multi-language linking, CompCertX and Compositional CompCert, take different approaches. The former simplifies the problem by imposing the restrictions that source modules have no mutual dependence and are verified against certain well-behaved specifications. The latter develops a new verification technique that directly solves the problem, but at the expense of significantly increasing the verification cost.
In this paper, we develop a novel lightweight verification technique, called RUSC (Refinement Under Self-related Contexts), and demonstrate how RUSC can solve the problem without any restrictions but still with low verification overhead. For this, we develop CompCertM, a full extension of the latest version of CompCert supporting multi-language linking. Moreover, we demonstrate the power of RUSC as a program verification technique by modularly verifying interesting programs consisting of C and handwritten assembly against their mathematical specifications.