I'm a Lecturer (“Assistant Professor” in US English) at the University of Kent's School of Computing, where I teach topics in computer systems and software development, and do research within the Programming Languages and Systems group.
For my publications, click here (or keep reading).
I am a practical “computer scientist” who builds things; I'd argue this makes me more engineer than scientist per se. I have wide interests: my goal is to make it easier and cheaper to develop useful, high-quality software systems. So far, my work has mostly focused on programming languages and the systems that support them—including language runtimes and operating systems.
Micro-blog and calendar
- June 2019. Made overdue updates to my talks page; looking forward to ECOOP and (speaking at) Curry On! next month.
- August 2018. Joined the University of Kent's School of Computing as a Lecturer (pronounced “Assistant Professor” in some dialects).
- April 2018. Speaking at the MoreVMs workshop at <Programming> 2018, and also with a hand in this year's Salon des Refusés.
- September 2017. Pleased that my essay Some Were Meant For C is appearing at this year's Onward!.
Recent blog entries
- 15 May 2019, Research careers in UK Universities: questions few are asking, part one, in highered
- 11 January 2019, Mildly profane meta-advice for beginning PhD students, in research
- 17 January 2018, How to be a Scrutineer (and a better one than I managed to be), in highered
- 9 January 2018, Undergraduate admissions in “computer science”: a plea, in teaching
- 18 October 2017, Some were meant for post-hoc reflections, in research
My primary research goal is to make possible increasingly humane, programmable computer systems. Currently, I am pursuing this through a humane evolution of the Unix-like operating system.
If I had to limit myself to only a dozen research areas, they would be these.
- run-time checking in unsafe languages
- compiler and runtime support for debugging and live programming
- multi-language toolchains and runtimes
- tools for system interface specification and testing
- infrastructure for building instrumentation-based tools
- data description and schema languages/tools
- infrastructure for repeatable and reversible execution
- decentralised Internet applications and their construction
- radically modular programming
- software build and deployment infrastructure
- hybrid program analysis/execution engines
- big questions about programming and software
Most of my research concerns programming, but I identify as a “systems” researcher. For me, “systems” is a mindset rather than a research topic. It means I am primarily interested in the practical consequences of an idea; its abstract properties are of interest only subserviently. (If you can come up with a better definition, let me know.) I generally hesitate to identify as a “programming languages” researcher, not because I dislike languages but because they're only one piece of the puzzle. I'd identify as a “programming systems” researcher if that were a thing.
You can read more below about what I'm working on. A recurring theme is that I prefer pushing innovations into the software stack at relatively low levels, to the extent that doing so is sensible (an extent which, I believe, can be surprising). Another is the pursuit of infrastructure that is well-integrated, observable, simple, flexible, and designed for human beings—particularly if we believe (as I do) that computers are for augmenting the human intellect, not just for high-performance data crunching.
I believe that technology cannot be pursued in isolation. “Good” engineering research is done with an understanding of history, human culture, and that research's place in them. This might mean asking small questions, like the impact of a technical change on a wider software ecosystem. It also means asking bigger questions about the human and social assumptions behind the work: who might use it; who it will benefit; why this is an overall good; and so on. By its answers to such questions, even if only implicit, all technical work subscribes to what one could call a “philosophical basis”. I believe that understanding these bases can lead to more valuable technical work. Some such questions are so big that they cannot be answered by experimental methods. I do not subscribe to the view that to be “scientific” is an unalloyed good, or that the methods of the humanities are faulty. To do good engineering research is to be more than purely a scientist.
Very brief biography
I did my Bachelor's degree in computer science in Cambridge, graduating in 2005. I then stayed on for a year as a Research Assistant, before starting my PhD in 2006.
My PhD work was based in the Networks and Operating Systems group under the supervision of Dr David Greaves, and centred on the problem of building software by composition of ill-matched components, for which I developed the Cake language.
During summer 2007 I took an internship at Fraser Research, doing networking research.
From January 2011 until March 2012 I was a research assistant in the Department of Computer Science at the University of Oxford. There I worked within the research programme of the Oxford Martin School's Institute for the Future of Computing. My work mostly focused on constructing a continuum between state-space methods of program analysis (notably symbolic execution) and syntactic methods (such as type checking). I ended up moving on before I could see this to fruition, but completing it remains in my projects list.
From May until October 2013 I was temporarily a Research Assistant at Oracle Labs, on the Alphabet Soup project.
I returned to Cambridge in October 2013, as a Research Associate (later Senior Research Associate) at the Computer Laboratory (now the Department of Computer Science and Technology). I worked within the REMS project, and participated in both the Programming, Logic and Semantics group and the Systems Research Group (particularly its Networks and Operating Systems subgroup). As of August 2018 I continue to be a visiting researcher at the Department.
In my industrial life, I enjoyed some spells working for Opal Telecom and ARM around my Bachelor's studies. More recently, I have been consulting for Ellexus, and conducted some industrial research work (in reality rather development-heavy) for Oracle Labs.
I maintain (sometimes neglectfully) two CVs: a snappy two-page one here, and a longer academic one here. Please be aware that both may be a little rough or incomplete at times. If you're recruiting for jobs in the financial sector, even technical ones, please don't bother reading my CV or contacting me—I promise I am not interested.
Here's what I'm working on as of December 2018. In most cases these are my own individual projects; a few are with collaborators, as I describe.
Making the Unix process environment more descriptive and reflective
Unsafe languages made safer and more debuggable
Huge amounts of code have been written, and continue to be written, in unsafe programming languages including C, C++, Objective-C, Fortran and Ada. Performance is one motivator for choosing these languages; continuity with existing code, particularly libraries, is another. The experience of compiling and debugging using these languages is a patchy one, and often creates large time-sinks for the programmer in the form of debugging mysterious crashes or working around tools' limitations. I have developed the libcrunch system for run-time type checking and robust bounds checking. This consists of a runtime library based on liballocs, together with compiler wrappers and other toolchain extensions for instrumenting code and collecting metadata. Slowdowns are typically in the range 0–35% for type checking and 15–100% for bounds checking. The system is also unusually good at supporting use of uninstrumented libraries, recovering from errors, and supporting multiple languages. Its design is a nice illustration of an approach I call toolchain extension: realising new facilities using a combination of source- and binary-level techniques, both pre- and post-compilation, together with the relevant run-time extensions. Critically, it re-uses the stock toolchain as much as possible, and takes care to preserve binary compatibility with code compiled without these extensions. The mainline implementation uses the CIL library; during 2014–15, Chris Diamand developed a Clang front-end, with help from David Chisnall. In 2016–17 Richard Tynan developed a version of the runtime for the FreeBSD kernel. Motto: unsafe languages allow mostly-safe implementations; combine compile- and run-time techniques to make this the pragmatic option.
- [Onward! '17]
- [OOPSLA '16a]
- [EuroLLVM 2016 talk (with Chris Diamand)]
- more talks in my talks archive
- libcrunch project web page
Rigorous specification of systems languages and system services
The semantics of programming languages are well studied, but in some cases, notably C and C++, they remain hard to nail down. With Cambridge and ISO WG14 colleagues, I'm involved in an effort to put C as actually practised on a firmer semantic footing. Relatedly, with JKU and Kent colleagues I have worked on understanding the most common ways in which C is effectively extended with inline assembly. Meanwhile, two equally crucial but much less-studied areas are the semantics of linking and (separately) of system calls. The challenges of verified toolchains and verified run-time environments depend critically on accurate specifications of these services. With colleagues on the REMS project, I'm working both on detailed specifications of ELF linking and the Linux system call interface, and on tools for experimentally testing and refining those specifications. In summer 2015 Jon French developed tools for specifying system calls in terms of their memory footprints. Motto: a specification for every instruction.
Principled approaches to debugging information
Debugging native code often involves fighting the tools. Programmers must trust to luck that the debugger can walk the stack, print values or call functions; when this fails, they must work with incomplete or apparently impossible stack traces, “value optimized out” messages, and the like. Obtaining debugging insight is even harder in production environments, where unreliable debugging infrastructure risks both downtime and exploits (see Linus Torvalds in this thread, among others). With REMS colleagues, Francesco Zappa Nardelli, and talented students including Théophile Bastian and Simon Ser, we're working on tools and techniques for providing a robust, high-quality debugging experience. This comes in several flavours: at its simplest, it means sanity-checking compiler-generated debugging information. At its most complex, it means propagating more information across compiler passes, to enable debugging information that is correct by construction. Debugging is in tension with compiler optimisation, and is also inherently in need of sandboxing (to limit who can consume what application state). Motto: if it compiles, it should also debug.
Cross-language programming without FFIs
Language barriers are holding back programming. They splinter the ecosystem into duplicated fragments, each repeating work. Foreign function interfaces (FFIs) are the language implementers' attempt at a solution, but do not serve programmers' needs. FFIs must be eliminated! Instead, using a descriptive plane (again, provided by liballocs), we can make higher-level language objects appear directly within the same metamodel that spans the entire Unix process, alongside lower-level ones, which are mapped into each language's “object space”. For example, in my hacked nodeJS (quite a bit of V8 hacking was required too), all native functions visible to the dynamic linker are accessible by name under the existing process object. You're no longer dependent on hand-written C++ wrapper code, nor packages like node-ffi to access these objects. The type information gathered by liballocs enables direct access, out-of-the-box. It's far from a robust system at the moment, but the proof-of-proof-of-concept-concept is encouraging. Motto: FFIs are evil and must be destroyed, but one VM need not (cannot) rule them all.
Comprehensive, easy-to-use instrumentation on managed platforms
Programmers often want to develop custom tooling to help them profile and debug their applications. On virtual machines like the JVM, this traditionally means messing with low-level APIs such as JVMTI, or at best, frameworks like ASM which let you rewrite the application's bytecode. In Lugano, work with my DAG colleagues produced a new programming model for these tasks based on aspect-oriented primitives (join points and advice), and also a yet higher-level model based on an event processing abstraction reminiscent of publish-subscribe middleware. On the run-time side, our infrastructure addresses the critical problem of isolation: traditionally, instrumentation and application share a virtual machine, and may interfere with one another in ways that are difficult to predict and impossible, in general, to avoid except by arbitrarily excluding shared classes (such as java.lang.*). Our work gets around this by avoiding any bytecode instrumentation within the VM—instead, at each instrumentation point we insert only a straight-to-native code path that hands off run-time events to a separate process (the “shadow VM”) which performs the analysis. This allows much stronger coverage properties than are possible with in-VM bytecode instrumentation, and nicely handles distributed applications—but also brings challenges with event synchronisation and ordering. Motto: robust, high-coverage instrumentation without bytecode munging or hacky exclusion lists.
Radically easier composition
Programming by composition is the norm, but the norm also imposes needlessly high costs on developers. Keeping up with APIs as they evolve is a familiar drain on programmers' time. Inability to migrate to a different underlying library or platform is a familiar inhibitor of progress. It doesn't need to be this way. Why isn't software soft, malleable and composable? Why is so much tedious work left to the human programmer? My PhD work explored an alternative approach in which software is composed out of units which aren't expected to match precisely. A separate bunch of languages and tools, collectively called an “integration domain”, are provided to tackle the details of resolving this mismatch. I designed the language Cake to provide a rule-based mostly-declarative notation for API adaptation. The Cake compiler accepts a bunch of rules relating calls and data structure elements between APIs, including in various context-sensitive fashions. The Cake compiler then generates (nasty C++) wrapper code, saving the programmer from writing wrappers by hand. Motto: down with manual glue coding, down with silos!
What is that?
A recurring theme of my work is that I like to ask big questions, including questions of the form “what is X?” (for various X). The X so far have been the following.
- “systems programming” [Onward! '17]
- “types” [Onward! '14]
- operating systems, objects and files [PLOS '13, Philosophical Studies Series]
- software connectors [SYANCO '07]
- [POPL 19]
Kayvan Memarian, Victor B. F. Gomes, Brooks Davis, Stephen Kell,
Alexander Richardson, Robert N. M. Watson and Peter Sewell.
Exploring C Semantics and Pointer Provenance.
To appear in Proceedings of the 46th ACM SIGPLAN Symposium on Principles of Programming Languages (POPL 2019).
- [VEE 18]
Manuel Rigger, Stefan Marr, Stephen Kell, David Leopoldseder, Hanspeter Mössenböck.
An Analysis of x86-64 Inline Assembly in C Programs.
In Proceedings of the 14th ACM SIGPLAN/SIGOPS International Conference on Virtual Execution Environments.
publisher's doi locally deposited version abstract
Abstract: C codebases frequently embed nonportable and unstandardized elements such as inline assembly code. Such elements are not well understood, which poses a problem to tool developers who aspire to support C code. This paper investigates the use of x86-64 inline assembly in 1264 C projects from GitHub and combines qualitative and quantitative analyses to answer questions that tool authors may have. We found that 28.1% of the most popular projects contain inline assembly code, although the majority contain only a few fragments with just one or two instructions. The most popular instructions constitute a small subset concerned largely with multicore semantics, performance optimization, and hardware control. Our findings are intended to help developers of C-focused tools, those testing compilers, and language designers seeking to reduce the reliance on inline assembly. They may also aid the design of tools focused on inline assembly itself.
- [PhilStudSer xx]
Unix, Plan 9 and the Lurking Smalltalk.
To appear in L. de Mol and G. Primiero (eds), Reflections on Programming Systems,
Philosophical Studies Series, Springer.
manuscript pdf abstract
Abstract: High-level programming languages and their virtual machines have long aspired to erase operating systems from view. Starting from Dan Ingalls' Smalltalk-inspired position that “an operating system is a collection of things that don't fit inside a language; there shouldn't be one”, I contrast the ambitions and trajectories of Smalltalk with those of Unix and its descendants, exploring why Ingalls's vision appears not (yet) to have materialised. Firstly, I trace the trajectory of Unix's “file” abstraction into Plan 9 and beyond, noting how its logical extrapolation suggests a surprisingly Smalltalk-like end-point. Secondly, I note how various reflection and debugging features of Smalltalk have many analogues in the fragmented world of Unix programming. Finally, I consider how these two directions of change may yet be convergent within future Unix-derived systems, and what this might mean for programming languages.
This is a history and philosophy piece, based on my paper at the PLOS workshop 2013 but expanding considerably on the historical discussion, and going easier (but not too easy) on the technical content. This version grew out of a talk I gave at the 2016 History and Philosophy of Programming symposium in Paris, and has been improved by two rounds of review and helpful comments from many correspondents, for which I'm very grateful.
- [Onward! 17]
Some were meant for C: the endurance of an unmanageable language.
In Proceedings of the ACM International Symposium on New
Ideas, New Paradigms, and Reflections on Programming and
Software (Onward!), October 2017.
pdf (author's version) abstract
Abstract: The C language leads a double life: as an application programming language of yesteryear, perpetuated by circumstance, and as a systems programming language which remains a weapon of choice decades after its creation. This essay is a C programmer's reaction to the call to abandon ship. It questions several properties commonly held to define the experience of using C; these include unsafety, undefined behaviour, and the motivation of performance. It argues all these are in fact inessential; rather, it traces C's ultimate strength to a communicative design which does not fit easily within the usual conception of “a programming language”, but can be seen as a counterpoint to so-called “managed languages”. This communicativity is what facilitates the essential aspect of system-building: creating parts which interact with other, remote parts—being “alongside” not “within”.
This is an essay I wrote after realising that the reasons many PL researchers believe motivate people to (still) program in C aren't actually the reasons that hold true in my own case, as a C programmer.
- [OOPSLA 16a]
Dynamically diagnosing run-time type errors in unsafe code.
In Proceedings of the ACM SIGPLAN International Conference on Object-Oriented
Programming, Systems, Languages, and Applications (OOPSLA), November 2016.
pdf preprint abstract
Abstract: Existing approaches for detecting type errors in unsafe languages are limited. Static analysis methods are imprecise, and often require source-level changes, while most dynamic methods check only memory properties (bounds, liveness, etc.), owing to a lack of run-time type information. This paper describes libcrunch, a system for binary-compatible run-time type checking of unmodified unsafe code, currently focusing on C. Practical experience shows that our prototype implementation is easily applicable to many real codebases without source-level modification, correctly flags programmer errors with a very low rate of false positives, offers a very low run-time overhead, and covers classes of error caught by no previously existing tool.
This is a technical paper describing a system (called libcrunch) for run-time type checking in native code (C, for now). Perhaps the most important message of the paper, albeit not strictly its novelty, is that we can check properties of pointers without attaching metadata to the pointers themselves. Indeed, propagating metadata with pointers is quite expensive, whereas looking up properties of the pointed-to objects can be made quite cheap.
- [OOPSLA 16b]
Stephen Kell, Dominic P. Mulligan, Peter Sewell.
The missing link: explaining ELF static linking, semantically.
In Proceedings of the ACM SIGPLAN International Conference on Object-Oriented
Programming, Systems, Languages, and Applications (OOPSLA), November 2016.
pdf preprint abstract
Abstract: Beneath the surface, software usually depends on complex linker behaviour to work as intended. Even linking hello_world.c is surprisingly involved, and systems software such as libc and operating system kernels rely on a host of linker features. But linking is poorly understood by working programmers and has largely been neglected by language researchers.
In this paper we survey the many use-cases that linkers support and the poorly specified linker speak by which they are controlled: metadata in object files, command-line options, and linker-script language. We provide the first validated formalisation of a realistic executable and linkable format (ELF), and capture aspects of the Application Binary Interfaces for four mainstream platforms (AArch64, AMD64, Power64, and IA32). Using these, we develop an executable specification of static linking, covering (among other things) enough to link small C programs (we use the example of bzip2) into a correctly running executable. We provide our specification in Lem and Isabelle/HOL forms. This is the first formal specification of mainstream linking. We have used the Isabelle/HOL version to prove a sample correctness property for one case of AMD64 ABI relocation, demonstrating that the specification supports formal proof, and as a first step towards the much more ambitious goal of verified linking. Our work should enable several novel strands of research, including linker-aware verified compilation and program analysis, and better languages for controlling linking.
This is a technical paper about a formal but realistic (non-idealised) specification of Unix linking. Its focus is mostly on static linking of ELF files, although not exclusively. It describes our ongoing effort to produce an executable formal specification of linking that can be used as a link checker, a basis for mechanised reasoning about compiler and linker implementations, and as a document for developers to consult. Most critically perhaps, it explains why linking matters: why linking is not just about separate compilation but actually affects program semantics. It details a large zoo of “linking effects” programmers can achieve by (and often only by) instructing the linker.
- [Software 16]
Yudi Zheng, Stephen Kell, Lubomír Bulej, Haiyang Sun, Walter Binder.
Comprehensive multi-platform dynamic program analysis for Java and Android.
IEEE Software 33(4), July 2016.
online edition abstract
Abstract: Dynamic program analyses, such as profiling, tracing and bug-finding tools, are essential for software engineering. Unfortunately, implementing dynamic analyses for managed languages such as Java is unduly difficult and error-prone, because the runtime environments provide only complex low-level mechanisms. Currently, programmers writing custom tooling must expend great effort in tool development and maintenance, while still suffering substantial limitations such as incomplete code coverage or lack of portability. Ideally, a framework would be available in which dynamic analysis tools could be expressed at a high level, robustly, with high coverage and supporting alternative runtimes such as Android. We describe our research on an “all-in-one” dynamic program analysis framework which uses a combination of techniques to satisfy these requirements.
This is an accessible introduction to the DiSL and ShadowVM systems for writing dynamic analyses for Java programs. It also covers recent developments in supporting a less cooperative runtime, namely Android's Dalvik VM, which brings particular challenges.
- [Onward! 15]
Towards a dynamic object model within Unix processes.
In Proceedings of the ACM International Symposium on New
Ideas, New Paradigms, and Reflections on Programming and
Software (Onward!), October 2015.
published version (via ACM Author-izer) pdf preprint abstract
Abstract: Programmers face much complexity from the co-existence of “native” (Unix-like) and virtual machine (VM) “managed” run-time environments. Rather than having VMs replace Unix, we investigate whether it makes sense for Unix to “become a VM”, in the sense of evolving its user-level services to subsume services offered by VMs. We survey the (little-understood) VM-like features in modern Unix, noting common shortcomings: a lack of semantic metadata (“type information”) and the inability to bind from objects “back” to their metadata. We describe the design and implementation of a system, liballocs, which adds these capabilities in a highly compatible way, and explore its consequences.
This is a fairly technical paper introducing liballocs as a “missing link” in the design of operating systems. It describes some things we can already do (witness libcrunch) and things we expect to be able to do with it. It also puts liballocs in context, relating it both to the Unix tradition and to the Lisp- and Smalltalk-style virtual machine tradition.
- [Onward! 14]
In search of types.
In Proceedings of the ACM International Symposium on New
Ideas, New Paradigms, and Reflections on Programming and
Software (Onward!), October 2014.
pdf (author's version) abstract
Abstract: The concept of “type” has been used without a consistent, precise definition in discussions about programming languages for 60 years. In this essay I explore various concepts lurking behind distinct uses of this word, highlighting two traditions in which the word came into use largely independently: engineering traditions on the one hand, and those of symbolic logic on the other. These traditions are founded on differing attitudes to the nature and purpose of abstraction, but their distinct uses of “type” have never been explicitly unified. One result is that discourse across these traditions often finds itself at cross purposes, such as overapplying one sense of “type” where another is appropriate, and occasionally proceeding to draw wrong conclusions. I illustrate this with examples from well-known and justly well-regarded literature, and argue that ongoing developments in both the theory and practice of programming make now a good time to resolve these problems.
This is an essay I wrote out of discomfort in the vagueness of the word “type”, both in popular usage and among “expert” researchers. I finished it with the feeling that I'd just written “one to throw away”, so it may have been more illuminating for me to write than for you to read. Nevertheless I am very happy to receive comments. Subject to demand, I'll also be happy to accommodate any corrections (including corrections of omissions, where practical) in the author's version.
The operating system: should there be one?.
In Proceedings of the 7th Workshop on Programming Languages and Operating Systems,
ACM, November 2013.
Abstract: Operating systems and programming languages are often informally evaluated on their conduciveness towards composition. We revisit Dan Ingalls' Smalltalk-inspired position that “an operating system is a collection of things that don't fit inside a language; there shouldn't be one”, discussing what it means, why it appears not to have materialised, and how we might work towards the same effect in the postmodern reality of today's systems. We argue that the trajectory of the “file” abstraction through Unix and Plan 9 culminates in a Smalltalk-style object, with other filesystem calls as a primitive metasystem. Meanwhile, the key features of Smalltalk have many analogues in the fragmented world of Unix programming (including techniques at the library, file and socket level). Based on the themes of unifying OS- and language-level mechanisms, and increasing the expressiveness of the meta-system, we identify some evolutionary approaches to a postmodern realisation of Ingalls' vision, arguing that an operating system is still necessary after all.
This is a bit of a curiosity. It's a position paper which I wrote after Michael Haupt pointed me at the cited Ingalls article. Every so often, the idea emerges that we don't need operating systems if we have language runtimes. This idea doesn't seem to stick, but it's hard to pin down what's wrong with it. Here I argue that a neglected value of operating systems is in their meta-abstractions, of which the Unix file is one classic example. These abstractions are not part of any language (or part of all of them, depending on how you see it). In fact, I argue the OS could and should provide much more of a meta-system. The paper makes a number of contrasts between Unix files and Smalltalk objects, then proposes a few concrete suggestions for evolutionarily improving the Unix-like metasystem towards a more Smalltalk-like one.
Yudi Zheng, Lubomír Bulej, Cheng Zhang, Stephen Kell, Danilo Ansaloni, Walter Binder.
Dynamic optimization of bytecode instrumentation.
In Proceedings of the 7th Workshop on Virtual Machines and Intermediate Languages,
ACM, October 2013.
Abstract: Accuracy, completeness, and performance are all major concerns in the context of dynamic program analysis. Emphasizing one of these factors may compromise the other factors. For example, improving completeness of an analysis may seriously impair performance. In this paper, we present an analysis model and a framework that enables reducing analysis overhead at runtime through adaptive instrumentation of the base program. Our approach targets analyses implemented with code instrumentation techniques on the Java platform. Overhead reduction is achieved by removing instrumentation from code locations that are considered unimportant for the analysis results, thereby avoiding execution of analysis code for those locations. For some analyses, our approach preserves result accuracy and completeness. For other analyses, accuracy and completeness may be traded for a major performance improvement. In this paper, we explore accuracy, completeness, and performance of our approach with two concrete analyses as case studies.
This paper describes a mechanism for dynamically undeploying and redeploying bytecode instrumentation, and its application to some scenarios where this can usefully optimise an analysis without sacrificing too much correctness or completeness. My personal favourite thing about it is the observation that analyses maintain collections of repeatedly mutated values (profile data, monitors, etc.), and that the transition rules obeyed by these mutations give us insight into when instrumentation can be removed. We can only exploit this in a very limited set of cases at present, but the potential for future work is interesting.
Lukas Marek, Stephen Kell, Yudi Zheng, Lubomír Bulej, Walter Binder,
Petr Tůma, Danilo Ansaloni, Aibek Sarimbekov and Andreas Sewe.
ShadowVM: robust and comprehensive dynamic program analysis for the Java platform.
In Proceedings of the 12th International Conference on Generative Programming: Concepts & Experiences,
ACM, October 2013.
Abstract: Dynamic analysis tools are often implemented using instrumentation, particularly on managed runtimes including the Java Virtual Machine (JVM). Performing instrumentation robustly is especially complex on such runtimes: existing frameworks offer limited coverage and poor isolation, while previous work has shown that apparently innocuous instrumentation can cause deadlocks or crashes in the observed application. This paper describes ShadowVM, a system for instrumentation-based dynamic analyses on the JVM which combines a number of techniques to greatly improve both isolation and coverage. These centre on the offload of analysis to a separate process; we believe our design is the first system to enable genuinely full bytecode coverage on the JVM. We describe a working implementation, and use a case study to demonstrate its improved coverage and to evaluate its runtime overhead.
This paper describes a system for dynamic analysis of Java bytecode programs in a highly isolated fashion, by running the analysis in a separate process, adding only straight-to-native instrumentation paths within the Java bytecode itself. This sidesteps several coverage and robustness issues that plague in-VM analysis, as detailed by our VMIL 2012 workshop paper. The approach presented in this paper is probably the best you can do in a way that's (notionally) portable across JVMs. Its performance is sadly not great, but the next step (how to modify the VM's interfaces to support more efficient approaches) is set to be very interesting. This paper also gives an initial treatment of synchronisation requirements (i.e. relative ordering of events in the observed program w.r.t. their observation by the analysis), which vary hugely from one analysis to another and have a critical influence on achievable performance.
Aibek Sarimbekov, Stephen Kell, Lubomír Bulej, Andreas Sewe, Yudi Zheng, Danilo Ansaloni and Walter Binder.
A comprehensive toolchain for workload characterization across JVM languages.
In Proceedings of the 11th ACM SIGPLAN–SIGSOFT Workshop on Program Analysis for Software Tools and Engineering,
ACM, June 2013.
Abstract: The Java Virtual Machine (JVM) today hosts implementations of numerous languages. To achieve high performance, JVM implementations rely on heuristics in choosing compiler optimizations and adapting garbage collection behavior. Historically, these heuristics have been tuned to suit the dynamics of Java programs only. This leads to unnecessarily poor performance in case of non-Java languages, which often exhibit systematic difference in workload behavior. Dynamic metrics characterizing the workload help identify and quantify useful optimizations, but so far, no cohesive suite of metrics has adequately covered properties that vary systematically between Java and non-Java workloads. We present a suite of such metrics, justifying our choice with reference to a range of guest languages. These metrics are implemented on a common portable infrastructure which ensures ease of deployment and customization.
This paper describes an effort to extend the space of dynamic metrics defined at the JVM level so that they cover dimensions which exhibit variation between Java and non-Java languages running on the JVM. It describes both a new selection of metrics, and an infrastructure on which they are implemented. One of the parts I find neat is the query-based definition of a large subset of the metrics. Currently this is mostly a convenience, but I hope that in future work, this will prove useful for identifying new kinds of profiling information which is cheap enough to collect and use during dynamic compilation. (In other words, we want cheap observations which do a reasonable job of predicting the workload properties characterised by more expensive metrics, like the ones in this paper.)
Danilo Ansaloni, Stephen Kell, Lubomír Bulej, Yudi Zheng, Walter Binder and Petr Tůma.
Enabling modularity and re-use in dynamic program analysis tools for the Java Virtual Machine.
In Proceedings of the 27th European Conference on Object-Oriented Programming,
Springer-Verlag, July 2013.
Abstract: Dynamic program analysis tools based on code instrumentation serve many important software engineering tasks such as profiling, debugging, testing, program comprehension, and reverse engineering. Unfortunately, constructing new analysis tools is unduly difficult, because existing frameworks offer little or no support to the programmer beyond the incidental task of instrumentation. We observe that existing dynamic analysis tools re-address recurring requirements in their essential task: of maintaining state which captures some property of the analysed program. This paper presents a general architecture for dynamic program analysis tools which treats the maintenance of analysis state in a modular fashion, consisting of mappers decomposing input events spatially, and updaters aggregating them over time. We show that this architecture captures the requirements of a wide variety of existing analysis tools.
This paper is a move towards enabling a more event-driven, reactive style of programming in the creation of dynamic analysis tools. We built an analysis framework, FRANC, based around a library of instrumentation primitives (event sources) together with higher-level “state-oriented” components for decomposing and aggregating events. The neatest idea in the paper is to separate the spatial decomposition of incoming events from their temporal aggregation. This allows a “thin waist” hourglass design which opens up greater opportunity for composition and recombination of independent components.
Stephen Kell, Danilo Ansaloni, Walter Binder and Lukas Marek. The JVM is not observable enough (and what to do about it).
In Proceedings of the Sixth Workshop on Virtual Machines and Intermediate Languages,
ACM, October 2012.
pdf preprint abstract
Abstract: Bytecode instrumentation is a preferred technique for building profiling, debugging and monitoring tools targeting the Java Virtual Machine (JVM), yet is fundamentally dangerous. We illustrate its dangers with several examples gathered while building the DiSL instrumentation framework. We argue that no Java platform mechanism provides simultaneously adequate performance, reliability and expressiveness, but that this weakness is fixable. To elaborate, we contrast internal with external observation, and sketch some approaches and requirements for a hybrid mechanism.
Shortly after joining USI, it became clear that a lot of our group's implementation work was concerned with working around shortcomings of the JVM. Moreover, we had seen some worrying problems with no apparent solution. The reason is that the JVM platform actively encourages tool construction by bytecode instrumentation. But writing tools which instrument bytecode safely is invariably hard and often (depending on the instrumentation) impossible. I don't just mean impossible for non-specialist programmers to get right; I mean outright impossible. Specifically, there is no way to avoid the risk of introducing real and disastrous bugs into the instrumented program. The reason is that instrumentation code is not protected from the base program, and runs at near-arbitrary points, so easily creates deadlock and reentrancy problems, against which there is no mechanism to guard. (Unlike in process-level approaches, “avoid shared data” is not an option. Even dropping to native code doesn't guarantee anything.) There are also several more modest difficulties. This paper is my attempt at turning my colleagues' war stories into a manifesto for VM improvements. It argues that instrumentation is a bad mechanism on which to base VM observation interfaces, and sketches out some alternatives. I take the opportunity to grind one or two personal axes, including the one about how in-VM debugger servers are a step backwards from compiler-generated debugging information.
Stephen Kell and Conrad Irwin. Virtual machines should be invisible.
In Proceedings of the Fifth Workshop on Virtual Machines and Intermediate Languages,
ACM, October 2011.
pdf preprint abstract
Abstract: Current VM designs prioritise implementor freedom and performance, at the expense of other concerns of the end programmer. We motivate an alternative approach to VM design aiming to be unobtrusive in general, and prioritising two key concerns specifically: foreign function interfacing and support for runtime analysis tools (such as debuggers, profilers etc.). We describe our experiences building a Python VM in this manner, and identify some simple constraints that help enable low-overhead foreign function interfacing and direct use of native tools. We then discuss how to extend this towards a higher-performance VM suitable for Java or similar languages.
This is a short paper about some in-progress work that was begun with Conrad Irwin when he was a final-year undergraduate in Cambridge and I was his project supervisor. Conrad implemented a usable subset of Python which uses DWARF debugging information to interface with native libraries, rather than the usual wrapper generation step. This paper extends that idea in a few ways, culminating in a design which can (we hope!) allow native tools (gdb, Valgrind, etc.) to operate seamlessly across programs written in a diverse mixture of languages, while avoiding conventional foreign function interfacing overheads. I am continuing the implementation effort.
Stephen Kell. Composing heterogeneous software with style.
In Proceedings of the First International Workshop on Free Composition,
ACM, July 2011.
pdf preprint abstract
Abstract: Tools for composing software impose homogeneity requirements on what is composed—that modules must share a language, target the same libraries, or share other conventions. This inhibits cross-language and cross-infrastructure composition. We observe that a unifying representation of software turns heterogeneity of components into a matter of styles: recurring interface patterns that cross-cut large numbers of codebases. We sketch a rule-based language for capturing styles independently of composition context, and describe how it applies in two example scenarios.
This is a short paper describing an idea which I developed in the last technical chapter of my PhD thesis. The idea is that interfaces differ stylistically—in things like how they pass arguments, how they report errors, and in some less routine ways—and that by capturing these styles independently of composition context, we could achieve a lot more compositions with a lot less programmer effort. It's pitched as an extension to Cake; so far it's unimplemented, with the exception of a few foundations in the compiler; I intend to implement it one day, but have no idea when I'll get time. The most compelling content of the paper is therefore the preliminary survey (read: brainstorm) of styles in well-known interfaces most readers will recognise.
Stephen Kell. Component adaptation and assembly using interface relations.
In Proceedings of SPLASH 2010, OOPSLA Research track,
published version pdf preprint abstract
Abstract: Software's expense owes partly to frequent reimplementation of similar functionality and partly to maintenance of patches, ports or components targeting evolving interfaces. More modular non-invasive approaches are unpopular because they entail laborious wrapper code. We propose Cake, a rule-based language describing compositions using interface relations. To evaluate it, we compare several existing wrappers with reimplemented Cake versions, finding the latter to be simpler and better modularised.
This is a long research paper describing the design, implementation and evaluation of the basic Cake language and runtime.
Stephen Kell. The mythical matched modules: overcoming the tyranny of
inflexible software construction.
In the OOPSLA 2009 Companion, Onward! Innovation In Progress track,
published version pdf preprint abstract
Abstract: Conventional tools yield expensive and inflexible software. By requiring that software be structured as plug-compatible modules, tools preclude out-of-order development; by treating interoperation of languages as rare, adoption of innovations is inhibited. I propose that a solution must radically separate the concern of integration in software: firstly by using novel tools specialised towards integration (the “integration domain”), and secondly by prohibiting use of pre-existing interfaces (“interface hiding”) outside that domain.
This is a relatively “long view” on the position underlying my PhD work, expounding my gut feeling that the way we build software is bizarrely fragile and unrealistic, owing to the expectation that big pieces of software should be made from perfectly-fitting smaller pieces without any sort of “glue” or integration material. This is what makes software maintenance expensive, software evolution difficult and software functionality inflexible. I'm always especially glad to receive comments on the argument and general story presented by this paper.
Stephen Kell. Configuration and adaptation of binary software components.
In Companion to the 31st International Conference of Software Engineering,
New Ideas and Emerging Results track,
pdf preprint bib abstract
Abstract: Existing black-box adaptation techniques are insufficiently powerful for a large class of real-world tasks. Meanwhile, white-box techniques are language-specific and overly invasive. We argue for the inclusion of special-purpose adaptation features in a configuration language, and outline the benefits of targetting binary representations of software. We introduce Cake, a configuration language with adaptation features, and show how its design is being shaped by two case studies.
This is a short work-in-progress paper describing one part of my PhD work. The idea is that composition of software in binary form is desirable, both for convenience and because binaries offer a somewhat language-neutral model of software. The problem of linking together mismatched components in binary form, specifically at the object code level (meaning these components could be written in any number of languages including C and C++), has had little attention so far. However, a fairly descriptive model of binaries is already provided by debugging info standards like DWARF. The thesis of this work is that a domain-specific configuration language, with its baseline at the DWARF level of abstraction, can be a practical tool for describing compositions of mismatched binaries, where the language explicitly provides for adaptation to address mismatches in a black-box fashion. The main contribution is describing the proposed features of our language, which is called Cake, and the case-study experiences which are shaping it.
Stephen Kell. A survey of practical software adaptation techniques.
Journal of Universal Computer Science, Special Issue on Software Adaptation,
pdf preprint pdf preprint (crop+2up) author's notes
Writing a large survey is a task only a fool would undertake... so I had a go. “Practical software adaptation techniques” means techniques for bridging or modifying the interfaces (mostly...) of software components. It doesn't include adaptive techniques, as found in reflective and/or mobile middleware. The paper is unfortunately a bit disjointed, coming in two halves: the first covers twentyish approaches found in the literature and classifies them by what they do and how; the second discusses (borrowing some bits from my earlier workshop paper) various other aspects of such techniques: adapting component instances versus implementations, static versus dynamic components, elegance of model versus supporting heterogeneity, the utility of various distinctions between computational and communicational code, and what “loosely coupled” really means. In hindsight, a better survey would take all these latter issues and fold them into the first half's taxonomy, visiting the work in an order which allows elaboration on the more discursive matters as they are raised. It also needs a neater way, most likely a better graphical language, to describe the taxonomised nature of each technique, rather than relying on somewhat impenetrable verbal descriptions.
Stephen Kell. Rethinking software connectors.
Proceedings of the International Workshop on Synthesis
and Analysis of Component Connectors, September 2007.
pdf bib abstract
Abstract: Existing work on software connectors shows significant disagreement on both their definition and their relationships with components, coordinators and adaptors. We propose a precise characterisation of connectors, discuss how they relate to the other three classes, and contradict the suggestion that connectors and components are disjoint. We discuss the relationship between connectors and coupling, and argue the inseparability of connection models from component programming models. Finally we identify the class of configuration languages, show how it relates to primitive connectors and outline relevant areas for future work.
ACM copyright notice: © ACM, 2007. This is the author's version of the work. It is posted here by permission of ACM for your personal use. Not for redistribution. The definitive version was published in SYANCO ’07. http://doi.acm.org/10.1145/1294917.1294918
This is a somewhat strident and sweeping workshop paper where I try to decode the meaning and intentions of the many and various references to “software connectors” found in the literature. I then talk a bit about how this relates to coordination, adaptation and the concept of coupling, and give a sketchy outline of how I think compositional and communication-oriented concerns could or should be abstracted and supported by tools. Apart from the latter, there's not really any research contribution, except an attempt to hold to account the vague and flimsy motivational arguments that appear in the introductions of lots of (often otherwise good) papers.
See also my author page on DBLP (with much credit to Michael Ley and others for this great bibliographic resource).
Peer-reviewed abstracts, tool demonstrations and similar
- [MoreVMs 18] Stephen Kell. The inevitable death of VMs: a progress report. Talk abstract for the MoreVMs workshop. In Conference Companion of the 2nd International Conference on the Art, Science, and Engineering of Programming. doi preprint
- [APLAS 15] Haiyang Sun, Yudi Zheng, Lubomír Bulej, Stephen Kell and Walter Binder. Analyzing Distributed Multi-platform Java and Android Applications with ShadowVM. Tool demonstration, in Proceedings of APLAS 2015 (LNCS 9458, Programming Languages and Systems). doi
- [DemoSplash 15] Haiyang Sun, Yudi Zheng, Lubomír Bulej, Walter Binder and Stephen Kell. Custom full-coverage dynamic program analysis for Android. Tool demonstration, in Companion Proceedings of SPLASH 2015. doi
Manuscripts, reports, invited papers, dissertations etc.
- [SDR 2018] Stephen Kell. Critique of ‘Files as directories: some thoughts on accessing structured data within files’ (2). Invited critique, in Conference Companion of the 2nd International Conference on the Art, Science, and Engineering of Programming (Programming '18). doi author version
- Black-box composition of mismatched software components.
Technical report, University of Cambridge Computer Laboratory, number UCAM-CL-TR-845,
tech report web page
Note: this is content-identical to the dissertation.
- Black-box composition of mismatched software components. PhD thesis, University of Cambridge, December 2010 (corrected May 2012). library catalogue record
- System Support for Adaptation and Composition of Applications.
Accepted for the EuroSys 2008 Doctoral Workshop.
pdf author's notes
This is quite a nice introduction, albeit brief and rather dense, to my PhD work. Or at least to the problem, approach, ideas etc., since not much actual work is mentioned.
- Practical system-level adaptation of heterogeneous components.
PhD thesis proposal, approved January 2008 (revised March 2008).
pdf pdf (crop+2up) author's notes
This is a thesis proposal I submitted after about 12 months of PhD work. It reads nicely enough but is far too ambitious!
My research interests are outlined at the top of this page. In summary: although broad, they fall mostly within the intersections of systems, programming languages and software engineering.
I keep a calendar of approximate submission deadlines and event dates for most of the conferences and workshops that I might conceivably attend, together with a few for which it's verging on inconceivable. In case it's useful, here it is.
For my PhD I worked on supporting software adaptation at the level of the operating system's linking and loading mechanisms. Here “adaptation” means techniques for connecting and combining software components which were developed independently, and therefore do not have matched interfaces. My emphasis was on techniques which are practical, adoptable and work with a wide variety of target code, at the (hopefully temporary) expense of safety and automation.
My dissertation is now available as a technical report.
I am actively seeking research students who wish to do a PhD under my supervision. If this interests you, please contact me.
Funding is available, by competition, through the University of Kent. See this page for details; note that besides the topic-specific calls at the top of this page, general funding is also available, for which the application cycle runs roughly from December until early spring.
From time to time, other funding may be available, so please don't hesitate to contact me regardless of the time of year.
I frequently supervise research-flavoured projects conducted by Bachelor's and Master's students. These are in the Teaching section.
I also maintain a list of project ideas for students at all levels, and am always happy to talk about other ideas.
During 2005–06 I was a Research Assistant in Cambridge on the XenSE and Open Trusted Computing projects, under Steven Hand. Both projects seek to implement a practical secure computing platform, using virtualisation (and similar technologies) as the isolation mechanism.
As part of my work as an RA, I became interested in secure graphical user interfaces including L4's Nitpicker, a minimal secure windowing system. I began work on ports of this system to Linux, XenoLinux and the Xen mini-OS: the Linux version became mostly functional (but not yet secure!) while the others were stymied by various limitations with shared memory on Xen. These limitations are mostly fixed now, but I haven't had time to revisit the project since. Feel free to contact me for more information. If you wanted to take these up, I'd be glad to hear from you.
See also my attic page.
Service and appointments
In the past couple of years I have been a co-organiser of the 2018 VMIL workshop, on the PC for SPLASH workshops (2016), a co-organiser of the Salon des Refusés workshop, and a reviewer for ACM TACO and Software: Practice and Experience.
Before that, on the PC for Onward! Essays (2015) and contributor to a CCC/CRA visioning workshop reporting on funding priorities for topics in accessible computing. Previously I was an external reviewer for the ASE journal, on the programme committee for PPPJ 2013, publicity chair for SC 2013, on the programme committee for RESoLVE 2012 at ASPLOS, and the shadow programme committee for Eurosys 2012.
At the University of Kent
I'm an Associate Member of the Kent Interdisciplinary Research Centre in Cyber Security (KirCCS), in connection with the security applications of my work on topics of run-time safety, program analysis and formal specification.
Since August 2018, when I moved from Cambridge to Kent, I've remained a (sometimes-) visiting researcher at the University of Cambridge's Computer Laboratory (now, more properly, the Department of Computer Science and Technology).
From December 2015 until September 2017 I served on the University of Cambridge's Board of Scrutiny, which has an oversight role in University governance. During the 2016–17 year I also acted as the Board's Secretary.
From May 2015 until July 2018, I was a postdoctoral affiliate at Christ's College (of which I'm also an alumnus, from my BA and PhD days).
From summer 2015 until summer 2018 I was an elected officer of the Postdocs of Cambridge Society (PdOC) which, in addition to providing networking and social events within the cross-disciplinary postdoctoral community, represents postdoctoral staff to various University bodies.
Since May 2008 I've been a Fellow of the Cambridge Philosophical Society.
I am currently a lecturer on the following modules here at the University of Kent.
I'm interested in supervising bright and enthusiastic Bachelor's and Master's students for their individual projects. For ideas and to find out what I'm interested in, see my list of suggested projects. I'm also always extremely happy to talk to students who have their own ideas.
I've supervised many projects in the past. Although I'm no longer in Cambridge, Bachelor's students there can still read my thoughts about Part II projects. Previously I've been fortunate enough to work with the following final-year students in Cambridge:
- 2017–18: Hrutvik Kanabar, Implementing and verifying a compiler optimisation in CakeML (co-supervised with Magnus Myreen and Anthony Fox).
- 2017–18: Tianlin (Tyler) Zhang, An Observable OCaml.
- 2017–18: Joshua Blake, Automated Testing of Libraries.
- 2017–18: Jared Khan, Program Slicing for Swift.
- 2016–17: Tim Radvan, Nefarious Scheme: a self-optimising interpreter for a functional programming language (using RPython).
- 2016–17: Shaun Steenkamp, Program slicing at debug time.
- 2016–17: Cheng Sun, An observable OCaml via C and liballocs (co-supervised with Stephen Dolan).
- 2016–17: Richard Tynan, Run-time type information in the FreeBSD kernel (co-supervised with David Chisnall).
- 2016–17: Jonathan Wright, Mostly-precise garbage collection for the C programming language (co-supervised with David Chisnall).
- 2015–16: Jonathan French, Symbolic execution of partial programs, with application to program testing and verification.
- 2014–15: Chris Diamand, Run-time type checking in C with Clang and libcrunch (co-supervised with David Chisnall).
- 2009–10: Conrad Irwin, A dynamic language with C-compatible runtime.
- 2009–10: Kenneth Yu, An automated document organiser.
- 2009–10: Andrew Thomas, A Java source-to-source optimising translator.
- 2008–09: David Mack, A filesystem for the disorganised.
Older teaching material (including various Cambridge-specific things) can be found on my attic page.
Note: as of late 2011, I have started using GitHub. Over time, code for all my larger projects will start to appear on my GitHub page. This page is likely to remain the more complete list, for the time being.
My research work involves building software. Inevitably, this software is never “finished”, rarely reaches release-worthy state, and usually rots quickly even then. So not much here is downloadable yet; this will improve over time, but in the meantime I list everything and invite you to get in touch if anything sounds interesting. My main projects, past and present, are:
- libdwarfpp, a C++ frontend
to libdwarf's consumer interface
that has grown into much more than that. It includes:
- RAII-style wrapper classes for most of the opaque (“handle”) data types in libdwarf;
- exceptions for error reporting;
- a data-abstracted interface (some would say “strongly-typed”) to most DWARF v3 tags;
- C++-style iterators for DWARF data, with various traversal policies;
- similarly data-abstracted support for most classes of DWARF attribute values;
- a couple of boost::graph interpretations of DWARF data;
- accommodation of (and some abstractions for) multiple DWARF standards and vendor extensions (wip);
- a DWARF expression evaluator (wip);
- abstraction of much of the irregularity and redundancy in DWARF's handling of memory locations;
- a pure C++ in-memory implementation of the ADTs, as an alternative to the libdwarf implementation (and useful for producing DWARF descriptions, although there is no DWARF output yet – but see dwarfidl).
- libcrunch, a runtime system and C front-end for run-time type checking (think: checking pointer casts in C) with good performance (think: 0–40% slowdown).
- liballocs, a runtime system and C front-end for tracking allocations and their types, at run time, across a program's entire address space, with good performance (think: 0–10% slowdown).
- dwarfidl, a human-friendly C-like language for most of the interface description subset of DWARF: the language is defined by a grammar for antlr, and the distribution includes skeletons of tools to parse-to and serialize-from the libdwarfpp ADTs (though sadly these are currently unfinished). Also included are tools for generating C++ header files and (preliminary support for) OCaml ctypes definitions given a DWARF description of a binary interface.
- libdlbind, a library that you can think of as an allocator for JIT compilers. It offers a roughly malloc()-like API, but also lets you dynamically define linker symbols. This means JIT-compiled code becomes directly visible to debuggers and to clients of dlsym(). (There are some warts that need to be worked around in the loader before this can become smooth. You can use trap-syscalls to fake up these changes, however.)
- trap-syscalls, a system for observing and rewriting system calls from within a user-level process. It's handy if you want to monitor or test hypotheses about system calls, or if you want to lightly virtualise your process by tweaking the system calls it makes (think mmap()). There are some scripts for interfacing it with SystemTap, too.
- cake is a compiler and runtime for the composition language I developed during my PhD. It lets you link together chunks of object code with mismatched interfaces, by writing rules that relate the two interfaces. The compiler generates C++ glue code, and the runtime handles the complexities of managing multiple sets of logically related objects. There are a few bits of code generation that are not implemented yet, but I'm working Right Now on getting this worthy of an initial release.
- dwarfpython is an implementation of Python with the goal of allowing seamless use of native debugging and profiling tools (think: gdb, gprof, ltrace, Valgrind, ...). It also hugely simplifies FFI coding tasks (think: no more Swig, and no more Python C API). It does so by aggressively adopting conventions and infrastructure used by native code—notably debugging infrastructure, which, surprisingly, turns out to be essentially sufficient to describe full Pythonic dynamism. At its core, dwarfpython remains a (deliberately) simple single-pass interpreter of the Python language. Conrad Irwin implemented an earlier version of the design during 2009–10, focusing on FFI aspects, and for a simple subset of the Python language. Since then I've been working on the tool support aspect and improving language coverage. It's a background task, but development is ongoing.
- nitpicker on Linux: some years ago I began a Linux port of the Nitpicker secure windowing system, and got it to the vaguely usable point of displaying windows and running its event loop. The actual security properties didn't get going, mainly because of the spoofability of UDP port numbers, although this can be worked around with some hacking on the IDL compiler. There has since been some third-party work on this, building on my efforts, that I haven't checked out but probably should.
- m4ntlr, a small set of macro definitions for use with antlr, which help with making grammars language-independent and with splitting grammars into separate files.
- libmallochooks is a set of hooks for the C library's malloc() family of functions, providing a higher-level set of callout hooks useful for building various kinds of heap instrumentation (older, simpler, GNU-specific version: malloc_hooks.c).
- memtable.h is a C99 header implementing a very fast associative data structure for lookups keyed on memory addresses (see this blog post for more).
- heap_index_hooks.c is a library (loadable with LD_PRELOAD) for instrumenting the GNU malloc to keep an index of the allocated heap blocks, keyed on address (and implemented, you guessed it, by a memtable).
- gcc-lgraph is an easy-to-use wrapper around gcc that will also output a module linkage graph (think “def–use” relationships) in the DOT language when you build a shared object or executable. These graphs tend not to visualise very well for nontrivial programs.
- dot-cluster is a bunch of Python scripts that attempt to produce a clustered approximation to a large-ish graph (thousands of nodes/edges) in the DOT language. The main aim is to reduce the edge count, by clustering together nodes whose sets of in- and out-edges to the rest of the graph are similar to each other. Unfortunately the success of the current clustering algorithms is haphazard at best; I have sketched out some improvements and might get round to implementing them if someone shows interest.
- libsrk31c++ is a rag-bag of C++ utilities that either should be in the standard library, or actually are in boost but that I didn't discover there before I'd reimplemented them.
- libc++fileno is a distribution of Richard Kreckel's fileno() function for popular implementations of the C++ file I/O library.
- dwarfhpp is a tool for generating C++ headers that let you write code directly against an object file—it's included in dwarfidl, and used in the implementation of cake.
- objdump-all is a wrapper script for GNU objdump that gets you a useful objdump for a whole file in one go, dumping code and data appropriately for their section types.
- gsfake-convertbb is a wrapper for the convert tool of ImageMagick or GraphicsMagick that calculates bounding boxes for PDF files, and mimics the interface of Ghostscript. Use it with pdfcrop's --gscmd option to get more reliable bounding box detection. I've had some trouble using it with recent versions of convert, so let me know if you run into problems.
- parens-filter.sh is a very short bash script for pretty-printing parenthesised tree representations. I've been using it on the output of my antlr-generated parser (specifically, on what gets printed by toStringTree()), but it probably works on other things (maybe S-expressions).
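To give a flavour of the memtable idea behind memtable.h and heap_index_hooks.c, here is a hypothetical miniature version, not the real memtable.h API: divide a covered address range into fixed-size chunks and keep one entry per chunk, so a lookup is just a subtract, a shift and an array index. (The constants and function names here are invented for illustration; the real thing backs the table with lazily-committed virtual memory so it can cover huge ranges cheaply.)

```c
#include <assert.h>
#include <stddef.h>
#include <stdint.h>

/* Illustrative toy memtable: one entry per aligned 4 KiB chunk of a
 * small, fixed address range. No hashing, no probing: a lookup is a
 * subtract, a shift and an array index. */
#define LOG_CHUNK_SIZE 12               /* one bucket per 4 KiB chunk */
#define RANGE_BASE     0x100000UL       /* (assumed) covered key range */
#define RANGE_SIZE     0x100000UL
#define NBUCKETS       (RANGE_SIZE >> LOG_CHUNK_SIZE)

static void *table[NBUCKETS];           /* zero-initialised: all NULL */

static size_t bucket_of(uintptr_t addr)
{
    assert(addr >= RANGE_BASE && addr < RANGE_BASE + RANGE_SIZE);
    return (addr - RANGE_BASE) >> LOG_CHUNK_SIZE;
}

static void memtable_put(uintptr_t addr, void *val)
{
    table[bucket_of(addr)] = val;
}

static void *memtable_get(uintptr_t addr)
{
    return table[bucket_of(addr)];
}

int main(void)
{
    int block;  /* stand-in for some heap block's metadata record */
    memtable_put(0x123456UL, &block);
    /* any address within the same 4 KiB chunk finds the same entry */
    assert(memtable_get(0x123000UL) == &block);
    assert(memtable_get(0x180000UL) == NULL);
    return 0;
}
```

For heap indexing, each bucket would typically head a short linked list of the blocks whose start addresses fall within that chunk.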
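Similarly, the interposition trick used by heap_index_hooks.c and libmallochooks can be sketched, under glibc, roughly as follows. This is not their actual code, just the general pattern: define your own malloc() in a shared object, find the real one with dlsym(RTLD_NEXT, ...), and run a callout after each allocation, guarding against re-entrancy since the callout may itself allocate.

```c
#define _GNU_SOURCE
#include <dlfcn.h>
#include <stdio.h>
#include <stdlib.h>

/* Sketch of a preload-style malloc hook: interpose on malloc(), call
 * through to the real allocator, then run a post-allocation callout
 * (where a heap index, e.g. a memtable, would be updated).
 * Build as a shared object and load with LD_PRELOAD. */

static void *(*real_malloc)(size_t);
static __thread int in_hook;    /* guard: the callout may itself allocate */

static void post_alloc_hook(void *block, size_t size)
{
    fprintf(stderr, "allocated %zu bytes at %p\n", size, block);
}

void *malloc(size_t size)
{
    if (!real_malloc) real_malloc = dlsym(RTLD_NEXT, "malloc");
    void *block = real_malloc(size);
    if (block && !in_hook)
    {
        in_hook = 1;
        post_alloc_hook(block, size);
        in_hook = 0;
    }
    return block;
}
```

The same pattern extends to calloc(), realloc() and free(); the fiddly parts (which a real library has to handle) are allocations made by dlsym() itself before real_malloc is resolved, and keeping the hooks signal-safe.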
I've also submitted small patches to various open-source projects including LLVM (bugfix to bitcode linker), binutils (objcopy extension), gcc (documentation fix for gcj), Knit (compile fixes), Datascript (bug fixes), DICE (bug fixes), pdfjam (support “page template” option) and Claws Mail (support cookies file in RSS plugin). Some of them have even been applied. :-)
Apart from my main development projects, I sometimes produce scripts and other odds and ends which might be useful to other people. Where time permits, I try to package them half-decently and make them available here.
For computer science researchers
I have written some scripts which attempt to retrieve decent BibTeX for a given paper (as a PDF or PostScript file). details
For researchers in a research institution
I've written a nifty script for printing papers, which helps people save paper, share printed-out papers and discover perhaps unexpected collaborators within their institution. details
For supervisors of Tripos courses (in Cambridge)
For general audiences
I have built a sizable collection of vaguely useful shell scripts and other small fragments. One day “soon” I will get round to publishing them. The biggest chunks are my login scripts, which use Subversion to share a versioned repository of config files across all the Unix boxes that I have an account on, and the makefile and m4 templates that build this web page. I need to clean these up a bit. In the meantime, if you're interested in getting hold of them, let me know.
Occasionally I write down some thoughts which somehow relate to my work. They now appear in the form of a blog. Posts are categorised into research, teaching, development, publishing, higher education and meta strands.
I have the beginnings of a personal web page. It's very sparse right now. Have a look, if you like.
Post: Dr Stephen Kell
School of Computing
University of Kent
Canterbury, CT2 7NF
Content updated at Thu 20 Jun 12:48:00 BST 2019.
validate this page