School of Computing

Stephen Kell

Lecturer

  • Tel:     +44 (0)1227 82 7591
  • Fax:     +44 (0)1227 762811
  • Email:
  • Room S130
    School of Computing
    University of Kent,
    CT2 7NF

About Me

Photo by Emilie Selbourne. Thanks, Emilie.

I'm a Lecturer at the University of Kent's School of Computing, where I teach topics in computer systems and software development, and do research within the Programming Languages and Systems group.

I am a practical “computer scientist” who builds things; I'd argue this makes me more engineer than scientist per se. I have wide interests: my goal is to make it easier and cheaper to develop useful, high-quality software systems. So far, my work has mostly focused on programming languages and the systems that support them—including language runtimes and operating systems.

Most of my research concerns programming, but I identify as a “systems” researcher. For me, “systems” is a mindset rather than a research topic. It means I am primarily interested in the practical consequences of an idea; its abstract properties interest me only insofar as they serve those consequences. (If you can come up with a better definition, let me know.) I generally hesitate to identify as a “programming languages” researcher, not because I dislike languages but because they're only one piece of the puzzle. I'd identify as a “programming systems” researcher if that were a thing.

I also believe that technology cannot be pursued in isolation. Innovative viewpoints are best developed with the help of historical perspective, which tends to be undervalued in computer science research. Meanwhile, the societal benefit of a technology is only assured by its broader soundness in human and social terms—one could call this its “philosophical basis”. The history and philosophy of computer science are of increasing interest to me.

You can read more below about what I'm working on. A recurring theme is that I prefer pushing innovations into the software stack at relatively low levels, to the extent that this is sensible (which, I believe, is further than one might expect). Another is the pursuit of infrastructure that is well-integrated, observable, simple, flexible, and designed for human beings—particularly if we believe (as I do) that computers are for augmenting the human intellect, not just for high-performance data crunching.

Very brief biography

I did my Bachelor's degree in computer science in Cambridge, graduating in 2005. I then stayed on for a year as a Research Assistant, before starting my PhD in 2006.

My PhD work was based in the Networks and Operating Systems group under the supervision of Dr David Greaves, and centred on the problem of building software by composition of ill-matched components, for which I developed the Cake language.

During summer 2007 I took an internship at Fraser Research, doing networking research.

From January 2011 until March 2012 I was a research assistant in the Department of Computer Science at the University of Oxford. There I worked as a James Martin Fellow, within the research programme of the Oxford Martin School's Institute for the Future of Computing. My work mostly focused on constructing a continuum between state-space methods of program analysis (notably symbolic execution) and syntactic methods (such as type checking). (This work was rudely interrupted by circumstances, but will be revived one day.)

From May 2012 to May 2013, I was a postdoctoral researcher at USI's Faculty of Informatics in Lugano, Switzerland, within the FAN project.

From May until October 2013 I was temporarily a Research Assistant at Oracle Labs, on the Alphabet Soup project.

I returned to Cambridge in October 2013, as a Research Associate (later Senior Research Associate) at the Computer Laboratory (now the Department of Computer Science and Technology). I worked within the REMS project, and participated in both the Programming, Logic and Semantics group and the Systems Research Group (particularly its Networks and Operating Systems subgroup). As of August 2018 I continue to be a visiting researcher at the Department.

In my industrial life, I enjoyed some spells working for Opal Telecom and ARM around my Bachelor's studies. More recently, I have been consulting for Ellexus, and conducted some industrial research work (in reality rather development-heavy) for Oracle Labs.

I maintain (sometimes neglectfully) two CVs: a snappy two-page one here, and a longer academic one here. Please be aware that both may be a little rough or incomplete at times. If you're recruiting for jobs in the financial sector, even technical ones, please don't bother reading my CV or contacting me—I promise I am not interested.

Research

Current work

Here's what I'm working on as of April 2016. In most cases these are my own individual projects; a few are with collaborators, as I describe.

Making the Unix process environment more descriptive and reflective

Unix-like processes are with us to stay, but are impoverished in ways that hold us back. It is too difficult to build robust tools and systems which automate or abstract meta-level programming tasks, such as debugging, profiling, visualisation, instrumentation, composition, error reproduction and dynamic update. In turn, this massively hampers software development generally! In large part, the root problem is that the state of a Unix process is too opaque. Unix lacks adequate meta-level facilities for describing user-supplied abstractions such as data representation, memory layouts, procedural interfaces, file formats, inter-process protocols, and so on. To fix this, rather than designing a whole new system with strong meta-level abstractions, as has been done in Smalltalk and Lisp programming systems, my work chooses instead to evolve the services offered by existing Unix-like systems. Since meta-level tasks are far from uncommon, vestigial features to help carry them out can often be found “lurking” in modern Unix. By extending and evolving these features, my work can (only half-jokingly) be said to be “dragging Unix into the 1980s”. It does so while maximising compatibility, allowing new benefits to be applied to old code, in whatever language—the work spans C and C++ right up to JavaScript, Python and OCaml. One output of this work is liballocs, a run-time library for fast whole-process tracking of per-allocation metadata, including detailed type information. Another is libdlbind, which helps make JIT-compiled code visible to debuggers and dynamic loaders. Both of these are building blocks used by my other research efforts (below). Motto: if you must unify, unify at the meta-level of the operating system, not at the base language level or in language implementations.
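
To make the trick concrete, here is a toy, self-contained sketch of the core idea: record metadata at the allocation site, so that any pointer (including an interior pointer) can later be mapped back to a description of its allocation. Everything named here is invented for illustration; liballocs does this transparently (no special allocation calls in source) and in close to constant time, neither of which this sketch attempts.

    /* Toy illustration only: NOT liballocs's real API or implementation. */
    #include <stdio.h>
    #include <stdlib.h>
    #include <stdint.h>

    struct record { void *base; size_t size; const char *type; };
    static struct record records[1024];   /* toy metadata store */
    static size_t nrecords;

    /* Record the type at the allocation site (liballocs infers this from
     * debugging information instead of taking an argument). */
    static void *alloc_typed(size_t size, const char *type)
    {
        void *p = malloc(size);
        if (p && nrecords < 1024)
            records[nrecords++] = (struct record){ p, size, type };
        return p;
    }

    /* Map any pointer, interior or not, back to its allocation's type. */
    static const char *type_of(const void *p)
    {
        uintptr_t a = (uintptr_t)p;
        for (size_t i = 0; i < nrecords; i++) {
            uintptr_t b = (uintptr_t)records[i].base;
            if (a >= b && a < b + records[i].size) return records[i].type;
        }
        return "(unknown)";
    }

    int main(void)
    {
        double *xs = alloc_typed(16 * sizeof *xs, "double[16]");
        printf("%s\n", type_of(&xs[3]));   /* prints: double[16] */
        free(xs);
        return 0;
    }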

Unsafe languages made safer and more debuggable

Huge amounts of code have been written, and continue to be written, in unsafe programming languages including C, C++, Objective-C, Fortran and Ada. Performance is one motivator for choosing these languages; continuity with existing code, particularly libraries, is another. The experience of compiling and debugging using these languages is a patchy one, and often creates large time-sinks for the programmer in the form of debugging mysterious crashes or working around tools' limitations. I have developed the libcrunch system for run-time type checking and robust bounds checking. This consists of a runtime library based on liballocs, together with compiler wrappers and other toolchain extensions for instrumenting code and collecting metadata. Slowdowns are typically in the range 0–35% for type checking and 15–100% for bounds checking. The system is also unusually good at supporting use of uninstrumented libraries, recovering from errors, and supporting multiple languages. Its design is a nice illustration of an approach I call toolchain extension: realising new facilities using a combination of source- and binary-level techniques, both pre- and post-compilation, together with the relevant run-time extensions. Critically, it re-uses the stock toolchain as much as possible, and takes care to preserve binary compatibility with code compiled without use of these extensions. The mainline implementation uses the CIL library; during 2014–15, Chris Diamand developed a Clang front-end, with help from David Chisnall. In 2016–17 Richard Tynan developed a version of the runtime for the FreeBSD kernel. Motto: unsafe languages allow mostly-safe implementations; combine compile- and run-time techniques to make this the pragmatic option.
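
For a flavour of what gets checked, consider the deliberately buggy fragment below. It is legal C, and an ordinary build runs it silently; a build with per-allocation type metadata can detect, at the point of the cast or the subsequent use, that the allocation was an int and not a struct point. (This is an illustrative bug, not libcrunch's actual diagnostic output.)

    #include <stdio.h>
    #include <stdlib.h>

    struct point { double x, y; };

    int main(void)
    {
        void *p = malloc(sizeof(int));   /* allocated as an int... */
        *(int *)p = 42;

        struct point *q = p;             /* ...but assumed to be a struct point */
        printf("%f\n", q->x);            /* reads beyond the 4-byte allocation;
                                          * a checker that knows the allocation's
                                          * type flags the bad assumption */
        free(p);
        return 0;
    }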

Rigorous specification of system services

The semantics of programming languages are well studied. Two equally crucial but much less studied areas are the semantics of linking and (separately) of system calls. The challenges of verified toolchains and verified run-time environments depend critically on accurate specifications of these services. With my colleagues on the REMS project, I'm working on detailed specifications of both ELF linking and the Linux system call interface. This also means developing tools for experimentally testing and refining these specifications. In summer 2015 Jon French developed tools for specifying system calls in terms of their memory footprints; a toy illustration of the footprint idea follows the links below. Motto: a specification for every instruction.

  • [OOPSLA 16b]
  • [some stuff about system calls in the pipeline—ask me]
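
As a toy illustration of the footprint idea, one can render a system call's memory behaviour as data; read(fd, buf, count), for instance, writes the byte range [buf, buf+count) and touches no other user memory. The encoding and names below are mine, purely for illustration; the actual specifications are far richer and machine-processed.

    #include <stdio.h>

    enum access { READS, WRITES };
    struct extent { enum access how; const char *base_arg; const char *len_arg; };
    struct footprint { const char *syscall; struct extent extents[4]; };

    /* read(2): one writable extent, described in terms of its arguments. */
    static const struct footprint read_fp = {
        .syscall = "read",
        .extents = { { WRITES, "buf", "count" } }
    };

    int main(void)
    {
        const struct extent *e = &read_fp.extents[0];
        printf("%s: %s extent [%s, %s+%s)\n", read_fp.syscall,
               e->how == WRITES ? "writable" : "readable",
               e->base_arg, e->base_arg, e->len_arg);
        return 0;
    }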

Principled approaches to debugging information

Debugging native code often involves fighting the tools. Programmers must trust to luck that the debugger can walk the stack, print values or call functions; when this fails, they must work with incomplete or apparently impossible stack traces, “value optimized out” messages, and the like. Obtaining debugging insight is even harder in production environments, where unreliable debugging infrastructure risks both downtime and exploits (see Linus Torvalds's remarks in a well-known mailing-list thread, among others). With REMS colleagues and Francesco Zappa Nardelli, I'm working on tools and techniques for providing a robust, high-quality debugging experience. This comes in several flavours: at its simplest, it means sanity-checking compiler-generated debugging information. At its most complex, it means propagating more information across compiler passes, to enable debugging information that is correct by construction. Debugging is also in tension with compiler optimisation, and is also inherently in need of sandboxing (to limit who can consume what application state). Motto: if it compiles, it should also debug.

As of April 2016, this work is at an early stage.
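
A minimal reproduction of the familiar failure mode: compile the following with optimisation (say -O2), set a breakpoint on the return statement, and ask the debugger for total. Whether you get an answer, “value optimized out”, or a wrong value depends entirely on the quality of the compiler-generated location information.

    #include <stdio.h>

    static long sum(const long *xs, int n)
    {
        long total = 0;              /* at -O2, may live only in a register,
                                      * or be folded away entirely */
        for (int i = 0; i < n; i++)
            total += xs[i];
        return total;                /* breakpoint here: can we print total? */
    }

    int main(void)
    {
        long xs[] = { 1, 2, 3 };
        printf("%ld\n", sum(xs, 3));
        return 0;
    }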

Cross-language programming without FFIs

Language barriers are holding back programming. They splinter the ecosystem into duplicated fragments, each repeating work. Foreign function interfaces (FFIs) are the language implementers' attempt at a solution, but do not serve programmers' needs. FFIs must be eliminated! Instead, using a descriptive plane (again, provided by liballocs), we can make higher-level language objects appear directly within the same metamodel that spans the entire Unix process, alongside lower-level ones, which are mapped into each language's “object space”. For example, in my hacked nodeJS (quite a bit of V8 hacking was required too), all native functions visible to the dynamic linker are accessible by name under the existing process object. You're no longer dependent on hand-written C++ wrapper code, nor packages like node-ffi to access these objects. The type information gathered by liballocs enables direct access, out-of-the-box. It's far from a robust system at the moment, but the proof-of-proof-of-concept-concept is encouraging. Motto: FFIs are evil and must be destroyed, but one VM need not (cannot) rule them all.
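
The C-level view of the problem is easy to state: the dynamic linker happily maps a name to an address, but says nothing about the symbol's type, so the signature must be restated by hand, unsafely, and that restating is essentially what FFI wrapper code is. Type metadata of the liballocs kind is what lets a language runtime discover the signature instead. (A sketch of the problem, not of the hacked nodeJS itself.)

    #define _GNU_SOURCE   /* for RTLD_DEFAULT on glibc */
    #include <stdio.h>
    #include <dlfcn.h>

    int main(void)
    {
        void *sym = dlsym(RTLD_DEFAULT, "strlen");  /* name -> address only */
        if (!sym) return 1;

        /* The signature below is hand-asserted; nothing checks it. With
         * per-process type metadata, a runtime could look it up instead. */
        size_t (*f)(const char *) = (size_t (*)(const char *))sym;
        printf("%zu\n", f("hello"));                /* prints: 5 */
        return 0;
    }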

Comprehensive, easy-to-use instrumentation on managed platforms

Programmers often want to develop custom tooling to help them profile and debug their applications. On virtual machines like the JVM, this traditionally means messing with low-level APIs such as JVMTI, or at best, frameworks like ASM which let you rewrite the application's bytecode. In Lugano, together with my DAG colleagues, I worked on a new programming model for these tasks based on aspect-oriented primitives (join points and advice), and also a yet higher-level model based on an event processing abstraction reminiscent of publish-subscribe middleware. On the run-time side, our infrastructure addresses the critical problem of isolation: traditionally, instrumentation and application share a virtual machine, and may interfere with one another in ways that are difficult to predict and impossible, in general, to avoid except by arbitrarily excluding shared classes (such as java.lang.* and the like). Our work gets around this by avoiding doing any bytecode instrumentation within the VM—instead, at any instrumentation point we insert only a straight-to-native code path that hands off run-time events to a separate process (the “shadow VM”) which performs the analysis. This allows much stronger coverage properties than are possible with in-VM bytecode instrumentation, and nicely handles distributed applications—but also brings challenges with event synchronisation and ordering. Motto: robust, high-coverage instrumentation without bytecode munging or hacky exclusion lists.
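
The shape of the handoff, reduced to portable C for illustration (the real system inserts native-bound stubs into JVM bytecode, batches events per thread, and must confront the ordering problems just mentioned): instrumentation only ever writes event records to a channel, and all analysis code runs in a separate process.

    #include <stdio.h>
    #include <stdint.h>
    #include <unistd.h>
    #include <sys/wait.h>

    struct event { uint32_t kind; uint64_t thread; uint64_t payload; };

    int main(void)
    {
        int fds[2];
        if (pipe(fds) != 0) return 1;

        if (fork() == 0) {                 /* the analysis ("shadow") process */
            struct event e;
            close(fds[1]);
            while (read(fds[0], &e, sizeof e) == sizeof e)
                fprintf(stderr, "event %u from thread %llu\n",
                        (unsigned)e.kind, (unsigned long long)e.thread);
            _exit(0);
        }

        close(fds[0]);
        struct event e = { 1, 42, 0 };     /* emitted at an instrumentation
                                            * point in the observed program */
        write(fds[1], &e, sizeof e);
        close(fds[1]);                     /* EOF lets the analysis drain */
        wait(NULL);
        return 0;
    }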

Efficient data structures for address-associative mappings

As a by-product of my work on reflective metadata in Unix processes, I've built various data structures for fast indexing of in-memory structures by their virtual address. The key idea is to use virtual address translation itself as the first stage of any lookup. This approach has its roots in folklore, but some of the extensions and variants I've built seem to be novel. Combined, they add a complement of abilities including support for both wide and narrow range queries, iteration, and efficiency under differing degrees of sparseness and clusteredness in the key-space usage. These structures are pleasingly simple to code, and have been known to offer significant out-of-the-box performance improvements relative to more complex hand-rolled structures. Their overall performance characteristics are, however, complex and non-obvious. A personal research project of mine is to explore, measure and characterise these characteristics. Motto: virtual memory is hardware-assisted associative lookup.

This work is at an early stage.
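
A minimal sketch of the base technique, assuming 64-bit Linux and a 48-bit user address space: reserve one huge, lazily materialised table and let address translation do the associative part of every lookup. The real structures store richer entries, use other granularities, and support range queries; this sketch stores one byte of metadata per 64KiB chunk of address space.

    #include <stdio.h>
    #include <stdlib.h>
    #include <stdint.h>
    #include <sys/mman.h>

    #define CHUNK_SHIFT 16                            /* one entry per 64KiB */
    #define TABLE_SIZE  (1ULL << (48 - CHUNK_SHIFT))  /* 4GiB of virtual space */

    static uint8_t *table;

    static void table_init(void)
    {
        /* Reserve the whole table up front; pages are only materialised
         * on first touch, so sparse key usage stays cheap. */
        table = mmap(NULL, TABLE_SIZE, PROT_READ | PROT_WRITE,
                     MAP_PRIVATE | MAP_ANONYMOUS | MAP_NORESERVE, -1, 0);
        if (table == MAP_FAILED) { perror("mmap"); exit(1); }
    }

    static void set_meta(const void *p, uint8_t m)
    {
        table[(uintptr_t)p >> CHUNK_SHIFT] = m;
    }

    static uint8_t get_meta(const void *p)
    {
        return table[(uintptr_t)p >> CHUNK_SHIFT];
    }

    int main(void)
    {
        int x;
        table_init();
        set_meta(&x, 42);                /* tag the chunk containing &x */
        printf("%d\n", get_meta(&x));    /* one shift plus one load: the MMU
                                          * already did the associative work */
        return 0;
    }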

Radically easier composition

Programming by composition is the norm, but the norm also imposes needlessly high costs on developers. Keeping up with APIs as they evolve is a familiar drain on programmers' time. Inability to migrate to a different underlying library or platform is a familiar inhibitor of progress. It doesn't need to be this way. Why isn't software soft, malleable and composable? Why is so much tedious work left to the human programmer? My PhD work explored an alternative approach in which software is composed out of units which aren't expected to match precisely. A separate set of languages and tools, collectively called an “integration domain”, is provided to tackle the details of resolving this mismatch. I designed the language Cake to provide a rule-based mostly-declarative notation for API adaptation. The Cake compiler accepts a bunch of rules relating calls and data structure elements between APIs, including in various context-sensitive fashions. It then generates (nasty C++) wrapper code, saving the programmer from writing wrappers by hand. Motto: down with manual glue coding, down with silos!
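
To see what is being automated, here is the kind of adapter that otherwise gets written by hand, using APIs invented for this example: a caller expecting new_draw is composed with a component providing old_draw. A real composition needs many such functions, which is exactly the tedium a few declarative Cake rules can generate away.

    #include <stdio.h>

    struct rect { int x, y, w, h; };

    /* The component we happen to have... */
    static void old_draw(int x, int y, int w, int h)
    {
        printf("drawing %dx%d at (%d,%d)\n", w, h, x, y);
    }

    /* ...adapted, by hand, to the interface the client expects.
     * This is the glue that Cake generates from relational rules. */
    static void new_draw(const struct rect *r)
    {
        old_draw(r->x, r->y, r->w, r->h);
    }

    int main(void)
    {
        struct rect r = { 10, 20, 640, 480 };
        new_draw(&r);
        return 0;
    }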

What is that?

A recurring theme of my work is that I like to ask big questions, including questions of the form “what is X?” (for various X). So far the X have included types (see “In search of types” below), operating systems (“The operating system: should there be one?”) and the C language (“Some were meant for C”).

Published articles

  • [Onward! 17] Stephen Kell. Some were meant for C: the endurance of an unmanageable language. In Proceedings of the ACM International Symposium on New Ideas, New Paradigms, and Reflections on Programming and Software (Onward!), October 2017 (to appear).
    pdf (author's version) abstract

    Abstract: The C language leads a double life: as an application programming language of yesteryear, perpetuated by circumstance, and as a systems programming language which remains a weapon of choice decades after its creation. This essay is a C programmer's reaction to the call to abandon ship. It questions several properties commonly held to define the experience of using C; these include unsafety, undefined behaviour, and the motivation of performance. It argues all these are in fact inessential; rather, it traces C's ultimate strength to a communicative design which does not fit easily within the usual conception of “a programming language”, but can be seen as a counterpoint to so-called “managed languages”. This communicativity is what facilitates the essential aspect of system-building: creating parts which interact with other, remote parts—being “alongside” not “within”.

    author's notes

    This is an essay I wrote after realising that the reasons many PL researchers think motivate people to (still) program in C aren't actually the reasons that I, as a C programmer, find hold true in my own case.

  • [OOPSLA 16a] Stephen Kell. Dynamically diagnosing run-time type errors in unsafe code. In Proceedings of the ACM SIGPLAN International Conference on Object-Oriented Programming, Systems, Languages, and Applications (OOPSLA), November 2016.
    pdf preprint abstract

    Abstract: Existing approaches for detecting type errors in unsafe languages are limited. Static analysis methods are imprecise, and often require source-level changes, while most dynamic methods check only memory properties (bounds, liveness, etc.), owing to a lack of run-time type information. This paper describes libcrunch, a system for binary-compatible run-time type checking of unmodified unsafe code, currently focusing on C. Practical experience shows that our prototype implementation is easily applicable to many real codebases without source-level modification, correctly flags programmer errors with a very low rate of false positives, offers a very low run-time overhead, and covers classes of error caught by no previously existing tool.

    author's notes

    This is a technical paper describing a system (called libcrunch) for run-time type checking in native code (C, for now). Perhaps the most important message of the paper, albeit not strictly its novelty, is that we can check properties of pointers without attaching metadata to the pointers themselves. Indeed, propagating metadata with pointers is quite expensive, whereas looking up properties of the pointed-to objects can be made quite cheap.

  • [OOPSLA 16b] Stephen Kell, Dominic P. Mulligan, Peter Sewell. The missing link: explaining ELF static linking, semantically. In Proceedings of the ACM SIGPLAN International Conference on Object-Oriented Programming, Systems, Languages, and Applications (OOPSLA), November 2016.
    pdf preprint abstract

    Abstract: Beneath the surface, software usually depends on complex linker behaviour to work as intended. Even linking hello_world.c is surprisingly involved, and systems software such as libc and operating system kernels rely on a host of linker features. But linking is poorly understood by working programmers and has largely been neglected by language researchers.

    In this paper we survey the many use-cases that linkers support and the poorly specified linker speak by which they are controlled: metadata in object files, command-line options, and linker-script language. We provide the first validated formalisation of a realistic executable and linkable format (ELF), and capture aspects of the Application Binary Interfaces for four mainstream platforms (AArch64, AMD64, Power64, and IA32). Using these, we develop an executable specification of static linking, covering (among other things) enough to link small C programs (we use the example of bzip2) into a correctly running executable. We provide our specification in Lem and Isabelle/HOL forms. This is the first formal specification of mainstream linking. We have used the Isabelle/HOL version to prove a sample correctness property for one case of AMD64 ABI relocation, demonstrating that the specification supports formal proof, and as a first step towards the much more ambitious goal of verified linking. Our work should enable several novel strands of research, including linker-aware verified compilation and program analysis, and better languages for controlling linking.

    author's notes

    This is a technical paper about a formal but realistic (non-idealised) specification of Unix linking. Its focus is mostly on static linking of ELF files, although not exclusively. It describes our ongoing effort to produce an executable formal specification of linking that can be used as a link checker, a basis for mechanised reasoning about compiler and linker implementations, and as a document for developers to consult. Most critically perhaps, it explains why linking matters: why linking is not just about separate compilation but actually affects program semantics. It details a large zoo of “linking effects” programmers can achieve by (and often only by) instructing the linker.

  • [Software 16] Yudi Zheng, Stephen Kell, Lubomír Bulej, Haiyang Sun, Walter Binder. Comprehensive multi-platform dynamic program analysis for Java and Android. IEEE Software 33(4), July 2016.
    online edition abstract

    Abstract: Dynamic program analyses, such as profiling, tracing and bug-finding tools, are essential for software engineering. Unfortunately, implementing dynamic analyses for managed languages such as Java is unduly difficult and error-prone, because the runtime environments provide only complex low-level mechanisms. Currently, programmers writing custom tooling must expend great effort in tool development and maintenance, while still suffering substantial limitations such as incomplete code coverage or lack of portability. Ideally, a framework would be available in which dynamic analysis tools could be expressed at a high level, robustly, with high coverage and supporting alternative runtimes such as Android. We describe our research on an “all-in-one” dynamic program analysis framework which uses a combination of techniques to satisfy these requirements.

    author's notes

    This is an accessible introduction to the DiSL and ShadowVM systems for writing dynamic analyses for Java programs. It also covers recent developments in supporting a less cooperative runtime, namely Android's Dalvik VM, which brings particular challenges.

  • [Onward! 15] Stephen Kell. Towards a dynamic object model within Unix processes. In Proceedings of the ACM International Symposium on New Ideas, New Paradigms, and Reflections on Programming and Software (Onward!), October 2015.
    published version (via ACM Author-izer) pdf preprint abstract

    Abstract: Programmers face much complexity from the co-existence of “native” (Unix-like) and virtual machine (VM) “managed” run-time environments. Rather than having VMs replace Unix, we investigate whether it makes sense for Unix to “become a VM”, in the sense of evolving its user-level services to subsume services offered by VMs. We survey the (little-understood) VM-like features in modern Unix, noting common shortcomings: a lack of semantic metadata (“type information”) and the inability to bind from objects “back” to their metadata. We describe the design and implementation of a system, liballocs, which adds these capabilities in a highly compatible way, and explore its consequences.

    author's notes

    This is a fairly technical paper introducing liballocs as a “missing link” in the design of operating systems. It describes some things we can already do (witness libcrunch) and things we expect to be able to do with it. It also puts liballocs in context, relating it both to the Unix tradition and to the Lisp- and Smalltalk-style virtual machine tradition.

  • [Onward! 14] Stephen Kell. In search of types. In Proceedings of the ACM International Symposium on New Ideas, New Paradigms, and Reflections on Programming and Software (Onward!), October 2014.
    pdf (author's version) abstract

    Abstract: The concept of “type” has been used without a consistent, precise definition in discussions about programming languages for 60 years. In this essay I explore various concepts lurking behind distinct uses of this word, highlighting two traditions in which the word came into use largely independently: engineering traditions on the one hand, and those of symbolic logic on the other. These traditions are founded on differing attitudes to the nature and purpose of abstraction, but their distinct uses of “type” have never been explicitly unified. One result is that discourse across these traditions often finds itself at cross purposes, such as overapplying one sense of “type” where another is appropriate, and occasionally proceeding to draw wrong conclusions. I illustrate this with examples from well-known and justly well-regarded literature, and argue that ongoing developments in both the theory and practice of programming make now a good time to resolve these problems.

    author's notes

    This is an essay I wrote out of discomfort in the vagueness of the word “type”, both in popular usage and among “expert” researchers. I finished it with the feeling that I'd just written “one to throw away”, so it may have been more illuminating for me to write than for you to read. Nevertheless I am very happy to receive comments. Subject to demand, I'll also be happy to accommodate any corrections (including corrections of omissions, where practical) in the author's version.

  • [PLOS 13] Stephen Kell. The operating system: should there be one? In Proceedings of the 7th Workshop on Programming Languages and Operating Systems, ACM, November 2013.
    pdf abstract

    Abstract: Operating systems and programming languages are often informally evaluated on their conduciveness towards composition. We revisit Dan Ingalls' Smalltalk-inspired position that “an operating system is a collection of things that don't fit inside a language; there shouldn't be one”, discussing what it means, why it appears not to have materialised, and how we might work towards the same effect in the postmodern reality of today's systems. We argue that the trajectory of the “file” abstraction through Unix and Plan 9 culminates in a Smalltalk-style object, with other filesystem calls as a primitive metasystem. Meanwhile, the key features of Smalltalk have many analogues in the fragmented world of Unix programming (including techniques at the library, file and socket level). Based on the themes of unifying OS- and language-level mechanisms, and increasing the expressiveness of the meta-system, we identify some evolutionary approaches to a postmodern realisation of Ingalls' vision, arguing that an operating system is still necessary after all.

    author's notes

    This is a bit of a curiosity. It's a position paper which I wrote after Michael Haupt pointed me at the cited Ingalls article. Every so often, the idea emerges that we don't need operating systems if we have language runtimes. This idea doesn't seem to stick, but it's hard to pin down what's wrong with it. Here I argue that a neglected value of operating systems is in their meta-abstractions, of which the Unix file is one classic example. These abstractions are not part of any language (or part of all of them, depending on how you see it). In fact, I argue the OS could and should provide much more of a meta-system. The paper makes a number of contrasts between Unix files and Smalltalk objects, then proposes a few concrete suggestions for evolutionarily improving the Unix-like metasystem towards a more Smalltalk-like one.

  • [VMIL 13] Yudi Zheng, Lubomír Bulej, Cheng Zhang, Stephen Kell, Danilo Ansaloni, Walter Binder. Dynamic optimization of bytecode instrumentation. In Proceedings of the 7th Workshop on Virtual Machines and Intermediate Languages, ACM, October 2013.
    abstract

    Abstract: Accuracy, completeness, and performance are all major concerns in the context of dynamic program analysis. Emphasizing one of these factors may compromise the other factors. For example, improving completeness of an analysis may seriously impair performance. In this paper, we present an analysis model and a framework that enables reducing analysis overhead at runtime through adaptive instrumentation of the base program. Our approach targets analyses implemented with code instrumentation techniques on the Java platform. Overhead reduction is achieved by removing instrumentation from code locations that are considered unimportant for the analysis results, thereby avoiding execution of analysis code for those locations. For some analyses, our approach preserves result accuracy and completeness. For other analyses, accuracy and completeness may be traded for a major performance improvement. In this paper, we explore accuracy, completeness, and performance of our approach with two concrete analyses as case studies.

    author's notes

    This paper describes a mechanism for dynamically undeploying and redeploying bytecode instrumentation, and its application to some scenarios where this can usefully optimise an analysis without sacrificing too much correctness or completeness. My personal favourite thing about it is the observation that analyses maintain collections of repeatedly mutated values (profile data, monitors, etc.), and the transition rules obeyed by these mutations give us insight into when instrumentation can be removed. We can only exploit this in a very limited set of cases at present, but the potential for future work is interesting.

  • [GPCE 13] Lukas Marek, Stephen Kell, Yudi Zheng, Lubomír Bulej, Walter Binder, Petr Tůma, Danilo Ansaloni, Aibek Sarimbekov and Andreas Sewe. ShadowVM: robust and comprehensive dynamic program analysis for the Java platform. In Proceedings of the 12th International Conference on Generative Programming: Concepts & Experiences, ACM, October 2013.
    abstract

    Abstract: Dynamic analysis tools are often implemented using instrumentation, particularly on managed runtimes including the Java Virtual Machine (JVM). Performing instrumentation robustly is especially complex on such runtimes: existing frameworks offer limited coverage and poor isolation, while previous work has shown that apparently innocuous instrumentation can cause deadlocks or crashes in the observed application. This paper describes ShadowVM, a system for instrumentation-based dynamic analyses on the JVM which combines a number of techniques to greatly improve both isolation and coverage. These centre on the offload of analysis to a separate process; we believe our design is the first system to enable genuinely full bytecode coverage on the JVM. We describe a working implementation, and use a case study to demonstrate its improved coverage and to evaluate its runtime overhead.

    author's notes

    This paper describes a system for dynamic analysis of Java bytecode programs in a highly isolated fashion, by running the analysis in a separate process, adding only straight-to-native instrumentation paths within the Java bytecode itself. This sidesteps several coverage and robustness issues that plague in-VM analysis, as detailed by our VMIL 2012 workshop paper. The approach presented in this paper is probably the best you can do in a way that's (notionally) portable across JVMs. Its performance is sadly not great, but the next step (how to modify the VM's interfaces to support more efficient approaches) is set to be very interesting. This paper also gives an initial treatment of synchronisation requirements (i.e. relative ordering of events in the observed program w.r.t. their observation by the analysis), which vary hugely from one analysis to another and have a critical influence on achievable performance.

  • [PASTE 13] Aibek Sarimbekov, Stephen Kell, Lubomír Bulej, Andreas Sewe, Yudi Zheng, Danilo Ansaloni and Walter Binder. A comprehensive toolchain for workload characterization across JVM languages. In Proceedings of the 11th ACM SIGPLAN–SIGSOFT Workshop on Program Analysis for Software Tools and Engineering, ACM, June 2013.
    abstract

    Abstract: The Java Virtual Machine (JVM) today hosts implementations of numerous languages. To achieve high performance, JVM implementations rely on heuristics in choosing compiler optimizations and adapting garbage collection behavior. Historically, these heuristics have been tuned to suit the dynamics of Java programs only. This leads to unnecessarily poor performance in case of non-Java languages, which often exhibit systematic differences in workload behavior. Dynamic metrics characterizing the workload help identify and quantify useful optimizations, but so far, no cohesive suite of metrics has adequately covered properties that vary systematically between Java and non-Java workloads. We present a suite of such metrics, justifying our choice with reference to a range of guest languages. These metrics are implemented on a common portable infrastructure which ensures ease of deployment and customization.

    author's notes

    This paper describes an effort to extend the space of dynamic metrics defined at the JVM level so that they cover dimensions which exhibit variation between Java and non-Java languages running on the JVM. It describes both a new selection of metrics, and an infrastructure on which they are implemented. One of the parts I find neat is the query-based definition of a large subset of the metrics. Currently this is mostly a convenience, but I hope that in future work, this will prove useful for identifying new kinds of profiling information which is cheap enough to collect and use during dynamic compilation. (In other words, we want cheap observations which do a reasonable job of predicting the workload properties characterised by more expensive metrics, like the ones in this paper.)

  • [ECOOP 13] Danilo Ansaloni, Stephen Kell, Lubomír Bulej, Yudi Zheng, Walter Binder and Petr Tůma. Enabling modularity and re-use in dynamic program analysis tools for the Java Virtual Machine. In Proceedings of the 27th European Conference on Object-Oriented Programming, Springer-Verlag, July 2013.
    abstract

    Abstract: Dynamic program analysis tools based on code instrumentation serve many important software engineering tasks such as profiling, debugging, testing, program comprehension, and reverse engineering. Unfortunately, constructing new analysis tools is unduly difficult, because existing frameworks offer little or no support to the programmer beyond the incidental task of instrumentation. We observe that existing dynamic analysis tools re-address recurring requirements in their essential task: of maintaining state which captures some property of the analysed program. This paper presents a general architecture for dynamic program analysis tools which treats the maintenance of analysis state in a modular fashion, consisting of mappers decomposing input events spatially, and updaters aggregating them over time. We show that this architecture captures the requirements of a wide variety of existing analysis tools.

    author's notes

    This paper is a move towards enabling a more event-driven, reactive style of programming in the creation of dynamic analysis tools. We built an analysis framework, FRANC, based around a library of instrumentation primitives (event sources) together with higher-level “state-oriented” components for decomposing and aggregating events. The neatest idea in the paper is to separate the spatial decomposition of incoming events from their temporal aggregation. This allows a “thin waist” hourglass design which opens up greater opportunity for composition and recombination of independent components.

  • [VMIL 12] Stephen Kell, Danilo Ansaloni, Walter Binder and Lukas Marek. The JVM is not observable enough (and what to do about it). In Proceedings of the Sixth Workshop on Virtual Machines and Intermediate Languages, ACM, October 2012.
    pdf preprint abstract

    Abstract: Bytecode instrumentation is a preferred technique for building profiling, debugging and monitoring tools targeting the Java Virtual Machine (JVM), yet is fundamentally dangerous. We illustrate its dangers with several examples gathered while building the DiSL instrumentation framework. We argue that no Java platform mechanism provides simultaneously adequate performance, reliability and expressiveness, but that this weakness is fixable. To elaborate, we contrast internal with external observation, and sketch some approaches and requirements for a hybrid mechanism.

    author's notes

    Shortly after joining USI, it became clear that a lot of our group's implementation work was concerned with working around shortcomings with the JVM. Moreover, we had seen some worrying problems with no apparent solution. The reason is that the JVM platform actively encourages tool construction by bytecode instrumentation. But writing tools which instrument bytecode safely is invariably hard and often (depending on the instrumentation) impossible. I don't just mean impossible for non-specialist programmers to get right; I mean outright impossible. Specifically, there is no way to avoid the risk of introducing real and disastrous bugs to the instrumented program. The reason is that instrumentation code is not protected from the base program, and runs at near-arbitrary points, so easily creates deadlock and reentrancy problems, which there is no mechanism to guard against. (Unlike in process-level approaches, “avoid shared data” is not an option. Even dropping to native code doesn't guarantee anything.) There are several more modest difficulties also. This paper is my attempt at turning my colleagues' war stories into a manifesto for VM improvements. It argues that instrumentation is a bad mechanism on which to base VM observation interfaces, and sketches out some alternatives. I take the opportunity to grind one or two personal axes, including the one about how in-VM debugger servers are a step backwards from compiler-generated debugging information.

  • [VMIL 11] Stephen Kell and Conrad Irwin. Virtual machines should be invisible. In Proceedings of the Fifth Workshop on Virtual Machines and Intermediate Languages, ACM, October 2011.
    pdf preprint abstract

    Abstract: Current VM designs prioritise implementor freedom and performance, at the expense of other concerns of the end programmer. We motivate an alternative approach to VM design aiming to be unobtrusive in general, and prioritising two key concerns specifically: foreign function interfacing and support for runtime analysis tools (such as debuggers, profilers etc.). We describe our experiences building a Python VM in this manner, and identify some simple constraints that help enable low-overhead foreign function interfacing and direct use of native tools. We then discuss how to extend this towards a higher-performance VM suitable for Java or similar languages.

    author's notes

    This is a short paper about some in-progress work that was begun with Conrad Irwin when he was a final-year undergraduate in Cambridge and I was his project supervisor. Conrad implemented a usable subset of Python which uses DWARF debugging information to interface with native libraries, rather than the usual wrapper generation step. This paper extends that idea in a few ways, culminating in a design which can (we hope!) allow native tools (gdb, Valgrind, etc.) to operate seamlessly across programs written in a diverse mixture of languages, while avoiding conventional foreign function interfacing overheads. I am continuing the implementation effort.

  • [FREECO 11] Stephen Kell. Composing heterogeneous software with style. In Proceedings of the First International Workshop on Free Composition, ACM, July 2011.
    pdf preprint abstract

    Abstract: Tools for composing software impose homogeneity requirements on what is composed—that modules must share a language, target the same libraries, or share other conventions. This inhibits cross-language and cross-infrastructure composition. We observe that a unifying representation of software turns heterogeneity of components into a matter of styles: recurring interface patterns that cross-cut large numbers of codebases. We sketch a rule-based language for capturing styles independently of composition context, and describe how it applies in two example scenarios.

    author's notes

    This is a short paper describing an idea which I developed in the last technical chapter of my PhD thesis. The idea is that interfaces differ stylistically—in things like how they pass arguments, how they report errors, and in some less routine ways—and that by capturing these styles independently of composition context, we could achieve a lot more compositions with a lot less programmer effort. It's pitched as an extension to Cake; so far it's unimplemented, with the exception of a few foundations in the compiler; I intend to implement it one day, but have no idea when I'll get time. The most compelling content of the paper is therefore the preliminary survey (read: brainstorm) of styles in well-known interfaces most readers will recognise.

  • [OOPSLA 10] Stephen Kell. Component adaptation and assembly using interface relations. In Proceedings of SPLASH 2010, OOPSLA Research track, October 2010.
    published version pdf preprint abstract

    Abstract: Software's expense owes partly to frequent reimplementation of similar functionality and partly to maintenance of patches, ports or components targeting evolving interfaces. More modular non-invasive approaches are unpopular because they entail laborious wrapper code. We propose Cake, a rule-based language describing compositions using interface relations. To evaluate it, we compare several existing wrappers with reimplemented Cake versions, finding the latter to be simpler and better modularised.

    author's notes

    This is a long research paper describing the design, implementation and evaluation of the basic Cake language and runtime.

  • [Onward! 09] Stephen Kell. The mythical matched modules: overcoming the tyranny of inflexible software construction. In the OOPSLA 2009 Companion, Onward! Innovation In Progress track, October 2009.
    published version pdf preprint abstract

    Abstract: Conventional tools yield expensive and inflexible software. By requiring that software be structured as plug-compatible modules, tools preclude out-of-order development; by treating interoperation of languages as rare, adoption of innovations is inhibited. I propose that a solution must radically separate the concern of integration in software: firstly by using novel tools specialised towards integration (the “integration domain”), and secondly by prohibiting use of pre-existing interfaces (“interface hiding”) outside that domain.

    author's notes

    This is a relatively “long view” on the position underlying my PhD work, expounding my gut feeling that the way we build software is bizarrely fragile and unrealistic, owing to the expectation that big pieces of software should be made from perfectly-fitting smaller pieces without any sort of “glue” or integration material. This is what makes software maintenance expensive, software evolution difficult and software functionality inflexible. I'm always especially glad to receive comments on the argument and general story presented by this paper.

  • [ICSE-NIER 09] Stephen Kell. Configuration and adaptation of binary software components. In Companion to the 31st International Conference of Software Engineering, New Ideas and Emerging Results track, May 2009.
    pdf preprint bib abstract

    Abstract: Existing black-box adaptation techniques are insufficiently powerful for a large class of real-world tasks. Meanwhile, white-box techniques are language-specific and overly invasive. We argue for the inclusion of special-purpose adaptation features in a configuration language, and outline the benefits of targeting binary representations of software. We introduce Cake, a configuration language with adaptation features, and show how its design is being shaped by two case studies.

    author's notes

    This is a short work-in-progress paper describing one part of my PhD work. The idea is that composition of software in binary form is desirable, both for convenience and because binaries offer a somewhat language-neutral model of software. The problem of linking together mismatched components in binary form, specifically at the object code level (meaning these components could be written in any number of languages including C and C++), has had little attention so far. However, a fairly descriptive model of binaries is already provided by debugging info standards like DWARF. The thesis of this work is that a domain-specific configuration language, with its baseline at the DWARF level of abstraction, can be a practical tool for describing compositions of mismatched binaries, where the language explicitly provides for adaptation to address mismatches in a black-box fashion. The main contribution is describing the proposed features of our language, which is called Cake, and the case-study experiences which are shaping it.

  • [JUCS 08] Stephen Kell. A survey of practical software adaptation techniques. Journal of Universal Computer Science, Special Issue on Software Adaptation, September 2008.
    pdf preprint pdf preprint (crop+2up) author's notes

    Writing a large survey is a task only a fool would undertake... so I had a go. Practical software adaptation techniques means techniques for bridging or modifying the interfaces (mostly...) of software components. It doesn't include adaptive techniques, as found in reflective and/or mobile middleware. The paper is unfortunately a bit disjointed, coming in two halves: first, it covers twentyish approaches found in the literature, and classifies them on what they do and how. Second, it discusses (borrowing some bits from my earlier workshop paper) various other aspects of such techniques: adapting component instances versus implementations, static versus dynamic components, elegance of model versus supporting heterogeneity, the utility of various distinctions between computational and communicational code, and what “loosely coupled” really means. In hindsight, a better survey would take all these latter issues and fold them into the first half's taxonomy, visiting the work in some order which allows elaboration on the more discursive matters as they are raised. It also needs a neater way, most likely a better graphical language, to describe the taxonomised nature of each technique, rather than relying on somewhat impenetrable verbal descriptions.

  • [SYANCO 07] Stephen Kell. Rethinking software connectors. Proceedings of the International Workshop on Synthesis and Analysis of Component Connectors, September 2007.
    pdf bib abstract

    Abstract: Existing work on software connectors shows significant disagreement on both their definition and their relationships with components, coordinators and adaptors. We propose a precise characterisation of connectors, discuss how they relate to the other three classes, and contradict the suggestion that connectors and components are disjoint. We discuss the relationship between connectors and coupling, and argue the inseparability of connection models from component programming models. Finally we identify the class of configuration languages, show how it relates to primitive connectors and outline relevant areas for future work.

    copyright notice

    ACM copyright notice: © ACM, 2007. This is the author's version of the work. It is posted here by permission of ACM for your personal use. Not for redistribution. The definitive version was published in SYANCO ’07. http://doi.acm.org/10.1145/1294917.1294918

    doi author's notes

    This is a somewhat strident and sweeping workshop paper where I try to decode the meaning and intentions of the many and various references to “software connectors” found in the literature. I then talk a bit about how this relates to coordination, adaptation and the concept of coupling, and give a sketchy outline of how I think compositional and communication-oriented concerns could or should be abstracted and supported by tools. Apart from the latter, there's not really any research contribution, except an attempt to hold to account the vague and flimsy motivational arguments that appear in the introductions of lots of (often otherwise good) papers.

See also my author page on DBLP (with much credit to Michael Ley for this superb service).

Peer-reviewed abstracts, tool demonstrations and similar

  • [APLAS 15] Haiyang Sun, Yudi Zheng, Lubomír Bulej, Stephen Kell and Walter Binder. Analyzing Distributed Multi-platform Java and Android Applications with ShadowVM. Tool demonstration, in Proceedings of APLAS 2015 (LNCS 9458, Programming Languages and Systems). doi
  • [DemoSplash 15] Haiyang Sun, Yudi Zheng, Lubomír Bulej, Walter Binder and Stephen Kell. Custom full-coverage dynamic program analysis for Android. Tool demonstration, in Companion Proceedings of SPLASH 2015. doi

Manuscripts, reports, dissertations etc.

  • Black-box composition of mismatched software components. Technical report, University of Cambridge Computer Laboratory, number UCAM-CL-TR-845, December 2013. tech report web page
    Note: this is content-identical to the dissertation.
  • Black-box composition of mismatched software components. PhD thesis, University of Cambridge, December 2010 (corrected May 2012). library catalogue record
  • System Support for Adaptation and Composition of Applications. Accepted for the EuroSys 2008 Doctoral Workshop.
    pdf author's notes

    This is quite a nice introduction, albeit brief and rather dense, to my PhD work. Or at least the problem, approach, ideas etc., since not much actual work is mentioned....

  • Practical system-level adaptation of heterogeneous components. PhD thesis proposal, approved January 2008 (revised March 2008).
    pdf pdf (crop+2up) author's notes

    This is a thesis proposal I submitted after about 12 months of PhD work. It reads nicely enough but is far too ambitious!

Talks

Interests

My research interests are outlined at the top of this page. In summary: although broad, they fall mostly within the intersections of systems, programming languages and software engineering.

I keep a calendar of approximate submission deadlines and event dates for most of the conferences and workshops that I might conceivably attend, together with a few for which it's verging on inconceivable. In case it's useful, here it is.

PhD work

For my PhD I worked on supporting software adaptation at the level of the operating system's linking and loading mechanisms. Here “adaptation” means techniques for connecting and combining software components which were developed independently, and therefore do not have matched interfaces. My emphasis was on techniques which are practical, adoptable and work with a wide variety of target code, at the (hopefully temporary) expense of safety and automation.

The main focus of my work was Cake, a special-purpose language for describing relations between the interfaces of binary components (specifically, relocatable object code). Cake makes heavy use of DWARF debugging information, and can be considered interesting in several ways: as a domain-specific rule-based programming language; as a “composition”, “configuration” or “linking” language; as a dynamic language; as a runtime system sharing commonalities with garbage collectors and debuggers. It does not really make contributions in the domains of module systems or linking models.

To find out more, please browse my publications, and do contact me for more information. There will hopefully be one or two more papers appearing on additional work that I did during my PhD years. My dissertation is now available too.

Support and acknowledgements

I'm very grateful to EPSRC and Cambridge Philosophical Society for the funds which supported my PhD research work and some related travel, and to the Graduate Research Fund and Emily & Gordon Bottomley Fund of Christ's College, EuroSys, The Royal Academy of Engineering, ACM SIGSOFT and ACM SIGPLAN for additional support of my research travel and conference attendance.

Students

For now, my research “leadership” is confined to the student projects I've been known to supervise, which are in the Teaching section. I am always interested in working with enthusiastic Bachelor's, Master's and doctoral students. I maintain a list of project ideas, and am always happy to talk about other ideas.

History

During 2005–06 I was a Research Assistant in Cambridge on the XenSE and Open Trusted Computing projects, under Steven Hand. Both projects sought to implement a practical secure computing platform, using virtualisation (and similar technologies) as the isolation mechanism.

XenSE never had a web page of its own, but you might want to look at the abstract on the project's EPSRC Grant Portfolio page, or check out the mailing list.

OpenTC was a large EU-funded project involving many major industrial and academic partners, focused on the use of Trusted Computing Group technology to realise many common secure computing use cases.

As part of my work as an RA, I became interested in secure graphical user interfaces including L4's Nitpicker, a minimal secure windowing system. I began work on ports of this system to Linux, XenoLinux and the Xen mini-OS: the Linux version became mostly functional (but not yet secure!) while the others were stymied by various limitations with shared memory on Xen. These limitations are mostly fixed now, but I haven't had time to revisit the project since. Feel free to contact me for more information. If you wanted to take these up, I'd be glad to hear from you.

Prehistory

Service and appointments

In research

Currently I'm a co-organiser of the 2018 VMIL workshop, on the PC for <Programming> 2019, and on the External Review Committee for OOPSLA 2019.

In the past couple of years I have been on the PC for SPLASH workshops (2016), a co-organiser of the Salon des Refusés workshop, and a reviewer for ACM TACO and Software: Practice and Experience.

Before that, I was on the PC for Onward! Essays (2015) and a contributor to a CCC/CRA visioning workshop report on funding priorities for topics in accessible computing. Previously I was an external reviewer for the ASE journal, on the programme committee for PPPJ 2013, publicity chair for SC 2013, on the programme committee for RESoLVE 2012 at ASPLOS, and on the shadow programme committee for EuroSys 2012.

Even longer ago, I was privileged to contribute external reviews to SVT at SAC 2012, ESOP 2010 and EuroSys 2009.

I'm a member of the ACM, ACM SIGPLAN and SIGOPS.

In Cambridge

From December 2015 until September 2017 I served on the University of Cambridge's Board of Scrutiny, which has an oversight role in University governance. During the 2016–17 year I also acted as the Board's Secretary.

At the Computer Laboratory I served on, and (from June 2017 until July 2018) chaired the “post-doc forum”, later renamed Research Staff Forum, representing research staff.

From May 2015 until July 2018, I was a postdoctoral affiliate at Christ's College (of which I'm also an alumnus, from my BA and PhD days).

From summer 2015 until summer 2018 I was an elected officer of the Postdocs of Cambridge Society (PdOC) which, in addition to providing networking and events within the cross-disciplinary postdoctoral community, represents postdoctoral staff to various University bodies.

Since May 2008 I've been a Fellow of the Cambridge Philosophical Society.

During my PhD, I had various roles in the NetOS group, in particular coordinating talklets from January 2009 until January 2010, holding a librarian-like role of curating a small group library and keeping track of the various books we had lying around, and being maintainer-of-sorts to the Atlas Room BBC Micro (about which I should write more some time).

Teaching

This section is outdated and will soon be updated to reflect teaching activities here at Kent.

Supervisions a.k.a. tutorials

In Cambridge I have supervised (tutored) many systems and programming courses from the Computer Science Tripos. The list below includes both current and past courses I supervised, together with any additional materials I prepared—including my own exercises. Some of my exercises have since been adopted by others and curated within the Otter system.

Current courses

Older courses

During spring 2011 I was a tutor for the Digital Systems course in Oxford.

Lectures

In April 2010 I gave a lecture to the MPhil in Advanced Computer Science class in Cambridge, as part of the Cambridge Programming Research Group mini-series within the Research Students' Lecture series. My lecture was entitled “Modularity – what, why and how”. Contact me for slides. Other lectures in the CPRG mini-series were given by Dominic Orchard, Max Bolingbroke and Robin Message.

Demonstrating, etc

During Michaelmas 2009, in Cambridge, I demonstrated the MPhil course Building an Internet Router, run by Andrew Moore.

During Easter 2008, 2009 and 2010 I was an Assessor for the numerical computing exercise undertaken by Part IA Natural Science students and lectured by Dr Frank King. (This course had no web presence!)

Projects

I'm interested in supervising bright and enthusiastic Bachelor's and Master's students for their individual projects. For ideas and to find out what I'm interested in, see my list of suggested projects. I'm also always extremely happy to talk to students who have their own ideas.

I've supervised several projects in the past. Bachelor's students at Cambridge can read my thoughts about Part II projects, see the project suggestions for 2010–11 from me and others in the NetOS group, or from the entire Lab and beyond, and contact me if you're interested. Previously I've been fortunate enough to work with the following final-year students:

  • 2017–18: (ongoing) Hrutvik Kanabar, Tyler Zhang, Joshua Blake, Jared Khan.
  • 2016–17: Tim Radvan, Nefarious Scheme: a self-optimising interpreter for a functional programming language (using RPython).
  • 2016–17: Shaun Steenkamp, Program slicing at debug time.
  • 2016–17: Cheng Sun, An observable OCaml via C and liballocs (co-supervised with Stephen Dolan).
  • 2016–17: Richard Tynan, Run-time type information in the FreeBSD kernel (co-supervised with David Chisnall).
  • 2016–17: Jonathan Wright, Mostly-precise garbage collection for the C programming language (co-supervised with David Chisnall).
  • 2015–16: Jonathan French, Symbolic execution of partial programs, with application to program testing and verification.
  • 2014–15: Chris Diamand, Run-time type checking in C with Clang and libcrunch (co-supervised with David Chisnall).
  • 2009–10: Conrad Irwin, A dynamic language with C-compatible runtime.
  • 2009–10: Kenneth Yu, An automated document organiser.
  • 2009–10: Andrew Thomas, A Java source-to-source optimising translator.
  • 2008–09: David Mack, A filesystem for the disorganised.

Software

My projects

Note: in late 2011 I started using GitHub. Over time, code for all my larger projects will start to appear on my GitHub page. For the time being, this page is likely to remain the more complete list.

My research work involves building software. Inevitably, this software is never “finished”, rarely reaches release-worthy state, and usually rots quickly even then. So not much here is downloadable yet; this will improve over time, but in the meantime I list everything and invite you to get in touch if anything sounds interesting. My main projects, past and present, are:

  • libdwarfpp, a C++ frontend to libdwarf's consumer interface that has grown into much more than that. It includes:
    • RAII-style wrapper classes for most of the opaque (“handle”) data types in libdwarf;
    • exceptions for error reporting;
    • a data-abstracted interface (some would say “strongly-typed”) to most DWARF v3 tags;
    • C++-style iterators for DWARF data, with various traversal policies;
    • similarly data-abstracted support for most classes of DWARF attribute values;
    • a couple of boost::graph interpretations of DWARF data;
    • accommodation of (and some abstractions for) multiple DWARF standards and vendor extensions (wip);
    • a DWARF expression evaluator (wip);
    • abstraction of much of the irregularity and redundancy in DWARF's handling of memory locations;
    • a pure C++ in-memory implementation of the ADTs, as an alternative to the libdwarf implementation (and useful for producing DWARF descriptions, although there is no DWARF output yet – but see dwarfidl).
    The only thing holding this back from release is some API cleanup that I want to do before letting it into the wild. I'm planning to do that Very Soon Now.
  • libcrunch, a runtime system and C front-end for run-time type checking (think: checking pointer casts in C) with good performance (think: 0–40% slowdown).
  • liballocs, a runtime system and C front-end for tracking allocations and their types, at run time, across a program's entire address space, with good performance (think: 0–10% slowdown).
  • node+liballocs, a JavaScript implementation (V8-based) with a whole-program view of program code and data (using liballocs); a bit sketchy right now.
  • dwarfidl, a human-friendly C-like language for most of the interface description subset of DWARF: the language is defined by a grammar for antlr, and the distribution includes skeletons of tools to parse-to and serialize-from the libdwarfpp ADTs (though sadly these are currently unfinished). Also included are tools for generating C++ header files and (preliminary support for) OCaml ctypes definitions given a DWARF description of a binary interface.
  • libdlbind, a library that you can think of as an allocator for JIT compilers. It offers a roughly malloc()-like API, but also lets you dynamically define linker symbols. This means JIT-compiled code becomes directly visible to debuggers and to clients of dlsym(). (There are some warts that need to be worked around in the loader before this can become smooth. You can use trap-syscalls to fake up these changes, however.) A sketch of the intended usage appears after this list.
  • trap-syscalls, a system for observing and rewriting system calls from within a user-level process. It's handy if you want to monitor or test hypotheses about system calls, or if you want to lightly virtualise your process by tweaking the system calls it makes (think mmap()). There are some scripts for interfacing it with SystemTap, too.
  • cake is a compiler and runtime for the composition language I developed during my PhD. It lets you link together chunks of object code with mismatched interfaces, by writing rules that relate the two interfaces. The compiler generates C++ glue code, and the runtime handles the complexities of managing multiple sets of logically related objects. There are a few bits of code generation that are not implemented yet, but I'm working Right Now on getting this worthy of an initial release.
  • dwarfpython is an implementation of Python with the goal of allowing seamless use of native debugging and profiling tools (think: gdb, gprof, ltrace, Valgrind, ...). It also hugely simplifies FFI coding tasks (think: no more Swig, and no more Python C API). It does so by aggressively adopting conventions and infrastructure used by native code—notably debugging infrastructure, which, surprisingly, turns out to be essentially sufficient to describe full Pythonic dynamism. At its core, dwarfpython remains a (deliberately) simple single-pass interpreter of the Python language. Conrad Irwin implemented an earlier version of the design during 2009–10, focusing on FFI aspects and a simple subset of the Python language. Since then I've been working on the tool support aspect and improving language coverage; it's a background task, but development is ongoing.
  • nitpicker on Linux: some years ago I began a Linux port of the Nitpicker secure windowing system, and got it to the vaguely usable point of displaying windows and running its event loop. The actual security properties didn't get going, mainly because of the spoofability of UDP port numbers, although this can be worked around with some hacking on the IDL compiler. There has since been some third-party work on this, building on my efforts, that I haven't checked out but probably should.
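
To make the libdlbind idea above concrete, here is a minimal sketch of how a JIT might use it. This is an illustration only: dlcreate(), dlalloc() and dlbind() are assumed names with guessed signatures, standing in for whatever the real API provides, and error handling is omitted.

    /* Hypothetical usage of a libdlbind-style allocator. The extern
     * declarations below are assumptions for illustration, not the
     * library's actual API. */
    #define _GNU_SOURCE   /* for RTLD_DEFAULT on glibc */
    #include <stddef.h>
    #include <string.h>
    #include <dlfcn.h>

    extern void *dlcreate(const char *libname);            /* assumed: create a fresh object */
    extern void *dlalloc(void *lib, size_t size,
                         unsigned flags);                   /* assumed: malloc()-like */
    extern void  dlbind(void *lib, const char *symname,
                        void *addr, size_t size);           /* assumed: define a symbol */

    /* x86-64 machine code for: mov $42, %eax; ret */
    static const unsigned char code[] = { 0xb8, 0x2a, 0x00, 0x00, 0x00, 0xc3 };

    int (*make_answer(void))(void)
    {
        void *jitlib = dlcreate("myjit");
        void *buf = dlalloc(jitlib, sizeof code, /* executable (assumed flag) */ 1u);
        memcpy(buf, code, sizeof code);

        /* The point of the exercise: give the JITted code a linker symbol,
         * so debuggers and dlsym() clients can see it by name. */
        dlbind(jitlib, "answer", buf, sizeof code);

        return (int (*)(void)) dlsym(RTLD_DEFAULT, "answer");
    }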

Smaller contributions

  • m4ntlr, a small set of macro definitions for use with antlr, which help with making grammars language-independent and with splitting grammars into separate files.
  • libmallochooks hooks the C library's malloc() family of functions and provides a higher-level set of callout hooks, useful for building various kinds of heap instrumentation (older, simpler, GNU-specific version: malloc_hooks.c).
  • memtable.h is a C99 header implementing a very fast associative data structure for lookups keyed on memory addresses (see this blog post for more); a sketch of the idea appears after this list.
  • heap_index_hooks.c is a library (loadable with LD_PRELOAD) for instrumenting the GNU malloc to keep an index of the allocated heap blocks, keyed on address (and implemented, you guessed it, by a memtable).
  • gcc-lgraph is an easy-to-use wrapper around gcc that will also output a module linkage graph (think “def–use” relationships) in the DOT language when you build a shared object or executable. These graphs tend not to visualise very well for nontrivial programs.
  • dot-cluster is a bunch of Python scripts that attempt to produce a clustered approximation to a large-ish graph (thousands of nodes/edges) in the DOT language. The main aim is to reduce the edge count, by clustering sets of nodes whose in- and out- edge sets connecting them with the outside are similar to each other. Unfortunately the success of the current clustering algorithms is haphazard at best; I have sketched out some improvements and might get round to implementing them if someone shows interest.
  • libsrk31c++ is a rag-bag of C++ utilities that either should be in the standard library, or actually are in boost but that I didn't discover there before I'd reimplemented them.
  • libc++fileno is a distribution of Richard Kreckel's fileno() function for popular implementations of the C++ file I/O library.
  • dwarfhpp is a tool for generating C++ headers that let you write code directly against an object file—it's included in dwarfidl, and used in the implementation of cake.
  • objdump-all is a wrapper script for GNU objdump that gets you a useful objdump for a whole file in one go, dumping code and data appropriately for their section type.
  • gsfake-convertbb is a wrapper for the convert tool of ImageMagick or GraphicsMagick that calculates bounding boxes for PDF files, and mimics the interface of Ghostscript. Use it with pdfcrop's --gscmd option to get more reliable bounding box detection. I've had some trouble using it with recent versions of convert, so let me know if you run into problems.
  • parens-filter.sh is a very short bash script for pretty-printing parenthesised tree representations. I've been using it on the output of my antlr-generated parser (specifically, on what gets printed by toStringTree()), but it probably works on other things (maybe S-expressions).
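
As promised above, here is a sketch of the memtable idea as I understand it: reserve an enormous, lazily-committed array indexed by high-order address bits, so that a lookup is one shift and one array access, while untouched regions of the table consume no physical memory. The granularity, entry type and address-space bound below are illustrative assumptions, not the header's actual parameters.

    /* A memtable-style structure (sketch): a huge sparse array indexed
     * by address, relying on demand paging to keep it cheap.
     * All constants here are assumptions for illustration. */
    #define _GNU_SOURCE   /* for MAP_ANONYMOUS, MAP_NORESERVE */
    #include <stddef.h>
    #include <stdint.h>
    #include <sys/mman.h>

    #define LOG_GRANULARITY 6u   /* one entry per 64-byte chunk of address space */
    #define COVERED_BITS    47u  /* user-space virtual addresses on x86-64 Linux */

    typedef uint16_t entry_t;    /* small per-chunk record */

    static entry_t *table;

    int memtable_init(void)
    {
        size_t nentries = (size_t)1 << (COVERED_BITS - LOG_GRANULARITY);
        /* MAP_NORESERVE reserves address space without committing memory;
         * pages of the table are faulted in lazily on first touch. */
        table = mmap(NULL, nentries * sizeof (entry_t),
                     PROT_READ | PROT_WRITE,
                     MAP_PRIVATE | MAP_ANONYMOUS | MAP_NORESERVE, -1, 0);
        return (table == MAP_FAILED) ? -1 : 0;
    }

    /* Constant-time lookup: shift the address, index the table. */
    static inline entry_t *memtable_entry(const void *addr)
    {
        return &table[(uintptr_t)addr >> LOG_GRANULARITY];
    }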

I've also submitted small patches to various open-source projects including LLVM (bugfix to bitcode linker), binutils (objcopy extension), gcc (documentation fix for gcj), Knit (compile fixes), Datascript (bug fixes), DICE (bug fixes), pdfjam (support “page template” option) and Claws Mail (support cookies file in RSS plugin). Some of them have even been applied. :-)

Goodies

Apart from my main development projects, I sometimes produce scripts and other odds and ends which might be useful to other people. Where time permits, I try to package them half-decently and make them available here.

For computer science researchers

I have written some scripts which attempt to retrieve decent BibTeX for a given paper (as a PDF or PostScript file). details

For researchers in a research institution

I've written a nifty script for printing papers, which helps people save paper, share printed-out papers and discover perhaps unexpected collaborators within their institution. details

For supervisors of Tripos courses (in Cambridge)

I have a Makefile which downloads and compiles Tripos past-paper questions. It's pretty much self-documenting. Here it is.

For general audiences

I have built a sizable collection of vaguely useful shell scripts and other small fragments. One day “soon” I will get round to publishing them. The biggest chunks are my login scripts, which use Subversion to share a versioned repository of config files across all the Unix boxes that I have an account on, and the makefile and m4 templates that build this web page. I need to clean these up a bit. In the meantime, if you're interested in getting hold of them, let me know.

Thoughts

Occasionally I write down some thoughts which somehow relate to my work. They now appear in the form of a blog. Posts are categorised into research, teaching, development, publishing, higher education and meta strands.

Personal

I have the beginnings of a personal web page. It's very sparse right now. Have a look, if you like.

Contact

Office: Cornwallis S130
E-mail: S.R.Kell@kent.ac.uk
Post: Dr Stephen Kell
School of Computing
Cornwallis Building
University of Kent
Canterbury, CT2 7NF
United Kingdom

Content updated at Tue 25 Sep 00:00:00 BST 2018.
