The University of Kent, Canterbury, Kent, CT2 7NZ, T +44 (0)1227 764000
I belong to the following research groups:
My present focus is automatic memory management for multicore systems. I have also been working on automatic memory management for real-time and embedded systems, virtual machines for these systems in general, and verification of applications for these systems. I am interested in experimental performance evaluation (benchmarking) of computer systems based on statistical methods. In the past I worked on a runtime system for distributed software components.
Garbage Collection for Multicore Systems
I am working on the EPSRC-funded Mirror GC project with Matthew Mole and Laurence Hellyer (Principal Investigator: Richard Jones, Co-Investigator: Fred Barnes, Visiting Researcher: Tony Hosking). The key problem is to design a collector that performs dynamic memory defragmentation in a parallel environment while still providing competitive performance.
Real-time Garbage Collection
In contrast with collectors for non-real-time systems, a real-time collector must do its work in short increments so that it does not cause the application to miss deadlines. In several of today's collector designs, this is achieved by making the collector activity interruptible at almost any time. A special scheduler then decides when to run the collector and when to run the application.
I implemented the Minuteman RTGC framework within Ovm, an open-source real-time Java virtual machine developed at Purdue University. I used this framework to experimentally compare the two major types of real-time garbage collector scheduling: slack-based scheduling and periodic scheduling [RTSS'09, TOCS'2011].
Collaborators: Jan Vitek, Filip Pizlo, Tony Hosking.
Incremental defragmentation requires barriers: special pieces of code executed at memory accesses. These barriers typically include a dereference of a forwarding pointer on every object read or write (also known as Brooks forwarding). Another option is replication, where each write has to be performed on potentially two object copies, but reads do not require a barrier. I proposed a variant of replication for uni-processor systems and implemented it in Minuteman [Concurrency and Computation]. I also proposed and implemented performance and memory optimisations of handles, yet another option for such barriers [ISMM'11].
Collaborators: Richard Jones (handles).
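The two barrier styles can be contrasted in a minimal sketch. This is illustrative only, not Minuteman's actual implementation; the class and method names (ObjHeader, Barriers, etc.) are hypothetical:

```java
// Sketch of the two barrier styles discussed above (hypothetical names,
// not Minuteman's code).
class ObjHeader {
    ObjHeader forward = this; // Brooks forwarding pointer; self if not moved
    int field;
}

class Barriers {
    // Brooks-style: every access first dereferences the forwarding
    // pointer, so both reads and writes pay one extra indirection.
    static int brooksRead(ObjHeader o)          { return o.forward.field; }
    static void brooksWrite(ObjHeader o, int v) { o.forward.field = v; }

    // Replication-style: reads go straight to the from-space copy
    // (no barrier), but a write must update both copies while the
    // object is being replicated.
    static int replicatedRead(ObjHeader fromCopy) { return fromCopy.field; }
    static void replicatedWrite(ObjHeader fromCopy, ObjHeader toCopy, int v) {
        fromCopy.field = v;
        if (toCopy != null) toCopy.field = v; // second copy, if one exists
    }
}
```

The trade-off is visible directly: Brooks forwarding taxes every access with an indirection, whereas replication makes reads free at the cost of doubled writes.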
Verification and Benchmarking of Java for Real-time and Embedded Systems
Both verification and benchmarking of real-time Java pose new challenges over plain Java. In real-time systems, timing is part of correctness. Also, the task scheduler is deterministic, priority-enforcing, and timing-dependent, whereas in plain Java it is non-deterministic and ignores priorities.
Collision Detector is a highly configurable real-time Java application benchmark. Based on simulated output from a radar, the modelled application detects whether any two aircraft are on a collision course. There are versions for RTSJ (Real-Time Specification for Java) with RTGC/scopes, SCJ (Safety-Critical Java), plain Java, and C, as well as a few stripped-down versions for embedded systems. I created a major revision of the benchmark when I needed it for RTGC experiments. My revision unified the plain Java and RTSJ versions, fixed several bugs, added stripped-down versions of the air traffic simulator, and added instrumentation points to allow fine-grained measurements [Concurrency and Computation].
Collaborators: Ben Titzer, Jan Vitek, Jeff Hagelberg, Filip Pizlo, Petr Maj, Ghaith Haddad, Daniel Tang, Lei Zhao.
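The core test such a detector performs can be sketched as a closest-approach check between two straight-line trajectories. This is a deliberate simplification for illustration, not the benchmark's actual detection code; all names are hypothetical:

```java
// Illustrative sketch: are two aircraft, flying straight at constant
// velocity, going to come within a safety radius of each other during
// the next 'horizon' seconds? Not the benchmark's actual algorithm.
class CollisionCheck {
    static boolean onCollisionCourse(double[] p1, double[] v1,
                                     double[] p2, double[] v2,
                                     double radius, double horizon) {
        double[] dp = new double[3], dv = new double[3];
        for (int i = 0; i < 3; i++) {
            dp[i] = p1[i] - p2[i]; // relative position
            dv[i] = v1[i] - v2[i]; // relative velocity
        }
        double a = dot(dv, dv), b = 2 * dot(dp, dv);
        // Time of closest approach, clamped into [0, horizon].
        double t = (a == 0) ? 0 : Math.max(0, Math.min(horizon, -b / (2 * a)));
        double minDist2 = 0;
        for (int i = 0; i < 3; i++) {
            double d = dp[i] + t * dv[i];
            minDist2 += d * d;
        }
        return minDist2 < radius * radius;
    }
    static double dot(double[] x, double[] y) {
        return x[0]*y[0] + x[1]*y[1] + x[2]*y[2];
    }
}
```

In the benchmark this kind of pairwise check runs under hard deadlines, which is what makes collector scheduling and barrier overheads observable.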
The new verification challenges in real-time Java include analysis of uncaught exceptions (memory access errors not present in plain Java), type analysis to bound worst-case execution time in the presence of virtual dispatch, blocking analysis to bound worst-case delay on locks, and memory analysis to bound memory usage for constrained embedded systems. RTEmbed is an extension for Java PathFinder that allows exhaustive testing / model checking of programs in RTSJ or SCJ. This is work in progress, though the tool can already check for memory assignment errors and race conditions. My key interests and contributions are in the checking algorithms, particularly in how to incorporate the notion of time into the model checker without making unrealistic assumptions about the underlying platform and without causing state explosion even for very simple programs.
Collaborators: Pavel Parizek, Martin Schoeberl, Michal Malohlava, Jan Vitek.
Performance Benchmarking Based on Statistical Methods
The performance of current computer systems depends to a high degree on the low-level functioning of hardware, operating systems, libraries, and virtual machines. These low-level details are too complex to comprehend in full and, moreover, are often undisclosed by vendors. Experimental methods, and particularly benchmarking, thus become a necessary part of software development. Even benchmarking is not a straightforward task, as some algorithms in the system are either non-deterministic or unpredictable.
In my PhD research I focused on the non-determinism observed between repeated executions of a benchmark and between repeated builds of a benchmark. The former was, in my system, caused by non-deterministic placement in physical memory and, consequently, conflict misses in the cache. The latter means that the build itself was non-deterministic, producing different binaries with genuinely different performance. I have seen this happen with the GNU C++ compiler, which randomised the names of symbols in anonymous C++ namespaces.
I proposed an ANOVA (ANalysis Of VAriance) based statistical evaluation method that made it possible to construct confidence intervals for mean execution time, taking into account multiple re-compilations and re-starts of the benchmark. Based on initial measurements, the method also allowed the optimal number of executions and re-builds to be estimated (in different benchmarks, the impact of these two sources of non-determinism was very different).
I created a system for automated regression benchmarking of the Mono platform. The system (now offline) automatically downloaded the latest Mono version every day (starting in August 2004 and stopping in October 2008), ran several benchmarks, and, based on the mentioned statistical method, automatically detected potential performance regressions (or improvements) [MASCOTS'05, EPEW'06].
Collaborators: Petr Tuma, Lubomir Bulej.
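The central idea, treating the per-build mean rather than the pooled runs as the unit of analysis, can be sketched as follows. This is a simplified illustration using a normal approximation (z = 1.96 for a 95% interval) rather than the full ANOVA-based method described above; the class and method names are hypothetical:

```java
// Sketch: confidence interval for mean execution time when both
// re-builds and re-runs contribute variance. Works on per-build means,
// so between-build non-determinism is not drowned out by pooling all
// runs together. Normal approximation only -- not the actual method.
class BenchmarkCI {
    // times[i][j] = execution time of run j of build i.
    // Returns { lower bound, upper bound } of an approximate 95% CI.
    static double[] confidenceInterval(double[][] times) {
        int nBuilds = times.length;
        double[] buildMeans = new double[nBuilds];
        double grandMean = 0;
        for (int i = 0; i < nBuilds; i++) {
            double sum = 0;
            for (double t : times[i]) sum += t;
            buildMeans[i] = sum / times[i].length;
            grandMean += buildMeans[i];
        }
        grandMean /= nBuilds;
        double var = 0; // sample variance of the per-build means
        for (double m : buildMeans) var += (m - grandMean) * (m - grandMean);
        var /= (nBuilds - 1);
        double half = 1.96 * Math.sqrt(var / nBuilds); // 95%, normal approx.
        return new double[] { grandMean - half, grandMean + half };
    }
}
```

Widening the interval as between-build variance grows is exactly what makes a regression detector honest: a shift in the mean only counts as a regression when it exceeds what re-compilation alone can produce.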
Finance Chair, Publicity Chair, Workshop Chair
Second International School on Trends in Concurrency
Prague, Czech Republic, June 22-27, 2008
Programme Committees
JTRES 2012, RESOLVE 2012, JTRES 2011, EPEW 2011, ICOOOLPS 2011, TOOLS 2010, EPEW 2009, EPEW 2008, EPEW 2007, ASMTA 2007