When is an optimisation an optimisation? Given a program transformation, can we be sure that applying it will not make matters worse? Improvement theory attempts to answer this question by providing a convenient algebra for proving cost relationships between programs. However, this convenience does not come for free; rather, we have simply offloaded the work onto the poor sod whose task it is to validate the rules of the algebra, letting them suffer through the devilish details of the underlying operational semantics. We would like to be able to abstract away some of this detail. One way to achieve this would be to move to a denotational theory of improvement: one that captures exactly those aspects of the operational behaviour we are concerned with, and discards the rest. When doing so, however, we are immediately faced with a dilemma: which framework should we use? Traditional domain theory provides us with a compelling ordering on programs, while metric spaces give us a powerful unique fixed point principle. Improvement theory seems to have aspects of both, leaving it unclear which model is the more natural. It seems that, whichever model we pick, we lose something important. Wouldn't it be nicer if we didn't have to choose?
Cornwallis South West, University of Kent