Kilov, H. et al. (2013). The Reference Model of Open Distributed Processing: Foundations, experience and applications. Computer Standards and Interfaces [Online] 35:247-256. Available at: http://dx.doi.org/10.1016/j.csi.2012.05.003.
This paper provides an editorial introduction to the current special issue on Open Distributed Processing. It looks back over the development of the ODP standards and at the way in which they have been used, and looks forward at the way current activities are progressing. It contains a broad bibliography covering ODP standards, related research work and application case studies.
Peach, K. et al. (2013). Mechanism of action-based classification of antibiotics using high-content bacterial image analysis. Molecular BioSystems [Online] 9:1837-1848. Available at: http://dx.doi.org/10.1039/C3MB70027E.
Image-based screening has become a mature field over the past decade, largely due to the detailed information that can be obtained about compound mode of action by considering the phenotypic effects of test compounds on cellular morphology. However, very few examples exist of extensions of this approach to bacterial targets. We now report the first high-throughput, high-content platform for the prediction of antibiotic modes of action using image-based screening. This approach employs a unique feature segmentation and extraction protocol to quantify key size and shape metrics of bacterial cells over a range of compound concentrations, and matches the trajectories of these metrics to those of training set compounds of known molecular target to predict the test compound's mode of action. This approach has been used to successfully predict the modes of action of a panel of known antibiotics, and has been extended to the evaluation of natural products libraries for the de novo prediction of compound function directly from primary screening data.
Linington, P. (2005). Automating Support for E-Business Contracts. International Journal of Cooperative Information Systems 14:77-98.
If e-business contracts are to be widely used, they need to be supported by the IT infrastructure of the organizations concerned. This implies that the interactions between systems in different organizations must be guided by the contract and there must be sufficiently strong checks and balances to ensure that the contract is in fact obeyed. This includes facilities for the unbiased monitoring of correct behaviour and the reporting of exceptions. One of the ways to provide this support is to generate it directly from the agreed contract. This paper considers the steps necessary to provide sufficient automation in the support and checking of e-Business contracts for them to offer efficiency gains and so to become widely used. It focuses on the role of models, taking a model-driven approach to development and discussing both the source and target models and the transformational pathways needed to support the contract-based business processes.
Linington, P. et al. (2004). A unified behavioural model and a contract language for extended enterprise. Data and Knowledge Engineering [Online] 51:5-29. Available at: http://dx.doi.org/10.1016/j.datak.2004.03.005.
This paper presents a coordination model for expressing behaviour in an extended enterprise. Our model is unified because it enables the same style of expressions for describing behaviour/structure in a self-contained enterprise and for describing cross-enterprise behaviour/structure. This model can support a broad range of modelling activities but the specific focus of this paper is on deriving the key elements of a domain language primarily targeted at expressing and monitoring behavioural conditions stated in business contracts. We also show how business contracts serve as a unifying mechanism for describing interactions in the extended enterprise.
Waters, A. et al. (2001). Permabase: predicting the performance of distributed systems at the design stage. IEE Proceedings: Software [Online] 148:113-121. Available at: http://dx.doi.org/10.1049/ip-sen:20010553.
The use of distributed systems is now critical to many organisations. Designing software for these systems can be complex and there are growing demands on the performance of systems as they become increasingly large-scale. Timeliness is no longer just a matter of response time, as there can be stringent delay requirements for real-time multimedia traffic. In this paper we describe the Permabase project funded by BT, which produced prototypes to predict software performance automatically at the systems design stage. The paper discusses the Permabase rationale and describes the architecture and details of the prototype system and its validation using case studies. We discuss the use of UML as a mechanism for capturing the information needed for performance prediction modelling and show how translation enabled us to produce simulation models. We review the achievements of the project and look forward to ways in which the work could be enhanced and built upon to encompass a greater range of systems.
Boiten, E. et al. (2000). Viewpoint Consistency in ODP. Computer Networks 34:503-537.
Open Distributed Processing (ODP) is a joint ITU/ISO standardisation framework for constructing distributed systems in a multi-vendor environment. Central to the ODP approach is the use of viewpoints for specification and design. Inherent in any viewpoint approach is the need to check and manage the consistency of viewpoints. In previous work we have described techniques for consistency checking, refinement, and translation between viewpoint specifications, in particular for LOTOS and Z/Object-Z. Here we present an overview of our work, motivated by a case study combining these techniques in order to show consistency between viewpoints specified in LOTOS and Object-Z.
Rizzo, M., Utting, I. and Linington, P. (1997). Call management in the open distributed office. Electronics & Communication Engineering Journal 9:107-116.
This paper describes an agent-based model for the effective management of voice calls within an integrated computing-telephony environment. In this model, agents manage calls on behalf of users, who influence the behaviour of their agents by means of policy specifications. Call setup then involves a negotiation process whereby agents attempt to agree upon some course of action to take and the agents involved can continue to exercise control over a call in progress.
Bowman, H. et al. (1996). Cross-viewpoint consistency in open distributed processing. Software Engineering Journal 11:44-57.
The paper discusses the use of viewpoints in the ODP (Open Distributed Processing) standardisation initiative. The ODP reference model is a new framework, going beyond OSI (Open Systems Interconnection). Multiple viewpoints are used to specify complex ODP systems. Consistency of viewpoint specifications is clearly a central issue. In addition, formal techniques have an increasingly significant role within ODP, and so mechanisms are needed that support consistency checking of formal specifications. An overview is provided of the ODP reference model and the use of viewpoints within it, before discussing consistency within ODP and how it can be realised using formal notations. Consistency checking is illustrated using the LOTOS formal description technique.
Bowman, H. et al. (1995). FDTs for ODP. Computer Standards and Interfaces [Online] 17:457-479. Available at: http://dx.doi.org/10.1016/0920-5489(95)00021-L.
This paper discusses the use and integration of formal techniques into the Open Distributed Processing (ODP) standardization initiative. The ODP reference model is a natural progression from OSI. Multiple viewpoints are used to specify complex ODP systems. Formal methods are playing an increasing role within ODP. We provide an overview of the ODP reference model, before discussing the ODP requirements on FDTs, and the role such techniques play. Finally we discuss the use of formalisms in the central problem of maintaining cross viewpoint consistency.
Kemp, Z. et al. (1992). Zenith System for Object Management in Distributed Multimedia Design Environments. Information and Software Technology [Online] 34:427-436. Available at: http://dx.doi.org/10.1016/0950-5849(92)90034-M.
The paper describes the Zenith research project, which is being carried out at the Universities of Kent and Lancaster, UK. It is a research prototype of an object management system that is intended to meet the data-management requirements of the next generation of application domains, such as office information systems, integrated project support environments, and geographical information systems. Zenith is designed to provide a flexible and adaptable platform for the management of distributed multimedia objects, on top of which specialized applications can easily be built. The design of the system reflects this goal. The object-management layer provides the high-level abstractions required for managing complex objects, and the base-services layer is responsible for the management of primitive entities stored on conventional and specialized devices, while maintaining appropriate location, media, and other transparencies. The earlier sections of the paper briefly discuss the background to the project, including the context of the Zenith environment and the philosophy that underlies its design. Subsequent sections concentrate on the object model and the object-oriented design of the prototype system architecture. Finally, the current status and implementation issues are presented, followed by some brief concluding remarks.
Aagedal, J., Bezivin, J. and Linington, P. (2005). Model-Driven Development. in: Malenfant, J. and Ostvold, B. M. eds. ECOOP 2004 Workshop Reader. Springer-Verlag, pp. 148-157. Available at: http://dx.doi.org/10.1007/b104146.
The objective of the workshop on model-driven development (WMDD 2004) was to identify and discuss issues related to system modelling and how to transform these models to a level suitable for execution and/or simulation. The workshop contained three sessions for presentation of position papers, and a final session for discussion and drawing conclusions. The topics of the three sessions were transformations, model-driven development aspects, and PIMs for distributed systems, web and B2B.
The title of this report should be referenced as Report from the ECOOP 2004 Workshop on Model-Driven Development (WMDD 2004).
Linington, P. (2001). Issues in Distributed Systems. in: Bowman, H. and Derrick, J. eds. Formal Methods for Distributed Processing: A Survey of Object-Oriented Approaches. Cambridge University Press, pp. 3-17.
Linington, P. (2001). Distributed Systems, an ODP Perspective. in: Bowman, H. and Derrick, J. eds. Formal Methods for Distributed Processing: A Survey of Object-Oriented Approaches. Cambridge University Press, pp. 18-35.
Pinto, P. and Linington, P. (1994). A language for the specification of interactive and distributed multimedia applications. in: DeMeer, J., Mahr, B. and Storp, S. eds. Open Distributed Processing II. Oxford: Elsevier Science Ltd, pp. 247-264.
This paper describes a model for distributed multimedia applications and a specification language based on this model. The applications involve the composition and synchronization of multimedia objects, and their interaction with the user and the environment. Objects are autonomous entities which have a behaviour, in terms of the set of operations they offer to the environment; the mechanism for synchronization with these objects is based on the communication of typed events. A single mechanism integrates user interaction with run-time control of the distributed system, allowing a natural interplay between them. The language specifies compositions by exploiting the concepts of the model. It captures the characteristics of multimedia interactions using an adaptation of process algebras, and includes a (procedural) functional part to define data structures and to provide consistency checks. The prototype system implemented consists of a compiler which translates the language expressions into a state machine, and a central interpreter which orchestrates the composition and synchronization of the distributed multimedia objects.
Welch, P. and Linington, P. (1993). An Enabling Infrastructure for a Distributed Multimedia Industry. in: Welch, P. H., May, M. D. and Thompson, P. W. eds. Networks, Routers and Transputers: Function, Performance and Application. IOS Press, Netherlands, pp. 183-200.
OCL 2.0 is the newest version of the OMG's constraint language to accompany its suite of object-oriented modelling languages. The use of OCL as an accompanying constraint and query language for modelling with these languages is essential. As tools are built to support the modelling languages, it is also necessary to implement the OCL. This paper reports our experience of implementing OCL based on the latest version of the OMG's OCL standard, UML models and MDA techniques supported by the Kent Modelling Framework (KMF), developed at the University of Kent. We provide an efficient LALR grammar for parsing the language and describe an architecture that enables the language to be bridged to any other modelling framework or tool. We also provide both syntactic and semantic models, which were used as inputs for KMFStudio in order to generate Java code. In addition we give feedback on problems and ambiguities discovered in the standard, with some suggested solutions.
Rizzo, M., Linington, P. and Utting, I. (1994). Integration of location services in the Open Distributed Office. University of Kent, Computing Laboratory.
There has recently been much interest in location systems which enable people and equipment to be tracked as they move within and across buildings. Perhaps the most popular of these is the active badge location system where tracking is done by means of IR communication between badges and a network of stations, but others include system login information, and personal diary systems. In the light of this, we describe a location system which does not rely solely on one specific mechanism, but uses and combines information from as many sources as possible, under the control of a master location system (MLS) which co-ordinates all location systems available within an organisation.
Rizzo, M., Linington, P. and Utting, I. (1994). VitKit: a Voice Interaction Toolkit. University of Kent, Computing Laboratory.
This paper describes the Voice Interaction ToolKit (VitKit), a C++ class library for building telephone-based user interfaces. Rather than use a high-level specification approach, it is intended that programmers use the classes directly to compose interfaces, although the possibility of developing code-generators for building (parts of) interfaces is not excluded. The toolkit supports dynamic construction and re-configuration of interfaces and adopts a very flexible approach to interfacing with underlying applications.
Rizzo, M., Linington, P. and Utting, I. (1994). Call Management in the Open Distributed Office. University of Kent, Computing Laboratory.
This paper describes an agent-based model for the management of calls in an office environment. In this model, agents manage calls on behalf of users, who influence the behaviour of their agents by means of policy specifications. Call setup involves a negotiation process whereby agents attempt to agree upon some course of action to take. The model supports close-knit integration of voice and data services and is general enough to be used in a wide range of applications.
Rizzo, M., Linington, P. and Utting, I. (1994). The ODO project: a Case Study in Integration of Multimedia Services. University of Kent, Computing Laboratory.
Recent years have witnessed a steady growth in the availability of wide-area multi-service networks. These support a variety of traffic types including data, control messages, audio and video. Consequently they are often thought of as integrated media carriers. To date, however, use of these networks has been limited to isolated applications which exhibit very little or no integration amongst themselves. This paper describes a project which investigated organisational, user interfacing and programming techniques to exploit this integration of services at the application level.
Conference or workshop item
Linington, P., Miyazaki, H. and Vallecillo, A. (2012). Obligations and Delegation in the ODP Enterprise Language. in: EDOC Workshops 2012: VORTE. pp. 146-155. Available at: http://doi.ieeecomputersociety.org/10.1109/EDOCW.2012.28.
The ODP Enterprise Language is used to describe the organizational objectives and policies that apply to the system to be specified. It also captures constraints associated with the environment in which the system is to be used. Because the enterprise specification is concerned more with organizational issues than technical details of the system, there is considerable emphasis in the language design on obligations and norms, rather than on the declaration of some single rigidly required behaviour. This leads to a requirement for specification techniques that encompass a wide range of behaviour and then identify which behaviour should occur and how exceptions are to be handled; this is more challenging than computational specification, where the specification is essentially a recognizer for correct behaviour and does not define what is to happen if there are violations.
This paper describes work currently in progress within the International Organization for Standardization (ISO) to extend the Enterprise Language so that it is able to express more directly the necessary obligations and other deontic concepts, such as permissions and prohibitions. The approach being taken is to introduce a new kind of object that reifies the deontic constraints and thereby simplifies the description of the behaviour expected.
Once the basic concepts are in place, they can be used to define a wide range of organizational matters, such as delegation rules and the way communities respond dynamically to changes in their structure.
Linington, P. (2010). The Stereochemistry of Enterprise Objects. in: Enterprise Distributed Object Computing Conference Workshops - WODPEC 2010. pp. 182-196. Available at: http://www.cs.kent.ac.uk/pubs/2010/3153.
In ODP, an enterprise specification is expressed in terms of the definition of some set of communities, characterized by their community contracts. A community places constraints on its members, restricting the behaviour they can participate in. This may involve a stronger form of constraint than those which result from the traditional binding of objects at interfaces, which requires only that the communication between the objects has some observable properties. This paper discusses the different forms of constraints, and examines the range of forms that community construction may take. In particular, it examines the process of abstracting from a community to yield a community object, and then using this object to fill some role in a broader community. It shows that the role filling process needs to consider mappings and constraints between objects that fill other roles in the communities concerned.
Köllmann, C. et al. (2007). An Aspect-oriented Approach to Manage QoS Dependability Dimensions in Model Driven Development. in: Pires, L. F. and Hammoudi, S. eds. Model-Driven Enterprise Information Systems: Proceedings of the 3rd International Workshop on Model-Driven Enterprise Information Systems - MDEIS 2007, Funchal, Portugal. Portugal: INSTICC Press, pp. 85-94.
The ODP Reference Model is one of a number of specification frameworks which are based on the definition of a set of viewpoints that are coupled together by the definition of correspondences between terms. Wherever a correspondence is declared, any real world entity that is represented by a term in one viewpoint must also satisfy the requirements placed by the occurrence of the corresponding term in the other viewpoint. Although this idea represents an intuitively simple and satisfying way of talking about the design of complex systems, the idea of a correspondence is not as simple as it might, at first sight, appear. This paper uses simple examples to illustrate some of the complexities resulting from the coupling of object models and examines the consequences for claims of conformance to the complete system of specifications.
Linington, P. and Liyanagama, P. (2007). Incorporating Security Behaviour into Business Models Using a Model Driven Approach. in: 11th IEEE International Enterprise Distributed Object Computing Conference (EDOC 2007). IEEE Press, pp. 405-415. Available at: http://dx.doi.org/10.1109/EDOC.2007.16.
There has, in recent years, been growing interest in Model Driven Engineering (MDE), in which models are the primary design artifacts and transformations are applied to these models to generate refinements leading to usable implementations over specific platforms. There is also interest in factoring out a number of non-functional aspects, such as security, to provide reusable solutions applicable to a number of different applications. This paper brings these two approaches together, investigating, in particular, the way behaviour from the different sources can be combined and integrated into a single design model. Doing so involves transformations that weave together the constraints from the various aspects and are, as a result, more complex to specify than the linear pipelines of transformations used in most MDE work to date. The approach taken here involves using an aspect model as a template for refining particular patterns in the business model, and the transformations are expressed as graph rewriting rules for both static and behaviour elements of the models.
Linington, P. (2006). Policy Specification: Meeting Changing Requirements without Breaking the System Design Contract. in: Almeida, J. P. A. et al. eds. Enterprise Distributed Object Computing Conference Workshops, 2006. EDOCW '06. 10th IEEE International. IEEE Digital Library. Available at: http://dx.doi.org/10.1109/EDOCW.2006.56.
There has been a great deal of interest in recent years in the use of policies to simplify system management and to reduce costs. However, the major focus has been on the development of techniques with the greatest expressive power possible, generally viewing the policy authoring as a self-contained activity performed by experts who understand the aims of and constraints on the system being managed. A system is normally designed to meet agreed requirements and objectives, which can be seen as constituting a design contract for the system. The aim in introducing policies should be to allow flexibility to meet changing circumstances without violating the guarantees given by this contract. This paper looks at policy specification as a step in the incremental design of systems and examines how policies need to be constrained in order to preserve the overall design objectives for the system being managed. It proposes a specification architecture for policies, discusses how it might be used, and considers how well-suited some existing specification languages and tools are to supporting this architecture.
Jittamas, V. and Linington, P. (2006). Using a Policy Language to Control Tuple-space Synchronization in a Mobile Environment. in: Burgess, M. and Wijesekera, D. eds. Seventh IEEE International Workshop on Policies for Distributed Systems and Networks. Washington, DC (USA): IEEE Computer Society, pp. 239-242. Available at: http://dx.doi.org/10.1109/POLICY.2006.38.
Any sharing of information using a distributed platform carries the risk of disconnection because of loss of network access. This is particularly the case when considering mobile communication, either using base stations or by forming ad-hoc networks. Replication of shared data is one way to increase data availability in such an environment, but leads to the problem of inconsistency between copies of data, and so requires some means of data synchronization. This paper investigates how policies can be used to resolve data conflict in a way that can be tailored to meet the needs of different types of application in different situations. It discusses a range of application requirements, and describes a policy-based pervasive middleware to support the sharing of data using a tuple-space paradigm. Policies maintained within the middleware are used to trigger a wide range of synchronization options to restore the consistency of the data after periods of disconnected operation.
Linington, P. (2004). What Foundations does the RM-ODP need? in: Vallecillo, A., Linington, P. F. and Wood, B. M. eds. Workshop on ODP for Enterprise Computing (WODPEC 2004). Monterey, California, USA: IEEE Digital Library, pp. 15-22.
This position paper revisits the requirements for the set of Foundation Concepts for the ODP Reference Model and the approach originally taken to satisfying them. It then examines, in the light of experience, the areas where the Foundations have subsided, and areas where extensions need to be built. The aim is to provide a starting point for discussion on requirements to change the Foundations document.
Linington, P. (2004). The Role of Contracts in Establishing Interoperability of Enterprise Systems. in: INTEREST 2004 Workshop, ECOOP 2004. Oslo, Norway.
Milosevic, Z. et al. (2004). On design and implementation of a contract monitoring facility. in: Benatallah, B., Godart, C. and Shan, M. -C. eds. Proceedings of WEC, First IEEE International Workshop on Electronic. Washington, DC, USA: IEEE Computer Society, pp. 62-70. Available at: http://dx.doi.org/10.1109/WEC.2004.13.
In this paper we present several solutions to the problem of designing and implementing a contract monitoring facility as part of a larger cross-organisational contract management architecture. We first identify key technical requirements for such a facility and then present our contract language and architecture that address key aspects of the requirements. The language is based on a precise model for the expression of behaviour and policies in the extended enterprise and it can be used to build models for a particular enterprise contract environment. These models can be executed by a contract engine that is part of an overall contract architecture that supports the contract management life cycle at both the contract establishment and contract execution phases. Our solution makes extensive use of the existing open standards.
Linington, P. (2004). Model Driven Development and Non-functional Aspects. in: WMDD 2004 Workshop, ECOOP 2004. Oslo, Norway.
Linington, P. (2004). Automating Support for e-Business Contracts. in: Milosevic, Z. and Governatori, G. eds. Contract Architectures and Languages workshop (CoALa2004). Monterey, California, USA: IEEE Digital Library.
Milosevic, Z. et al. (2004). Inter-Organisational Collaborations Supported by E-Contracts. in: Lamersdorf, W., Tschammer, V. and Amarger, S. eds. Building the E-Service Society: E-Commerce, E-Business, and E-Government. Toulouse, France: Springer, pp. 413-429. Available at: http://dx.doi.org/10.1007/1-4020-8155-3_23.
This paper presents a model for describing inter-organizational collaborations for e-commerce, e-government and e-business applications. The model, referred to as a community model, takes into account internal organizational rules and business policies as typically stated in business contracts that govern cross-collaborations. The model can support the development of a new generation of contract management systems that provide true interorganizational collaboration capabilities to all parties involved in contract management. This includes contract monitoring features and dynamic updates to the processes and policies associated with contracts. We present a blueprint architecture for inter-organizational contract management and a contract language based on the community model. This language can be used to specialize this architecture for concrete collaborative structures and business processes.
Linington, P. and Neal, S. (2003). Using Policies in the Checking of Business to Business Contracts. in: Lutfiyya, H., Moffat, J. and Garcia, F. eds. Fourth IEEE International Workshop on Policies for Distributed Systems and Networks. Lake Como, Italy: IEEE Computer Society, pp. 207-218. Available at: http://www.cs.kent.ac.uk/pubs/2003/1636.
The mechanization of business-to-business contract enforcement requires a clear architecture and a clear and unambiguous underpinning model of the way permissions and obligations are managed within organizations. Policies will need to be expressed in terms of the basic model, and the expressive power available will depend, in part, on the ability to compose sets of policies derived from different sources. The models used must reflect the structure of the organizations concerned and how the behaviour of organizations is constrained by broader shared rules. This paper considers a contract monitoring system intended to provide automated checking of business to business contracts, sets out a suitable model and explains how it can be used to guide the representation and control of contracts in a prototype monitoring system.
Linington, P. (2003). A policy-based model-driven security framework. in: Ururahy, C., Sztajnberg, A. and Cerqueira, R. eds. Middleware 2003 Companion: Workshop Proceedings. Pontifícia Universidade Católica do Rio de Janeiro, pp. 273-276. Available at: http://www.cs.kent.ac.uk/pubs/2003/1637.
The adoption of a model-driven approach to the construction of applications places the focus on business logic and takes it away from detailed middleware mechanisms. It also opens new opportunities for more detailed and more dynamic control of non-functional properties. This position statement illustrates the possibilities by considering the ways in which maintenance of security infrastructure can exploit the model-driven approach.
Neal, S. et al. (2003). Identifying requirements for Business Contract language: A monitoring perspective. in: Steen, M. and Bryant, B. R. eds. Proceedings of the seventh International Enterprise Distributed Object Computing Conference. Brisbane, Australia: IEEE Computer Society, pp. 50-61. Available at: http://www.cs.kent.ac.uk/pubs/2003/1807.
This paper compares two separately developed systems for monitoring activities related to business contracts, describes how we integrated them and exploits the lessons learned from this process to identify a core set of requirements for a Business Contract Language (BCL). Concepts in BCL needed for contract monitoring include: the expression of coordinated concurrent actions; obliged, permitted and prohibited actions; rich timeliness expressions such as sliding windows; delegations; policy violations; contract termination/renewal conditions and reference to external data/events such as change in interest rates. The aim of BCL is to provide sufficient expressive power to describe contracts, including conditions which specify real-time processing, yet be simple enough to retain a human-oriented style for expressing contracts.
Linington, P. and Frank, W. (2001). Specification and Implementation in ODP. in: Cordeiro, J. and Kilov, H. eds. Proceedings of the 1st Workshop on Open Distributed Processing: Enterprise, Computation, Knowledge, Engineering and Realisation. Setubal, Portugal: ICEIS Press, pp. 69-80.
ODP specifications are normally produced as one step in the process of planning and implementing real systems, but the detailed sequence of events differs depending on the methodology and intended scope of the specification. This can lead to divergences of opinion about how specifications are to be interpreted. This paper reviews some of the issues and argues that there need not be a problem if ODP specifications are interpreted in terms of a flexible conformance architecture. It looks at the interplay between specification and testing, and reviews the conformance aspects of the RM-ODP to see how it is able to provide the necessary flexibility for the creation of long-lived and reusable enterprise specifications.
Neal, S. and Linington, P. (2001). Tool Support for Development Using Patterns. in:Lupu, E. C. and Wegmann, A. eds.Proceedings of the fifth International Enterprise Distributed Object Computing Conference.Seattle, Washington, USA: IEEE Computer Society, pp. 237-248. Available at: http://doi.ieeecomputersociety.org/10.1109/EDOC.2001.950443.
There has been a growing interest in recent years in the use of abstract building blocks in system specification. Designs based on Patterns and Communities are two examples. However, these structures are then refined further during design and implementation, and it is often difficult to determine whether the eventual system implementation is a faithful reflection of the original properties of the pattern specified. This is particularly true of patterns used to describe an enterprise view of the system. This paper concentrates on the behavioural aspects of pattern specification, and investigates the way that observation of the system can be interpreted to check that properties of the pattern specification are preserved. It describes a diagnostic tool that checks the actual system behaviour against the pattern specification, and discusses the requirements this places on the form of the specification language and a number of the problems of interpretation that arise in applying such tools.
Rio, M. and Linington, P. (2000). Distributed Quality of Service Multicast Routing with Multiple Metrics for Receiver initiated Joins. in:8th IEEE International Conference on Networks (ICON 2000).IEEE Computer Society, pp. 180-187. Available at: http://dx.doi.org/10.1109/ICON.2000.875787.
This paper describes a novel method of building multicast trees for Real Time Traffic with Quality of Service constraints. There is a wide range of heuristics to calculate the optimal multicast distribution trees with bounds on the maximum delay from the source to all members. However these heuristics require all the members to be known in advance and assume the existence of a centralized service. We present a heuristic - Best Cost Individual Join (BCIJ) - that joins members one by one, randomly to the existing tree. The method doesn't need previous knowledge of the group members. Trees are dynamically built when each member arrives in the group. A distributed method - Multiple Metric Broadcast (MMB) - for nodes to obtain the best valid path to the existing tree is also presented. MMB is inspired by Reverse Path Forwarding and broadcasts queries to the network that reach existing on-tree members. These reply with the best valid paths to the joining member. The member then selects the best path. This avoids the use of any centralized service and the need for link-state information to be available in any node. The evaluation presented shows that BCIJ produces trees with better cost than existing centralized heuristics and that MMB doesn't have a major effect on the network if the group participation is sufficiently large.
Linington, P. and Tripp, G. (2000). Two-point ATM Switching System Measurements. in:Kouvatsos, D. D. ed.Technical Proceedings, Eighth IFIP Workshop on Performance Modelling and Evaluation of ATM and IP Networks (ATM and IP 2000).Networks UK.
The ATM testbed at the University of Kent includes specialized hardware for the generation and measurement of ATM cell streams with a variety of different timing properties. This measurement environment has made possible precise evaluation of the performance of various ATM network configurations, revealing a range of different kinds of behaviour that result from the switch architectures and the way they are combined. Observations of the cell stream taken at different points in the network have been analysed using a number of techniques, exploiting variations on correlation of the times measured and delays experienced by the cell stream. Different processing options produce a family of data representations which show significant features of the system behaviour, and which can be used to identify potential operational problems before they become apparent to users of the network.
Induruwa, A., Linington, P. and Slater, J. (1999). Quality of Service Measurements on SuperJANET - The UK Academic Information Highway. in:Proceedings of INET'99.
JANET, the U.K. academic and research network, has, over almost three decades, grown from a small research network to a major national resource with a vastly expanded user base that now expects an appropriate level of service. Performance targets for the network are set in negotiation between the supplier and representatives of the funding bodies; they are documented in a set of service level agreements. Quality of service measurements are needed to monitor adherence to these agreements, to predict future behavior, and to assist in the formulation of strategy. This paper describes the measurement framework and gives examples of its use to resolve problems and guide policy.
Linington, P. (1999). Options for expressing ODP Enterprise Communities and their Policies by using UML. in:Proceedings of the Third International Enterprise Distributed Object Computing Conference (EDOC '99).IEEE, pp. 72-82. Available at: http://dx.doi.org/10.1109/EDOC.1999.792051.
The ODP Enterprise Language allows the rules and policies that characterize an organization to be brought together and used to guide the various stages of system design, development and operation. UML is one of the leading notations for system design and is likely to be the basis for a wide range of design tools. However, UML has a comparatively weak set of facilities for supporting the combination of existing, parameterized specifications and, in particular, for defining and managing policies. This paper discusses the requirements for defining communities and expressing policies within a UML environment, compares ways in which the existing notation might be used in Enterprise specification, and indicates some of the implications this would have for system development tools.
Linington, P. (1999). An ODP approach to the development of large middleware systems. in:Kutvonen, L., Konig, H. and Tienari, M. eds.IFIP TC6 WG6 1 2nd International Working Conference on Distributed Applications and Interoperable Systems (DAIS 99).USA: Kluwer Academic Publishers, pp. 61-74.
Since the Reference Model for Open Distributed Processing was completed, work in ISO in this area has concentrated on the definition of a number of supporting standards to add detail to the basic framework. Taken together, these provide a powerful structure for the support of large federated systems and provide a basis for the enhancement of tools for the development and maintenance of large middleware systems. This paper describes the main features of the new work and speculates on how it can be applied to augment the tools used to design and manage such systems and, by so doing, can increase their flexibility.
Linington, P. (1999). RISCSIM - A Simulator for Object-based Systems. in:Proceedings of the UKSIM'99 Conference of the UK Simulation Society.UK Simulation Society, pp. 141-147.
Linington, P., Milosevic, Z. and Raymond, K. (1998). Policies in communities: Extending the ODP enterprise viewpoint. in:2nd International Enterprise Distributed Object Computing Workshop (EDOC98).IEEE, pp. 14-24. Available at: http://dx.doi.org/10.1109/EDOC.1998.723238.
The Reference Model of Open Distributed Processing (RM-ODP) introduces the notion of an enterprise viewpoint and provides a minimum set of concepts for structuring enterprise language specifications. This paper extends the RM-ODP enterprise concepts by exploring how policy can be modelled within and between communities. A model for enterprise behaviour based on physical and social actions is presented.
Carvalho, P. and Linington, P. (1998). Performance limitations of a Banyan-based ATM switching system under multiple, shaped traffic flows. in:16th IASTED International Conference on Applied Informatics, Garmisch-Partenkirchen, Germany.
Kilov, H. et al. (1997). Types, invariants, and epochs: specifying changes in RM-ODP and ODP information language. in:Kilov, H., Rumpe, B. and Simmonds, I. eds.Proceedings of the OOPSLA'97 Workshop on object-oriented behavioral semantics.pp. 115-118.
Software development can be costly and it is important that confidence in a software system be established as early as possible in the design process. Where the software supports communication services, it is essential that the resultant system will operate within certain performance constraints (e.g. response time). This paper gives an overview of work in progress on a collaborative project sponsored by BT which aims to offer performance predictions at an early stage in the software design process. The Permabase architecture enables object-oriented software designs to be combined with descriptions of the network configuration and workload as a basis for the input to a simulation model which can predict aspects of the performance of the system. The prototype implementation of the architecture uses a combination of linked design and simulation tools.
Lindsey, D. and Linington, P. (1996). RIVUS: A Stream Template Language for Capturing Multimedia Requirements. in:Hutchison, D. et al. eds.Teleservices and Multimedia Communications (Proc. 2nd COST 237 Int. Workshop).Springer-Verlag, pp. 259-277.
Linington, P., Derrick, J. and Bowman, H. (1996). The specification and conformance of ODP systems. in:9th International Workshop on Testing of Communicating Systems.Darmstadt, Germany: Chapman & Hall, pp. 93-114.
Open Distributed Processing (ODP) is a joint standardisation activity of the ISO and ITU. A reference model has been defined which describes an architecture for building open distributed systems. This paper introduces the key aspects of the reference model of open distributed processing, including the ODP conformance framework. We discuss how specific formal techniques are used in the ODP viewpoints, along with the implications for conformance assessment using such techniques. Particular attention is given to the role of consistency in the conformance assessment process. Finally, we review the current work on an ODP conformance testing methodology.
Akehurst, D. et al. (1996). The Effects of ABR Traffic on CBR Traffic. in:Kouvatsos, D. D. ed.University of Bradford, UK, pp. 32/1-32/10.
One recent development by the ATM Forum is a service class to carry "bursty" traffic, termed Available Bit Rate or ABR. ABR is a rate based service class that attempts to make optimal use of an ATM virtual circuit depending on the remaining bandwidth available along its route. The contract for an ABR circuit specifies parameters defining the maximum and minimum bandwidth available to it. Effective use of this available bandwidth is achieved by means of a rate control mechanism. If the network is experiencing congestion, the transmission rate of ABR traffic is decreased. Conversely, in an uncongested network the transmission rate is increased. The ABR source receives network information via Resource Management (RM) cells that are generated at regular intervals. These are returned by the destination to the ABR source, indicating any congestion experienced en route. In order to provide an acceptable transport mechanism, the effect of ABR traffic on other service classes, and in particular Constant Bit Rate (CBR) traffic, must be minimal. However, due to the dynamic nature of the ABR service class it is not readily apparent that this will be the case. The aim of the paper is to investigate how, and to what extent, the ABR service class influences CBR traffic. Of particular interest are the variation in delay and the absolute delay experienced by CBR cells in the presence of ABR traffic. Discrete event simulation is used to investigate the performance of CBR traffic given certain parameters which characterise the ABR service. These describe the bounds within which the rate control mechanism operates and how the rate changes in response to the reception of RM cells. The performance of ABR traffic is determined by the point at which network congestion occurs and actions taken by the switch to detect and respond to congestion.
Rothwell, K., Linington, P. and Waters, A. (1996). Experiences in implementing a real-time video filestore. in:Cambridge, UK: EDA Exhibitions Ltd., pp. 157-165.
Access to video and audio clips can enhance students' understanding of a complex subject. In a classroom environment, all the students will typically use the same media objects at approximately the same time. The majority of this usage will be simple playback with comparatively little recording. Hence, by caching the media data and taking advantage of the predictable nature of stream usage by the current clients, we may satisfy multiple requests for media data with only a single disk access and so alleviate the bottleneck of the disk subsystem. Consequently, it is possible to construct a multi-user continuous media server for use within this environment for a relatively small cost. From previous efforts to build such a server, it is clear that the bandwidth of a standard disk device is the major bottleneck preventing the supply of real-time video to multiple clients. Solutions based on a RAID disk architecture can achieve high aggregate bandwidth, but at a cost which is unattractive for a teaching environment. In this situation there is typically a small number of clips, each of which is a few minutes long. The solution presented here is very efficient on the small scale, but would be inappropriate for `video on demand' or video archives. The server's cache is managed through a knowledge of the activities of all current streams, viz. which file they each access, at what point on the time-line, at what speed and in which direction. By maintaining a global map of all current activity it is possible to predict the file data next in demand. In particular, we keep data resident which has been used by one stream, but will be needed by others very soon. Data used by a stream that has no predicted demand is discarded immediately. This technique gives optimal usage of cache space and greatly reduces the load placed on the disks.
The server is designed to store any format of media data, either compressed or uncompressed; the exact encoding of the data is transparent to the server's media container format. To achieve format independence each media object consists of two files: i) the media data; and ii) the timing/synchronisation metadata. The metadata file also contains a header to allow the clients to determine the format and type of the data, plus parameters such as the number of blocks and frames, the total playback duration, and so on. Each block of data in the media data file is encoded in a form ready for transmission over an ATM network without modification; the computationally expensive block error check data does not need to be calculated for each block of data transmitted, so minimising transmission latency and improving overall throughput. The server supports NFS allowing the media directory hierarchy to be searched and manipulated by any host able to act as an NFS client. The streams are initiated and controlled via an additional interface using the Sun RPC protocol, allowing all basic functions such as setup, teardown, pause/continue, playback speed alterations and position changes. In addition, conversion daemons have been constructed which translate the Sun RPC protocol and allow integration with such systems as ANSAWare, CORBA and Microsoft RPC. This approach has the advantage of not over-complicating the video server and enabling incorporation with any client system. Thus, the video file server is available to any machine with an ATM network adapter and the appropriate driver software.
Ibbetson, A. et al. (1996). Reducing the cost of remote procedure call. in:Schill, A. et al. eds.IFIP/IEEE International Conference on Distributed Platforms.Chapman & Hall, pp. 430-446. Available at: http://dx.doi.org/10.1007/978-0-387-34947-3_32.
Ibbetson, A. et al. (1995). A Parallel Implementation of the ANSA REX Protocol. in:Cook, B. M. et al. eds.Transputer Applications and Systems '95 - Proceedings of World Transputer Congress 1995.IOS Press, pp. 29-41.
Linington, P. RM-ODP: The Architecture. The Reference Model for Open Distributed Processing is a joint ISO/ITU Standard which provides a framework for the specification of large scale, heterogeneous distributed systems. It defines a set of five viewpoints concentrating on different parts of the distribution problem and a set of functions and transparency mechanisms which support distribution. The resulting framework is being populated by more detailed standards dealing with specific aspects of the construction and operation of distributed systems.
Linington, P. (1992). Introduction to the open distributed-processing basic reference model. in:de Meer, J., Heymer, V. and Roth, R. eds.1st International Workshop on Open Distributed Processing.Amsterdam: Elsevier Science BV, pp. 3-13.
This paper describes the progress made to date in defining a new set of standards for Open Distributed Processing (ODP). It explains how the first of these standards, the Basic Reference Model, is organized and how the architecture is structured. It introduces the main concepts and design choices to be found in the Basic Reference Model and indicates how the standardization work is expected to proceed in the near future.
Linington, P. et al. (2011). Building Enterprise Systems with ODP - An Introduction to Open Distributed Processing. [Online]. Chapman and Hall/CRC Press. Available at: http://www.cs.kent.ac.uk/pubs/2011/3151.
This book sets out a systematic approach to the design of large complex distributed systems, such as enterprise systems, using the concepts and mechanisms defined by the Reference Model of Open Distributed Processing (ODP). It is not limited to any single tool or design method, but concentrates on the key choices that make an architectural design robust and long-lived.
Berzins, M. et al. (2004). Evaluation of the Simula Research Laboratory. Oslo, Norway: The Research Council of Norway.