School of Computing

Example PhD project topics

Below is a list of example topics for PhD study, proposed by selected academics of the Group. The list does not reflect all possible topics supervisors are interested in, so you are encouraged to study individual supervisors' research interests and publications and to approach them for more advice. You are also welcome to propose your own topic that matches the interests of supervisors in the Group. If you are interested in studying with our Group but need help in determining a supervisor and topic, please feel free to contact Professor Shujun Li <S.J.Li@kent.ac.uk>, the Group Head, for advice. From time to time, we also offer funded PhD studentships.

  • Comprehensive understanding of cybercrime

    Supervisor: Budi Arief

    One of the biggest challenges in understanding cybercrime is the massive landscape involved, which makes it impossible to encompass everything at once. This challenge can be addressed through piece-by-piece research towards a complete taxonomy of cybercrime, looking at the stakeholders involved (attackers, defenders, and victims) and at technical solutions alongside the human factors at play. Potential lines of investigation include:

    • Mapping threats against cybercrime incidents, and looking at how these threats materialise, in order to understand the factors abetting or preventing them
    • Conducting qualitative and quantitative study to gather data on various losses experienced by victims, as well as their circumstances, leading to the creation of victim profiles, which can help minimise the risk of victimisation
    • Cataloguing existing measures and/or initiatives for combating cybercrime and evaluating their effectiveness
    • Delving deeper into policing cybercrime and its associated metrics, such as the cost of policing tasks and statistics of cybercrime in the public sector
    • Exploring other issues in human behaviour and legal frameworks

  • Security and Privacy of the Internet of Things (IoT)

    Supervisor: Budi Arief

    IoT has the potential to make our lives more comfortable and effortless, but IoT devices also pose new large-scale privacy and security risks that are not yet fully understood. For example, data collected from these devices (with or without authorisation from their owners) could reveal too much information about someone, and criminals might exploit this wealth of information to mount more successful attacks, for example credit card fraud or social engineering attacks leading to identity theft. Furthermore, the abundance of connected, unsecured IoT devices makes it possible to launch large-scale DDoS attacks. Therefore, new approaches and techniques for securing IoT devices are needed, which will be the focus of this research.

  • Dealing with insider threat

    Supervisor: Budi Arief

    Insider threat is a significant and ever-present risk faced by any organisation. While security mechanisms can be put in place to reduce the chances of external attackers gaining access to a system, the issue is more complex when dealing with insider threat. If an employee already has legitimate access rights to a system, it is much more difficult to prevent them from carrying out inappropriate acts, because it is hard to determine whether those acts are part of their official work or indeed malicious. This research will look into a more comprehensive integration of human factors, better machine learning techniques to obtain more accurate detection results, and more advanced decision-making tools to help organisations detect and respond to insider threats early.
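
    As a purely illustrative sketch, the fragment below shows the kind of unsupervised anomaly detection that such machine learning techniques might build on. The features, threshold and use of scikit-learn's IsolationForest are assumptions made for illustration only, not part of the project brief.

      # Illustrative sketch only: flag anomalous employee activity with an
      # unsupervised model (assumes scikit-learn and NumPy are installed).
      import numpy as np
      from sklearn.ensemble import IsolationForest

      # Each row: [login hour, number of files accessed, MB transferred]
      normal_activity = np.array([
          [9, 12, 5], [10, 8, 3], [14, 15, 7], [11, 10, 4], [16, 9, 6],
          [9, 11, 5], [13, 14, 8], [10, 7, 2], [15, 13, 6], [12, 10, 5],
      ])

      model = IsolationForest(contamination=0.1, random_state=0)
      model.fit(normal_activity)

      # A 3 a.m. login with a very large transfer should be scored as an outlier.
      suspicious = np.array([[3, 200, 950]])
      print(model.predict(suspicious))   # [-1] means "flag for human review"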

  • Behavioural APIs for secure communicating systems

    Supervisors: Laura Bocchi and Andy King

    In this project you will develop a theory and a tool for the design and implementation of secure communicating systems. The project will centre on Behavioural APIs. Behavioural APIs are abstract specifications of application-level protocols; they go beyond traditional APIs in that they specify conversation patterns among components/services (and not just sets of supported operations). Behavioural APIs have been successfully applied to guarantee safety (e.g., absence of deadlocks) via code generation, verification and monitoring. Security has been explored only to a limited extent in this context. The project will focus on data-centric security and provenance, but may also involve other security aspects.
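
    As a small, hypothetical illustration of the idea (the protocol and operation names are invented), the sketch below contrasts a behavioural API with a traditional one: rather than merely listing the operations login, query and logout, it specifies the order in which a conversation may use them, and a runtime monitor rejects calls that break that pattern.

      # Invented protocol for illustration: login -> (query)* -> logout.
      # A behavioural API constrains the *order* of operations, not just
      # the set of operations that exist.
      ALLOWED = {
          "start":   {"login": "session"},
          "session": {"query": "session", "logout": "done"},
          "done":    {},
      }

      class Monitor:
          """Runtime monitor that rejects calls violating the conversation pattern."""
          def __init__(self):
              self.state = "start"

          def call(self, op):
              nxt = ALLOWED[self.state].get(op)
              if nxt is None:
                  raise RuntimeError(f"'{op}' not permitted in state '{self.state}'")
              self.state = nxt

      m = Monitor()
      m.call("login"); m.call("query"); m.call("logout")   # conforms to the protocol

      bad = Monitor()
      try:
          bad.call("query")          # protocol violation: query before login
      except RuntimeError as err:
          print(err)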

  • Minimal Cost Quantum Security Infrastructures

    Supervisor: Carlos Perez Delgado

    The existence of quantum algorithms, such as Shor's Integer Factorisation Algorithm, implies that quantum computers pose an existential threat to a sizeable portion of our current security infrastructures. Many digital services, from privacy to authentication, rely on protocols such as RSA or Diffie-Hellman, which in turn rely on the hardness of factorisation. At the same time, quantum information provides some ways to implement cryptographic primitives such as privacy (e.g. quantum key distribution). Some of these, however, incur heavy overhead costs that may make widespread adoption infeasible (e.g. heavy use of quantum communication).

    The purpose of this research project is to propose new, complete cryptographic infrastructures that are provably secure against both classical and quantum attacks, and have provably optimal overhead costs.
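
    As a toy, purely classical illustration of why factorisation hardness matters (the numbers are tiny and chosen only for readability), the sketch below recovers an RSA private key by brute-force factoring of the public modulus; Shor's algorithm is what would make the factoring step feasible for realistic key sizes on a quantum computer.

      # Toy illustration (classical, not quantum): once the RSA modulus is
      # factored, the private key follows immediately. Brute force works here
      # only because the primes are tiny; Shor's algorithm would make this
      # step efficient even at realistic key sizes.
      p, q = 61, 53                  # secret primes
      n, e = p * q, 17               # public key: n = 3233, e = 17

      # The attacker sees only (n, e) and factors n.
      f = next(d for d in range(2, n) if n % d == 0)
      phi = (f - 1) * (n // f - 1)
      d = pow(e, -1, phi)            # private exponent recovered (Python 3.8+)

      cipher = pow(42, e, n)         # someone encrypts the message 42
      print(pow(cipher, d, n))       # the attacker decrypts it: prints 42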

  • Reverse Engineering for Security

    Supervisor: Andy King

    Reverse engineering is the process of taking a software artifact, such as a binary, and figuring out what it does. Reversing is important in the security industry, where security engineers frequently have to inspect binaries when searching for security holes. This project will not develop tooling for reversing a binary to, say, a C program or even an intermediate language. Rather, the project will develop tools that explain what a binary does by annotating it with information that details the values registers might store. This will be achieved not by directly executing the binary (since the binary may be malicious), but by following all paths through it. In this way, it is possible to work out the values that registers might store at each point in the binary. The studentship will develop this idea and apply it to build tools that support security engineers.
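
    As a toy sketch of the idea (the mini instruction set and program are invented for illustration), the fragment below follows every path through a small program and records, for each register, the set of values it might hold, without ever executing the code itself.

      # Toy value analysis over an invented mini instruction set: collect
      # every value each register might hold on any path, without executing
      # the (possibly malicious) code.
      from itertools import product

      # 'choose' models a branch that assigns one value on one path and
      # another value on the other path.
      program = [
          ("mov",    "r0", 5),
          ("choose", "r1", (1, 2)),
          ("add",    "r2", ("r0", "r1")),
      ]

      values = {}                     # register -> set of possible values
      for op, dest, arg in program:
          if op == "mov":
              values[dest] = {arg}
          elif op == "choose":
              values[dest] = set(arg)
          elif op == "add":
              a, b = arg
              values[dest] = {x + y for x, y in product(values[a], values[b])}

      print(values)   # {'r0': {5}, 'r1': {1, 2}, 'r2': {6, 7}}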

  • Synthesizing Security and Privacy Requirements in Socio-technical Systems

    Supervisors: Özgür Kafalı, TBD: Kent Law School

    Goal: To provide a formal process for developing comprehensive and consistent security and privacy requirements by taking into account both technical and social considerations.

    Motivation: Regulations and functional requirements help developers understand how software and its users should behave in specific situations. Breach reports help developers and security analysts identify cases where a deployed system fails, or is misused, whether maliciously or accidentally. While such artifacts are helpful, they are often ambiguous, inconsistent, and incomplete. Moreover, practical systems often need to comply with regulations that may overlap or conflict in subtle ways, thereby exacerbating the complexity of navigating through such artifacts.

    Plan: Formally analyze textual software artifacts via a collection of knowledge representation techniques and reasoning methodologies such as normative reasoning, ontologies, and automated information extraction, with exemplars on healthcare laws and regulations, social network privacy, and IoT and mobile applications.

    Contributions: A systematic and repeatable security requirements engineering process that includes socio-technical system design, and verification of stakeholder needs and user expectations against software implementation.

  • Digital Forensics by Design

    Supervisors: Özgür Kafalı, TBD: School of Psychology

    Goal: To build forensics capabilities into software and enhance all three core phases of the digital forensics process, i.e. collection of digital evidence; examination of evidence via monitoring and diagnosis; and reporting and validation of forensic hypotheses in support of a breach investigation.

    Motivation: Data breaches are inevitable, no matter how well organisations fortify their software systems or train their users. Breaches may take months to detect and contain, costing organisations valuable resources. As the variety of breaches increases (whether caused by a malicious or criminal attack, a system glitch, or human error), and the amount of system logs generated to detect such breaches grows in parallel, it becomes increasingly difficult for a forensics analyst to sort through the logs and come up with a diagnosis.

    Plan: Explore formal AI-based methods such as intention recognition, temporal reasoning, and argumentation, as well as gamification techniques.

    Contributions: A comprehensive misuse profile customisable for specific software products based on understanding of human decision making and insider threats; an adaptive logging mechanism that can be embedded in a software implementation.
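
    As a minimal, hypothetical sketch of what an adaptive logging mechanism could look like (the triggering rule of three consecutive failed logins is invented for illustration), the fragment below uses Python's standard logging module to raise the logging level automatically when suspicious activity is observed, so that richer evidence is available to a forensics analyst.

      # Minimal sketch of adaptive logging: verbosity is escalated when
      # suspicious events occur. The trigger (three failed logins in a row)
      # is an invented example.
      import logging

      logging.basicConfig(level=logging.WARNING)
      logger = logging.getLogger("app")

      failed_logins = 0

      def record_login(user, success):
          global failed_logins
          if success:
              failed_logins = 0
              logger.debug("login ok: %s", user)
          else:
              failed_logins += 1
              logger.warning("login failed: %s (%d in a row)", user, failed_logins)
              if failed_logins >= 3:
                  # Escalate: capture fine-grained evidence from now on.
                  logger.setLevel(logging.DEBUG)
                  logger.debug("adaptive logging enabled after repeated failures")

      for ok in (False, False, False, True):
          record_login("alice", ok)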

  • Detecting Filthy Tampers

    Supervisors: Andy King and Laura Bocchi

    Even if a compiler is verified, so that one is assured that the executed code conforms to the behaviour of a high-level program, there is no reason to believe that the low-level code has not been tampered with post-compilation to insert a back-door or some other malicious behaviour through binary rewriting. So even if the source code has been published for public scrutiny, one has to check that the low-level code conforms to the intended behaviour of the source. This is problematic if the compiler is a black box and one has no control over the compilation process. A compiler will typically apply loop optimisations, such as loop inversion, which means that the control structures of the low-level code are not in a one-to-one relationship with those of the high-level code. The project will therefore aim to build model checking techniques which are tolerant of syntactic differences between high-level and low-level code, in order to search for paths (behaviours) which occur in the low-level code but were never intended in the source.
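
    To make the loop inversion point concrete, the sketch below (written in Python purely for illustration; a real compiler performs this transformation on low-level code) shows a source-level loop and its inverted form: the two compute the same result, but their control structures no longer match one-to-one.

      # Source-level loop, as written by the programmer.
      def count_source(n):
          i, total = 0, 0
          while i < n:
              total += i
              i += 1
          return total

      # The same loop after "loop inversion", roughly as a compiler might emit
      # it: the guard is tested once up front and the body becomes a
      # bottom-tested loop (a do-while, emulated here with a break).
      def count_inverted(n):
          i, total = 0, 0
          if i < n:
              while True:
                  total += i
                  i += 1
                  if not (i < n):
                      break
          return total

      assert count_source(10) == count_inverted(10) == 45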

  • Self-adaptation applied to Security and Privacy

    Supervisor: Rogério de Lemos

    A future challenge for any system, from critical infrastructures to the Internet of Things, is the ability of systems to look after themselves regarding security and privacy. The notion of self-protection will be a fundamental requirement in future systems, considering their complexity and connectivity. At Kent we have worked on self-adaptive authorisation infrastructures, and have built prototypes that enable insider threats to be handled using self-adaptive principles (https://saaf-resource.kent.ac.uk/). The goal is to continue this work in other directions, mainly in the area of the provision of assurances. If guarantees need to be provided about the security and privacy of a system, then the system needs to be perpetually evaluated at run-time, and this is a huge challenge.
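
    As a toy sketch of a self-adaptive (monitor-analyse-plan-execute) loop applied to authorisation (the events, threshold and response are invented and do not reflect the SAAF prototype referenced above), the fragment below revokes a permission when a user's access pattern exceeds an expected rate.

      # Toy monitor-analyse-plan-execute loop for self-adaptive authorisation.
      access_log = [{"user": "bob", "resource": "payroll"}] * 5
      permissions = {"bob": {"payroll", "email"}}

      def monitor(log):                       # Monitor: gather raw observations
          return log

      def analyse(events):                    # Analyse: detect abnormal usage
          counts = {}
          for ev in events:
              key = (ev["user"], ev["resource"])
              counts[key] = counts.get(key, 0) + 1
          return [key for key, c in counts.items() if c > 3]   # invented threshold

      def plan(anomalies):                    # Plan: decide on an adaptation
          return [("revoke", user, res) for user, res in anomalies]

      def execute(actions):                   # Execute: change the running system
          for _, user, res in actions:
              permissions[user].discard(res)

      execute(plan(analyse(monitor(access_log))))
      print(permissions)                      # {'bob': {'email'}}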

  • Human-based Decision Making in Resilient Cyber Security Systems

    Supervisor: Rogério de Lemos

    Systems are becoming more complex and interconnected, and access to system resources needs to be controlled in an efficient and trusted way. Humans alone are not able to manage the complexity of these emerging systems, hence the need to automate decision making regarding the protection of resources. However, full automation is undesirable because there are limits to what can be achieved with self-adaptation, given the unpredictable nature of attacks. This requires humans to be involved in some of the non-mundane decisions regarding the protection of the system. The challenge is how to involve humans in decision making when systems, their goals and their context may evolve in ways that humans cannot follow closely enough to maintain an accurate interpretation of the system's state, which in turn affects how insightful and informative their decisions can be.

  • Self-adaptive Privacy Guardian

    Supervisors: Rogério de Lemos and Budi Arief

  • Perpetually evolving mechanisms for detecting and handling insider threats

    Supervisors: Rogério de Lemos and Budi Arief

  • Various topics in cyber security (see below)

    Supervisor: Shujun Li

    Shujun Li offers a number of topics for potential PhD applicants; a list of these topics is maintained on his personal website.

  • Cyber security awareness campaigns: What works and what doesn't?

    Supervisor: Jason R.C. Nurse

    While technology is a key component of cyber security approaches, human users also play a critical role in maintaining corporate security. There are a number of ways to get users involved, but one of the most attractive for organisations is security awareness campaigns. In these campaigns, companies adopt a range of training sessions (general and targeted), produce awareness material (e.g., posters, leaflets), and engage in simulated sessions (e.g., phishing employees directly, and security gamification). The aim of this project is to investigate what really works and what doesn't, out of the range of awareness techniques proposed. This project will build on my previous work, engage with several stakeholders across industry and academia, and contribute to the knowledge present in current research.

  • Cyber security and psychology: where do we go from here?

    Supervisor: Jason R.C. Nurse, TBD: School of Psychology

    The human aspect of cyber security has become increasingly prominent in research and practice, a reality undoubtedly motivated by the range of cyberattacks that exploit individuals (e.g., phishing, social engineering), and the broader challenge of building secure and usable systems. This project seeks to combine the fields of computing, HCI and psychology to investigate the range of challenges faced by users, designers and implementers in creating systems and environments that are supportive of users. The goal will be to understand these challenges and to develop novel approaches, methods and techniques to address them. These will encompass technical as well as socio-technical solutions. As there are several different areas on which this project could focus, the background and research interests of the student will shape the research.

  • Network Intrusion Detection using Data Analytics

    Supervisors: Peter Rodgers and Budi Arief

    This project sits at the intersection of Data Mining, Information Visualization and Security research. While there are a number of different techniques currently in use for network intrusion detection, they typically focus on examining single nodes or the traffic on a single edge. However, some attacks may be best detected by looking at the sub-network level, or at groups of nodes/edges as they change over time. The novel concept is to define patterns of sub-networks, which can then be compared to detect anomalies in the network that are flagged as potentially malicious behaviour. We will provide a visual analytics demonstrator where users can monitor and explore a visual representation of a network. This graph pattern matching has the additional benefit of being easily integrated into visual tools: users can both see the patterns identified in the network and define and tune the patterns to be found.

    The data to be evaluated in the proposed research will be temporal network data. Time slices will be defined, with edges connecting nodes between time slices that are unchanged over the time interval. Initial data will come from standard open-access data sources. We will divide the data into a part used for development and a part reserved for later testing of the effectiveness of the system. Evaluation will have three strands: (1) usability of the software by analysts; (2) effectiveness of the intrusion detection against other state-of-the-art intrusion detection tools (to see if our method can detect intrusions that are undetected by others); and (3) application in the field to see if the system is ready for real-world use.
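
    As a minimal sketch of the kind of temporal sub-network comparison described above (it assumes the networkx package; the pattern, data and threshold are invented for illustration), the fragment below measures a simple structural pattern, the largest fan-out in each time slice, and flags slices that deviate sharply from the baseline.

      # Minimal sketch (assumes networkx): compare a simple sub-network
      # pattern across time slices and flag slices that deviate sharply.
      # Here the "pattern" is just the largest fan-out of any node; real
      # patterns would be richer subgraphs.
      import networkx as nx

      time_slices = [
          [("a", "b"), ("a", "c"), ("b", "c")],      # t0: normal traffic
          [("a", "b"), ("b", "c"), ("c", "d")],      # t1: normal traffic
          [("x", n) for n in "abcdefgh"],            # t2: sudden fan-out burst
      ]

      max_fanout = []
      for edges in time_slices:
          g = nx.DiGraph()
          g.add_edges_from(edges)
          max_fanout.append(max(deg for _, deg in g.out_degree()))

      baseline = sum(max_fanout[:2]) / 2             # baseline from early slices
      for t, f in enumerate(max_fanout):
          if f > 2 * baseline:                       # illustrative threshold
              print(f"slice t{t}: possible anomaly (fan-out {f} vs baseline {baseline:.1f})")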

School of Computing, University of Kent, Canterbury, Kent, CT2 7NF

Enquiries: +44 (0)1227 824180 or contact us.

Last Updated: 18/03/2019