AI Research Group - suggested PhD projects

Here are some suggested topics for PhD projects from our group members. These projects are merely suggestions and illustrate the broad interests of the research group. Prospective research students are welcome and encouraged to propose their own projects in these or other research areas in the field. All applicants are invited to contact the academic associated with the project when making an application.

Machine Learning with Fairness-Aware Classification Algorithms

Contact: Alex Freitas

This project involves the classification task of machine learning, where an algorithm has to predict the class of an object (e.g. a customer or a patient) based on properties of that object (e.g. characteristics of a customer or patient). There are now many types of classification algorithms, and in general these algorithms were designed with the sole (or main) goal of maximizing predictive performance. As a result, the application of such algorithms to real-world data about people often leads to predictions that have good predictive accuracy but are unfair, in the sense of discriminating (being biased) against certain groups or types of people, characterized e.g. by the values of attributes like gender or ethnicity. In the last few years, however, there has been a considerable amount of research on fairness-aware classification algorithms, which take into account the trade-off between achieving high predictive accuracy and a high degree of fairness. The project will develop new classification algorithms to cope with this trade-off, focusing on classification algorithms that produce interpretable predictive models, rather than black-box models.
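As a purely illustrative sketch of the accuracy/fairness trade-off discussed above, the following Python snippet computes predictive accuracy alongside one simple group-fairness measure, the demographic parity gap. The data, group labels and function names are invented for illustration, not taken from the project:

```python
# Sketch: accuracy vs. demographic parity for a binary classifier.
# All data below is a hypothetical toy example.

def accuracy(y_true, y_pred):
    """Fraction of predictions matching the true labels."""
    return sum(t == p for t, p in zip(y_true, y_pred)) / len(y_true)

def demographic_parity_gap(y_pred, group):
    """Absolute difference in positive-prediction rates between two groups."""
    rate = lambda g: (sum(p for p, gr in zip(y_pred, group) if gr == g)
                      / sum(1 for gr in group if gr == g))
    return abs(rate(0) - rate(1))

# Toy predictions: group 1 receives positive predictions more often.
y_true = [1, 0, 1, 1, 0, 0, 1, 0]
y_pred = [1, 0, 0, 1, 0, 1, 1, 1]
group  = [0, 0, 0, 0, 1, 1, 1, 1]

print(accuracy(y_true, y_pred))               # 0.625
print(demographic_parity_gap(y_pred, group))  # 0.25 (0 would be perfectly fair)
```

A fairness-aware algorithm would aim to reduce the second number without sacrificing too much of the first.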

Relevant References:

[1] van Giffen, B., Herhausen, D., & Fahse, T. Overcoming the pitfalls and perils of algorithms: a classification of machine learning biases and mitigation methods. Journal of Business Research 144, 93-106, 2022.

[2] Mehrabi, N., Morstatter, F., Saxena, N., Lerman, K., & Galstyan, A. A survey on bias and fairness in machine learning. arXiv preprint: arXiv:1908.09635. 2019.

 

Cognition-enabled lifelong robot learning of behavioural and linguistic experience

Contact: Ioanna Giorgi

Present-day cognitive robotics models draw on a hypothesised developmental paradigm of human cognitive functions to devise low-order skills in robots, such as perception, manipulation, navigation and motor coordination. These methods exploit embodied and situated cognition theories that are rooted in motor behaviour and the environment. In other words, the body of a physical artefact (e.g., a robot) and its interactions with the environment and other organisms in it contribute to the robot’s cognition. However, it is not clear how these models can explain or scale up to the high-level cognitive competence observed in human behaviour (e.g., reasoning, categorisation, abstraction and voluntary control). One approach is to model robot learning of behavioural and cognitive skills in incremental and developmental stages that resemble child development. According to child psychology and behaviour, conceptual development starts from perceptual clustering (e.g., prelinguistic infants grouping objects by colour) and progresses to nontrivial abstract thinking, which requires a fair amount of language. Thus, to solve the problem of modelling high-level cognitive skills in robots, language, in interaction with the robot’s body, becomes inseparable from cognition. This project is aimed at following a cognitive and developmental approach to robot learning that will allow robots to acquire behavioural and linguistic skills at a level of cognitive competence and adaptation comparable to that of humans. This learning should be lifelong: humans apply earlier-learned skills to make sense of continuous novel stimuli, which allows them to develop, grow and adjust to more complex practices. Such a cognitive robot could be used across various themes: human-robot interaction using theory of mind (ToM) skills for robots, social robots, and joint human-robot collaboration.

Note: The Cognitive Robotics and Autonomous Systems (CoRAS) laboratory at the School of Computing has access to several humanoids (NAO) and socially interactive robot platforms (Buddy Pro, Q.BO One, Amy A1), mobile robots (Turtlebot Waffle Pi, Burger), pet-like companion robots and gadgets like AR Epson glasses and Microsoft HoloLens.

 

Attention model for agent social learning during human-robot interaction

Contact: Ioanna Giorgi

Successful human-robot interaction requires that robots learn by observing and imitating human behaviour. The theory of learning behaviour through observation is referred to as social learning. Behavioural learning can also be enhanced by the environment itself and through reinforcement (i.e., establishing and encouraging a pattern of behaviour). One important component of such learning is cognitive attention, which deals with the degree to which we notice a behaviour. Cognitive attention renders some inputs more relevant while diminishing others, on the grounds that more focus is needed for the important stimuli in the context of social learning. Attention brings forth positive reinforcement (reward) or negative reinforcement (punishment). If the reward is greater than the punishment, behaviour is more likely to be imitated and reciprocated. In human-robot interaction, attention is crucial for two reasons: 1) to respond to or reciprocate the behaviour appropriately during the interaction, and 2) to learn or imitate that behaviour for future contingencies. This project is aimed at devising a cognitive attention model of a robot for social learning. The model will include memory, reasoning, language and multi-sensory data processing, i.e., “natural” stimuli during the interaction such as vision, speech and sensorimotor experience. It can be based on a cognitive architecture approach or alternative computational approaches. The solution should ideally encompass multiple aspects of interaction (verbal and non-verbal), but it can also focus on specific aspects (e.g., visual attention or intention reading).
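As a minimal sketch of how attention could render some stimuli more relevant than others, the following snippet computes softmax attention weights over hypothetical salience scores for different sensory channels. The channels, scores and function name are illustrative assumptions, not part of the project:

```python
import math

def attention_weights(salience):
    """Softmax over stimulus salience scores: focus shifts to salient stimuli."""
    exps = [math.exp(s) for s in salience]
    total = sum(exps)
    return [e / total for e in exps]

# Hypothetical salience scores for vision, speech and sensorimotor channels.
weights = attention_weights([2.0, 1.0, 0.5])
print(weights)  # weights sum to 1; the most salient channel dominates
```

In a full model such weights would be modulated by memory, reasoning and reinforcement signals rather than fixed scores.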

Note: The Cognitive Robotics and Autonomous Systems (CoRAS) laboratory at the School of Computing has access to several humanoids (NAO) and socially interactive robot platforms (Buddy Pro, Q.BO One, Amy A1), mobile robots (Turtlebot Waffle Pi, Burger), pet-like companion robots and gadgets like AR Epson glasses and Microsoft HoloLens.

 

How can a robot learn skills from a human tutor?

Contact: Giovanni Masala

The aim of this project is to enhance robot learning from a human tutor, similar to a child learning from a human teacher. The agent will develop the ability to communicate through natural language from scratch, by interacting with a tutor, recognising their verbal and non-verbal inputs as well as emotions, and, finally, grounding word meanings in the external environment. The project will start from an existing neuro-cognitive architecture under development [1], based on a human-like approach to learning, progressively incrementing knowledge and language capabilities through experience and ample exposure, using a corpus based on early language lexicons (preschool literature). The architecture will integrate visuospatial information-processing mechanisms for embodied language acquisition, exploiting affective mechanisms of emotion detection for learning and cognition. The agent will be embodied in a humanoid robot, as opposed to a computer or a virtual assistant, to enable real-world interactions with humans and the external environment, and to learn and refine its natural language understanding abilities guided by the teacher’s emotions and visual input (object associations with words, facial expressions, and gestures). Emotions will influence the cognitive attention of the robotic agent, modulating the selectivity of attention on specific tasks, words, and objects, and motivating actions and behaviour.

Relevant References:

[1] Golosio B, Cangelosi A, Gamotina O, Masala GL. A Cognitive Neural Architecture Able to Learn and Communicate through Natural Language. PLoS ONE 10(11): e0140866, 2015.

Note: The Cognitive Robotics and Autonomous Systems (CoRAS) laboratory at the School of Computing has access to several humanoids (NAO) and socially interactive robot platforms (Buddy Pro, Q.BO One, Amy A1), mobile robots (Turtlebot Waffle Pi, Burger), pet-like companion robots and devices like AR Epson glasses and Microsoft HoloLens.

 

Explainability and Interpretability of Machine/Deep learning techniques in medical imaging

Contact: Giovanni Masala

In medicine, the acceptance of machine learning systems depends not only on their performance but also on the degree to which a human can understand the cause of a decision. Nowadays, the application of computer-aided detection systems in radiology is often based on deep learning systems, thanks to their high performance. In general, more accurate models are less explainable, and there is scientific interest in the field of Explainable Artificial Intelligence in developing new methods that explain and interpret ML models. There is no concrete mathematical definition of interpretability or explainability, nor are they measured by an agreed metric; however, a number of attempts have been made to clarify not only these two terms but also related concepts such as comprehensibility. A possible target of this research (though other medical conditions could also be considered) is a model to assess the severity of breast arterial calcifications. Breast arterial calcification (BAC) is calcium deposition in peripheral arterioles, and there is increasing evidence that BAC is a good indicator of the risk of cardiovascular disease. The accurate and automated detection of BACs in mammograms remains an unsolved task, and the technology is far from clinical deployment. The challenge is to develop an explainable model applicable to BAC detection, able to discriminate between severe and weak BACs in patients’ images.

 

Information Visualisation Directed by Graph Data Mining

Contact: Peter Rodgers

Data visualisation techniques are failing in the face of large data sets. This project attempts to increase the scale of graph data that can be visualised by developing data mining techniques to guide interactive visualisation. This sophisticated combination of information visualisation and data mining promises to greatly increase the size of data understandable by analysts, and will advance the state of the art in both disciplines. On successful completion, publications in high quality venues are envisaged. This project is algorithmically demanding, requiring good coding skills. The implementation language is negotiable, but Java, JavaScript or C++ are all reasonable target languages. Data will be derived from publicly available network intrusion or social network data sets. Tasks in this research project include: (1) implementing graph display software and interfaces; (2) developing project-specific visualisation algorithms; (3) integrating graph pattern matching and other graph data mining systems into the visualisation algorithms.

 

Visual Analytics for Set Data

Contact: Peter Rodgers

Visual Analytics is the process of gaining insights into data through combining AI and information visualization. At present, visual analytics for set-based data is largely absent. There are many sources of set-based data, including social networks as well as medical and biological information. This project will look at producing set mining algorithms which can then be used to support set visualization methods such as Euler/Venn diagrams or Linear diagrams. Firstly, existing data mining methods will be used to produce useful information about sets and the data instances in them. After this, more complex algorithms for subset and set isomorphism will be developed to allow for pattern matching within set data. These set mining methods will be integrated into Euler diagram based exploratory set visualization techniques.
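As a small illustration of the kind of set mining described above, this sketch finds proper-subset relations within a hypothetical family of named sets; such structural information is exactly what an Euler or linear diagram could then visualise. The sets and names are invented for illustration:

```python
from itertools import combinations

def subset_relations(named_sets):
    """Return all proper-subset relations (a, b) meaning set a ⊂ set b."""
    rels = []
    for (a, sa), (b, sb) in combinations(named_sets.items(), 2):
        if sa < sb:          # '<' on Python sets tests proper subset
            rels.append((a, b))
        elif sb < sa:
            rels.append((b, a))
    return rels

# Hypothetical set system, e.g. data instances grouped by category.
sets = {"A": {1, 2}, "B": {1, 2, 3}, "C": {4}}
print(subset_relations(sets))  # [('A', 'B')]
```

Pattern matching for set isomorphism, as proposed in the project, would extend this idea from containment to structure-preserving correspondences between set systems.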

 

Using Soft Nanomembrane Electronics for Home-based Anxiety Monitoring

Contact: Jim Ang

This project concerns sensor-enhanced virtual reality systems for mental health care and rehabilitation. New immersive technologies, such as virtual reality (VR) and augmented reality (AR), are playing an increasingly important role in the digital health revolution. Significant research has been carried out at the University of Kent, in collaboration with medical scientists/practitioners, psychiatrists/psychologists, digital artists and material scientists (for novel sensor design and integration with VR). Such projects include designing VR for dementia care, eating disorder therapy, eye disorder therapy and VR-enabled brain-machine interactions. This PhD research can take on the following directions: (1) co-design of VR for a specific healthcare domain, involving key stakeholders (e.g. patient representatives, clinicians, etc.) to understand the design and deployment opportunities and challenges in realistic health contexts; (2) deploying and evaluating VR prototypes to study the impact of the technologies on the target groups; (3) designing and evaluating machine learning algorithms to analyse behavioural and physiological signals for clinically meaningful information, e.g. classification of emotion, detection of eye movement, etc.

Relevant publications: 

[1] M Mahmood, S Kwon, H Kim, Y Kim, P Siriaraya, J Choi, B Otkhmezuri, K Kang, KJ Yu, YC Jang, CS Ang, W Yeo (2021) Wireless Soft Scalp Electronics and Virtual Reality System for Motor Imagery-Based Brain–Machine Interfaces. Advanced Science. 8(19).

[2] S Mishra, K Yu, Y Kim, Y Lee, M Mahmood, R Herbert, CS Ang, W Yeo, J Intarasirisawat, Y Kown, H Lim (2020). Soft, wireless periocular wearable electronics for real-time detection of eye vergence in a virtual reality toward mobile eye therapies. Science Advances. 6 (11), eaay1729. 

[3] L Tabbaa, CS Ang, V Rose, P Siriaraya, I Stewart, KG Jenkins, M Matsangidou (2019) Bring the Outside In: Providing Accessible Experiences Through VR for People with Dementia in Locked Psychiatric Hospitals, Proceedings of the CHI 2019 Conference on Human Factors in Computing Systems. 

[4] M Matsangidou, B Otkhmezuri, CS Ang, M Avraamides, G Riva, A Gaggioli, D Iosif, M Karekla (2020). “Now I can see me” designing a multi-user virtual reality remote psychotherapy for body weight and shape concerns. Human–Computer Interaction. 1-27.

 

Optimisation of Queries over Virtual Knowledge Graphs

Contact: Elena Botoeva

Virtual Knowledge Graphs (also known as Ontology-Based Data Access) provide user-friendly access to Big Data stored in (possibly multiple) data sources, which can be traditional relational ones or more novel ones such as document and triple stores. In this framework an ontology is used as a conceptual representation of the data, and is connected to the data sources by means of a mapping. Users formulate queries over the ontology using a high-level query language like SPARQL; user queries are then automatically translated into queries over the underlying data sources, and the latter are executed by the database engines. The efficiency of the whole approach depends heavily on the optimality of the data source queries. While the technology is quite mature when the underlying data sources are relational, there are still many open problems when it comes to novel data sources, such as MongoDB, graph databases, etc. The objective of this PhD project is to design novel techniques for optimising data source queries arising in the context of Virtual Knowledge Graphs.
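To illustrate the mapping idea, here is a deliberately simplified sketch of translating a single SPARQL triple pattern into a data source query. The mapping, table and column names are all invented; a real translator (and the optimisations this project targets) must handle joins of many triple patterns, filters and non-relational backends:

```python
# Hypothetical mapping from ontology properties to relational columns:
# property -> (table, subject column, object column)
MAPPING = {
    ":name": ("customer", "id", "full_name"),
    ":city": ("customer", "id", "city"),
}

def translate_triple(prop):
    """Translate the SPARQL triple pattern ?s <prop> ?o into a SQL query."""
    table, s_col, o_col = MAPPING[prop]
    return f"SELECT {s_col} AS s, {o_col} AS o FROM {table}"

print(translate_triple(":name"))
# SELECT id AS s, full_name AS o FROM customer
```

Naively translating each triple pattern separately and joining the results tends to produce highly redundant queries, which is precisely where query optimisation becomes critical.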

 

Heuristics for Scalable Verification of Neural Networks

Contact: Elena Botoeva

Due to the success of Deep Learning, neural networks are now being employed in a variety of safety-critical applications such as autonomous driving and aircraft landing. Despite showing impressive results at various tasks, neural networks are known to be vulnerable (hence, not robust) to adversarial attacks: perturbations to an input that are imperceptible to the human eye can lead to incorrect classification. Robustness verification of neural networks is therefore currently a very hot topic both in academia and industry. One of the main challenges in this field is deriving efficient techniques that can verify networks with hundreds of thousands or millions of neurons in reasonable time, which is not trivial given that exact verification is intractable (NP- or coNP-complete for ReLU-based neural networks, depending on the exact verification problem). Incomplete approaches generally offer better scalability, but at the cost of completeness. The aim of the proposed PhD project will be to learn heuristics for efficient verification of neural networks.
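As a toy illustration of an incomplete verification technique, the following sketch propagates input intervals through one affine layer and a ReLU using interval arithmetic, a simple bound-propagation scheme. The weights and perturbation bounds are invented for illustration; real verifiers use much tighter relaxations over far larger networks:

```python
def affine_interval(w, b, bounds):
    """Propagate per-input intervals through y = w.x + b using interval arithmetic."""
    lo = b + sum(wi * (l if wi >= 0 else h) for wi, (l, h) in zip(w, bounds))
    hi = b + sum(wi * (h if wi >= 0 else l) for wi, (l, h) in zip(w, bounds))
    return lo, hi

def relu_interval(lo, hi):
    """Propagate an interval through ReLU."""
    return max(lo, 0.0), max(hi, 0.0)

# Toy neuron y = ReLU(x1 - x2), inputs perturbed by ±0.1 around (1.0, 0.5).
bounds = [(0.9, 1.1), (0.4, 0.6)]
y_lo, y_hi = relu_interval(*affine_interval([1.0, -1.0], 0.0, bounds))
print(y_lo, y_hi)  # sound (but possibly loose) bounds on the output
```

If the resulting output bounds cannot rule out a misclassification, the method is inconclusive; this looseness is exactly the completeness that incomplete approaches trade for scalability.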

 

Machine learning and methods development in neuroimaging

Contact: Howard Bowman

I have current practical work on interpretable machine learning applied in neurology and psychiatry, with a particular focus on predicting recovery from stroke using structural MRI and identifying biomarkers of migraine and epilepsy using MEG and EEG. In particular, black-box machine learning has limited value in healthcare, since clinicians, patients and carers need explanations of the decisions made by artificial intelligence systems. One area we investigate to provide interpretable machine learning is Bayesian graphical models. I am keen to take on further PhD students in this area.

 

Computational and cognitive neuroscience

Contact: Howard Bowman

I have a number of lines of research that combine behavioural and neuroimaging experiments with computational modelling in order to understand how the mind emerges from the brain. A main focus is on understanding the role of consciousness in human perception. This uses neural network models and dynamical systems to understand the electrophysiological data that we record. I also have interests in memory systems, the role of oscillations in the brain, spiking neural networks and human attention. Some of these areas could be investigated using recordings from the human brain with depth electrodes in epilepsy patients, data to which we have access. There are a number of areas here that could be explored in a PhD.

Understanding Spiking Neural Networks

Contact: Dominique Chu

Spiking Neural Networks (SNNs) are brain-like neural networks. Unlike standard rate-coding neural networks, signals are encoded in time. This makes them ideal for processing data that has a temporal component, such as time-series data, video or music. Another advantage of SNNs is that there exists neuromorphic hardware that can efficiently simulate them. SNNs are generally thought to be “more powerful” than standard rate-coding networks. However, it is not clear precisely in what sense they are more powerful, or what precisely it is that makes them more powerful. The idea of this project is to investigate this claim using a combination of mathematical and computational methods. As such, the project will require an interdisciplinary research methodology at the interface between mathematics, computer science and neuroscience. The project would be suitable for a student who wishes to become an expert in an up-and-coming method in artificial intelligence. It offers scope for theoretical investigation, but will also require implementing neural networks.
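As a minimal illustration of encoding signals in time, the following sketch simulates a discrete-time leaky integrate-and-fire neuron: the same total input delivered early versus late produces different spike timings, which is the information a rate code would discard. All parameters here are illustrative assumptions:

```python
def lif_spike_times(inputs, tau=0.9, threshold=1.0):
    """Discrete-time leaky integrate-and-fire neuron: returns spike time steps."""
    v, spikes = 0.0, []
    for t, current in enumerate(inputs):
        v = tau * v + current    # leak the membrane potential, then integrate
        if v >= threshold:
            spikes.append(t)     # spike: information is carried by *when* this happens
            v = 0.0              # reset after firing
    return spikes

# Identical total input, delivered early vs. late:
print(lif_spike_times([0.6, 0.6, 0.0, 0.0]))  # [1]
print(lif_spike_times([0.0, 0.0, 0.6, 0.6]))  # [3]
```

Characterising what such temporal codes can compute, relative to rate codes, is one way to make the “more powerful” claim precise.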

 

Training algorithms for spiking neural networks

Contact: Dominique Chu

Spiking neural networks encode information through the temporal order of the signals. They are more realistic models of the brain than standard artificial neural networks and they are also more efficient in encoding information. Spiking neural networks are therefore very popular in brain simulations. A disadvantage of spiking neural networks is that not many efficient training algorithms are available. This project will be about finding novel training algorithms for spiking neural networks and comparing the trained networks with standard artificial neural networks on a number of benchmark AI tasks. An important part of this project will be not only to evaluate how well these spiking neural networks perform in relation to standard networks, but also to understand whether or not they are, as is often claimed, more efficient in the sense that they need smaller networks or fewer computing resources. The main approach will be to gain inspiration from existing theories about how the human brain develops and learns. These existing theories will then be adapted so as to develop efficient training algorithms. This project will be primarily within AI, but it will also provide the opportunity to learn and apply techniques and ideas from computational neuroscience.

 

Machine learning systems to improve medical diagnosis

Contact: Daniel Soria

Research shows that machine learning methods are extremely useful for discovering and identifying patterns that can help clinicians to tailor treatments. However, the implementation of such data mining procedures may be challenging because of high-dimensional data sets, and the choice of appropriate machine learning methods may be tricky.

The aim of the research project will be to design and develop new intelligent machine learning systems with a high degree of flexibility, suitable for disease prediction/diagnosis, that are also easily understandable and explicable to non-experts in the field. Data will be sought from the UK Biobank, to examine whether the selected features are correlated with the occurrence of specific diseases (e.g., breast cancer), whether these relationships persist in the presence of covariates, and the potential role of comorbidities (e.g., obesity, diabetes and cardiovascular diseases) in the assessment of the developed models.

 

Explainable Artificial Intelligence Systems for Massive-Scale Nonstationary Data Streams

Contact: Xiaowei Gu

Thanks to the rapid development of information technology and the electronic manufacturing industry, massive volumes of streaming data are generated from various Internet-based activities, in different forms such as text, images, audio and video. The information embedded within streaming data is of paramount importance for enhanced insight into, and decision-making about, the underlying problem. The need to extract valuable information from such data has led numerous international organizations and companies to deploy advanced data mining techniques. However, the very high volume, velocity, variability and complexity of streaming data pose great challenges to traditional AI technologies in data-intensive applications. In particular, the lack of transparency and explainability has been a great barrier to the practical implementation of the relevant techniques in life-critical and financial applications. Therefore, there is significant demand for more advanced data-intensive technologies that offer high performance and efficiency while providing model transparency and explainability. The main aim of this project is to develop cutting-edge computational intelligence technologies for massive-scale data stream mining and modelling in nonstationary environments. In particular, the work of this project will construct an advanced explainable AI methodology by integrating the latest developments in deep learning, ensemble learning, fuzzy systems and pattern recognition. The developed methodology will be further implemented for a carefully selected real-world application, to be chosen from a range of possible problem domains, including: autonomous driving scene analysis, remotely sensed imagery analysis, and high-frequency trading data analysis.
The project will provide a platform for an exceptional doctoral candidate to undertake research, involving both theoretical development and experimental investigation, within a world-leading research team for computational intelligence.

 

How creative are crime-related texts and what does this tell us about cyber crime?

Contact: Shujun Li, Anna Jordanous

The main aim of the PhD project is to investigate whether crime-related texts can be evaluated in terms of creativity using automatic metrics. Such a study will help understand how crime-related texts are crafted (by criminals and by automated tools, possibly via a hybrid human-machine teaming approach), how they have evolved over time, how they are perceived by human receivers, and how new methods can be developed to educate people about the tactics of cyber criminals. The PhD project will include the following four tasks: (1) collecting a large dataset of crime-related texts; (2) developing objective (automatable) creativity metrics using supervised machine learning, targeted towards evaluating the creativity of crime-related texts (e.g., phishing emails, online hate speech, grooming, cyber bullying, etc.); (3) applying the creativity metrics to the collected data to see how malevolent creativity has evolved over the years and across different crimes; (4) exploring the use of generative AI algorithms to create more creative, and therefore more deceptive, crime-related texts.

 

Computational creativity and automated evaluation

Contact: Anna Jordanous

In exploring how computers can perform creative tasks, computational creativity research has produced many systems that can generate creative products or creative activity. Evaluation, a critical part of the creative process, has not been employed to such a great extent within creative systems. Recent work has concentrated on evaluating the creativity of such computational systems, but there are two issues. Firstly, recent work in the evaluation of computational creativity has consisted of the system(s) being evaluated by external evaluators, rather than by the creative system evaluating itself, or evaluation by other creative software agents that may interact with that system. Secondly, the incorporation of self-evaluation into computational creativity systems *as part of guiding the creative process* is underexplored. In this project the candidate will experiment with incorporating evaluation methods into a creative system and analyse the results to explore how computational creativity systems can incorporate self-evaluation. The creative systems studied could be in the area of musical or linguistic creativity, or in a creative area of the student’s choosing. It is up to the student to decide whether to focus on methods for evaluating the quality of output from a creative system or the creativity of the system itself (or both). The PhD candidate would be required to propose how they will explore the above scenarios, for a more specific project. Anna is happy to guide students in this and help them develop their research proposal.

 

Expressive musical performance software

Contact: Anna Jordanous

Traditionally, when computational software performs music, the performances can be criticised for being too unnatural, lacking interpretation and, in short, being too mechanical. However, much progress has been made within the field of expressive musical performance and musical interpretation. Alongside these advances have been interesting findings in musical expectation (i.e. what people expect to hear when listening to a piece of music), as well as work on emotions that are present within music and on how information and meaning are conveyed in music. Each of these advances raises questions of how the relevant aspects could be interpreted by a musical performer. Potential application areas for computer systems that can perform music in an appropriately expressive manner include, for example, improving playback in music notation editors (like Sibelius), or the automated performance of music generated on-the-fly for ‘hold’ music (played when waiting on hold during phone calls). Practical work exploring this could involve writing software that performs existing pieces, or writing software that can improvise, interpreting incoming sound/music and generating an appropriate sonic/musical response to it in real time.

 

Quantum Computing & Post-Quantum Cryptography

Contact: Prof. Frank Wang

Representative papers: Near-Landauer-Bound Quantum Computing (IEEE Transactions on Quantum Engineering), Spin-Encoded Quantum Computer (Springer Nature), Landauer Bound in Quantum Computing (IEEE Access). Related seminar: A New Quantum Computer & Post-Quantum Cryptography.

Artificial Intelligence (Deep Learning, Neuromorphic Computing & Brain-like Computer)

Contact: Prof. Frank Wang

Representative papers: Adaptive Neuromorphic Architecture (Neural Networks), Memristor Neural Networks (IEEE Transactions), Beyond Memristors (Micromachines). Related keynote at Cambridge: Brain and Brain-Inspired Artificial Intelligence.

Cloud Computing & Security

Contact: Prof. Frank Wang

Representative papers: Entropy-Based Cloud (IEEE Transactions), ICMetrics for Cloud Computing (Journal of Cloud Computing), Grid-Oriented Storage (IEEE Transactions on Computers).

New Electronics & Memristor

Contact: Prof. Frank Wang

Representative papers: Topological Electronics (Springer Nature), Fractional Memristor (Applied Physics Letters), Triangular Table of Circuit Elements (IEEE Transactions).