25-27 Funded PhD Positions for the d-real Programme

  • Ireland
  • Applications have closed

Dublin City University, Trinity College Dublin, Technological University Dublin, University College Dublin and University of Galway

Deadline: May 24, 2023

DCU | TCD | TU Dublin | UCD | UoG 

d-real is funded by Science Foundation Ireland and by the contributions of industry and institutional partners.

Applications for d-real positions starting in September/October 2023 are now being accepted. The deadline for the second round of recruitment is 16:00 (Irish time) on Wednesday 24th May 2023. We will start assessing applications from the first round shortly. If you have any questions about the programme, please refer to the Frequently Asked Questions webpage. If you cannot find your answer there, you can email the Programme Manager, Dr Stephen Carroll (stephen.carroll@tcd.ie). For details on the application procedure, please visit our Apply to d-real page. To make an application, you must use our portal on the Good Grants platform here. Please note that additional positions will be added by 13th April 2023, so you may wish to delay your application until you can see all PhD projects offered through d-real.


Dublin City University

Code: 2023DCU01
Title: Towards context-aware evaluation of Multimodal MT systems
Supervision Team: Sheila Castilho, DCU (Primary Supervisor) / Yvette Graham, TCD (External Secondary Supervisor)
Description: Context-aware machine translation (MT) systems have recently been attracting interest in the community. Some work has been done on evaluation metrics that improve MT evaluation by considering discourse-level features, context span and appropriate evaluation methodology. However, little research has addressed how context-aware metrics can be developed for multimodal MT systems.
Multimodal content refers to documents that combine text with images, video and/or audio. It ranges from almost all the web content we view in our online activities to much of the messaging we send and receive on systems such as WhatsApp and Messenger. This project will investigate whether inputs such as images can be treated as context (along with text) when evaluating translation quality, and, if so, how automatic metrics can be developed to account for that multimodal nature. It will apply document- and context-level techniques being developed for automatic metrics to multimodal MT, making use of the multimodal context available in that scenario.


Code: 2023DCU02
Title: Designing VR environments for post-primary education
Supervision Team: Peter Tiernan, DCU (Primary Supervisor) / Cathy Ennis, TU Dublin (External Secondary Supervisor)
Description: Virtual Reality (VR) research has developed at pace in fields such as construction, engineering, and healthcare, with promising results. However, the use and development of VR environments in post-primary education settings remains low. There is a need for research which not only examines teachers’ perceptions of and attitudes towards VR but also focuses on the development of bespoke VR environments that meet the needs of post-primary teachers and their students and can demonstrate an impact on educational and motivational outcomes. This PhD will focus on designing, developing, and evaluating research- and practitioner-informed VR environments for post-primary teachers and their students. The study will engage with practising post-primary teachers to identify appropriate curricular areas that could benefit from the integration of VR environments. Together with existing literature and case studies, these curricular areas will be used as a basis to develop VR environments for post-primary education. Practising teachers will help to inform the design and development of environments according to their curricular goals and student needs. The VR environments will then be trialled and evaluated with teachers and their students, and the findings will be used to inform the use of VR in post-primary education.


Trinity College Dublin

Code: 2023TCD01
Title: Don’t Stand So Close to Me: Proxemics and Gaze Behaviors in the Metaverse
Supervision Team: Rachel McDonnell, TCD (Primary Supervisor) / Cathy Ennis, TU Dublin (External Secondary Supervisor) / Victor Zordan, Principal Scientist Roblox (Additional Supervisory Team Member)
Description: Given the prolific rise of the Metaverse, understanding how people connect socially via avatars in immersive virtual reality has become increasingly important. Current social platforms do not model non-verbal behaviors well, such as proxemics (how close people stand to one another) and mutual gaze (whether or not they are looking at one another). However, these cues are extremely important in social interaction and communication. In this project, we will record and investigate real eye gaze and proxemics in groups and build computational models to improve avatar motions in interactive immersive virtual spaces. This position is partially supported by funds from Roblox Corporation.


Code: 2023TCD02
Title: Investigating objective neural indices of music understanding
Supervision Team: Giovanni Di Liberto, TCD (Primary Supervisor) / Shirley Coyle, DCU (External Secondary Supervisor)
Description: Music is ubiquitous in our daily lives. Yet it remains unclear how our brains make sense of complex musical sounds, producing enjoyment and contributing to the regulation of mood, anxiety, pain, and perceived exertion during exercise. A recent methodological breakthrough demonstrated that brain electrical signals recorded with electroencephalography (EEG) during music listening reflect the listener’s attempt to predict upcoming sounds. This project aims to identify objective metrics of music processing based on EEG, pupillometry and other sensing modalities in progressively more ecologically valid settings. The project will culminate in the realisation of a brain-computer interface that reports the listener’s level of music “understanding” in real time. In doing so, the project will offer a new methodology with various potential applications in brain health research (e.g., hearing impairment, dementia, anxiety disorders).
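
As a purely illustrative aside (our assumption; the advertisement does not specify the analysis method), work in this area often ridge-regresses time-lagged stimulus features, such as the “surprise” of each note, onto the recorded EEG, so that out-of-sample prediction accuracy serves as an objective index of music processing. A minimal numpy sketch with entirely synthetic signals:

```python
# Hypothetical stimulus-response sketch: predict EEG from lagged note
# surprisal via ridge regression; the correlation on held-out data acts
# as a crude "neural tracking" score. All signals here are synthetic.
import numpy as np

rng = np.random.default_rng(0)
n = 5000
surprise = rng.random(n)                # per-sample note surprisal
kernel = np.exp(-np.arange(30) / 10.0)  # toy neural response shape
eeg = np.convolve(surprise, kernel)[:n] + rng.standard_normal(n)

# Design matrix of time-lagged copies of the stimulus feature.
lags = 40
X = np.stack([np.pad(surprise, (l, 0))[:n] for l in range(lags)], axis=1)

# Ridge regression on a training split: w = (X'X + aI)^(-1) X'y.
a, tr = 10.0, slice(0, 4000)
w = np.linalg.solve(X[tr].T @ X[tr] + a * np.eye(lags), X[tr].T @ eeg[tr])

# Prediction accuracy on held-out EEG indexes stimulus tracking.
pred = X[4000:] @ w
print(np.corrcoef(pred, eeg[4000:])[0, 1])
```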


Code: 2023TCD03
Title: Deep Learning for Magnetic Resonance Quantitative Susceptibility Mapping of carotid plaques
Supervision Team: Caitríona Lally, TCD (Primary Supervisor) / Catherine Mooney, UCD (External Secondary Supervisor) / Brooke Tornifoglio, TCD and Karin Shmueli, UCL (Additional Supervisory Team Members)
Description: Carotid artery disease is the leading cause of ischaemic stroke. The current standard of care involves removing plaques that narrow a carotid artery by more than 50%. The degree of vessel occlusion, however, is a poor indicator of plaque rupture risk, which is ultimately what leads to stroke.
Plaque mechanical integrity is the critical factor determining the risk of plaque rupture, and the mechanical strength of this tissue is governed by its composition. Using machine learning approaches and both in vitro and in vivo imaging, in particular Quantitative Susceptibility Mapping metrics obtained from MRI, we propose to non-invasively determine plaque composition and hence the vulnerability of carotid plaques to rupture.
This highly collaborative project has the potential to change diagnosis and treatment of vulnerable carotid plaques using non-ionizing MR imaging which would be truly transformative for carotid artery disease management.


Code: 2023TCD04
Title: AI driven co-optimisation of network control and real-time AR video coding
Supervision Team: Marco Ruffini, TCD (Primary Supervisor) / Gabriel-Miro Muntean, DCU (External Secondary Supervisor) / Anil Kokaram, TCD (Additional Supervisory Team Member)
Description: This project aims to develop intelligent control mechanisms that make network configuration decisions based on real-time information from 360° AR/VR video streaming services. The focus is on cooperative performance optimization for 360° AR/VR video streaming that considers the interplay between service prioritization and resource availability prediction when making control decisions. The goal is to make predictions for video processing (e.g., encoding level, chunk size) based on the anticipated capacity and latency in the network, which depend on multiple environmental factors, including user behavior (i.e., due to the highly interactive nature of AR/VR content). In the other direction, we will also make predictions about network performance and rely on various control loops, defined by the O-RAN architecture, to dynamically reconfigure the network to match different traffic requirements.


Code: 2023TCD05
Title: Authenticity in Dialogue
Supervision Team: Carl Vogel, TCD (Primary Supervisor) / Eugenia Siapera, UCD (External Secondary Supervisor)
Description: Authenticity in communication is of utmost importance to those who attempt to feign authenticity, and is also relevant to those who would prefer to banish inauthenticity, whether the sphere is public relations, politics, health care, dating or courts of law. Dialogue interactions in multiple modalities will be analyzed with the aim of identifying features that discriminate between authentic and pretended engagement. The work will involve assembling a multi-modal corpus of evidently unscripted dialogue interactions, annotation with respect to authenticity categories of interest, and analysis through combinations of close inspection, semi-automated processing and data mining to identify features that separate authentic from inauthentic dialogue communications.


Code: 2023TCD06
Title: Personalised Conversational Agents for contextualized coaching and learning
Supervision Team: Vincent Wade, TCD (Primary Supervisor) / Julie Berndsen, UCD (External Secondary Supervisor) / Ann Devitt, TCD (Additional Supervisory Team Member)
Description: The research focuses on developing new techniques and technologies to integrate higher-order social conversational skills (in this case coaching skills), such as empathy, praise and motivational utterances, within conversational agents, as well as techniques to personalize agents and improve domain accuracy. The combination of tailored, objective-driven advice with more empathetic social engagement and interaction will provide a unique blend aimed at establishing a ‘coach’-based relationship between user and agent. The research adopts a design-based research methodology and will be applied to an educational domain/context. It will provide new insight into, and develop new techniques for, next-generation conversational agents that can offer more personalized, coaching-based support for users.


Code: 2023TCD07
Title: Cluster Analysis of Multi-Variate Functional Data through Non-linear Representation Learning
Supervision Team: Mimi Zhang, TCD (Primary Supervisor) / Shirley Coyle, DCU (External Secondary Supervisor)
Description: Wearable sensors provide a continuous and unobtrusive way to monitor an individual’s health and well-being in their daily lives. However, interpreting and analyzing wearable sensor data is challenging. One important technique for analyzing such data is cluster analysis, a type of unsupervised machine learning that groups data points into clusters based on their similarities. In the context of wearable sensor data, this can involve grouping together measurements of physiological parameters such as heart rate, respiratory rate, and activity level, as well as environmental data such as temperature and humidity. This project involves working at the cutting edge of cluster analysis methods for sensor data. In contrast to traditional machine learning methods for multivariate data, we will develop functional data clustering methods, motivated by the fact that sensor data can be naturally modelled as curves, i.e., continuous functions of time.
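
To make the functional view concrete, here is a hypothetical minimal sketch (made-up data and parameters, not the project’s method): each sensor stream is fitted with a B-spline, and clustering is then performed on the fitted coefficients rather than on the raw samples.

```python
# Illustrative only: treat wearable-sensor time series as curves by
# fitting each series with a least-squares B-spline and clustering the
# spline coefficients. Data, knots and cluster count are all made up.
import numpy as np
from scipy.interpolate import make_lsq_spline
from sklearn.cluster import KMeans

rng = np.random.default_rng(0)
t = np.linspace(0, 1, 200)  # common time grid for all series

# Synthetic "sensor" curves drawn from two latent groups.
curves = np.stack(
    [np.sin(2 * np.pi * t) + 0.1 * rng.standard_normal(t.size) for _ in range(20)]
    + [np.cos(2 * np.pi * t) + 0.1 * rng.standard_normal(t.size) for _ in range(20)]
)

# Fit each noisy series with a cubic B-spline; the coefficient vector is
# a low-dimensional functional representation of the whole curve.
k = 3
knots = np.r_[[0.0] * (k + 1), np.linspace(0.1, 0.9, 8), [1.0] * (k + 1)]
coefs = np.stack([make_lsq_spline(t, y, knots, k=k).c for y in curves])

# Cluster in coefficient (function) space rather than on raw samples.
labels = KMeans(n_clusters=2, n_init=10, random_state=0).fit_predict(coefs)
print(labels)  # recovers the two latent groups
```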


Code: 2023TCD08
Title: Multimodal and Agile Deep Learning Architectures for Speech Recognition
Supervision Team: Naomi Harte, TCD (Primary Supervisor) / Robert Ross, TU Dublin (External Secondary Supervisor)
Description: Speech recognition is central to technologies such as Siri and Alexa, and works well in controlled environments. However, machines still lag behind humans in the ability to seamlessly interpret multiple cues, such as facial expression, gesture, word choice and mouth movements, to understand speech in noisy or challenging environments. Humans also have a remarkable ability to adapt on the fly to changing circumstances in a single conversation, such as intermittent noise or speakers with significantly different speaking styles or accents. These two skills make human speech recognition extremely robust and versatile. This PhD seeks to develop deep learning architectures that can better integrate the different modalities of speech and also be deployed in an agile manner, allowing continuous adaptation to external factors. These two aspects are inherently intertwined and are key to developing next-generation speech recognition solutions.


Code: 2023TCD09
Title: Balancing Privacy and Innovation in Data Curation for VR/AR/XR
Supervision Team: Gareth W. Young, TCD (Primary Supervisor) / Cathal Gurrin, DCU (External Secondary Supervisor) / Harshvardhan Pandit, DCU (Additional Supervisory Team Member)
Description: This project will investigate and develop a framework for extended reality (XR) technologies that addresses concerns regarding security, privacy, and data protection. This focus is needed because XR technology requires the collection, processing, and transfer of (often sensitive) personal data. The appointed researcher will look at balancing innovation with privacy and data protection in XR. More specifically, they will identify and develop new ways to understand, analyze, and extend the use of existing or available XR data and data flows in ways that respect privacy and autonomy in emergent metaverse applications.


Code: 2023TCD10
Title: Interactive Volumetric Video for Extended Reality (XR) Applications
Supervision Team: John Dingliana, TCD (Primary Supervisor) / Steven Davy, TU Dublin (External Secondary Supervisor) / Gareth W. Young, TCD (Additional Supervisory Team Member)
Description: In this project we investigate 3D graphics, vision and AI techniques to improve the use of volumetric video for interactive Extended Reality (XR) technologies. An advantage of volumetric video is that it facilitates personalised and photorealistic animations of subjects without the need for editing by experienced animators. However, most current applications treat volumetric video merely as a linear sequence of frames, with limited possibility for interaction apart from rudimentary operations such as playback or rigid transformations. We will investigate extensions to volumetric video including:
(a) flexible reuse such as retargeting, time-warping or seamlessly transitioning between independently recorded clips, whilst preserving the personalised and realistic appearance of the subject;
(b) improving seamless integration in XR, avoiding unrealistic intersections with the real environment, and matching physical events in the volumetric video with viable interaction points in the real-world environment;
(c) adaptation of volumetric video in real-time to integrate and improve shared XR experiences.


Code: 2023TCD11
Title: Neuropostors: a Neural Rendering approach to Crowd Synthesis
Supervision Team: Carol O’Sullivan, TCD (Primary Supervisor) / To Follow (External Secondary Supervisor)
Description: In computer graphics, crowd synthesis is a challenging problem due to high computational and labour costs. In this project, we propose to harness new developments in the field of Neural Rendering (NR) and apply novel machine learning methods to this problem. Building on initial results, the student will a) implement a novel hybrid image/geometry crowd animation and rendering system (Neuropostors) that uses new NR methods to facilitate limitless variety with real-time performance; and b) conduct a thorough set of quantitative and qualitative experiments, including perceptual evaluations, to drive the development of, and evaluate, the system.


Code: 2023TCD12
Title: Stereo Matching and Depth Estimation for Robotics, Augmented Reality and Virtual Reality Applications
Supervision Team: Subrahmanyam Murala, TCD (Primary Supervisor) / Peter Corcoran, UoG (External Secondary Supervisor) / Carol O’Sullivan, TCD (Additional Supervisory Team Member)
Description: Stereo matching and depth estimation are crucial tasks in 3D reconstruction and autonomous driving. Existing deep-learning approaches achieve remarkable performance compared to traditional pipelines (a minimal example of which is sketched below). These approaches perform well on difficult depth estimation datasets but generalise poorly. Further, advanced computer vision applications such as augmented reality and virtual reality demand real-time performance. To achieve this and overcome the limitations of existing learning-based approaches, this project will involve the design and development of learning-based methods for stereo matching and depth estimation, with the goal of developing lightweight deep learning models for real-time depth estimation in AR, VR and robotics applications.
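
For context, the traditional pipeline that learned methods improve upon can be as simple as winner-take-all block matching. The sketch below (toy data and arbitrary window/disparity choices, nothing from the project itself) computes a disparity map from a rectified pair with a sum-of-squared-differences window:

```python
# Classical SSD block matching: for each candidate disparity, aggregate
# per-pixel squared differences over a window and keep the best match.
import numpy as np

def block_matching_disparity(left, right, max_disp=32, win=5):
    """Winner-take-all SSD block matching on a rectified grayscale pair."""
    h, w = left.shape
    half = win // 2
    L = np.pad(left.astype(np.float32), half, mode="edge")
    R = np.pad(right.astype(np.float32), half, mode="edge")
    disparity = np.zeros((h, w), dtype=np.int32)
    best = np.full((h, w), np.inf, dtype=np.float32)
    for d in range(max_disp):
        # Align the right image with candidate disparity d (np.roll wraps
        # at the border, which is acceptable for an illustration).
        sq = (L - np.roll(R, d, axis=1)) ** 2
        # Aggregate the cost over a win x win window via an integral image.
        ii = np.pad(sq, ((1, 0), (1, 0))).cumsum(0).cumsum(1)
        cost = ii[win:, win:] - ii[:-win, win:] - ii[win:, :-win] + ii[:-win, :-win]
        better = cost < best
        disparity[better], best[better] = d, cost[better]
    return disparity

# Toy check: a random scene shifted by 4 pixels yields disparity ~4.
rng = np.random.default_rng(0)
right_img = rng.random((64, 96)).astype(np.float32)
left_img = np.roll(right_img, 4, axis=1)
print(np.median(block_matching_disparity(left_img, right_img)))
```

Learning-based methods replace the hand-crafted matching cost and aggregation steps with learned features and regularisation, which is where both the accuracy gains and the generalisation issues noted above arise.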


Code: 2023TCD13
Title: Cross-Modality Generative Shape and Scene Synthesis for XR applications
Supervision Team: Binh-Son Hua, TCD (Primary Supervisor) / Hossein Javidnia, DCU (External Secondary Supervisor) / Carol O’Sullivan, TCD (Additional Supervisory Team Member)
Description: Recent developments in generative modeling have shown great promise in synthesizing photorealistic images and high-fidelity 3D models, with high-level and fine-grained control induced by text prompts and learned from large datasets. This research project aims to investigate 3D model generation from a cross-modality perspective, developing new techniques for realistic image and 3D synthesis that can serve as building blocks for the next generation of 3D modeling and rendering tools. In particular, we will target high-quality 3D model synthesis at the object and scene level, investigating how generative adversarial networks and diffusion models can be applied to generate high-fidelity, realistic objects and scenes. As proof-of-concept applications, we will apply the developed techniques to rapid modeling of 3D scenes for AR/VR applications.


Code: 2023TCD14
Title: Interfacing the Real and the Virtual using Mixed Reality and Projection Mapping
Supervision Team: Mads Haahr, TCD (Primary Supervisor) / Cathy Ennis, TU Dublin (External Secondary Supervisor)
Description: This project hypothesises that combining MR with projection mapping can offer considerable improvements in closely synchronised real and virtual environments, to the benefit of new types of UI affordances and new applications. Most current MR research is concerned with mapping events and actions from the real to the virtual, but through the use of projection mapping a convincing mapping can also be made from the virtual to the real.
Research questions: How can real and virtual environments be constructed and programmed using MR and projection mapping in tandem? What are the most suitable UI affordances for the resulting hybrid environments, and how is the user experience best evaluated? What are the best application domains for such environments? These questions will be explored through a literature review, the design and development of a prototype, and a user study.
Possible application domains include industrial applications, cultural heritage, museum exhibits, art installations, training/education, health/wellbeing and the Metaverse.


Technological University Dublin

Code: 2023TUD01
Title: Multi-Modal Age Assurance in Mixed Reality Environments for Online Child Safety
Supervision Team: Christina Thorpe, TU Dublin (Primary Supervisor) / Peter Corcoran, UoG (External Secondary Supervisor)
Description: This PhD research project aims to create a multi-modal interface for age assurance in mixed reality environments for online child safety. The solution will incorporate machine learning, computer vision, NLP, and biometric analysis to analyse physical attributes, contextual information, and biometric data of the user for accurate age verification while preserving privacy. The project has significant potential to improve online child safety by providing a reliable and precise means of age verification, ensuring that children are not exposed to inappropriate content or interactions with online predators. The project will also develop the candidate’s skills in digital platform technologies such as HCI and AI, data curation, and privacy-preserving algorithms. Overall, the project aims to make a notable contribution to the field of online child safety through the creation of an innovative age assurance solution.


Code: 2023TUD02
Title: Intelligent Edge Computing for Low-Latency XR Holographic Communications
Supervision Team: Steven Davy, TU Dublin (Primary Supervisor) / John Dingliana, TCD (External Secondary Supervisor) / Owais Bin Zuber, Huawei Ireland Research Centre (Additional Supervisory Team Member)
Description: This proposed PhD project aims to reduce the high bandwidth and ultra-low latency requirements of extended reality (XR) holographic communications, which currently limit the potential of this technology. By leveraging artificial intelligence (AI) and edge computing techniques, the project aims to reduce the amount of data that needs to be transmitted over the network and to optimize data transmission, resulting in a more seamless and immersive experience for users. The project will investigate the use of machine learning algorithms to intelligently filter and compress 4D light field data, the use of edge computing to process the data at either end of the communication, and the use of multi-path networks to optimize data transmission. The student will work closely with Huawei to develop business cases and test beds for the technology. The project has the potential to unlock new use cases for XR communication, enabling remote collaboration, education, and telemedicine.


Code: 2023TUD03
Title: Talk to me: Creating plausible speech-driven conversational characters and gestures
Supervision Team: Cathy Ennis, TU Dublin (Primary Supervisor) / Rachel McDonnell, TCD (External Secondary Supervisor) / Benjamin Cowan, UCD and Julie Berndsen, UCD (Additional Supervisory Team Members)
Description: Interaction with virtual characters has provided increased engagement and opportunities for immersion for players across a wide range of social contexts. With the advance of spaces like the Metaverse and applications such as ChatGPT, the demand for engaging virtual characters who can generate plausible gestures and behaviours for speech will only increase. In any space that allows for embodied interaction, where players/users are represented by a virtual avatar or interact with a virtual character, exchanges can become more engaging. However, the requirements of real-time dynamic interaction pose a serious challenge for developers: plausible and engaging behaviour and animation are required in scenarios where it is impossible to script exactly what types of actions might be needed. We aim to tackle part of this problem by investigating speech-driven non-verbal social behaviours for virtual avatars (such as conversational body motion and gestures) and developing ways to generate plausible interactions in real-time interactive scenarios.


Code: 2023TUD04
Title: Adaptive Multimodal Avatars for Speech Therapy Support
Supervision Team: Robert Ross, TU Dublin (Primary Supervisor) / Naomi Harte, TCD (External Secondary Supervisor) / Cathy Ennis, TU Dublin (Additional Supervisory Team Member)
Description: Pediatric speech and language therapy is a challenging domain that can benefit from effective conversational coach design. In this research we aim to push the boundaries of virtual agents for speech therapy by investigating development methods that make the system not only effective at communicating speech therapy goals, but also able to deliver instruction through a fluent and approachable avatar that young service users can engage with. The work will involve systematic research with key partners, including therapists and end users, as well as the development of prototype personas and strategies for a conversational speech therapy system to supplement in-clinic care. It is well suited to a computer scientist with experience in either virtual character design or conversational system development. The ideal candidate will also have an interest in user studies and health care.


Code: 2023TUD05
Title: Mapping the analysis of students’ digital footprint to constructs of learning
Supervision Team: Geraldine Gray, TU Dublin / Tracey Mehigan, DCU / Ana Schalk, TU Dublin
Description: This proposal explores the importance of learning theories in informing the objective evaluation of learning practice, as evidenced by the analysis of multimodal data collected from the eclectic mix of interactive technologies used in higher education. Learning analytics research frequently builds models from trace data easily collected by technology, without considering the latent constructs of learning that the data measure. Consequently, the resulting models may fit the training data well but tend not to generalise to other learning contexts. This study will interrogate educational technology as a data collection instrument for constructs of learning by considering the influence of learning design on how learning constructs can be curated from these data. Results will inform methodological guidelines for data curation and modelling in educational contexts, leading to more generalizable models of learning that can reliably inform how we act on data to optimize the learning context for students.


University College Dublin

Code: 2023UCD01
Title: Integrating Human Factors into Trustworthy AI for Healthcare
Supervision Team: Rob Brennan, UCD (Primary Supervisor) / Siobhán Corrigan, TCD (External Secondary Supervisor)
Description: This PhD will explore the gap between Trustworthy Artificial Intelligence (TAI) guidelines and what is needed in practice to build trust in a deployed, AI-based system so that it is effective. It will seek new ways to measure, quantify and influence TAI system development in an organisational context, and will study socio-technical systems that include AI components in order to make them more trusted and effective. It is an interdisciplinary topic drawing on both the Computer Science and Psychology disciplines. The PhD will partner with a national healthcare data analytics platform deployment to explore and define the factors required to assure trust in the system when AI components are deployed. Stakeholders will include patient safety and quality monitoring professionals, clinicians, patients and the general public. The project will investigate the key social and technical factors in deploying such a platform to increase trust, accountability, transparency and data altruism.


Code: 2023UCD02
Title: Ethical Recommendation Algorithms: Developing an ethical framework and design principles for trustworthy AI recommender systems.
Supervision Team: Susan Leavy, UCD (Primary Supervisor) / Josephine Griffith, UoG (External Secondary Supervisor)
Description: AI-driven recommendation algorithms are profoundly influential in society. They are embedded in widely used applications such as Instagram and TikTok, disseminating content including social media, video and advertising according to user profiles. However, without appropriate ethical frameworks and design principles, they have the potential to cause online harm, particularly for vulnerable groups. Ethical issues concern inappropriate content, risks to privacy and a lack of algorithmic transparency. In response, the EU and the Irish government are developing regulations for AI. However, given the complex nature of recommender systems, there are significant challenges in translating regulation into implementable design guidelines and ethical principles. This project will develop an ethical framework and design principles for recommender algorithms, ensuring the development of trustworthy systems, enabling ethics audits and, ultimately, helping to protect users from the risks of online harm.


University of Galway

Code: 2023UoG01
Title: Non-Contact Sensing and Multi-modal Imaging for Driver Drowsiness Estimation
Supervision Team: Michael Schukat, UoG (Primary Supervisor) / Maria Chiara Leva, TU Dublin (External Secondary Supervisor) / Peter Corcoran, UoG and Joe Lemley, Xperi (Additional Supervisory Team Members)
Description: Drowsiness, i.e., the unintentional and uncontrollable need to sleep, is responsible for 20-30% of road traffic accidents. A recent WHO statistic shows that road accidents rank eighth among the primary causes of death in the world, resulting in more than 1.35 million deaths annually. As a result, driver monitoring systems with drowsiness detection capabilities are becoming more common and will be mandatory in new vehicles from 2025 onwards.
There remain significant challenges and unexplored areas, however, particularly surrounding multimodal imaging (NIR, LWIR and neuromorphic) techniques for drowsiness estimation. The overall aim of this research is therefore to improve and implement novel neural AI algorithms for non-contact drowsiness detection that can be used in unconstrained driving environments. In detail, this research will examine state-of-the-art non-contact driver drowsiness techniques, evaluate existing and research new drowsiness indicators, and build and validate innovative machine learning models trained on both public and specialized industry datasets.


Code: 2023UoG02
Title: Multimodal Federated Learning Approach for Human Activity Recognition in Privacy Preserving Videos (FLARE)
Supervision Team: Ihsan Ullah, UoG (Primary Supervisor) / Susan Mckeever, TU Dublin (External Secondary Supervisor) / Michael Schukat, UoG and Peter Corcoran, UoG (Additional Supervisory Team Members)
Description: The United Nations has reported a rapid rise in the number of people living well beyond retirement age. Older adults wish to maintain a high-quality independent lifestyle without the need for high-cost medical/care interventions. Several technology-based solutions use machine learning, e.g., human activity recognition (HAR) systems, which focus on monitoring narrowly defined health conditions, yet wellbeing is widely recognised as a much more subjective and complex concept. Recent state-of-the-art machine-learning algorithms are trained on large amounts of centrally stored data, which is problematic for several reasons, e.g., privacy loss, network load during data transfer, and General Data Protection Regulation restrictions. More specifically, due to privacy concerns, such solutions face acceptability barriers because they are considered too invasive. This project aims to address the acceptability problem and achieve better HAR results by using imaging types that preserve privacy by default (e.g., non-RGB imaging in which faces are unrecognisable) and a federated learning approach, in which data remains with its owner (a minimal sketch of the idea follows).
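
As a minimal sketch of the federated idea (hypothetical code, not the project’s system): each data owner trains locally on private data, and only model parameters, never raw data, are sent to a server for averaging (the FedAvg scheme).

```python
# Minimal FedAvg illustration with numpy: three clients fit a logistic
# regression on private data; the server only ever sees weight vectors.
import numpy as np

rng = np.random.default_rng(0)

def local_step(w, X, y, lr=0.1, epochs=5):
    """A few epochs of gradient descent on one client's private data."""
    for _ in range(epochs):
        p = 1.0 / (1.0 + np.exp(-X @ w))     # sigmoid predictions
        w = w - lr * X.T @ (p - y) / len(y)  # logistic-loss gradient
    return w

# Three clients with private datasets; labels follow a shared rule.
true_w = np.array([1.5, -2.0, 0.5])
clients = []
for _ in range(3):
    X = rng.standard_normal((100, 3))
    y = (X @ true_w + 0.1 * rng.standard_normal(100) > 0).astype(float)
    clients.append((X, y))

w_global = np.zeros(3)
for _ in range(20):
    # Each client refines a copy of the global model on-site ...
    local_ws = [local_step(w_global.copy(), X, y) for X, y in clients]
    # ... and the server averages the weights, never seeing the data.
    w_global = np.mean(local_ws, axis=0)

print(w_global)  # points roughly in the direction of true_w
```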


Code: 2023UoG03
Title: Exploring the Integration of Emotional, Cognitive and BioPhysical Sensing into Recommender Systems for Digital Entertainment
Supervision Team: Josephine Griffith, UoG (Primary Supervisor) / Robert Ross, TU Dublin (External Secondary Supervisor) / Peter Corcoran, UoG and Joe Lemley, Xperi (Additional Supervisory Team Members)
Description: Passive sensing of a user’s emotional state is challenging without measuring biophysical signals, although there has been progress in determining emotional states from video-based facial analysis, human-speech analysis and combined approaches. Cognition and stress assessment are niche areas of research but have recently become important in driver monitoring systems. This research will explore new approaches that combine state-of-the-art hybrid speech/imaging techniques to perform a real-time emotional/cognitive state assessment (ECSA) of a user interacting with a recommender system. In parallel, the recommender model will be adapted to respond to emotional/cognitive inputs, employing these to dynamically adapt the outputs provided to the user based on their assessed states. As an indicative example, an in-car entertainment system might decide between suggesting video entertainment for occupants or music for the driver, based on ECSAs of both driver and occupants using data from an in-car camera and microphone.


Code: 2023UoG04
Title: Virtual Reality for Robust Deep Learning in the Real World
Supervision Team: Michael Madden, UoG (Primary Supervisor) / Cathy Ennis, TU Dublin (External Secondary Supervisor)
Description: There have been notable successes in Deep Learning, but the requirement for large, annotated datasets creates bottlenecks: datasets must be carefully compiled and annotated with ground-truth labels. One emerging solution is to use 3D modelling and game engines such as Blender or Unreal to create realistic virtual environments. Virtual cameras placed in such environments can generate images or movies, and since the locations of all objects in the environment are known, fully accurate annotations can be generated computationally. Drawing on the separate and complementary areas of expertise of the two supervisors, the PhD student will gain a synergy of expertise in Graphics, Perception and Deep Learning. This PhD research will investigate questions including: (1) strategies to combine real-world and virtual images; (2) the importance of realism in virtual images; (3) how virtual images covering edge cases and rare events can increase the reliability, robustness and trustworthiness of deep learning.
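
The “annotations for free” idea can be shown with a toy example (illustrative only; the project would use a real engine such as Blender or Unreal): because the generator places every object itself, it can emit pixel-perfect labels alongside each image.

```python
# Toy synthetic-data generator: "render" bright squares on a noisy
# background and record their exact bounding boxes as ground truth.
import numpy as np

rng = np.random.default_rng(0)

def render_scene(n_objects=3, size=64):
    """Return a synthetic image plus its exact bounding-box annotations."""
    img = 0.1 * rng.random((size, size))  # background "sensor noise"
    boxes = []
    for _ in range(n_objects):
        s = int(rng.integers(6, 14))      # object side length
        x, y = (int(v) for v in rng.integers(0, size - s, size=2))
        img[y:y + s, x:x + s] = 1.0       # draw the object
        boxes.append((x, y, s, s))        # location known, label free
    return img, boxes

# A labelled dataset built with zero manual annotation effort.
dataset = [render_scene() for _ in range(100)]
img0, boxes0 = dataset[0]
print(boxes0)
```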


Code: 2023UoG05
Title: Ethical Studies on Consumer Technologies involving User Monitoring
Supervision Team: Heike Felzmann, UoG (Primary Supervisor) / Marguerite Barry, UCD (External Secondary Supervisor) / Peter Corcoran, UoG and Joe Lemley, Xperi (Additional Supervisory Team Members)
Description: As a range of advanced new AI-based human-sensing technologies emerge, consumer devices and services will become better able to identify, engage with and respond to the moods of users. Examples include driver-monitoring systems, now being adopted across the EU, advanced home entertainment and gaming systems, and online chatbots and shopping assistants. These technologies feature advanced AI capabilities to analyze the text, speech, and visual appearance of a human subject. Some of these capabilities allow a consumer-facing system to better meet the needs of a user by understanding their emotional and cognitive states, but they introduce challenging new ethical dilemmas. These dilemmas are the focus of this research project, which aims to study, quantify and assess the ethical issues for a number of impactful use cases, including the use of advanced AI sensing in driver-monitoring systems and in consumer gaming and entertainment systems.

Applications for the d-real programme

Applications are made through our portal on the Good Grants platform here. You will need to register (free) with Good Grants and confirm your email address before making an application. You will be asked to list your top three topic preferences (listed on our website here), giving the project number and title of each project. You will also be asked for some personal details, your educational history, work experience and technical skills, and for a statement on why you would like to join the programme. The system permits the uploading of five PDF documents, so you can upload supporting material (CV, personal statement, publications, etc.).

Application Requirements and Eligibility

d-real seeks applications from talented graduates to join our exciting PhD training programme. As part of this application you will be asked to complete five sections:

  • PhD Topic Preferences, where you will be asked to rank your preferred PhD proposal(s). Details of available topics may be found here
  • Personal Details
  • Academic Track Record, including details of your Bachelor’s and (optionally) Master’s degrees
  • Further Track Record, including details on other education, research achievements, technical skills/achievements and work experience
  • Personal Statement, which allows you to describe your motivation for pursuing a PhD degree in digitally-enhanced reality (Max 3000 characters)
You will also be permitted to attach any relevant supporting documents at the end of the form.

The minimum requirements for entry are set by the university in which you register. These differ by institution, but the following are broad guidelines:

  • 2.1 grade (or equivalent) in an undergraduate or postgraduate degree in computer science, maths, engineering or similar technical discipline. Other qualifications in disciplines related to listed PhD topics will also be considered.
  • Strong programming/technical ability.
  • Non-native English speakers require at least IELTS 6.5 (with at least 6 in all components) or an equivalent English language test accepted by the university in which you will be registering. All our partner universities currently accept the Duolingo English Test, which can be taken online.

Funding

Each student admitted to this programme will receive a full scholarship to undertake a four-year structured PhD programme. This scholarship comprises full payment of university fees for four years and a tax-free stipend of €19,000 per annum for four years. This stipend amount is currently being reviewed at a national level. In addition, a generous budget for conference travel, equipment, training, placement maintenance and publication costs is provided.
