Qualification Type: PhD
Location: Coventry
Funding for: UK Students, EU Students
Funding amount: UKRI standard stipend rate: £19,237 for 2024/25
Hours: Full Time
Placed On: 4th July 2024
Closes: 3rd October 2024
Funding Source: EPSRC
Eligibility: Available to students with home fee status and UK-domiciled EU students
Start Date: Oct. 2024
Project Overview
As autonomous vehicles (AVs) transition from laboratories and test tracks to public roads, ensuring their safety is paramount, as the Cruise AV incident of October 2023 exemplifies. Using synthetic data for training and virtual testing is increasingly recognised as an effective practice for assuring AV safety. In addition to traditional simulators, Generative AI (GAI) is emerging as a new way to generate synthetic data in the AV domain. However, how to ensure the responsible use of GAI for such safety-critical systems remains a key barrier, and it is this question that motivates the project.
The effectiveness of GAI models in generating data for training and testing AV perception components hinges on key properties: robustness, explainability, fairness, privacy, and security. Each property must be clearly defined with measurable metrics and efficient estimation methods. For instance, robustness can be evaluated as resilience to input variations, while explainability concerns the transparency of the model's decision-making. Once these properties can be accurately verified, targeted methods can be proposed to improve the GAI model in these specific areas. Validating this approach requires the creation of a benchmark and the conduct of case studies; these would serve as a standard for evaluating and refining GAI models, ensuring they meet ethical standards and contribute to the development of safer and more responsible AV technologies.
Our aim is to design a responsible GAI framework for AV perception by implementing the following programme: 1) developing a set of formally defined properties with metrics covering aspects such as robustness, explainability, fairness, privacy, and security, ensuring a comprehensive and holistic framework; 2) establishing efficient verification methods and tools for accurately and reliably assessing the defined properties' metrics from diverse perspectives and scenarios; 3) constructing a benchmark with the defined properties, metrics, estimation tools, and a selection of AV perception models, as a publicly accessible standard for validating the responsibility of GAI in the AV perception context; and 4) conducting a case study with industrial partners to demonstrate the efficacy of the proposed framework.
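The programme above treats each property as something that must come with a measurable metric and an efficient estimator. As a purely illustrative sketch of what one such metric might look like, the snippet below estimates local robustness by Monte Carlo sampling: the fraction of inputs within an epsilon-ball of a reference input on which a model's decision is unchanged. The toy `perceive` model, the threshold, and the sampling scheme are hypothetical assumptions for illustration, not artefacts of the project.

```python
import random

def perceive(x):
    # Toy stand-in for an AV perception component: classifies a scalar
    # "obstacle distance" reading as clear (1) or obstructed (0).
    return 1 if x > 5.0 else 0

def robustness(model, x, epsilon, n_samples=1000, seed=0):
    """Monte Carlo estimate of local robustness: the fraction of inputs
    within an epsilon-ball of x on which the model's output matches its
    output at x."""
    rng = random.Random(seed)
    reference = model(x)
    agree = sum(
        model(x + rng.uniform(-epsilon, epsilon)) == reference
        for _ in range(n_samples)
    )
    return agree / n_samples

# A reading far from the decision boundary is robust to small noise:
print(robustness(perceive, 9.0, epsilon=0.5))  # 1.0
# A reading near the boundary (5.0) is much less so:
print(robustness(perceive, 5.1, epsilon=0.5))
```

In the framework envisaged here, analogous (but far richer) estimators would be defined for each property, over real GAI and perception models rather than toy functions.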
As a PhD student, you will be involved in a cutting-edge research programme at the intersection of safety and reliability, software engineering, and machine learning. The project will involve probabilistic modelling, statistical inference, algorithm design and optimisation, and empirical experiments on AI/ML models. The successful candidate will receive a competitive stipend and full tuition fee support for the duration of the 3.5-year PhD project.
Essential and Desirable Criteria
Essential: At least a 2:1 undergraduate or Master's degree in Machine Learning, Computer Science, Software Engineering, Systems Engineering, Statistics, Robotics, or a related discipline; strong theoretical and experimental skills; and a keen interest in interdisciplinary research.
Desirable: Prior experience with safe-AI techniques, evidenced by publications, will be advantageous.
Funding and Eligibility
The studentship is available to students with home fee status and UK-domiciled EU students, with a full award for 3.5 years. The stipend is paid at the UKRI rate, and tuition fees are covered at the UK rate.
Key Information
Supervisor: Xingyu Zhao