Qualification Type: | PhD
---|---
Location: | Exeter
Funding for: | UK Students
Funding amount: | Annual tax-free stipend of at least £19,237 for 3.5 years full-time, or pro rata for part-time study
Hours: | Full Time, Part Time
Placed On: | 19th July 2024
Closes: | 16th August 2024
Reference: | 5195
Location: Department of Computer Science, Streatham Campus, Exeter
The University of Exeter’s Department of Computer Science is inviting applications for a PhD studentship fully funded by the Faculty of Environment, Science and Economy, to commence on 23 September 2024 or as soon as possible thereafter. For eligible students the studentship will cover Home or International tuition fees plus an annual tax-free stipend of at least £19,237 for 3.5 years full-time, or pro rata for part-time study. The student will be based in the Department of Computer Science, Faculty of Environment, Science and Economy, at the Streatham Campus in Exeter.
Project Description:
Event recognition in videos has emerged as one of the most dynamic areas in computer vision. Researchers have extensively explored a wide range of topics, from recognising simple, short-term actions like running and walking, to complex, long-term multi-agent events such as surgical procedures or collaborative tasks. Typically, long videos consist of multiple actions, each composed of sub-actions with varying durations and sequences.
Inspired by the natural cognitive process of breaking down complex events into smaller sub-events, this project aims to advance the parsing of action videos, with a specific focus on first-person vision. In first-person (egocentric) vision, a wearable camera captures the user's own perspective, recording what the user sees and where they look. This unique viewpoint offers an exciting opportunity to develop intelligent wearable systems that can understand and interpret user activities and behaviours.
Such systems have the potential to significantly enhance quality of life by acting as caregivers or trainers in various settings, from healthcare to industrial environments. Building on the recent successes of deep neural networks and graph-based representations in video analysis, this project seeks to tackle the temporal segmentation challenge in egocentric vision. Additionally, we aim to explore action anticipation, where the system not only recognises ongoing actions but also predicts future actions based on the observed context and patterns.