SINAPSE Virtual Happy Hour May 19, 2021 04:30 PM - 05:30 PM — Virtual Happy Hour (online)
9th Annual Scottish Radiotherapy Research Forum Jun 03, 2021 12:30 PM - 05:00 PM — Virtual Meeting (online)
Scottish Dementia Research Consortium Annual Conference 2021 Jun 16, 2021 10:00 AM - 03:30 PM — Virtual Meeting (online)
Medical Imaging Convention [rescheduled] Sep 15, 2021 - Sep 16, 2021 — National Exhibition Centre, Birmingham, England
2021 SINAPSE ASM Sep 16, 2021 - Sep 17, 2021 — Technology & Innovation Centre, University of Strathclyde, 99 George Street, Glasgow

eLearning

SINAPSE experts from around Scotland have developed ten online modules designed to explain medical imaging. They are freely available and are intended for non-specialists.


Edinburgh Imaging Academy at the University of Edinburgh offers the following online programmes through a virtual learning environment:

Neuroimaging for Research MSc/Dip/Cert

Imaging MSc/Dip/Cert

PET-MR Principles & Applications Cert

Applied Medical Image Analysis Cert

Online Short Courses

PhD opportunities at Glasgow: Integrating imaging data and artificial social intelligence

Imaging-related research projects available among UKRI SOCIAL CDT studentship opportunities at the University of Glasgow

The University of Glasgow has PhD studentships available for 2021 entry through SOCIAL, the UKRI Centre for Doctoral Training in Socially Intelligent Artificial Agents, which include the following imaging-related research projects:

 

Enhancing Social Interactions via Physiologically-Informed AI (Supervisors: Prof Marios Philiastides and Prof Alessandro Vinciarelli)

  • In this project, we propose to develop a physiologically-informed AI platform that uses neural and systemic physiological information (e.g. arousal, stress) together with affective cues from facial features to infer the latent cognitive and emotional states of humans interacting in a series of social decision-making tasks (e.g. the trust game, the prisoner's dilemma). Specifically, we will use these latent states to generate rich reinforcement signals for training AI agents (via deep reinforcement learning), allowing them to develop a "theory of mind" and make predictions about upcoming human behaviour. The ultimate goal of this project is to deliver advances towards closing the loop, whereby the AI agent feeds its own predictions back to the human players in order to optimise behaviour and social interactions. A toy sketch of the reward-shaping idea follows below.
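
To make the "physiologically-informed reinforcement signal" idea concrete, here is a minimal, purely illustrative Python sketch: a placeholder decoder maps a (fake) EEG window to affective states, which are blended with the task outcome into one scalar reward for a tabular Q-learning agent. All names (infer_affect, the weights, the toy task) are assumptions for illustration, not the project's actual pipeline.

import numpy as np

rng = np.random.default_rng(0)

def infer_affect(eeg_window: np.ndarray) -> dict:
    """Stand-in for a decoder mapping an EEG window to latent affective
    states (here: random placeholders in [0, 1])."""
    return {"arousal": float(rng.random()), "valence": float(rng.random())}

def shaped_reward(task_reward: float, affect: dict,
                  w_task: float = 1.0, w_valence: float = 0.5,
                  w_arousal: float = -0.2) -> float:
    """Combine the task outcome with inferred affect into a single
    scalar reinforcement signal for the agent (weights are assumed)."""
    return (w_task * task_reward
            + w_valence * affect["valence"]
            + w_arousal * affect["arousal"])

# Tabular Q-learning with the shaped reward on a toy 2-state, 2-action task.
Q = np.zeros((2, 2))
alpha, gamma = 0.1, 0.9
state = 0
for step in range(100):
    action = int(rng.random() < 0.5)                 # random exploration
    task_r = 1.0 if (state, action) == (0, 1) else 0.0
    affect = infer_affect(rng.standard_normal(128))  # fake EEG window
    r = shaped_reward(task_r, affect)
    next_state = action                              # toy transition dynamics
    Q[state, action] += alpha * (r + gamma * Q[next_state].max()
                                 - Q[state, action])
    state = next_state
print(Q)

In a real closing-the-loop setting, infer_affect would be replaced by a trained decoder over neural and facial features, and the tabular agent by a deep reinforcement learner.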

 

Evaluating and Shaping Cognitive Training with Artificial Intelligence Agents (Supervisors: Dr Fani Deligianni and Dr Monika Harvey)

  • This project aims to exploit machine learning to develop intuitive, platform-independent measures of cognitive training. Objectives include: 1) predicting psychological dimensions (i.e. enjoyment, anxiety, valence and arousal) from performance and neurophysiological data; 2) relating performance improvements (i.e. learning rate) to psychological dimensions and physiological data (i.e. EEG and eye-tracking); 3) developing artificial intelligence approaches able to modulate the virtual reality world to control learning rate and participant satisfaction. A minimal sketch of objective 1 appears below.
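
As a minimal sketch of objective 1, the snippet below predicts a self-reported dimension (say, arousal) from synthetic EEG band-power and performance features with a regularised linear model and cross-validation. The feature set, labels and model choice are assumptions for illustration only.

import numpy as np
from sklearn.linear_model import Ridge
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(1)
n_trials = 200

# Fake design: 4 EEG band powers + 2 performance measures per trial.
X = rng.standard_normal((n_trials, 6))
true_w = np.array([0.8, -0.5, 0.0, 0.3, 0.6, -0.2])
y = X @ true_w + 0.5 * rng.standard_normal(n_trials)  # simulated "arousal"

model = Ridge(alpha=1.0)
scores = cross_val_score(model, X, y, cv=5, scoring="r2")
print(f"cross-validated R^2: {scores.mean():.2f} +/- {scores.std():.2f}")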

 

Modulating Cognitive Models of Emotional Intelligence (Supervisors: Dr Fani Deligianni and Prof Frank Pollick)

  • This project aims to exploit real-time functional magnetic resonance imaging (fMRI) neurofeedback techniques to build cognitive models that explain the modulation of brain activity in key regions related to empathy and emotion. The overall aims are: 1) to build data-driven cognitive models of real-time brain network interaction during emotional modulation via neurofeedback techniques; 2) to develop advanced machine learning algorithms for cross-domain learning between fMRI and EEG; 3) to develop intelligent artificial agents, based on portable EEG systems, that successfully regulate emotional responses, taking into account the cognitive models derived in the fMRI scanner. A minimal sketch of the cross-domain idea in aim 2 follows below.
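
One common starting point for cross-domain learning between two recording modalities is canonical correlation analysis (CCA), which finds components shared by both feature spaces. The sketch below applies CCA to simulated fMRI and EEG features driven by a common latent source; it is an assumed baseline, not necessarily the project's method.

import numpy as np
from sklearn.cross_decomposition import CCA

rng = np.random.default_rng(2)
n_samples = 300

# A shared latent source drives both (synthetic) modalities.
latent = rng.standard_normal((n_samples, 2))
fmri = latent @ rng.standard_normal((2, 50)) \
    + 0.5 * rng.standard_normal((n_samples, 50))
eeg = latent @ rng.standard_normal((2, 30)) \
    + 0.5 * rng.standard_normal((n_samples, 30))

cca = CCA(n_components=2)
fmri_c, eeg_c = cca.fit_transform(fmri, eeg)

# Correlation of the first canonical pair indicates shared structure.
r = np.corrcoef(fmri_c[:, 0], eeg_c[:, 0])[0, 1]
print(f"first canonical correlation: {r:.2f}")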

 

Detecting Affective States based on Human Motion Analysis (Supervisors: Dr Fani Deligianni and Prof Marios Philiastides)

  • This project aims to investigate the relationship between affective mental states and psychomotor abilities, in relation to gait, balance and posture, while emotions are modulated via augmented reality displays. The goal is to develop a comprehensive, continuous map of these interrelationships in both healthy subjects and subjects affected by a mood disorder. Challenges to be addressed include: 1) building a robust experimental setup with intuitive naturalistic paradigms; 2) developing AI algorithms that relate neurophysiological data to gait characteristics derived from state-of-the-art motion capture systems (taking into account motion artefacts during gait); 3) developing AI algorithms to improve the detection of gait characteristics via RGB-D cameras and, possibly, new assisted-living technologies based on pulsed laser beams. A toy example of gait feature extraction is sketched below.
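
As a toy example of the kind of gait characteristic such algorithms might consume, the snippet below estimates stride-time variability from a simulated vertical heel trajectory using simple peak detection. The sampling rate, signal and thresholds are all assumptions for illustration.

import numpy as np
from scipy.signal import find_peaks

fs = 100.0  # assumed sampling rate in Hz
t = np.arange(0, 20, 1 / fs)

# Simulated heel height: ~1 Hz gait cycle plus measurement noise.
heel_z = (np.sin(2 * np.pi * 1.0 * t)
          + 0.1 * np.random.default_rng(3).standard_normal(t.size))

# Peaks approximate successive heel-off events.
peaks, _ = find_peaks(heel_z, height=0.5, distance=fs * 0.5)
stride_times = np.diff(peaks) / fs

print(f"mean stride time: {stride_times.mean():.2f} s, "
      f"variability (CV): {stride_times.std() / stride_times.mean():.3f}")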

 

Brain Based Inclusive Design (Supervisors: Dr Monika Harvey and Prof Alessandro Vinciarelli)

  • The overarching goal of the project is to design human-computer interfaces capable of adapting to the limits of every user, in view of a fully inclusive design that puts every user at ease, i.e. one that lets users interact with technology at their own processing speed rather than at a speed imposed by technology designers. The proposed approach includes four steps: 1) developing methodologies for automatically measuring the phenomena described above through their effect on EEG signals and behavioural performance; 2) identifying the relationship between those phenomena and observable user factors such as age, education level and computer familiarity; 3) adapting the technology design to those factors; 4) analysing the improvement in the users' experience. A hypothetical adaptation loop is sketched below.
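
The following hypothetical control loop illustrates step 3: the interface slows down when an EEG-derived workload index is high and speeds up when it is low. The index, thresholds and update rule are illustrative assumptions, not the project's design.

import random

def workload_index() -> float:
    """Stand-in for a real-time EEG workload estimate in [0, 1]."""
    return random.random()

presentation_interval = 1.0  # seconds between interface events

for step in range(10):
    w = workload_index()
    if w > 0.7:                  # user overloaded: allow more time
        presentation_interval *= 1.2
    elif w < 0.3:                # user under-loaded: speed up
        presentation_interval *= 0.9
    presentation_interval = min(max(presentation_interval, 0.5), 5.0)
    print(f"step {step}: workload={w:.2f}, "
          f"interval={presentation_interval:.2f}s")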

 

Towards modelling of biological and artificial perspective taking (Supervisors: Prof Lars Muckli and Dr Michele Sevegnani)

  • This project aims to explore the neural correlates of visual imagery and aphantasia using neural decoding techniques, which allow the reconstruction of perceived features from human functional magnetic resonance imaging (fMRI) data. This method will allow us to detect representational networks shared between visual imagery and actual visual perception of the same objects, to test whether these networks are shared across participants, and to test whether they differ between aphantasics and non-aphantasics. We will use ultra-high-field fMRI to read brain activity while participants (aphantasics and non-aphantasics) are presented with either single-sentence descriptions of object categories or different visual exemplars from the same categories. The end goal is to embed computational models capable of representing physical and virtual space within AI agents, enabling empathic-seeming behaviours. A cross-decoding sketch of the shared-representation test appears below.
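
One standard way to test for shared representations is cross-decoding: train a classifier on activity patterns from perception trials and test it on imagery trials, where above-chance transfer suggests a common code. The sketch below does this on simulated voxel patterns; the data, noise levels and classifier are assumptions.

import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(4)
n_per_class, n_voxels = 50, 100

# Two object categories with a pattern shared across conditions.
proto = rng.standard_normal((2, n_voxels))

def simulate(noise: float):
    """Simulate trials for both categories at a given noise level."""
    X = np.vstack([proto[c]
                   + noise * rng.standard_normal((n_per_class, n_voxels))
                   for c in (0, 1)])
    y = np.repeat([0, 1], n_per_class)
    return X, y

X_percept, y_percept = simulate(noise=1.0)   # perception runs
X_imagery, y_imagery = simulate(noise=2.0)   # noisier imagery runs

clf = LogisticRegression(max_iter=1000).fit(X_percept, y_percept)
print(f"perception->imagery transfer accuracy: "
      f"{clf.score(X_imagery, y_imagery):.2f}")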

 

Deep Learning feature extraction for social interaction prediction in movies and visual cortex (Supervisors: Prof Lars Muckli and Dr Fani Deligianni)

  • While various AI studies have focused on using visual features to detect and track actors in a movie, it remains unclear how cortical networks for social cognition engage the layers of the visual cortex that process the social interaction cues occurring between actors. We will use cortical-layer-specific fMRI at ultra-high field to read brain activity during movie stimulation. Using the latest advances in deep learning, we will study how the interaction between two people in a movie is processed, analysing the predictions that occur between frames. The comparison between the two sets of representations, those extracted from the movie video by deep learning and those measured in the brain, will be carried out via model comparison with Representational Similarity Analysis (RSA); a minimal RSA sketch follows below. The work and its natural extensions will help clarify how the early visual cortex guides attention in social scene understanding.
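
RSA compares two systems by correlating their representational dissimilarity matrices (RDMs) over the same stimuli. The minimal sketch below builds RDMs for simulated deep-network features and fMRI patterns and compares them with Spearman correlation; the feature dimensions and data are placeholders, not the project's materials.

import numpy as np
from scipy.spatial.distance import pdist
from scipy.stats import spearmanr

rng = np.random.default_rng(5)
n_stimuli = 20

dnn_features = rng.standard_normal((n_stimuli, 512))   # e.g. one DNN layer
fmri_patterns = rng.standard_normal((n_stimuli, 200))  # voxel responses

# Condensed RDMs: 1 - Pearson correlation between all stimulus pairs.
rdm_dnn = pdist(dnn_features, metric="correlation")
rdm_fmri = pdist(fmri_patterns, metric="correlation")

rho, p = spearmanr(rdm_dnn, rdm_fmri)
print(f"model-brain RDM similarity: rho={rho:.2f}, p={p:.2f}")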

 

For more information on the studentships, please visit: https://socialcdt.org/program/

Application deadline: 28th February 2021