PUBLIC OUTREACH LECTURE (in French)
Doctor, am I an anxious person? Demystifying anxiety to better understand and treat it
We invite you to this special lecture by Sébastien Grenier, Ph.D., Assistant Professor (grant-funded), Department of Psychology, Université de Montréal, and director of the LEADER laboratory.
Admission: $20
Admission is free for members of Acouphènes Québec and BRAMS, but you must register in advance: inscription conférence
The lecture will be followed by a networking cocktail, during which we will unveil the new members-only section of our website.
Sébastien Grenier, Ph.D., director of LEADER
Dr. Sébastien Grenier is a clinical psychologist specializing in the assessment and cognitive-behavioural treatment of anxiety and related disorders (including depression). He completed his doctoral studies (Ph.D. R/I) in psychology at UQAM (on obsessive-compulsive disorder) before undertaking a postdoctoral fellowship at the Centre de recherche de l'Hôpital Charles-Lemoyne (affiliated with the Université de Sherbrooke), where he studied factors associated with geriatric anxiety and depression. He then completed a second postdoctoral fellowship in the same field at the Centre de recherche de l'Institut universitaire de gériatrie de Montréal (CRIUGM). He is currently a research scholar (FRQS) at the CRIUGM. In addition to his research, Dr. Grenier practised for more than 10 years at the Clinique d'anxiété de Laval (private practice) and has been an assistant professor (grant-funded) in the Department of Psychology at the Université de Montréal since 2014.
Pupillary responses index music-induced arousal
Music-induced emotions are conveyed by a variety of acoustical cues and are associated with measurable psychophysiological changes. In this talk, I will present three related studies, all using the same set of musical excerpts, which link music-induced emotions, acoustical features, and pupillary responses.
Detection of auditory regularities: success and failure
Expertise is acquired through the gradual replacement of online computations with schema-based memory retrieval. This is the case for both simple perceptual tasks and complex cognitive tasks. However, such training-based replacement requires the acquisition of the task-relevant regularities.
Use of the MBEMA with preschoolers
During this talk, we will present the adaptation of the Montreal Battery for the Evaluation of Musical Abilities (MBEMA) for preschool-aged children. This tablet-based version of the battery includes tests of melody, rhythm and memory. A pilot assessment was conducted with 100 French-speaking children aged 3 to 5 years (49 boys, 51 girls) from a range of socio-economic backgrounds in Quebec.
Preliminary results show a gradual and significant improvement in musical skills as a function of participants' age and musical experience. Despite methodological limitations related to the young age of the subjects, the MBEMA appears to be a reliable tool for assessing musical skills during childhood.
Jonathan Bolduc holds the Canada Research Chair in Music and Learning. He is also an associate professor of preschool and elementary music education at the Faculty of Music, where he directs the Mus-Alpha laboratory.
Voice and language processing in the infant brain
From the first days of life, babies appear to be naturally attracted to human voices. Recent advances in neuroimaging methods now make it possible to study brain responses to these socially relevant stimuli in young infants. Both fMRI and fNIRS point to a network of areas specialised for processing human speech and non-speech vocalisations in infancy.
Intensive contraction of a muscle modulates the corticospinal excitability (CSE) not only of the contracting muscle, but also of the resting muscles located in remote parts of the body; this is the so-called “remote effect”. We investigated to what extent the CSE of a hand muscle is modulated during preparation and execution of mouth and foot movements either separately or in combination. Hand-muscle CSE was estimated based on motor evoked potentials (MEPs) elicited by transcranial magnetic stimulation (TMS) and recorded from the first dorsal interosseous (FDI) muscle.
Naeem received his Ph.D. in Human Movement Science from the University of Verona and VU University of Amsterdam, where he studied the interaction between sound perception and motor behavior and sought to unravel its neural underpinnings. He then spent time as a postdoc at the University of Helsinki. He recently joined the laboratory of Prof. Isabelle Peretz as a postdoc to conduct research in the neuroscience of music.
Investigating corticospinal excitability during melody listening: a Transcranial Magnetic Stimulation study
Marianne Stephan is interested in the influence of auditory information on motor learning and memory formation, and in its underlying neuronal mechanisms. She is currently a postdoc with Dr. Virginia Penhune (Concordia University). The presentation will cover preliminary data from a Transcranial Magnetic Stimulation study performed last year at BRAMS.
Transformation from sound to meaning in the ferret auditory cortex
How do we make sense of the sounds we hear? We propose that there is a transformation from sound to meaning in the brain, which includes multiple steps from an initial, faithful encoding of incoming spectrotemporal acoustic patterns to further stages where the relevant auditory objects are recognized, categorized and associated with task or context-specific behavioral meaning and appropriate responses.
Register: http://goo.gl/forms/c6G2FFew0ZdA8PH23 (Deadline is May 23)
During the day: BRAMS conference room
| 9:00 – 9:15 am | Welcome Address | Alexandre Lehmann (McGill, CRBLM-BRAMS) | http://www.crblm.ca/members/regular/alexandre_lehmann |
| 9:15 – 10:00 am | Mobile EEG: Toys, medical devices, and everything in between | Jeremy Moreau (NeuroSpeed Lab, MNI) | http://www.mcgill.ca/bic/research/neurospeed-dynamic-neuroimaging-laboratory-baillet |
| 10:00 – 10:15 am | Visualizing frequency band activity with consumer EEG | Naoto Hieda (Shared Reality Lab, McGill) | http://srl.mcgill.ca |
| 10:15 – 11:15 am | MuLES software + live demo | Raymundo Cassani (MusaeLab, INRS) | http://musaelab.ca/team-view/raymundo-cassani/ |
| 11:15 am – 12:00 pm | LSL software presentation | Martin Bleichner (Oldenburg University) | http://www.uni-oldenburg.de/psychologie/neuropsychologie/team/martin-bleichner/ |
| 12:00 – 1:00 pm | BREAK | | |
| 1:00 – 3:30 pm | Hands-On Recording with LSL and the SMARTING Device | Martin Bleichner (Oldenburg University) | |
LECTURE: Pavillon Marie-Victorin Room D-440 (90 Avenue Vincent-d’Indy, Metro Édouard-Montpetit)
BRAMS-CRBLM Invited Lecture (4 – 5 pm): Martin Bleichner (Oldenburg University, Germany)
Topic: The Oldenburg approach to mobile EEG
In this talk I will present our approach to mobile EEG. The joint research cluster Hearing4All aims to better understand hearing and to improve it where necessary. Our group's project focuses on controlling hearing devices using the listener's neural activity: instead of needing a remote control to select the optimal setting, the hearing device should respond seamlessly to its user's intentions. This requires recording and understanding the neural activity related to hearing in daily-life situations. To this end, we have developed solutions for mobile EEG acquisition that allow for concealed signal acquisition. With a combination of mobile EEG, ear-centered EEG (cEEGrid, eartrode) and mobile signal acquisition (smartphone-based), we can study aspects of auditory attention both inside and outside the lab. I will present a number of studies we have conducted on auditory attention, as well as an overview of other studies in our lab that use mobile EEG, for example to study social interactions or aspects of neurorehabilitation.
The lecture will be followed by a gathering back at BRAMS, in the conference room, with live demos of mobile EEG technology (N. Hieda & R. Cassani).