Workshops

Here you can find educational and research workshops that I have helped organize. Enjoy and happy learning!

Workshop title: AI + mental health for minority populations with chronic disease

Host: MIT Office of Experiential Learning

Talk 1: CrossCheck: Toward passive sensing and detection of mental health changes in people with schizophrenia

Speaker: Min Hane Aung

Slides here

Talk 2: Which kind of diagnosis in psychiatry?

Speaker: Stijn Vanheule

Slides here

Talk 3: Mental health innovation

Speaker: Thomas R. Insel

Slides here


Workshop title: Neural decoding of spike trains and local field potentials with machine learning in Python

Abstract: Neural decoding has applications across neuroscience, from understanding neural populations to building brain-computer interfaces. In this computational tutorial, I will introduce neural decoding principles from a machine learning perspective using the Python programming language. The tutorial focuses on data preprocessing, model selection, and optimization for decoding neural information from spike trains and local field potentials. The dataset studied contains neural activity from six cortical areas of the macaque brain, spanning the frontal to the occipital lobe.

Host: Center for Brains, Minds and Machines (CBMM) at MIT

Video here (YouTube) | Tutorial and code here (GitHub)
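For readers curious what such a decoding pipeline looks like in practice, here is a minimal, self-contained sketch in the spirit of the tutorial. It is not the tutorial's actual code: the data are synthetic stand-ins for the macaque recordings, and the classifier, preprocessing, and parameter grid are illustrative assumptions using scikit-learn.

```python
# Minimal decoding sketch: synthetic spike counts stand in for the real
# dataset; logistic regression with cross-validated regularization
# illustrates the preprocessing and model-selection steps.
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import GridSearchCV, train_test_split
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler

rng = np.random.default_rng(0)

# Synthetic data: 400 trials x 50 neurons of binned spike counts, with a
# binary task condition that weakly modulates firing rates (an assumption).
n_trials, n_neurons = 400, 50
labels = rng.integers(0, 2, n_trials)               # task condition per trial
rates = 5.0 + 1.5 * labels[:, None] * rng.random(n_neurons)
spike_counts = rng.poisson(rates)                   # shape (n_trials, n_neurons)

X_train, X_test, y_train, y_test = train_test_split(
    spike_counts, labels, test_size=0.25, random_state=0)

# Preprocess (z-score the counts) and select the regularization strength
# by 5-fold cross-validation on the training set only.
pipeline = make_pipeline(StandardScaler(), LogisticRegression(max_iter=1000))
search = GridSearchCV(
    pipeline, {"logisticregression__C": [0.01, 0.1, 1.0, 10.0]}, cv=5)
search.fit(X_train, y_train)

print(f"Best C: {search.best_params_['logisticregression__C']}")
print(f"Held-out decoding accuracy: {search.score(X_test, y_test):.2f}")
```

The same pattern (bin activity into trial-by-feature matrices, z-score, cross-validate the model, score on held-out trials) extends to local field potentials by swapping spike counts for, e.g., band-power features.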


Workshop title: Artificial Intelligence and Neuroscience: From Neural Dynamics to Artificial Agents

Host: Society for Neuroscience (SfN)

Abstract: In this mini-symposium, we examine the entangled relationship between artificial intelligence and neuroscience. We first cover recent advances in understanding the computations in the brain's neural dynamics from an artificial intelligence perspective. The mini-symposium then turns to the relationship between artificial systems research and neuroscience, including the study of artificial agents’ behavior and the intriguing parallels between deep neural networks and the brain.

Talk 1: Understanding neural dynamics under flexible visuomotor tasks with neural decoding and manifold learning

Speaker: Omar Costilla-Reyes (MIT)

Talk 2: Inferring temporal and computational variability from population spike trains

Speaker: Lea Duncker (UCL)

Talk 3: State space models for multiple interacting neural populations

Speaker: Joshua Glaser (Columbia)

Talk 4: Network structure and dynamics of a mesoscopic mouse whole-brain connectome

Speaker: Hannah Choi (Washington)

Talk 5: Task-Driven Convolutional Recurrent Neural Network Models of Dynamics in Higher Visual Cortex

Speaker: Aran Nayebi (Stanford)

Talk 6: Vector-based navigation using grid-like representations in artificial agents

Speaker: Andrea Banino (DeepMind)


Workshop title: Artificial intelligence meets neuroscience

Host: MIT Brain and Cognitive Sciences

Video here (YouTube)

Talk 1: Artificial intelligence and its role in increasing our understanding of the brain
Speaker: Dr. Omar Costilla Reyes – Miller Lab, Brain and Cognitive Sciences, MIT
Abstract: During the past few years, we have experienced an impressive explosion in the development and deployment of artificial intelligence systems to solve tasks such as image recognition and autonomous driving. These computational advances are also contributing to our understanding of the brain. In this talk, I will present an overview of advances in artificial intelligence systems applied to neuroscience. I will then explain the computational building blocks that are still missing if we are to make substantial progress in our understanding of the brain.

Talk 2: Steady As She Knows: Invariant Representations of Facial Emotion and Identity
Speaker: Kathryn C O’Nell – Saxe Lab, Brain and Cognitive Sciences, MIT
Abstract: Every day, your brain works constantly to help you perform the tasks vital to your life. Some are obvious and take conscious effort, like speaking and moving. However, there are also tons of computations going on inside your brain that often go unnoticed. I work on one such easily ignorable computation. You can, with relative ease, recognize whether someone else is happy or sad based on their facial expression, regardless of their identity or the angle from which you’re viewing their face. This means that your internal representation of emotion is invariant: it’s stable even when other aspects of the face you’re looking at change. In this talk, I’ll discuss how I use artificial intelligence to study how the brain creates invariant representations of facial emotions.

Talk 3: How does the brain make a prediction about the world?
Speaker: Dr. Andre Bastos – Miller Lab, Brain and Cognitive Sciences, MIT
Abstract: Brains have to separate activity that is externally generated (the stuff happening "out there") from internally generated activity. Sensory feedforward pathways carry information up the brain's processing chain, while internal information is thought to be carried by feedback pathways. Machine learning has made great strides in understanding and implementing the feedforward stream but is only beginning to consider internal information and feedback. This internal activity represents our plans, goals, attentional state, and expectations. It allows us to form useful predictions about the world, which helps to guide perception and action. In this talk, I will introduce these concepts and discuss our current neuroscientific understanding of this internal activity. I will also discuss how machine learning and AI are beginning to tap into and model not only feedforward but also feedback processing.

Talk 4: The role of symbols in the mind: two perspectives on artificial intelligence
Speaker: Andres Campero – Tenenbaum Lab, Brain and Cognitive Sciences, MIT
Abstract: Two perspectives on the mind have long existed, with parallels in both artificial intelligence and cognitive science. The first is more related to logic and based on explicit symbols. The second is more similar to intuition, where learning happens not through deduction but through repetition. We will revisit this debate and then discuss a contemporary research direction that tries to combine the two.

Talk 5: Using AI to understand how brain regions “talk” to each other
Speaker: Mengting Fang, Anzellotti Lab, Psychology department, Boston College
Abstract: Most cognitive tasks are not completed by a single brain region, but by many regions working together. Thanks to their coordinated responses, we can interact with the complex world around us. How do different brain regions cooperate to make human behavior possible? New AI tools let us study this question in more powerful ways. I will talk about how artificial neural networks can help us understand the “language” with which brain regions communicate, and show examples of how brain regions that respond to faces and scenes interact with the rest of the brain (while participants were watching Forrest Gump!). I will discuss some difficulties scientists encounter in this work, and some new directions for the future.
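As an entirely hypothetical illustration of the kind of approach this talk describes, one can model inter-region "communication" by learning a mapping from one region's response patterns to another's and asking whether a nonlinear model outperforms a linear one. The sketch below uses synthetic data and scikit-learn; the variables, sizes, and models are my own assumptions, not the speaker's actual methods.

```python
# Toy sketch: learn the mapping from region A's responses to region B's,
# comparing a linear map against a small neural network. A nonlinear
# advantage suggests the A->B "language" is not purely linear.
import numpy as np
from sklearn.linear_model import LinearRegression
from sklearn.model_selection import train_test_split
from sklearn.neural_network import MLPRegressor

rng = np.random.default_rng(1)

# Synthetic fMRI-like data: 1000 timepoints x 20 voxels per region,
# where region B depends nonlinearly on region A (an assumption).
n_time, n_voxels = 1000, 20
region_a = rng.standard_normal((n_time, n_voxels))
mixing = rng.standard_normal((n_voxels, n_voxels)) / np.sqrt(n_voxels)
region_b = np.tanh(region_a @ mixing) + 0.1 * rng.standard_normal((n_time, n_voxels))

A_train, A_test, B_train, B_test = train_test_split(
    region_a, region_b, random_state=1)

linear = LinearRegression().fit(A_train, B_train)
nonlinear = MLPRegressor(hidden_layer_sizes=(64,), max_iter=2000,
                         random_state=1).fit(A_train, B_train)

# Held-out R^2: if the nonlinear model generalizes better, the mapping
# carries structure a linear model cannot express.
print(f"Linear R^2:    {linear.score(A_test, B_test):.2f}")
print(f"Nonlinear R^2: {nonlinear.score(A_test, B_test):.2f}")
```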

Talk 6: Where in the brain are memories born?
Speaker: Dr. Diego Mendoza Halliday, Desimone Lab, Brain and Cognitive Sciences, MIT
Abstract: Information first enters our brains via the sensory organs. The brain selects a fraction of all incoming sensory information for storage in short-term memory and eventually in long-term memory. We currently know that the brain processes sensory information in a series of stages along a highway of interconnected regions. However, how and in which stage this information is first transformed into memories has remained unclear. In this talk, I will summarize some of our recent studies, which have helped answer these questions, showing where in the brain short-term memories are first born. These findings provide insights into the general principles of functional organization in the brain.