A Unified, Scalable Framework for Neural Population Decoding
NeurIPS, Dec 2023.
Mehdi Azabou - Postdoc @ Columbia University
My areas of interest are Representation Learning, Data-Centric AI, and Computational Neuroscience. I am actively working on developing methods for self-supervised representation learning for time-series and graphs.
Oct 2024 | I have started as an NSF AI Institute for Artificial and Natural Intelligence (ARNI) Postdoc at Columbia University in 🍎 New York City.
Aug 2024 | I have successfully defended my 🎓 PhD thesis, titled "Building a foundation model for neuroscience".
Feb 2024 | I will be at COSYNE 2024 in Lisbon, Portugal, to present our latest poster, titled "Large-scale pretraining on neural data allows for transfer across subjects, tasks and species".
Oct 2023 | Our work on large-scale brain decoders is out 🧠🕹️🐒 [Project page]. We will present it at NeurIPS this December!
Sep 2023 | Two papers accepted at NeurIPS 2023 🎉. More details coming soon.
Jul 2023 | Half-Hop is the 🏝️ ICML'23 featured research in ML@GT. Read the article here: New Research from Georgia Tech and DeepMind Shows How to Slow Down Graph-Based Networks to Boost Their Performance.
May 2023 | I am interning at IBM Research this summer, at the IBM Thomas J. Watson Research Center in New York.
Apr 2023 | Half-Hop has been accepted at ICML 2023 🎉. More details coming soon.
Apr 2023 | Our paper on identifying cell types from in vivo neuronal activity was published in Cell Reports [Link].
Mar 2023 | Check out our latest behavior representation learning model, BAMS, which ranks first 🥇 on the MABe 2022 benchmark [Project page].
I am an ARNI Postdoc at Columbia University working with Dr. Liam Paninski and Dr. Blake Richards. I did my Ph.D. at Georgia Tech, where I was advised by Dr. Eva L. Dyer. My main areas of interest are Representation Learning, Generative AI, Data-Centric AI, and Computational Neuroscience. I am actively developing methods for self-supervised representation learning on time series and graphs, as well as new frameworks for building large-scale multimodal foundation models to advance scientific discovery.
Through the development of new approaches for analyzing and interpreting complex modalities, I aim to advance our understanding of the brain and biological intelligence, and to contribute tools that enable new scientific discoveries. I am currently developing large-scale multimodal foundation models for neuroscience, with the goal of improving brain-computer interfaces as well as our understanding of the brain. If you are interested in collaborating, feel free to reach out!