Mehdi Azabou

    Postdoc @ Columbia University


    My areas of interest are Representation Learning, Data-Centric AI, and Computational Neuroscience. I am actively developing methods for self-supervised representation learning for time series and graphs.

    Mar 2025    I co-organized the 🧠 Building a foundation model for the brain Workshop at COSYNE 2025 in Mont Tremblant, Canada.
    Mar 2025    I co-developed the content for the 🧑‍💻 COSYNE 2025 main tutorial. We covered the fundamentals of transformers and their applications in neuroscience, and showcased our new packages: ⏱️ temporaldata and 🔥 torch_brain.
    Oct 2024    I started as an NSF AI Institute for Artificial and Natural Intelligence (ARNI) Postdoc at Columbia University in 🍎 New York City.
    Aug 2024    I successfully defended my 🎓 PhD thesis titled "Building a foundation model for neuroscience".
    Feb 2024    I will be at COSYNE 2024 in Lisbon, Portugal to present our latest poster titled "Large-scale pretraining on neural data allows for transfer across subjects, tasks and species".
    Oct 2023    Our work on large-scale brain decoders is out 🧠🕹️🐒 [Project page]. We will present it at NeurIPS this Dec!
    Sep 2023    Two papers accepted at NeurIPS 2023 🎉. More details coming soon.
    Jul 2023    Half-Hop is featured by ML@GT as 🏝️ ICML'23 research. Read the article here: New Research from Georgia Tech and DeepMind Shows How to Slow Down Graph-Based Networks to Boost Their Performance.
    May 2023    I am interning at IBM Research this summer. I will be at the IBM Thomas J. Watson Research Center in New York.
    Apr 2023    Half-Hop is accepted at ICML 2023 🎉. More details coming soon.
    Apr 2023    Our paper on identifying cell type from in vivo neuronal activity was published in Cell Reports [Link].
    Mar 2023    Check out our latest behavior representation learning model BAMS which ranks first 🥇 on the MABe 2022 benchmark [Project page].

I am an ARNI Postdoc at Columbia University working with Dr. Liam Paninski and Dr. Blake Richards. I did my Ph.D. at Georgia Tech, where I was advised by Dr. Eva L. Dyer. My main areas of interest are Representation Learning, Generative AI, Data-Centric AI, and Computational Neuroscience. I am actively developing methods for self-supervised representation learning for time series and graphs, as well as frameworks for building large-scale multimodal foundation models to advance scientific discovery.


Through the development of new approaches for analyzing and interpreting complex modalities, I aim to advance our understanding of the brain and biological intelligence, and to contribute tools that facilitate new scientific discoveries. I am currently developing large-scale multimodal foundation models for neuroscience, with the goal of improving brain-computer interfaces as well as our understanding of the brain. If you are interested in collaborating, feel free to reach out!

  •   Ph.D. in Machine Learning, Georgia Tech 🇺🇸, 2024
  •   M.S. in Computer Science, Georgia Tech 🇺🇸, 2020
  •   M.S. in Engineering, CentraleSupélec 🇫🇷, 2019
  •   AI Research Scientist Intern @ IBM Research, 2023
  •   Deep Learning Intern @ Parrot Drones, 2019
  •   Machine Learning & Computer Vision Intern @ Cleed, 2018
  • I have served as a reviewer for notable conferences and journals:
  • Neural Information Processing Systems, NeurIPS 2021, 2022, 2023 and 2024
  • International Conference on Machine Learning, ICML 2023, 2024 and 2025
  • International Conference on Learning Representations, ICLR 2023
  • Computer Vision and Pattern Recognition, CVPR 2023 and 2024
  • Neural Information Processing Systems Datasets and Benchmarks track, NeurIPS 2022, 2023 and 2024
  • Learning on Graphs Conference, LOG 2022, 2023
  • Artificial Intelligence and Statistics, AISTATS 2021
  • NeurIPS 2024 NeuroAI Workshop
  • IEEE Transactions on Knowledge and Data Engineering
  • Cell Patterns, 2022
  • Sub-reviewer for Neuron, 2021
  • Main programming language: Python
  • ML frameworks: PyTorch, PyG, JAX
  • Favorite tools: Bokeh, Flask, Docker, TensorBoard, Ray Tune
  • I was privileged to work with and mentor a group of outstanding students at Georgia Tech:
  • Venkataramana Ganesh, Master's in CS, 2022-2024
  • Vinam Arora, Master's in ECE, 2023
  • Puru Malhotra, Master's in CS, 2023
  • Ian Knight, Undergrad in CS, 2024
  • Michael Mendelson, Undergrad in BME, 2021-2023
  • Santosh Nachimuthu, Undergrad in BME, 2023-2024
  • Daniel Leite, Undergrad in CS / Math, 2023-2024
  • Carolina Urzay, Undergrad in BME, 2021-2022
  • Zijing Wu, Undergrad in CS / Math, 2020-2021

Download (Last updated: 02/17/2024)