Mehdi Azabou

    Mehdi Azabou (MEH-dee AZ-uh-boo)

    Postdoc @ ARNI, Columbia University


    I am building neuro-foundation models, large-scale models designed to understand brain data across modalities, species, tasks, and environments, with the goal of advancing brain-machine interfaces and neuroscience.

  • Apr 2025    Our work on the multi-task neural decoding model 🧠 POYO+ was featured at ICLR 2025 as a Spotlight.
    Mar 2025    I co-organized the 🧠 Building a foundation model for the brain Workshop at COSYNE 2025 in Mont Tremblant, Canada.
    Mar 2025    I co-developed the content for the 🧑‍💻 COSYNE 2025 main tutorial. We covered the fundamentals of transformers and their applications in neuroscience, and showcased our new packages: ⏱️ temporaldata and 🔥 torch_brain.
    Oct 2024    I started as an NSF AI Institute for Artificial and Natural Intelligence (ARNI) Postdoc at Columbia University in 🍎 New York City.
    Aug 2024    I successfully defended my 🎓 PhD thesis, titled "Building a foundation model for neuroscience".
    Feb 2024    I will be at COSYNE 2024 in Lisbon, Portugal to present our latest poster, titled "Large-scale pretraining on neural data allows for transfer across subjects, tasks and species".
    Oct 2023    Our work on large-scale brain decoders is out 🧠🕹️🐒 [Project page]. We will present it at NeurIPS this Dec!
    Sep 2023    Two papers accepted at NeurIPS 2023 🎉. More details coming soon.
    Jul 2023    Half-Hop is the 🏝️ ICML'23 featured research in ML@GT. Read the article here: New Research from Georgia Tech and DeepMind Shows How to Slow Down Graph-Based Networks to Boost Their Performance.
    May 2023    I am interning at IBM Research this summer. I will be at the IBM Thomas J. Watson Research Center in New York.
    Apr 2023    Half-Hop is accepted at ICML 2023 🎉. More details coming soon.
    Apr 2023    Our paper on identifying cell type from in vivo neuronal activity was published in Cell Reports [Link].
    Mar 2023    Check out our latest behavior representation learning model BAMS which ranks first 🥇 on the MABe 2022 benchmark [Project page].

I am an ARNI Postdoctoral Fellow at Columbia University, working with Dr. Liam Paninski and Dr. Blake Richards. My goal is to rethink how we interface with the brain, developing models that can integrate with the next generation of artificial intelligence. My current focus is on building neuro-foundation models: large-scale models designed to interpret and generalize across brain data from diverse modalities, species, tasks, and environments. The goal is to enable more capable brain-machine interfaces, drive scientific discovery, and create tools that help us better understand the brain, brain disorders, and neurodiversity.


I completed my Ph.D. at Georgia Tech, advised by Dr. Eva L. Dyer. My background spans representation learning, generative AI, data-centric AI, and computational neuroscience. I work on self-supervised learning for time-series and graph-structured data, and on building multimodal frameworks that bridge biological and artificial systems.


If you're interested in collaborating on foundational models for neuroscience or reimagining brain interfaces in the context of modern AI, feel free to reach out!

  •   Ph.D. in Machine Learning, Georgia Tech 🇺🇸, 2024
  •   M.S. in Computer Science, Georgia Tech 🇺🇸, 2020
  •   M.S. in Engineering, CentraleSupélec 🇫🇷, 2019
  •   AI Research Scientist Intern @ IBM Research, 2023
  •   Deep Learning Intern @ Parrot Drones, 2019
  •   Machine Learning & Computer Vision Intern @ Cleed, 2018
  • I have served as a reviewer for the following conferences and journals:
  • Neural Information Processing Systems, NeurIPS 2021, 2022, 2023 and 2024
  • International Conference on Machine Learning, ICML 2023, 2024 and 2025
  • International Conference on Learning Representations, ICLR 2023
  • Computer Vision and Pattern Recognition, CVPR 2023 and 2024
  • Neural Information Processing Systems Datasets and Benchmarks track, NeurIPS 2022, 2023 and 2024
  • Learning on Graphs Conference, LOG 2022, 2023
  • Artificial Intelligence and Statistics, AISTATS 2021
  • NeurIPS 2024 NeuroAI Workshop
  • IEEE Transactions on Knowledge and Data Engineering
  • Cell Patterns, 2022
  • Sub-reviewer for Neuron, 2021
  • Main Programming Language: Python.
  • ML frameworks: PyTorch, PyG, JAX.
  • Favorite tools: Bokeh, Flask, Docker, TensorBoard, Ray Tune.
  • I was privileged to work with and mentor a group of outstanding students at Georgia Tech:
  • Venkataramana Ganesh, Master's in CS, 2022-2024
  • Vinam Arora, Master's in ECE, 2023
  • Puru Malhotra, Master's in CS, 2023
  • Ian Knight, Undergrad in CS, 2024
  • Michael Mendelson, Undergrad in BME, 2021-2023
  • Santosh Nachimuthu, Undergrad in BME, 2023-2024
  • Daniel Leite, Undergrad in CS / Math, 2023-2024
  • Carolina Urzay, Undergrad in BME, 2021-2022
  • Zijing Wu, Undergrad in CS / Math, 2020-2021

Download (Last updated: 02/17/2024)