Flavio Martinelli
EPFL, Lausanne, Switzerland (CH)

[Photo caption: Rare pic of me well dressed]
I am a PhD student in the Laboratory of Computational Neuroscience at EPFL, supervised by Wulfram Gerstner and Johanni Brea.
My main research revolves around understanding weight structures in neural networks.
I apply this knowledge to the following topics:
🕵️ Identifiability: can network parameters be reverse engineered? What types of constraints are needed? What are the implications for neural circuits?
🏋️ Trainability: what weight structures are easier or harder to learn? What are the weight structures of sub-optimal solutions (local minima)?
💡 Interpretability: understanding network symmetries to improve manipulation and interpretation of network models.
news
| Date | News |
| --- | --- |
| Oct 01, 2025 | POSTER 🏞️ In Frankfurt for two posters, on RNN solution degeneracy and on toy models of identifiability for neuroscience, at the Bernstein Conference. |
| Sep 15, 2025 | PAPER 📝 Measuring and Controlling Solution Degeneracy across Task-Trained Recurrent Neural Networks has been accepted to NeurIPS as a spotlight! |
| Sep 15, 2025 | PAPER 📝 Flat Channels to Infinity in Neural Loss Landscapes has been accepted to NeurIPS as a poster! |
| Sep 03, 2025 | TALK 🎤 Gave a talk about the channels to infinity in loss landscapes on the Ploutos platform (link to video). |
| Aug 03, 2025 | 🌎 Currently in Woods Hole, MA (US) for the MIT Brains, Minds and Machines summer school. |
| Jun 17, 2025 | PAPER 📝 We discover channels of slowly decreasing loss in network loss landscapes that lead to minima at infinite parameter norm. In the limit, these solutions implement Gated Linear Units using standard neurons. These channels are parallel to lines of saddle points generated by permutation symmetries. Read the paper: Flat Channels to Infinity in Neural Loss Landscapes (a sketch of the limiting construction appears below this table). |
| Jun 05, 2025 | POSTER 🏞️ Presented a poster at Frontiers in NeuroAI, Boston, on optimization challenges in recovering connectivity from activity. |
| May 28, 2025 | PAPER 📝 How degenerate is the solution space of identical RNNs trained on the same task? Check out our new paper on measuring and controlling the degeneracy of RNNs. Fun collaboration with the Rajan Lab. |
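For the curious, here is a minimal sketch of the mechanism behind the Jun 17 item, in illustrative notation of my own choosing ($\sigma$ is the activation, $w, u, a, c$ are weights, $\varepsilon$ parametrizes the channel; the symbols are not taken from the paper). Two neurons whose input weights converge while their output weights diverge implement, in the limit, a gated linear computation:

```latex
% Two-neuron pair along the channel: input weights w + eps*u and w,
% output weights a + c/eps and -c/eps (parameter norm diverges as eps -> 0).
f_\varepsilon(x)
  = \left(a + \frac{c}{\varepsilon}\right) \sigma\!\big((w + \varepsilon u)^\top x\big)
  - \frac{c}{\varepsilon}\, \sigma\!\big(w^\top x\big)
% First-order Taylor expansion of sigma around w^T x gives the limit:
\;\xrightarrow{\;\varepsilon \to 0\;}\;
a\,\sigma(w^\top x) + c\,(u^\top x)\,\sigma'(w^\top x).
```

The limiting term $(u^\top x)\,\sigma'(w^\top x)$ is a linear readout $u^\top x$ gated by $\sigma'(w^\top x)$: the sense in which two standard neurons implement a Gated Linear Unit at infinite parameter norm.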
selected publications
- Flat Channels to Infinity in Neural Loss Landscapes. arXiv preprint arXiv:2506.14951, 2025