NEW PREPRINT: Lottery Tickets are Misleading! Use Escape Dimensions to Explain the Success of Overparameterization

Prelude

Since position papers can no longer be submitted to arXiv, I am posting this one here for the time being.

We wrote an opinion piece on the metaphors used to explain the success of overparameterization. We start from a commonly used one: lotteries and tickets. We argue that part of the community takes this metaphor too literally, which leads to wrong intuitions about the mechanisms of learning in deep neural networks.

Based on results from loss landscape theory, we propose a new mental picture: Escape Dimensions. In short, escape dimensions are the extra dimensions of the loss landscape that appear when we make our networks wider. These new dimensions serve as escape routes for gradient descent, allowing it to avoid getting trapped in bad, high-loss local minima.
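To make the picture concrete, here is a small numerical sketch (not taken from the paper) of the phenomenon the metaphor describes: a two-neuron tanh target is fit by a just-wide-enough student and by a much wider one, using plain gradient descent. The teacher function, widths, learning rate, step count, and seeds are all illustrative assumptions of mine, not the paper's experiments.

```python
# Minimal sketch, assuming a toy teacher-student regression setup:
# compare gradient descent on a narrow vs. an overparameterized tanh network.
import numpy as np

rng = np.random.default_rng(0)

# Teacher: a two-neuron tanh network defines the regression target.
X = rng.uniform(-3, 3, size=(256, 1))
y = np.tanh(2.0 * X) - np.tanh(-1.5 * X + 1.0)   # shape (256, 1)

def train(width, steps=30_000, lr=0.1, seed=0):
    """Full-batch gradient descent on a one-hidden-layer tanh network."""
    r = np.random.default_rng(seed)
    W = r.normal(size=(1, width))      # input -> hidden weights
    b = r.normal(size=(width,))        # hidden biases
    v = r.normal(size=(width, 1))      # hidden -> output weights
    n = X.shape[0]
    loss = None
    for _ in range(steps):
        H = np.tanh(X @ W + b)         # (n, width)
        pred = H @ v                   # (n, 1)
        err = pred - y
        loss = 0.5 * np.mean(err ** 2)
        # Backpropagation by hand.
        d_pred = err / n               # d loss / d pred
        d_v = H.T @ d_pred
        d_H = d_pred @ v.T             # (n, width)
        d_pre = d_H * (1 - H ** 2)     # derivative of tanh
        d_W = X.T @ d_pre
        d_b = d_pre.sum(axis=0)
        W -= lr * d_W
        b -= lr * d_b
        v -= lr * d_v
    return loss

for width in (2, 16):
    losses = [train(width, seed=s) for s in range(10)]
    print(f"width {width:2d}: final loss over 10 seeds: "
          f"min={min(losses):.4f}, max={max(losses):.4f}")
# In runs like this, the width-2 student often plateaus at a clearly nonzero
# loss for some seeds (a bad minimum), while the width-16 student usually
# reaches near-zero loss from every initialization: the extra dimensions
# give gradient descent a way out.
```

The exact numbers depend on the task and the seeds; the point of the sketch is only the qualitative gap between the just-wide-enough and the overparameterized student.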

We collect the relevant theoretical and empirical results on loss landscapes and view them through this new, intuitive lens. We name this framework Escape Dimensions Theory.

Paper

Citation

If you want to reference any part of this work, please cite it as

BibTeX:

@article{martinelli2026lottery,
  title={Lottery Tickets are Misleading! Use Escape Dimensions to Explain the Success of Overparameterization},
  author={Martinelli, Flavio and Brea, Johanni and Gerstner, Wulfram},
  year={2026},
  month={April},
  url={https://flavio-martinelli.github.io/blog/2026/lottery/}
}


