Basins of attraction in neural network training: a Julia-Fatou topological view of loss landscapes
DOI:
https://doi.org/10.56947/amcs.v33.737

Keywords:
Deep Learning, Loss Landscape, Holomorphic Iteration, Julia Set, Fatou Set

Abstract
We study gradient-based training in deep learning from a complex-dynamical perspective. Under a local analytic continuation assumption, the training update is modeled as iteration of a holomorphic map, which organizes the loss landscape into stable regions and sensitive boundary sets. Using fixed-point stability ideas, we derive explicit local criteria distinguishing attraction from repulsion and obtain a linear convergence rate inside attracting neighborhoods. We also extend the local stability analysis to higher-dimensional complex parameter spaces via a spectral condition. At the global level, we prove three complementary results: an escape-radius condition that forces divergence for polynomial-gradient surrogates, persistence of attracting fixed points under small learning-rate perturbations, and an instability principle for rational maps based on the density of repelling periodic points on the Julia set. These results link step-size sensitivity and initialization dependence to basin geometry and support visualization-driven diagnostics for stability in optimization.
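The local criteria described above can be illustrated numerically. The sketch below is not from the paper; it uses a hypothetical one-dimensional surrogate loss L(w) = (w² − 1)²⁄4, whose gradient w³ − w extends to a polynomial map on the complex plane. The gradient-descent update g(z) = z − η(z³ − z) is then a holomorphic map whose fixed points are the critical points of L, and the multiplier |g′(z*)| classifies each fixed point as attracting (|g′| < 1) or repelling (|g′| > 1), with a crude escape-radius check standing in for the divergence criterion.

```python
def grad_step(z, eta):
    # One gradient-descent step on the surrogate loss L(w) = (w^2 - 1)^2 / 4,
    # viewed as a holomorphic (polynomial) map g(z) = z - eta * L'(z) on C.
    return z - eta * (z**3 - z)

def multiplier(z, eta):
    # g'(z); at a fixed point z*, |g'(z*)| < 1 means attracting, > 1 repelling.
    return 1.0 - eta * (3.0 * z**2 - 1.0)

def iterate(z0, eta, n=200, escape=1e6):
    # Iterate the update map; bail out once |z| exceeds a crude escape radius,
    # mimicking the forced-divergence condition for polynomial maps.
    z = complex(z0)
    for _ in range(n):
        z = grad_step(z, eta)
        if abs(z) > escape:
            return None  # escaped: the orbit diverges
    return z

eta = 0.1
# Fixed points of g are z = 0, +1, -1 (the critical points of L).
print(abs(multiplier(1.0, eta)))  # |1 - 2*eta| = 0.8 < 1: attracting minimum
print(abs(multiplier(0.0, eta)))  # |1 + eta|   = 1.1 > 1: repelling saddle
print(iterate(0.5, eta))          # orbit drawn into the basin of +1
print(iterate(-0.5, eta))         # orbit drawn into the basin of -1
```

Near the attracting fixed point z* = 1 the error contracts by roughly the factor |g′(1)| = |1 − 2η| per step, which is the linear convergence rate inside the attracting neighborhood; initializations on opposite sides of the repelling point at 0 fall into different basins, illustrating the initialization dependence the abstract describes.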
License
Copyright (c) 2026 Annals of Mathematics and Computer Science

This work is licensed under a Creative Commons Attribution-NonCommercial-NoDerivatives 4.0 International License.