Basins of attraction in neural network training: a Julia-Fatou topological view of loss landscapes

Authors

  • Busayo Samuel, Department of Mathematical Sciences, Faculty of Science and Technology, Bingham University, Karu, Nigeria
  • Moses Obinna Francis, Department of Mathematical Sciences, Faculty of Science and Technology, Bingham University, Karu, Nigeria

DOI:

https://doi.org/10.56947/amcs.v33.737

Keywords:

Deep Learning, Loss Landscape, Holomorphic Iteration, Julia Set, Fatou Set

Abstract

We study gradient-based training in deep learning from a complex-dynamical perspective. Under a local analytic continuation assumption, the training update is modeled as iteration of a holomorphic map, which organizes the loss landscape into stable regions and sensitive boundary sets. Using fixed-point stability ideas, we derive explicit local criteria distinguishing attraction from repulsion and obtain a linear convergence rate inside attracting neighborhoods. We also extend the local stability analysis to higher-dimensional complex parameter spaces via a spectral condition. At the global level, we prove three complementary results: an escape-radius condition that forces divergence for polynomial-gradient surrogates, persistence of attracting fixed points under small learning-rate perturbations, and an instability principle for rational maps based on the density of repelling periodic points on the Julia set. These results link step-size sensitivity and initialization dependence to basin geometry and support visualization-driven diagnostics for stability in optimization.
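The local stability criterion described above can be illustrated with a minimal sketch. The loss surrogate here, L(z) = (z² − 1)²/4 with holomorphic gradient L′(z) = z³ − z, is an illustrative choice and not an example from the paper: one gradient-descent step g(z) = z − η·(z³ − z) is then a polynomial (hence holomorphic) map of the complex parameter, its fixed points are z = 0, ±1, and a fixed point z* is attracting exactly when the multiplier |g′(z*)| < 1. The function and parameter names are hypothetical.

```python
# Illustrative sketch, not the paper's construction: gradient descent on the
# surrogate loss L(z) = (z**2 - 1)**2 / 4, whose holomorphic gradient is
# L'(z) = z**3 - z, viewed as iteration of the polynomial map g below.

ETA = 0.3  # learning rate, assumed small enough that z = +1 and z = -1 attract

def g(z):
    """One gradient-descent step: g(z) = z - ETA * L'(z)."""
    return z - ETA * (z**3 - z)

def multiplier(z):
    """g'(z); a fixed point z* is attracting iff |g'(z*)| < 1."""
    return 1 - ETA * (3 * z**2 - 1)

def classify(z0, max_iter=200, escape_radius=10.0, tol=1e-8):
    """Follow the orbit of z0 under g.

    Returns the attracting fixed point the orbit settles near, or None if
    the orbit leaves the escape radius (divergence, per the escape-radius
    criterion for polynomial maps)."""
    z = z0
    for _ in range(max_iter):
        z = g(z)
        if abs(z) > escape_radius:
            return None  # orbit has escaped: treated as divergent
        for fp in (1.0, -1.0, 0.0):
            # only report fixed points that are attracting (|g'| < 1);
            # z = 0 has multiplier 1 + ETA > 1 and is repelling
            if abs(z - fp) < tol and abs(multiplier(fp)) < 1:
                return fp
    return None
```

Scanning `classify` over a grid of complex initializations colors the plane by basin of attraction; the basin boundary, where nearby starts diverge to different fixed points or escape, is the sensitive set the abstract associates with the Julia set. Near z = 1 the multiplier is 1 − 2η = 0.4, so orbits converge linearly at that rate.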

Published

2026-03-20

Section

Articles