Fixed-point analysis of weighted integrated gradients

Authors

  • Saidjon Kamolov, Faculty of Engineering, Tajik Technical University, Dushanbe, Tajikistan

DOI:

https://doi.org/10.56947/amcs.v33.792

Keywords:

Explainable AI, Integrated Gradients, Feature Attribution, Fixed-Point Theory, Spectral Decomposition, Operator Theory, Knowledge Distillation

Abstract

Feature attribution methods such as Integrated Gradients (IG) are widely used to interpret deep neural networks. Typical extensions of IG introduce continuous weighting kernels along the integration path, but these approaches are usually evaluated only against static axiomatic properties. We propose a theoretical framework that analyzes Weighted Integrated Gradients (WIG) as a continuous mathematical operator acting on the model's function space. By studying the sequence generated when the explanation operator is applied iteratively to its own output, we use fixed-point theory and Taylor spectral decomposition to show that WIG functions as a spectral filter on model complexity. Our analysis identifies three regimes. First, standard IG acts as an identity operator, effectively functioning as an all-pass filter. Second, input-weighted explanations act as expansion operators, or high-pass filters, amplifying higher-order non-linearities. Third, baseline-weighted explanations act as contraction mappings, or low-pass filters. Iterative application of a baseline-weighted explanation operator converges to a linear surrogate of the model, establishing a formal equivalence between baseline-weighted attribution and model distillation.
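The filtering behavior described in the abstract can be illustrated with a minimal numerical sketch. The code below approximates a weighted IG attribution, WIG_i = (x_i - b_i) ∫₀¹ w(α) ∂f/∂x_i(b + α(x - b)) dα, for three normalized kernels: uniform (standard IG), w(α) = 2α (emphasizing points near the input), and w(α) = 2(1 - α) (emphasizing points near the baseline). The specific kernels and the toy quadratic model are illustrative assumptions, not the paper's exact definitions; they merely show that an input-weighted kernel inflates the attribution of the non-linear feature while a baseline-weighted kernel shrinks it toward the linear contribution.

```python
# Hedged sketch: Riemann-sum (midpoint) approximation of a weighted
# Integrated Gradients attribution. Kernels and the toy model are
# illustrative choices, not the authors' exact constructions.
import numpy as np

def weighted_ig(f_grad, x, baseline, kernel, steps=256):
    """WIG_i ~= (x_i - b_i) * mean over alpha of w(alpha) * grad_i(b + alpha (x - b))."""
    alphas = (np.arange(steps) + 0.5) / steps  # midpoints in (0, 1)
    total = np.zeros_like(x, dtype=float)
    for a in alphas:
        point = baseline + a * (x - baseline)
        total += kernel(a) * f_grad(point)
    return (x - baseline) * total / steps

# Toy model f(x) = x0^2 + 2*x1 with a closed-form gradient:
# feature 0 is non-linear, feature 1 is purely linear.
f_grad = lambda x: np.array([2.0 * x[0], 2.0])
x, baseline = np.array([1.0, 1.0]), np.zeros(2)

standard  = weighted_ig(f_grad, x, baseline, lambda a: 1.0)            # all-pass
high_pass = weighted_ig(f_grad, x, baseline, lambda a: 2.0 * a)        # input-weighted
low_pass  = weighted_ig(f_grad, x, baseline, lambda a: 2.0 * (1 - a))  # baseline-weighted

# Standard IG satisfies completeness here: attributions sum to
# f(x) - f(baseline) = 3. The weighted variants redistribute the
# non-linear feature's share (up for high-pass, down for low-pass).
print(standard, high_pass, low_pass)
```

All three kernels integrate to 1 over [0, 1], so the linear feature's attribution is unchanged; only the curvature-carrying feature is amplified or damped, which is the high-pass/low-pass distinction the abstract draws.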

Published

2026-03-20

Issue

Section

Articles