Mathematics of Signal Processing/Neural Networks
A few miscellaneous remarks:
Reply to a newsgroup
During the training phase, say a gradient backpropagation for instance, it seems that the contribution of an input $x_k$ to a gradient step is something like $c_k = \left|\nabla_\theta\, \ell\big(f_\theta(x_k)\big)\right|$ (where $f_\theta$ is the neural network with parameters $\theta$ and $\ell$ the training loss), which then has to be divided by the contributions of all the other points: $\tilde c_k = c_k \big/ \sum_j c_j$. You can see that the absolute values introduce a real bias in this measurement.
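Here is a minimal sketch of that quantity on a hypothetical toy model (a one-parameter least-squares fit of my own choosing, not from the original post):

```python
import numpy as np

# Toy model: f(x) = w * x with squared loss l = (w*x - y)^2.
# The contribution of point k at a gradient step is |dl/dw| at that point,
# normalised by the contributions of all the other points.

def per_point_contributions(w, xs, ys):
    grads = 2.0 * (w * xs - ys) * xs   # dl/dw for each data point
    c = np.abs(grads)                  # the absolute values from the formula
    return c / c.sum()                 # normalised contributions

xs = np.array([0.5, 1.0, -2.0, 3.0])
ys = np.array([1.0, 0.5, 1.5, -1.0])
print(per_point_contributions(w=0.1, xs=xs, ys=ys))
```

Two points whose gradients cancel each other in the averaged update both get a positive contribution here, which is exactly the bias the absolute values introduce.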
Besides, you have to take into account the contributions made during all the backpropagation steps...
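Building on the same toy model, a sketch of one possible way to do that accounting (the rule of simply summing the per-step contributions is my assumption; the remark above leaves it open):

```python
import numpy as np

def accumulated_contributions(w0, xs, ys, lr=0.01, n_steps=100):
    """Sum each point's |dl/dw| over a whole gradient-descent run."""
    w = w0
    total = np.zeros_like(xs)
    for _ in range(n_steps):
        grads = 2.0 * (w * xs - ys) * xs   # per-point dl/dw at this step
        total += np.abs(grads)             # this step's contributions
        w -= lr * grads.mean()             # batch gradient step
    return total / total.sum()             # normalised over the run
```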
You can use Amari's approach (Natural Gradient Works Efficiently in Learning, 1998, Shun-Ichi Amari) and follow the path of the parameters of your NN during the learning phase. This will be a curvilinear integral along the learning trajectories.
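For intuition, here is a rough discretisation of such a curvilinear integral on the same toy model (my own construction, not Amari's formulation): each point's gradient field is integrated along the recorded parameter trajectory.

```python
import numpy as np

def path_contributions(w0, xs, ys, lr=0.01, n_steps=100):
    """Approximate, for each point k, the line integral of its gradient
    field along the learning trajectory of the parameter w:
        int_path g_k(w) dw  ~  sum_t g_k(w_t) * (w_{t+1} - w_t)
    """
    w = w0
    integral = np.zeros_like(xs)
    for _ in range(n_steps):
        grads = 2.0 * (w * xs - ys) * xs   # g_k(w_t) for each point
        dw = -lr * grads.mean()            # displacement of the parameter
        integral += grads * dw             # discretised curvilinear integral
        w += dw
    return integral
```

Note that this quantity is signed, so it avoids the absolute-value bias pointed out above.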
You can also use my description of neural networks (published only for perceptrons at this stage: Initialization of Piecewise Affine Neural Networks for nonlinear control, 1998, Charles-Albert Lehalle and Robert Azencott) to get a more direct approach. It will allow you to describe the effect of the training as translations of hyperplanes, and you will be able to quantify the contribution of a data point to those translations.
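To make the hyperplane picture concrete, here is a small sketch (a hypothetical single sigmoid unit, not the construction of the paper) measuring how far one sample's gradient step translates the unit's hyperplane $w \cdot x + b = 0$ along its normal:

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def hyperplane_shift(w, b, x, y, lr=0.1):
    """Translation along its normal of the hyperplane w.x + b = 0 of a
    single sigmoid unit with squared loss, induced by one gradient step
    on the sample (x, y) (ignoring the small rotation of the normal)."""
    p = sigmoid(w @ x + b)
    dz = 2.0 * (p - y) * p * (1.0 - p)     # dl/dz through the sigmoid
    w_new = w - lr * dz * x                # gradient step on the weights
    b_new = b - lr * dz                    # gradient step on the bias
    d_old = -b / np.linalg.norm(w)         # signed distance to the origin
    d_new = -b_new / np.linalg.norm(w_new)
    return d_new - d_old                   # this sample's translation

w, b = np.array([1.0, -0.5]), 0.2
print(hyperplane_shift(w, b, x=np.array([0.8, 0.3]), y=1.0))
```

The larger this displacement, the more that particular data point moved the hyperplane during the step.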