This equation calculates the activation of a single neuron in a Hopfield Network. It is used during the evaluation of the Hopfield Network to recognize patterns.
\( j \) | This is a secondary symbol for an iterator, a variable that changes value to refer to a sequence of elements. |
\( i \) | This is the symbol for an iterator, a variable that changes value to refer to a sequence of elements. |
\( \mathbf{W} \) | This symbol represents the matrix containing the weights and biases of a layer in a neural network. |
\( \sum \) | This is the summation symbol in mathematics, it represents the sum of a sequence of numbers. |
\( \mathcal{x} \) | This symbol represents the activations of a neural network layer in vector form. |
\( n \) | This symbol represents any given whole number, \( n \in \htmlClass{sdt-0000000014}{\mathbb{W}}\). |
Updating the neuron's activation decreases the energy of the system:
\[\htmlClass{sdt-0000000100}{E}(\htmlClass{sdt-0000000046}{\mathbf{x}}) = -\frac{1}{2}\htmlClass{sdt-0000000080}{\sum}_{\htmlClass{sdt-0000000018}{i},\htmlClass{sdt-0000000011}{j}=1,...,\htmlClass{sdt-0000000044}{L}}\htmlClass{sdt-0000000059}{\mathbf{W}}_{\htmlClass{sdt-0000000018}{i} \htmlClass{sdt-0000000011}{j}}\htmlClass{sdt-0000000046}{\mathbf{x}}_{\htmlClass{sdt-0000000018}{i}} \htmlClass{sdt-0000000046}{\mathbf{x}}_{\htmlClass{sdt-0000000011}{j}} = -\frac{1}{2}\htmlClass{sdt-0000000046}{\mathbf{x}}^T \htmlClass{sdt-0000000059}{\mathbf{W}} \htmlClass{sdt-0000000046}{\mathbf{x}}\]
because the decrease in energy leads to making the state more similar to one of the stored patterns. Intuitively, the summation symbol calculates the "correspondence score" between neurons \(\htmlClass{sdt-0000000018}{i}\) and \(\htmlClass{sdt-0000000011}{j}\). The activation of a neuron is set to either -1 or 1, based on the sign of this score.
You can think of it in the following way: if in the training data, \(x_{\htmlClass{sdt-0000000018}{i}}\) and \(x_{\htmlClass{sdt-0000000011}{j}}\) were often the same, the update should set them both to the same value. Similarly, if these neurons were typically different in the training data, \(x_i\) will be set to the opposite sign.
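This intuition can be sketched in code. The following is a minimal NumPy illustration (the function names, the stored pattern, and the Hebbian weight construction are assumptions for the example, not taken from the text above): one asynchronous update of a neuron moves the state toward a stored pattern, and the energy decreases.

```python
import numpy as np

def energy(W, x):
    """Hopfield energy E(x) = -1/2 x^T W x."""
    return -0.5 * x @ W @ x

def update_neuron(W, x, i):
    """Asynchronously update neuron i: x_i <- sign(sum over j != i of W_ij x_j)."""
    x = x.copy()
    s = sum(W[i, j] * x[j] for j in range(len(x)) if j != i)
    x[i] = 1 if s >= 0 else -1
    return x

# Hypothetical weights storing the pattern [1, -1] via the Hebbian rule,
# with the diagonal zeroed out (no self-connections)
W = np.outer([1, -1], [1, -1]) - np.eye(2)

x = np.array([1.0, 1.0])        # a corrupted starting state
x_new = update_neuron(W, x, 1)  # neuron 2 flips to match the stored pattern

print(energy(W, x), energy(W, x_new))  # the energy decreases: 1.0 -> -1.0
```

Note that repeating such updates until no neuron changes is exactly the recall procedure: each flip can only lower (or keep) the energy, so the state settles into a stored pattern.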
Let the weight matrix be
\[ \htmlClass{sdt-0000000059}{\mathbf{W}} = \begin{bmatrix} 0.5 & 0.4 \\ 0.3 & 0.2 \end{bmatrix} \]
and the past neuron activations:
\[ \htmlClass{sdt-0000000046}{\mathbf{x}} = \begin{bmatrix} 1 \\ -1 \end{bmatrix} \]
By simple substitution, we can calculate the new neuron activation \(\htmlClass{sdt-0000000046}{\mathbf{x}}_2(\htmlClass{sdt-0000000117}{n} + 1)\):
\[ \htmlClass{sdt-0000000046}{\mathbf{x}}_2(\htmlClass{sdt-0000000117}{n} + 1) = \text{sign}(\htmlClass{sdt-0000000080}{\sum}_{\htmlClass{sdt-0000000011}{j} \neq 2} \htmlClass{sdt-0000000059}{\mathbf{W}}_{2,\htmlClass{sdt-0000000011}{j}} \htmlClass{sdt-0000000094}{\mathcal{x}}_{\htmlClass{sdt-0000000011}{j}}(\htmlClass{sdt-0000000117}{n})) \]
\[ \htmlClass{sdt-0000000046}{\mathbf{x}}_2(\htmlClass{sdt-0000000117}{n} + 1) = \text{sign}( \htmlClass{sdt-0000000059}{\mathbf{W}}_{2,1} \htmlClass{sdt-0000000094}{\mathcal{x}}_1(\htmlClass{sdt-0000000117}{n})) \]
\[ \htmlClass{sdt-0000000046}{\mathbf{x}}_2(\htmlClass{sdt-0000000117}{n} + 1) = \text{sign}( 0.3 \cdot 1) = 1 \]
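The substitution above can be reproduced numerically. This short sketch just mirrors the arithmetic with the example's weight matrix and activations (nothing here is new beyond the worked example itself):

```python
import numpy as np

W = np.array([[0.5, 0.4],
              [0.3, 0.2]])  # weight matrix from the example
x = np.array([1, -1])       # past activations x(n)

# New activation of neuron 2: the sum over j != 2 of W_{2,j} x_j
# leaves only the j = 1 term, W_{2,1} * x_1 = 0.3 * 1
s = W[1, 0] * x[0]
x2_new = np.sign(s)  # sign(0.3) = 1, matching the result above
```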